This library lets you run machine learning models and collect sensor data on Linux machines using Node.js. The SDK is open source and hosted on GitHub: edgeimpulse/edge-impulse-linux-cli.
See our Linux EIM executable guide to learn more about the .eim file format.
Add the library to your application via:
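A minimal sketch of that step, assuming the SDK is published on npm under the name edge-impulse-linux:

```bash
# Add the Edge Impulse Linux SDK to your Node.js project
# (npm package name assumed to be edge-impulse-linux)
npm install edge-impulse-linux
```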
To set up the parameters of the Edge Impulse CLI, have a look at the helper:
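For example, assuming the CLI is installed and exposes the usual --help flag:

```bash
# List the available options of the Edge Impulse Linux CLI
edge-impulse-linux --help
```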
Before you can classify data you'll first need to collect it. If you want to collect data from the camera or microphone on your system you can use the Edge Impulse CLI, and if you want to collect data from different sensors (like accelerometers or proprietary control systems) you can do so in a few lines of code.
To collect data from the camera or microphone, follow the getting started guide for your development board.
To collect data from other sensors you'll need to write some code where you instantiate a DataForwarder object, write data samples, and finally call finalize() which uploads the data to Edge Impulse. Here's an end-to-end example.
To classify data (whether this is from the camera, the microphone, or a custom sensor) you'll need a model file. This model file contains all signal processing code, classical ML algorithms and neural networks - and typically contains hardware optimizations to run as fast as possible. To grab a model file:
Train your model in Edge Impulse.
Download the model file via:
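For example, with the Edge Impulse for Linux CLI installed (flag names may differ between CLI versions):

```bash
# Download the current project's trained model as an .eim file
edge-impulse-linux-runner --download modelfile.eim
```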
This downloads the file into modelfile.eim. (Want to switch projects? Add --clean)
Then you can start classifying realtime sensor data. We have examples for:
Audio - grabs data from the microphone and classifies it in realtime.
Camera - grabs data from a webcam and classifies it in realtime.
Custom data - classifies custom sensor data.
On Linux platforms without a GPU or neural accelerator your model runs using LiteRT (previously TensorFlow Lite). Not every model can be represented using native LiteRT operators. For these models, 'Flex' ops are injected into the model, and to run them you'll need the flex delegate library installed on your Linux system. This is a shared library that you only need to install once.
If your model contains flex ops you'll see this in a few places:
During deployment in the Studio (e.g. "WARN: This model contains ops that require flex delegates (FlexErf). You will need to install the flex delegates shared library to run this model.").
When running a model using the Linux CLI (e.g. "error while loading shared libraries: libtensorflowlite_flex_2.6.5.so. You will need to install the flex delegates shared library to run this model.").
To install the flex delegate library:
Download the shared library for your target architecture and operating system:
macOS, x86 (also runs on M1/M2 using Rosetta)
Linux, armv7 (most 32-bits Arm-based Linux systems, e.g. Raspberry Pi 4 running 32-bits Raspbian)
Linux, aarch64 (most 64-bits Arm-based Linux systems, e.g. Jetson Nano)
Linux, x86_64 (Intel/AMD based Linux systems)
Place the libtensorflowlite_flex_2.6.5.so (or .dylib on macOS) file in /usr/lib or /usr/local/lib.
If your model has flex ops, and you're building using the Linux C++ SDK, then pass the LINK_TFLITE_FLEX_LIBRARY=1 flag when building the application.
When using the Node.js, Go or Python SDK then the .eim file already has the flex delegates library linked in.
This library lets you run machine learning models and collect sensor data on Linux machines using Python. The SDK is open source and hosted on GitHub: edgeimpulse/linux-sdk-python.
See our Linux EIM executable guide to learn more about the .eim file format.
Install a recent version of Python 3 (>=3.7).
Install the SDK
Raspberry Pi
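A sketch of a typical Raspberry Pi setup; the package list and the PyPI package name (edge_impulse_linux) are assumptions, so check the repository README for the exact commands:

```bash
# Audio and BLAS dependencies commonly needed on Raspberry Pi OS (assumed list)
sudo apt-get install -y libatlas-base-dev libportaudio2 libportaudiocpp0 portaudio19-dev
# Install the Python SDK from PyPI
pip3 install edge_impulse_linux
```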
Jetson Nano
You may need to install Cython to build the numpy package:
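For example (your JetPack image may already ship Cython):

```bash
# Cython is needed when pip has to build numpy from source
pip3 install cython
```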
After that, proceed with installing the Linux Python SDK:
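Assuming the same PyPI package name as above:

```bash
# Install the Python SDK from PyPI
pip3 install edge_impulse_linux
```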
Clone this repository to get the examples:
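For example:

```bash
# Clone the Python SDK repository, which contains the examples
git clone https://github.com/edgeimpulse/linux-sdk-python
```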
Windows Subsystem for Linux (WSL)
If you are using WSL, you will need to install the following npm packages, and may need to install audio / video dependencies for your machine:
e.g. audio dependencies for Ubuntu:
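A sketch of typical audio dependencies on Ubuntu; the exact package list is an assumption:

```bash
# PortAudio / ALSA development packages commonly required by PyAudio
sudo apt-get install -y portaudio19-dev libasound2-dev
```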
Installing the npm packages on WSL requires administrator (root) permissions:
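For example, installing the CLI globally with elevated permissions (package name and flags are assumptions; adjust to the npm packages listed in the CLI documentation):

```bash
# Install the Edge Impulse for Linux CLI globally as root on WSL
sudo npm install -g edge-impulse-linux --unsafe-perm=true
```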
Then proceed with installing the Linux Python SDK:
Before you can classify data you'll first need to collect it. If you want to collect data from the camera or microphone on your system you can use the Edge Impulse CLI, and if you want to collect data from different sensors (like accelerometers or proprietary control systems) you can do so in a few lines of code.
To collect data from the camera or microphone, follow the getting started guide for your development board.
To collect data from other sensors you'll need to write some code to collect the data from an external sensor, wrap it in the Edge Impulse Data Acquisition format, and upload the data to the Ingestion service. Here's an end-to-end example.
To classify data (whether this is from the camera, the microphone, or a custom sensor) you'll need a model file. This model file contains all signal processing code, classical ML algorithms and neural networks - and typically contains hardware optimizations to run as fast as possible. To grab a model file:
Train your model in Edge Impulse.
Install the Edge Impulse for Linux CLI.
Download the model file via:
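As with the Node.js SDK, assuming the Edge Impulse for Linux CLI is installed:

```bash
# Download the current project's trained model as an .eim file
edge-impulse-linux-runner --download modelfile.eim
```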
This downloads the file into modelfile.eim. (Want to switch projects? Add --clean)
Then you can start classifying realtime sensor data. We have examples for:
Audio - grabs data from the microphone and classifies it in realtime.
Camera - grabs data from a webcam and classifies it in realtime.
Custom data - classifies custom sensor data.
If you see this error you can re-install portaudio via:
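The original command isn't shown here, but a common way to reinstall PortAudio, depending on your platform, is:

```bash
# Debian / Ubuntu
sudo apt-get install --reinstall portaudio19-dev
# macOS (Homebrew)
brew reinstall portaudio
```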
This error shows when you want to gain access to the camera or the microphone on macOS from a virtual shell (like the terminal in Visual Studio Code). Try to run the command from the normal Terminal.app.
This error is due to the length of the results output.
To fix this, you can overwrite this line in the ImpulseRunner class in runner.py with:
Edge Impulse for Linux is the easiest way to build Machine Learning solutions on real embedded hardware. It contains tools which let you collect data from any microphone or camera, can be used with the Node.js, Python, Go and C++ SDKs to collect new data from any sensor, and can run impulses with full hardware acceleration - with easy integration points to write your own applications.
This is a list of development boards that are fully supported by Edge Impulse for Linux. Follow the instructions to get started:
Mac.
Different development board? Probably no problem! You can use the Linux x86_64 getting started guide to set up the Edge Impulse for Linux CLI tool, and you can run your impulse on any x86_64, ARMv7 or AARCH64 Linux target. For support please head to the forums.
This is a list of AI accelerators that are fully supported by Edge Impulse for Linux. Follow the instructions to get started:
To build your own applications, or collect data from new sensors, you can use the high-level language SDKs. These use full hardware acceleration, and let you integrate your Edge Impulse models in a few lines of code:
Edge Impulse for Linux models are delivered in .eim format. This is an executable that contains your signal processing and ML code, compiled with optimizations for your processor or GPU (e.g. NEON instructions on ARM cores) plus a very simple IPC layer (over a Unix socket). See our Linux EIM executable guide to learn more.
This library lets you run machine learning models and collect sensor data on Linux machines using Go. The SDK is open source and hosted on GitHub: edgeimpulse/linux-sdk-go.
See our Linux EIM executable guide to learn more about the .eim file format.
Install Go 1.15 or higher.
Clone this repository:
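For example:

```bash
# Clone the Go SDK repository
git clone https://github.com/edgeimpulse/linux-sdk-go
```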
Find the example that you want to build and run go build:
Run the example:
And follow the instructions.
This SDK is also published to pkg.go.dev, so you can pull the package from there too.
Before you can classify data you'll first need to collect it. If you want to collect data from the camera or microphone on your system you can use the Edge Impulse CLI, and if you want to collect data from different sensors (like accelerometers or proprietary control systems) you can do so in a few lines of code.
To collect data from the camera or microphone, follow the getting started guide for your development board.
To collect data from other sensors you'll need to write some code to collect the data from an external sensor, wrap it in the Edge Impulse Data Acquisition format, and upload the data to the Ingestion service. Here's an end-to-end example.
To classify data (whether this is from the camera, the microphone, or a custom sensor) you'll need a model file. This model file contains all signal processing code, classical ML algorithms and neural networks - and typically contains hardware optimizations to run as fast as possible. To grab a model file:
Train your model in Edge Impulse.
Download the model file via:
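Assuming the Edge Impulse for Linux CLI is installed:

```bash
# Download the current project's trained model as an .eim file
edge-impulse-linux-runner --download modelfile.eim
```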
This downloads the file into modelfile.eim. (Want to switch projects? Add --clean)
Then you can start classifying realtime sensor data. We have examples for:
Audio - grabs data from the microphone and classifies it in realtime.
Camera - grabs data from a webcam and classifies it in realtime.
Custom data - classifies custom sensor data.
Install GNU Make and a recent C++ compiler (tested with GCC 8 on the Raspberry Pi, and Clang on other targets).
Clone this repository and initialize the submodules:
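A sketch, assuming the examples live in the edgeimpulse/example-standalone-inferencing-linux repository (the repository name is an assumption):

```bash
# Clone the repository together with its submodules
git clone https://github.com/edgeimpulse/example-standalone-inferencing-linux
cd example-standalone-inferencing-linux
git submodule update --init --recursive
```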
If you want to use the audio or camera examples, you'll need to install libasound2 and OpenCV. You can do so via:
Linux
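A sketch using apt; the package names are assumptions and the repository may ship its own install scripts:

```bash
# ALSA development headers and prebuilt OpenCV packages on Debian/Ubuntu
sudo apt-get install -y libasound2-dev libopencv-dev
```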
macOS
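On macOS, assuming Homebrew:

```bash
# OpenCV for the camera example (the audio examples are not supported on macOS)
brew install opencv
```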
Note that you cannot run any of the audio examples on macOS, as these depend on libasound2, which is not available there.
Before you can classify data you'll first need to collect it. If you want to collect data from the camera or microphone on your system you can use the Edge Impulse CLI, and if you want to collect data from different sensors (like accelerometers or proprietary control systems) you can do so in a few lines of code.
This repository comes with four classification examples: custom sensor data, realtime audio classification, realtime image classification, and an .eim model builder.
To build an application:
Train an impulse.
Export your trained impulse as a C++ Library from the Edge Impulse Studio (see the Deployment page) and copy the folders into this repository.
Build the application via:
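For example, to build the custom-sensor example without hardware-specific optimizations (treat the exact invocation as a sketch):

```bash
# Build the 'custom' example
APP_CUSTOM=1 make -j
```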
Replace APP_CUSTOM=1 with the application you want to build. See 'Hardware acceleration' below for the hardware specific flags. You probably want these.
The application is in the build directory:
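For the custom example above this would be (the binary name is an assumption):

```bash
# Run the freshly built application
./build/custom
```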
For many targets, there is hardware acceleration available.
Raspberry Pi 4 (and other Armv7l Linux targets)
Build with the following flags:
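A sketch; the TARGET_* and USE_FULL_TFLITE flag names are assumptions based on the repository's Makefile conventions:

```bash
# NEON-optimized build with full TensorFlow Lite on 32-bit Arm targets
APP_CUSTOM=1 TARGET_LINUX_ARMV7=1 USE_FULL_TFLITE=1 make -j
```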
NVIDIA Jetson Orin / NVIDIA Jetson Nano (and other AARCH64 targets)
Install Clang:
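For example, on Ubuntu-based JetPack images:

```bash
sudo apt-get install -y clang
```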
Build with the following flags:
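A sketch under the same flag-name assumptions as above, compiling with Clang:

```bash
# 64-bit Arm build with full TensorFlow Lite
APP_CUSTOM=1 TARGET_LINUX_AARCH64=1 USE_FULL_TFLITE=1 CC=clang CXX=clang++ make -j
```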
Linux x86 targets
Build with the following flags:
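A sketch (same flag-name assumptions):

```bash
# x86_64 Linux build with full TensorFlow Lite
APP_CUSTOM=1 TARGET_LINUX_X86=1 USE_FULL_TFLITE=1 make -j
```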
Intel-based Macs
Build with the following flags:
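A sketch (same flag-name assumptions):

```bash
# x86_64 macOS build with full TensorFlow Lite
APP_CUSTOM=1 TARGET_MAC_X86_64=1 USE_FULL_TFLITE=1 make -j
```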
M1/M2-based Macs
Build with the following flags:
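Per the Rosetta note below, this is likely the same x86_64 target as on Intel-based Macs (an assumption):

```bash
# Builds an x86_64 binary that runs through Rosetta on Apple Silicon
APP_CUSTOM=1 TARGET_MAC_X86_64=1 USE_FULL_TFLITE=1 make -j
```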
Note that this does build an x86 binary, but it runs very fast through Rosetta.
'NVIDIA Jetson' refers to the following devices:
NVIDIA Jetson Xavier NX Series, Jetson TX2 Series, Jetson AGX Xavier Series, Jetson Nano, Jetson TX1
'NVIDIA Jetson Orin' refers to the following devices:
NVIDIA Jetson AGX Orin Series, Jetson Orin NX Series, Jetson Orin Nano Series
'Jetson' refers to all NVIDIA Jetson devices.
On NVIDIA Jetson Orin and NVIDIA Jetson you can also build with support for TensorRT, which fully leverages the GPU on the Jetson device. This is not available for SSD object detection models, but is available for FOMO, YOLOv5 and TAO object detection models, and regular classification/regression models.
To build with TensorRT:
Go to the Deployment page in the Edge Impulse Studio.
Select the 'TensorRT library', and the 'float32' optimizations.
Build the library and copy the folders into this repository.
Build your application with:
NVIDIA Jetson Orin
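A sketch; the TARGET_JETSON_ORIN flag name is an assumption, so check the repository Makefile:

```bash
# TensorRT-accelerated build for Jetson Orin (flag name assumed)
APP_CUSTOM=1 TARGET_JETSON_ORIN=1 make -j
```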
NVIDIA Jetson
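A sketch; the TARGET_JETSON flag name is an assumption:

```bash
# TensorRT-accelerated build for older Jetson devices (flag name assumed)
APP_CUSTOM=1 TARGET_JETSON=1 make -j
```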
Note that there is significant ramp up time required for TensorRT. The first time you run a new model the model needs to be optimized - which might take up to 30 seconds, then on every startup the model needs to be loaded in - which might take up to 5 seconds. After this, the GPU seems to be warming up, so expect full performance about 2 minutes in. To do a fair performance comparison you probably want to use the custom application (no camera / microphone overhead) and run the classification in a loop.
You can also build .eim files for high-level languages using TensorRT via:
NVIDIA Jetson Orin:
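A sketch, combining APP_EIM with the (assumed) Orin flag above:

```bash
# Build a TensorRT-backed .eim model on Jetson Orin
APP_EIM=1 TARGET_JETSON_ORIN=1 make -j
```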
NVIDIA Jetson:
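A sketch, combining APP_EIM with the (assumed) Jetson flag above:

```bash
# Build a TensorRT-backed .eim model on older Jetson devices
APP_EIM=1 TARGET_JETSON=1 make -j
```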
Long warm-up time and under-performance
By default, the Jetson enables a number of aggressive power saving features to disable and slow down hardware that is detected to be not in use. Experience indicates that sometimes the GPU cannot power up fast enough, nor stay on long enough, to enjoy best performance. You can run a script to enable maximum performance on your Jetson.
ONLY DO THIS IF YOU ARE POWERING YOUR JETSON FROM A DEDICATED POWER SUPPLY. DO NOT RUN THIS SCRIPT WHILE POWERING YOUR JETSON THROUGH USB.
To enable maximum performance, run:
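Assuming a stock JetPack image, which ships the jetson_clocks utility:

```bash
# Lock the CPU, GPU and memory clocks to their maximum frequencies
sudo /usr/bin/jetson_clocks
```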
The model will be placed in build/model.eim and can be used directly by your application.
This library lets you run machine learning models and collect sensor data on machines using C++. The SDK is open source and hosted on GitHub: edgeimpulse/example-standalone-inferencing-linux.
To collect data from the camera or microphone, follow the getting started guide for your development board.
To collect data from other sensors you'll need to write some code to collect the data from an external sensor, wrap it in the Edge Impulse Data Acquisition format, and upload the data to the Ingestion service. Here's an end-to-end example that you can build via:
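A sketch of the build step (the make invocation is an assumption):

```bash
# Build the custom-sensor example
APP_CUSTOM=1 make -j
```

The other examples in this repository follow the same pattern: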
- classify custom sensor data (APP_CUSTOM=1).
- realtime audio classification (APP_AUDIO=1).
- realtime image classification (APP_CAMERA=1).
- builds an .eim file to be used from Node.js, Go or Python (APP_EIM=1).
See the section below for information on enabling GPUs. To build with hardware extensions for running on the CPU:
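For example, on a 64-bit Arm target; pick the TARGET_* flag that matches your hardware (flag names are assumptions, as above):

```bash
# CPU-only build with full TensorFlow Lite
APP_CUSTOM=1 TARGET_LINUX_AARCH64=1 USE_FULL_TFLITE=1 make -j
```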
To build Edge Impulse for Linux models (.eim files) that can be used by the Python, Node.js or Go SDKs, build with APP_EIM=1:
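For example (again treating the TARGET_* flag as an assumption):

```bash
# Produces build/model.eim for use from the Python, Node.js or Go SDKs
APP_EIM=1 TARGET_LINUX_AARCH64=1 USE_FULL_TFLITE=1 make -j
```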
A troubleshooting guide, e.g. to deal with "Failed to allocate TFLite arena" or "Make sure you apply/link the Flex delegate before inference", is available in the Linux C++ SDK documentation.