Recommended reading
- On your desktop computer - shows how to run your impulse on your desktop computer, and gives a good overview of how to use the C++ library before you start with the Linux SDK.
Installation guide
- Install GNU Make and a recent C++ compiler (tested with GCC 8 on the Raspberry Pi, and Clang on other targets).
- Clone this repository and initialize the submodules:
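A typical clone looks like this, assuming this is the public Edge Impulse standalone Linux example repository (substitute the URL of your own fork if needed):

```shell
# Clone the example repository and pull in the SDK submodules
git clone https://github.com/edgeimpulse/example-standalone-inferencing-linux
cd example-standalone-inferencing-linux
git submodule update --init --recursive
```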
If you want to use the audio or camera examples, you’ll need to install libasound2 and OpenCV. You can do so via:
Linux
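On Debian-based distributions this might look as follows (the package names are assumptions; on other distributions use the equivalent ALSA and OpenCV development packages):

```shell
# ALSA (libasound2) development headers for the audio examples
sudo apt install -y libasound2-dev
# OpenCV development headers for the camera examples
sudo apt install -y libopencv-dev
```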
macOS
Note that you cannot run any of the audio examples on macOS, as these depend on libasound2, which is not available there.
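For the camera example on macOS, one way to get OpenCV is Homebrew (an assumption; the repository may also ship its own OpenCV build script):

```shell
# Install OpenCV via Homebrew for the camera example
brew install opencv
```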
Collecting data
Before you can classify data you’ll first need to collect it. If you want to collect data from the camera or microphone on your system, you can use the Edge Impulse CLI; if you want to collect data from different sensors (like accelerometers or proprietary control systems) you can do so in a few lines of code.

Collecting data from the camera or microphone
To collect data from the camera or microphone, follow the getting started guide for your development board.

Collecting data from other sensors
To collect data from other sensors you’ll need to write some code to collect the data from an external sensor, wrap it in the Edge Impulse Data Acquisition format, and upload the data to the Ingestion service. Here’s an end-to-end example that you can build via:

Classifying data

This repository comes with four classification examples:

- custom - classify custom sensor data (APP_CUSTOM=1).
- audio - realtime audio classification (APP_AUDIO=1).
- camera - realtime image classification (APP_CAMERA=1).
- .eim model - builds an .eim file to be used from Node.js, Go or Python (APP_EIM=1).
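For reference, the upload step from ‘Collecting data from other sensors’ can be sketched with curl. The sensor values below are made up, the API key is a placeholder, and the zeroed signature is for illustration only (a real uploader normally signs the payload as described in the Data Acquisition format documentation):

```shell
# A sample in the Edge Impulse Data Acquisition format (illustrative values)
cat > idle01.json <<'EOF'
{
  "protected": { "ver": "v1", "alg": "HS256", "iat": 0 },
  "signature": "0000000000000000000000000000000000000000000000000000000000000000",
  "payload": {
    "device_type": "LINUX_DESKTOP",
    "interval_ms": 16,
    "sensors": [
      { "name": "accX", "units": "m/s2" },
      { "name": "accY", "units": "m/s2" },
      { "name": "accZ", "units": "m/s2" }
    ],
    "values": [ [ -9.81, 0.03, 0.21 ], [ -9.83, 0.04, 0.27 ] ]
  }
}
EOF

# Upload the sample to the Ingestion service
# (this will fail without a valid project API key)
curl -X POST \
  -H "x-api-key: ei_REPLACE_WITH_YOUR_API_KEY" \
  -H "x-file-name: idle01.json" \
  -H "Content-Type: application/json" \
  --data-binary @idle01.json \
  https://ingestion.edgeimpulse.com/api/training/data \
  || echo "upload failed (expected without a valid API key)"
```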
- Train an impulse.
- Export your trained impulse as a C++ Library from the Edge Impulse Studio (see the Deployment page) and copy the folders into this repository.
- Build the application via:
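A minimal invocation, assuming the APP_* Makefile flags described in this guide:

```shell
# Build the 'custom' example; substitute the APP_* flag for another example
APP_CUSTOM=1 make -j
```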
Replace APP_CUSTOM=1 with the application you want to build. See ‘Hardware acceleration’ below for the hardware-specific flags; you probably want these.
- The application is in the build directory:
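For example, after building the custom example you would run the binary from the build directory (the binary name is an assumption based on the example name):

```shell
# Run the freshly built example
./build/custom
```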
Hardware acceleration
For many targets, there is hardware acceleration available.

Raspberry Pi 4 (and other Armv7l Linux targets)
Build with the following flags:
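The exact flag names below are assumptions based on common Edge Impulse Makefile conventions; check the repository’s Makefile for the authoritative list:

```shell
# Armv7 build using the full TensorFlow Lite runtime for acceleration
APP_CUSTOM=1 TARGET_LINUX_ARMV7=1 USE_FULL_TFLITE=1 make -j
```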
AARCH64 Linux targets (e.g. NVIDIA Jetson)
- Install Clang:
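On Debian-based systems this is typically:

```shell
# Install the Clang compiler
sudo apt install -y clang
```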
- Build with the following flags:
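A sketch of the build, assuming the TARGET_LINUX_AARCH64 and USE_FULL_TFLITE Makefile flags (verify against the repository’s Makefile):

```shell
# AARCH64 build with the full TensorFlow Lite runtime, compiled with Clang
APP_CUSTOM=1 TARGET_LINUX_AARCH64=1 USE_FULL_TFLITE=1 CC=clang CXX=clang++ make -j
```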
M1-based Macs
Build with the following flags:

TensorRT
On the NVIDIA Jetson Orin and NVIDIA Jetson you can also build with support for TensorRT; this fully leverages the GPU on the Jetson device. This is not available for SSD object detection models, but is available for FOMO, other object detection models, and regular classification/regression models.

‘NVIDIA Jetson Orin’ refers to the following devices:

- NVIDIA Jetson AGX Orin Series, Jetson Orin NX Series, Jetson Orin Nano Series

‘NVIDIA Jetson’ refers to the following devices:

- NVIDIA Jetson Xavier NX Series, Jetson TX2 Series, Jetson AGX Xavier Series, Jetson Nano, Jetson TX1

‘Jetson’ refers to all NVIDIA Jetson devices.

To build with TensorRT:
- Go to the Deployment page in the Edge Impulse Studio.
- Select the ‘TensorRT library’, and the ‘float32’ optimizations.
- Build the library and copy the folders into this repository.
- Build your application with:
NVIDIA Jetson Orin
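The flag below is an assumption following the naming pattern of the other targets; verify it against the repository’s Makefile:

```shell
# TensorRT build for Jetson Orin devices
APP_CUSTOM=1 TARGET_JETSON_ORIN=1 make -j
```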
NVIDIA Jetson
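Again, the flag name is an assumption; verify it against the repository’s Makefile:

```shell
# TensorRT build for the older Jetson devices
APP_CUSTOM=1 TARGET_JETSON=1 make -j
```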
Building .eim files
To build Edge Impulse for Linux models (.eim files) that can be used by the Python, Node.js or Go SDKs, build with APP_EIM=1:
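A minimal invocation, assuming the same Makefile conventions as the other examples:

```shell
# Build the model as an Edge Impulse for Linux (.eim) binary
APP_EIM=1 make -j
```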
The model will be placed in build/model.eim and can be used directly by your application.