Impulses can be deployed as a C++ library. This packages all your signal processing blocks, configuration and learning blocks up into a single package. You can include this package in your own application to run the impulse locally. In this tutorial you'll export an impulse, and build an Mbed OS application to classify sensor data.
Knowledge required
This tutorial assumes that you're familiar with Mbed OS, and have installed Mbed CLI. If you're unfamiliar with these tools you can build binaries directly for your development board from the Deployment page in the studio.
Note: Are you looking for an example that has all sensors included? The Edge Impulse firmware for the ST IoT Discovery Kit has that. See edgeimpulse/firmware-st-b-l475e-iot01a.
Make sure you followed the Continuous motion recognition tutorial, and have a trained impulse. Also install the following software:
Mbed CLI - make sure mbed is in your PATH.
GNU ARM Embedded Toolchain 9 - make sure arm-none-eabi-gcc is in your PATH.
We created an example repository which contains a small Mbed OS application, which takes the raw features as an argument, and prints out the final classification. Import this repository using Mbed CLI:
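For example, with the example repository used in this tutorial:

```
mbed import https://github.com/edgeimpulse/example-standalone-inferencing-mbed
```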
Head over to your Edge Impulse project, and go to Deployment. From here you can create the full library which contains the impulse and all external required libraries. Select C++ library and click Build to create the library.
Your project will be downloaded as a .zip file. Extract your-project.zip and place the contents of your project in the 'example-standalone-inferencing-mbed' folder (which you downloaded above). Your final folder structure should look like this:
With the project ready it's time to verify that the application works. Head back to the studio and click on Live classification. Then load a validation sample, and click on a row under 'Detailed result'.
To verify that the local application classifies the same, we need the raw features for this timestamp. To do so click on the 'Copy to clipboard' button next to 'Raw features'. This will copy the raw values from this validation file, before any signal processing or inferencing happened.
Open main.cpp and paste the raw features inside the static const float features[] definition, for example:
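A sketch of what this looks like - the numbers below are placeholders, use the values you copied from the studio:

```cpp
static const float features[] = {
    -19.8800f, -0.6900f, 8.2300f, -17.6600f, -1.1300f, 5.9700f
    // ... paste the rest of the 'Raw features' list here
};
```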
Then build and flash the application to your development board with Mbed CLI:
To see the output of the impulse, connect to the development board over a serial port on baud rate 115,200 and reset the board (e.g. by pressing the black button on the ST B-L475E-IOT01A). You can do this with your favourite serial monitor or with the Edge Impulse CLI:
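For example, with the Edge Impulse CLI (the --raw flag makes the tool act as a plain serial terminal):

```
edge-impulse-run-impulse --raw
```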
This will run the signal processing pipeline, and then classify the output:
Which matches the values we just saw in the studio. You now have your impulse running on your Mbed-enabled development board!
A demonstration on how to plug sensor values into the classifier can be found here: Data forwarder - classifying data (Mbed OS).
The provided methods package all your signal processing blocks, configuration and learning blocks up into a single package. You can include this package in your own application to run the impulse locally.
Impulses can be deployed as a C++ library. The library does not have any external dependencies and can be built with any C++11 compiler, see Running your impulse as a C++ library.
We have end-to-end guides for:
We also have tutorials for:
Running your impulse on a Linux system with our C++, Node.js, Python or Go SDKs.
These tutorials show you how to run your impulse, but you'll need to hook in your sensor data yourself. We have a number of examples on how to do that in the Data forwarder documentation, or you can use the full firmware for any of the fully supported development boards as a starting point - they have everything (including sensor integration) already hooked up. Or keep reading for documentation about the sensor format and inputs that we expect.
Did you know?
You can build binaries for supported development boards straight from the studio. These will include your full impulse. See Edge Impulse Firmwares.
The input to the run_classifier function is always a signal_t structure with raw sensor values. This structure has two properties:
total_length - the total number of values. This should be equal to EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE (from model_metadata.h). E.g. if you have 3 sensor axes, 100Hz sensor data, and 2 seconds of data this should be 600.
get_data - a function that retrieves slices of data required by the DSP process. This is used in some DSP algorithms (like all audio-based ones) to page in the required data, and thus saves memory. Using this function you can store the raw data in flash or external RAM, for example, and page it in when required.
For example, this is how you would page in data from flash:
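A minimal sketch of such a callback - read_from_flash() and FEATURES_FLASH_ADDRESS are hypothetical platform-specific placeholders, not part of the SDK:

```cpp
// Page raw features in from flash on demand.
// read_from_flash() and FEATURES_FLASH_ADDRESS are platform-specific
// placeholders; replace them with your own flash-read routine.
static int get_feature_data(size_t offset, size_t length, float *out_ptr) {
    read_from_flash(FEATURES_FLASH_ADDRESS + (offset * sizeof(float)),
                    length * sizeof(float), out_ptr);
    return 0;
}

signal_t signal;
signal.total_length = EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE;
signal.get_data = &get_feature_data;
```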
If you have your data already in RAM you can use the signal_from_buffer function to construct the signal:
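For example:

```cpp
static float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE];

signal_t signal;
int err = numpy::signal_from_buffer(features,
    EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE, &signal);
```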
The get_data function expects floats to be returned, but you can use the int8_to_float and int16_to_float helper functions if your own buffers are int8_t or int16_t (useful to save memory). E.g.:
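For example, with an int16_t buffer:

```cpp
static int16_t features_i16[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE];

static int get_feature_data(size_t offset, size_t length, float *out_ptr) {
    // Convert the requested int16_t slice to floats on the fly
    return numpy::int16_to_float(features_i16 + offset, out_ptr, length);
}
```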
Signals are always a flat buffer, so if you have multiple sensor data you'll need to flatten it. E.g. for sensor data with three axes:
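For example (values are illustrative):

```cpp
// Interleave the axes per time step: x, y, z, x, y, z, ...
float features[] = {
    -9.81f, 0.03f, 1.21f,   // sample 0: x, y, z
    -9.83f, 0.04f, 1.27f,   // sample 1: x, y, z
    // ...
};
```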
The signal for image data is also flattened, starting with row 1, then row 2 etc. And every pixel is a single value in HEX format (RRGGBB). E.g.:
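For example (pixel values are illustrative):

```cpp
// One value per pixel, 0xRRGGBB, row by row:
float features[] = {
    0xff0000, 0x00ff00, 0x0000ff,  // first three pixels of row 1
    // ...
};
```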
We do have an end-to-end example on constructing a signal from a frame buffer in RGB565 format, which is easily adaptable to other image formats, see: example-signal-from-rgb565-frame-buffer.
If you're doing image classification and have a quantized model, the data is automatically quantized when reading the data from the signal to save memory. This is automatically enabled when you call run_impulse. To control the size of the buffer that's used to read from the signal in this case you can set the EI_DSP_IMAGE_BUFFER_STATIC_SIZE macro (which also allocates the buffer statically).
To statically allocate the neural network model, set this macro:
EI_CLASSIFIER_ALLOCATION_STATIC=1
Additionally we support full static allocation for quantized image models. To do so set this macro:
EI_DSP_IMAGE_BUFFER_STATIC_SIZE=1024
Static allocation is not supported for other DSP blocks at the moment.
Impulses can be deployed as a C++ library. This packages all your signal processing blocks, configuration and learning blocks up into a single package. You can include this package in your own application to run the impulse locally. In this tutorial you'll export an impulse, and build an application for the Himax WE-I Plus development board to classify sensor data.
Knowledge required
This tutorial assumes that you're familiar with building applications for the Himax WE-I Plus. If you're unfamiliar with this, you can build binaries directly for your development board from the Deployment page in the studio.
Note: Are you looking for an example that has all sensors included? The Edge Impulse firmware for the Himax WE-I Plus has that. See edgeimpulse/firmware-himax-we-i-plus.
Make sure you've followed one of the tutorials and have a trained impulse. Also install the following software:
Edge Impulse CLI - to flash the firmware.
A build toolchain, either:
Or, the GNU Toolchain for DesignWare ARC processors - make sure you have arc-elf32-gcc in your PATH (Linux only).
Or, the DesignWare ARC MetaWare Toolkit - including a valid license - and make sure you have ccac in your PATH.
If you're building with the GNU or DesignWare toolchains, also install:
We created an example repository which contains a small application for the Himax WE-I Plus, which takes the raw features as an argument, and prints out the final classification. Download the application here, or import this repository using Git:
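For example:

```
git clone https://github.com/edgeimpulse/example-standalone-inferencing-himax
```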
Head over to your Edge Impulse project, and go to Deployment. From here you can create the full library which contains the impulse and all external required libraries. Select C++ library and click Build to create the library.
Download the .zip file and extract the directories in the 'example-standalone-inferencing-himax' folder. Make sure not to replace CMakeLists.txt in this folder. Your final folder structure should look like this:
With the project ready it's time to verify that the application works. Head back to the studio and click on Live classification. Then load a validation sample, and click on a row under 'Detailed result'.
To verify that the local application classifies the same, we need the raw features for this timestamp. To do so click on the 'Copy to clipboard' button next to 'Raw features'. This will copy the raw values from this validation file, before any signal processing or inferencing happened.
Open main.cc and paste the raw features inside the static const float features[] definition, for example:
Then build and flash the application to your development board:
Build the container:
Then set up your build environment:
And build and link the application:
There are instructions in the README.md file on how to build with the Metaware toolkit under Docker.
Create a build directory and initialize CMake:
Build and link the application:
Create a build directory and initialize CMake:
Build and link the application:
You'll need the Edge Impulse CLI v1.10 or higher. Then flash the binary with:
To see the output of the impulse, connect to the development board over a serial port on baud rate 115,200 and reset the board. You can do this with your favourite serial monitor or with the Edge Impulse CLI:
This will run the signal processing pipeline, and then classify the output:
Which matches the values we just saw in the studio. You now have your impulse running on your Himax WE-I Plus development board!
Impulses can be deployed as a C++ library. This packages all your signal processing blocks, configuration and learning blocks up into a single package. You can include this package in your own application to run the impulse locally. In this tutorial you'll export an impulse, and build an application for Raspberry Pi Pico (RP2040) development board to classify sensor data.
Knowledge required
This tutorial assumes that you're familiar with building applications with the C/C++ Pico SDK for the Raspberry Pi Pico (RP2040). If you're unfamiliar with these tools, you can build binaries directly for your development board from the Deployment page in the studio.
Note: Are you looking for an example that has all sensors included? The Edge Impulse firmware for Raspberry Pi Pico (RP2040) has that. See edgeimpulse/firmware-pi-rp2040.
Make sure you've followed one of the tutorials and have a trained impulse. For the purpose of this tutorial, we’ll assume you trained a Continuous motion recognition model. Also install the following software:
The instructions below assume you are using a Debian-based Linux distribution. Alternative instructions for Microsoft Windows or Apple macOS users are provided in the Getting started with Pico guide (Sections 9.1 and 9.2).
To build the project, you will need the pico-sdk, CMake (a cross-platform tool used to build the software), and the GNU Embedded Toolchain for Arm. On a Debian-based OS, you can install the latter two via apt from the command line.
Note: Ubuntu and Debian users might also need to install libstdc++-arm-none-eabi-newlib.
You'll need the Pico SDK to compile the firmware. You can obtain it from https://github.com/raspberrypi/pico-sdk and then set the PICO_SDK_PATH environment variable to point to the Pico SDK location on your system. E.g.:
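For example (adjust the path to where you cloned the SDK):

```
export PICO_SDK_PATH="$HOME/pico-sdk"
```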
We created an example repository which contains a small application for Raspberry Pi Pico (RP2040), which takes the raw features as an argument, and prints out the final classification. Download the application as a .zip, or import this repository using Git:
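For example:

```
git clone https://github.com/edgeimpulse/example-standalone-inferencing-pico
```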
Head over to your Edge Impulse project, and go to the Deployment tab. From here you can create the full library which contains the impulse and all required libraries. Select C++ library and click Build to create the library.
Download the .zip file and extract the directories in the example-standalone-inferencing-pico folder. Your final folder structure should look like this:
With the project ready, it's time to verify that the application works. Head back to the studio and click on Live classification. Then load a validation sample, and click on a row under 'Detailed result'.
To verify that the local application classifies the same, we need the raw features for this timestamp. To do so click on the Copy to clipboard button next to 'Raw features'. This will copy the raw values from this validation file, before any signal processing or inferencing happens.
Open ei_main.cpp and paste the raw features inside the static const float features[] definition, for example:
Build the application by calling make in the build directory of the project:
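A typical invocation, assuming a fresh checkout:

```
mkdir build && cd build
cmake ..
make -j4
```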
The fastest method to load firmware onto an RP2040-based board for the first time is by mounting it as a USB Mass Storage Device. Doing this allows you to drag a file onto the board to program the flash. Connect the Raspberry Pi Pico to your computer using a micro-USB cable, making sure that you hold down the BOOTSEL button as you do so, to force it into USB Mass Storage Mode. Drag the ei_rp2040_firmware.uf2 file from the build folder to the newly appeared USB Mass Storage device.
To see the output of the impulse, connect to the development board over a serial port on baud rate 115200 and reset the board. You can do this with your favorite serial monitor or with the Edge Impulse CLI:
This will run the signal processing pipeline, and then classify the output, for example:
Which matches the values we just saw in the studio. You now have your impulse running on your Raspberry Pi Pico development board!
Impulses can be deployed as a C++ library. This packages all your signal processing blocks, configuration and learning blocks up into a single package. You can include this package in your own application to run the impulse locally. In this tutorial you'll export an impulse, and build a Zephyr RTOS application for the nRF52840 DK / nRF5340 DK / nRF9160DK / Thingy:91 development board to classify sensor data.
A working Zephyr RTOS build environment is required
This tutorial assumes that you're already familiar with building applications for the nRF52840DK or other Zephyr RTOS supported board, and that you have your environment set up to compile applications for this platform. For this tutorial, you can use the nRF Connect SDK v1.6.0 or higher.
Make sure you followed the Continuous motion recognition tutorial, and have a trained impulse. Also, make sure you have a working Zephyr build environment, including the following tools:
Either the nRF Connect SDK which includes Zephyr and all its dependencies (v1.6.0 or higher), or a manual installation of the Zephyr build environment.
Optional: The nRF command line tools and Segger J-Link tools. These command line tools are required if you use the west command line interface to upload firmware to your target board.
We created an example repository which contains a small application that complements the Continuous motion recognition tutorial. This application can take raw, hard-coded inputs as an argument, and print out the final classification to the serial port so it can be read from your development computer. You can either download the application or import the repository using Git:
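For example:

```
git clone https://github.com/edgeimpulse/example-standalone-inferencing-zephyr
```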
Fully featured open source repos are also available
If you are looking for sample projects showcasing all sensors and features supported by Edge Impulse out of the box, we have public firmware repos available for the Nordic Semiconductor nRF52840, nRF5340 and nRF9160 development kits as well as for the Thingy:91. See edgeimpulse/firmware-nrf52840-5340-dk, edgeimpulse/firmware-nrf-91 or edgeimpulse/firmware-nordic-thingy91.
Head over to your Edge Impulse project, and go to the Deployment page. From here you can obtain a packaged library containing the Edge Impulse C++ SDK, your impulse, and all required external dependencies. Select C++ library and click Build to create the library.
Download the .zip file and place the contents in the 'example-standalone-inferencing-zephyr' folder (which you downloaded above). Your final folder structure should look like this:
With the project ready it's time to verify that the application works. Head back to the studio and click on Live classification in the project you created for the continuous motion recognition tutorial, then load a testing sample, and click on a row under 'Detailed result'.
To verify that the Zephyr application performs the same classification when running locally on your board, we need to use the same raw inputs as those provided to the Live classification for any given timestamp. To do so, click on the 'Copy to clipboard' button next to 'Raw features'. This will copy the raw values from this validation file, before any signal processing or inferencing happened.
Next, open src/main.cpp in the example directory and paste the raw features inside the static const float features[] definition. For example:
And use west or your usual method to build the application:
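For example, for the nRF52840 DK (swap the board name for your target):

```
west build -b nrf52840dk_nrf52840
```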
Invalid choice: 'build'
If you try to build the application but it throws an 'invalid choice' error like:
You'll need to set up your environment variables correctly (more info). You can do so by opening a command prompt or terminal window and running the commands below from the zephyr parent directory:
On Windows
On macOS / Linux
If you have set up the Segger J-LINK tools, you can also flash this application with:
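```
west flash
```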
otherwise, if your board shows up as a mass storage device, you can find the build/zephyr/zephyr.bin file and drag it to the JLINK USB mass-storage device in the same way you do with a USB flash drive.
For the nRF9160DK, you also have to make sure the board controller has been flashed at least once.
To see the output of the impulse, connect to the development board over a serial port on baud rate 115,200 and reset the board. You can do this with your favourite serial monitor or with the Edge Impulse CLI:
This will show you the output of the signal processing pipeline and the results of the classification:
The output should match the values you just saw in the studio. If it does, you now have your impulse running on your Zephyr development board!
Connecting live sensors?
Now that you have verified that the impulse works with hard-coded inputs, you should be ready to plug live sensors from your board. A demonstration on how to plug sensor values into the classifier can be found here: Data forwarder - classifying data (Zephyr).
Impulses can be deployed as a C++ library. This packages all your signal processing blocks, configuration and learning blocks up into a single package. You can include this package in your own application to run the impulse locally. In this tutorial you'll export an impulse, and build a desktop application to classify sensor data.
Even though this is a C++ library, you can link to it from C applications. See 'Using the library from C' below.
Knowledge required
This tutorial assumes that you know how to build C++ applications, and works on macOS, Linux and Windows. If you're unfamiliar with these tools you can build binaries directly for your development board from the Deployment page in the studio.
Note: This tutorial provides the instructions necessary to build the C++ SDK library locally on your desktop. If you would like a full explanation of the Makefile and how to use the library, please see the deploy your model as a C++ library tutorial.
Looking for examples that integrate with sensors? See the Edge Impulse C++ SDK for Linux.
Make sure you followed the Continuous motion recognition tutorial, and have a trained impulse. Also install the following software:
macOS, Linux
GNU Make - to build the application. make should be in your PATH.
A modern C++ compiler. The default LLVM version on macOS works, but on Linux upgrade to LLVM 9 (installation instructions).
Windows
MinGW-W64, which includes both GNU Make and a compiler. Make sure mingw32-make is in your PATH.
We created an example repository which contains a Makefile and a small CLI example application, which takes the raw features as an argument, and prints out the final classification. Clone or download this repository at example-standalone-inferencing.
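For example:

```
git clone https://github.com/edgeimpulse/example-standalone-inferencing
```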
Head over to your Edge Impulse project, and go to Deployment. From here you can create the full library which contains the impulse and all external required libraries. Select C++ library, and click Build to create the library.
Download the .zip file and place the contents in the 'example-standalone-inferencing' folder (which you downloaded above). Your final folder structure should look like this:
To get inference to work, we need to add raw data from one of our samples to main.cpp. Head back to the studio and click on Live classification. Then load a validation sample, and click on a row under 'Detailed result'. Make a note of the classification results, as we want our local application to produce the same numbers from inference.
To verify that the local application classifies the same, we need the raw features for this timestamp. To do so click on the 'Copy to clipboard' button next to 'Raw features'. This will copy the raw values from this validation file, before any signal processing or inferencing happened.
Open source/main.cpp in an editor of your choice. Find the following line:
Paste in your raw sample data where you see // Copy raw features here:
Note: the raw features will likely be longer than what I listed here (the ... won't compile--I just wanted to demonstrate where the features would go).
In a real application, you would want to make the features[] buffer non-const. You would fill it with samples from your sensor(s) and call run_classifier() or run_classifier_continuous(). See the deploy your model as a C++ library tutorial for more information.
Save and exit.
Open a terminal or command prompt, and build the project:
macOS, Linux
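```
make -j4    # -j runs parallel build jobs; adjust the count to your machine
```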
Windows
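```
mingw32-make -j4
```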
This will first build the inferencing engine, and then build the complete application. After the build succeeds, you should have a binary in the build/ directory.
Then invoke the local application by calling the binary name:
macOS, Linux
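```
./build/app    # binary name assumes the example repository's Makefile
```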
Windows
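(again assuming the Makefile produces app.exe)

```
build\app.exe
```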
This will run the signal processing pipeline using the values you provided in the features[] buffer and then give you the classification output:
Which matches the values we just saw in the studio. You now have your impulse running locally!
Even though the impulse is deployed as a C++ library, you can link to it from C applications. This is done by compiling the impulse as a shared library with the EIDSP_SIGNAL_C_FN_POINTER=1 and EI_C_LINKAGE=1 macros, then linking to it from a C application. The run_classifier function can then be invoked from your application. An end-to-end application that demonstrates this and can be used with this tutorial is available at example-standalone-inferencing-c.
Impulses can be deployed as a C++ library. This packages all your signal processing blocks, configuration and learning blocks up into a single package. You can include this package in your own application to run the impulse locally. In this tutorial you'll export an impulse, and build an application for Espressif ESP-EYE (ESP32) development board to classify sensor data using ESP IDF development framework.
Knowledge required
This tutorial assumes that you're familiar with building applications with the ESP-IDF development framework for the ESP-EYE (ESP32). If you're unfamiliar with ESP-IDF, you can download a ready-to-flash binary compatible with the ESP-EYE or download the generated Arduino library directly from the Deployment page in the studio.
Note: Are you looking for an example that has sensors included? The Edge Impulse firmware for the Espressif ESP-EYE (ESP32) has that. See edgeimpulse/firmware-espressif-esp32.
Make sure you've followed one of the tutorials and have a trained impulse. For the purpose of this tutorial, we'll assume you trained a Continuous motion recognition model. Also install the following software:
We created an example repository which contains a small application for Espressif ESP32, which takes the raw features as an argument, and prints out the final classification. Download the application as a .zip, or import this repository using Git:
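For example:

```
git clone https://github.com/edgeimpulse/example-standalone-inferencing-espressif-esp32
```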
Head over to your Edge Impulse project, and go to the Deployment tab. From here you can create the full library which contains the impulse and all required libraries. Select C++ library and click Build to create the library.
Download the .zip file with the deployed C++ library from your Edge Impulse project, unzip it, and copy only the folders into the root directory of the example-standalone-inferencing-espressif-esp32 repository. Your final folder structure should look like this:
With the project ready, it's time to verify that the application works. Head back to the studio and click on Live classification. Then load a validation sample, and click on a row under 'Detailed result'.
In your case, since you might pick a different sample, the values and classification results might be different from the screenshot above. The important thing is that the classification result in Studio matches the one from the device - which we will check a bit later.
To verify that the local application classifies the same, we need the raw features for this timestamp. To do so click on the Copy to clipboard button next to 'Raw features'. This will copy the raw values from this validation file, before any signal processing or inferencing happens.
Open ei_main.cpp and paste the raw features inside the static const float features[] definition, for example:
Build the application with ESP IDF in the project directory:
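```
idf.py build
```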
To flash the project, in the project directory execute:
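For example (replace the port with the one your board enumerates as):

```
idf.py -p /dev/ttyUSB0 flash
```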
To see the output of the impulse, connect to the development board over a serial port on baud rate 115200 and reset the board. You can do this with your favorite serial monitor or with the Edge Impulse CLI:
This will run the signal processing pipeline, and then classify the output, for example:
Which matches the values you just saw in the studio. You now have your impulse running on your Espressif ESP32 development board.
Impulses can be deployed as a C++ library. This packages all your signal processing blocks, configuration and learning blocks up into a single package. You can include this package in your own application to run the impulse locally. In this tutorial you'll export an impulse, and build an application for Sony's Spresense development board to classify sensor data.
Knowledge required
This tutorial assumes that you're familiar with building applications for Sony's Spresense. If you're unfamiliar with this, you can build binaries directly for your development board from the Deployment page in the studio.
Note: Are you looking for an example that has all sensors included? The Edge Impulse firmware for Sony's Spresense has that; see the edgeimpulse/firmware-sony-spresense repository.
Make sure you've followed one of the tutorials and have a trained impulse. Also install the following software:
GNU ARM Embedded Toolchain - make sure arm-none-eabi-gcc is in your PATH.
We created an example repository which contains a small application for Sony's Spresense, which takes the raw features as an argument, and prints out the final classification. Download the application as a .zip, or import this repository using Git:
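For example:

```
git clone https://github.com/edgeimpulse/example-standalone-inferencing-spresense
# add --recursive if the repository pulls in submodules
```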
Head over to your Edge Impulse project, and go to the Deployment tab. From here you can create the full library which contains the impulse and all required libraries. Select C++ library and click Build to create the library.
Download the .zip file and extract the directories in the example-standalone-inferencing-spresense/edge_impulse/ folder. Your final folder structure should look like this:
With the project ready it's time to verify that the application works. Head back to the studio and click on Live classification. Then load a validation sample, and click on a row under 'Detailed result'.
To verify that the local application classifies the same, we need the raw features for this timestamp. To do so click on the Copy to clipboard button next to 'Raw features'. This will copy the raw values from this validation file, before any signal processing or inferencing happened.
Open ei_main.cpp and paste the raw features inside the static const float features[] definition, for example:
Then build and flash the application to your development board:
Building with make: build the application by calling make in the root directory of the project:
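```
make -j4    # -j runs parallel build jobs; adjust the count to your machine
```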
Connect the board to your computer using USB.
Flash the board:
Build the Docker image:
Build the application by running the container as follows:
Windows
Linux, macOS
Connect the board to your computer using USB.
Flash the board:
Or, if you don't have make installed:
To see the output of the impulse, connect to the development board over a serial port on baud rate 115200 and reset the board. You can do this with your favourite serial monitor or with the Edge Impulse CLI:
This will run the signal processing pipeline, and then classify the output, for example:
Which matches the values we just saw in the studio. You now have your impulse running on your Spresense development board!
Impulses can be deployed as a C++ library. This packages all your signal processing blocks, configuration and learning blocks up into a single package. You can include this package in your own application to run the impulse locally. In this tutorial you'll export an impulse, and build an impulse using the Texas Instruments SimpleLink SDK for the CC1352P LaunchPad and Sensors BoosterPack.
Knowledge required
This tutorial assumes that you're familiar with building applications using the Texas Instruments SimpleLink SDK as well as ARM GCC toolchains. You will also need make set up in your environment. If you're unfamiliar with these tools you can build binaries directly for your development board from the Deployment page in the studio.
Make sure you followed one of the tutorials, and have a trained impulse.
Clone the repository to your working directory.
Install UniFlash:
Install the desktop version for your operating system.
Add the installation directory to your PATH.
See the UniFlash documentation for more details.
Head over to your Edge Impulse project, and go to Deployment. From here you can create the full library which contains the impulse and all external required libraries. Select C/C++ Library and click Build to create the library. Then download and extract the .zip file.
To add the impulse to your firmware project, paste the edge-impulse-sdk/, model-parameters and tflite-model directories from the downloaded '.zip' file into the edge_impulse/ directory of the repository. Make sure to overwrite any existing files in the edge_impulse/ directory.
This standalone example project contains minimal code required to run the imported impulse within the SimpleLink SDK. This code is located in ei_main.cpp. In this minimal code example, inference is run from a static buffer of input feature data. To verify that our embedded model achieves the exact same results as the model trained in Studio, we want to copy the same input features from Studio into the static buffer in ei_main.cpp.
To do this, first head back to the studio and click on the Live classification tab. Then load a validation sample, and click on a row under 'Detailed result'.
To verify that the local application classifies the same result, we need the raw features for this timestamp. To do so click on the 'Copy to clipboard' button next to 'Raw features'. This will copy the raw input values from this validation file, before any signal processing or inferencing happened.
In ei_main.cpp, paste the raw features inside the static const float features[] definition, for example:
Once built, the project will repeatedly run inference on this buffer of raw features. This shows that the inference result is identical to the Live classification tab in Studio. From this starting point, the example project is fully compatible with existing SimpleLink SDK plugins, drivers or custom firmware. Use new sensor data collected in real time on the device to fill a buffer. From there, follow the same code used in ei_main.cpp to run classification on live data.
There are two ways to build the project. The first uses the included Docker environment, pre-configured with the correct SimpleLink SDK version and ARM GCC toolchain. The other option is to build the project locally. This will require installing dependencies and making minor modifications to the makefile.
Run the Docker Desktop executable, or start the docker daemon from a terminal as shown below:
Build the application by running the container as follows:
Windows
Linux, macOS
Connect the board to your computer using USB.
If you are building locally, you will first need to install the following dependencies. This guide assumes they are installed into the same working directory as the cloned standalone example repository.
Next you will need to open the gcc/makefile file in the standalone example repository, and define custom paths to your installed dependencies.
Remove the SIMPLELINK_CC13X2_26X2_SDK_INSTALL_DIR definition on line 2 of the makefile, and add the following definitions at the top of the makefile:
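A sketch of what those definitions can look like - the paths are examples, and GCC_ARMCOMPILER is an illustrative variable name, so use the names your makefile actually expects:

```make
# Example paths; point these at your actual install locations
SIMPLELINK_CC13X2_26X2_SDK_INSTALL_DIR ?= $(HOME)/ti/simplelink_cc13x2_26x2_sdk_5_20_00_52
GCC_ARMCOMPILER ?= $(HOME)/gcc-arm-none-eabi-9-2019-q4-major
```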
If you installed the dependencies to another directory, modify the paths as needed.
Now you should be ready to build. From the gcc/ folder of the standalone firmware repo, run:
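```
make
```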
If the UniFlash CLI is added to your PATH, run:
If the UniFlash CLI is not added to your PATH, the install scripts will fail. To fix this, add the installation directory of UniFlash (example /Applications/ti/uniflash_6.4.0 on macOS) to your PATH.
If during flashing you encounter issues after UniFlash is added to PATH, ensure:
The device is properly connected and/or the cable is not damaged.
You have the proper permissions to access the USB device and run scripts. On macOS you can manually approve blocked scripts by clicking the System Preferences->Security Settings->Unlock Icon (Bottom Left) and then approving the blocked script.
If on Linux, you may want to try copying tools/71-ti-permissions.rules to /etc/udev/rules.d/. Then re-attach the USB cable and try again.
Impulses can be deployed as an optimized Syntiant NDP 101/120 library. This packages all your signal processing blocks, configuration and learning blocks up into a single package. You can include this package in your own application to run the impulse locally. In this tutorial you'll export an impulse, and run the application on the Syntiant TinyML Board or Arduino Nicla Voice to control GPIO pins when the keyword 'go' or 'stop' is uttered, or if a circular motion is detected.
Download/clone the firmware source code for your hardware:
Make sure you followed one of the tutorials, have a trained impulse, and can load code on your board.
Naming your classes
The NDP chip expects one and only one negative class, and it should be the last in the list. For instance, if your original dataset looks like: yes, no, unknown, noise and you only want to detect the keywords 'yes' and 'no', merge the 'unknown' and 'noise' labels into a single class such as z_openset (we prefix it with 'z' in order to get this class last in the list).
Go to the Deployment page of your project and select the Syntiant library option for either NDP101 (Syntiant TinyML) or NDP120 (Arduino Nicla Voice):
Unzip the archive and copy the model-parameters content into the src/model-parameters/ folder of the firmware source code.
The export also creates an ei_model .synpkg or .bin file that we will use later to flash the board.
You can add your custom logic to the main Arduino sketch by customizing the on_classification_changed() function. By default this function contains the following code to activate LEDs based on "stop" and "go" classes:
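A hedged sketch of what such a customization can look like - the exact signature and LED helpers come from the firmware source, so adapt the names to your version:

```cpp
// Hypothetical sketch: adapt the signature and LED calls to the firmware.
void on_classification_changed(const char *event, float confidence) {
    if (strcmp(event, "go") == 0) {
        // e.g. turn the green LED on and the red LED off
    }
    else if (strcmp(event, "stop") == 0) {
        // e.g. turn the red LED on and the green LED off
    }
}
```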
Open the src/ei_syntiant_ndp120.cpp file and look at the match_event() function. We can customize the code as follows to activate LEDs based on "stop" and "go" classes:
You will also need to disable the default LED activation in the ei_main() function:
Once you've added your own logic, to compile and flash the firmware run:
Windows
update_libraries_windows.bat (Syntiant TinyML only)
arduino-win-build.bat --build for audio support (add the --with-imu flag for IMU support)
arduino-win-build.bat --flash
Linux and Mac
./arduino-build.sh --build for audio support (add the --with-imu flag for IMU support)
./arduino-build.sh --flash
Once you've compiled the Arduino firmware:
Take the .bin file output by Arduino and rename it to firmware.ino.bin.
Replace the ei_model*.bin file in our default firmware with the one from the Syntiant library.
Launch the script for your OS to flash the board.
Once you've compiled the Arduino firmware:
Take the .elf file output by Arduino and rename it to firmware.ino.elf.
Replace the ei_model.synpkg file in our default firmware with the one from the Syntiant library.
Launch the script for your OS to flash the board.
Great work! You've captured data, trained a model, and deployed it to your board. You can now control LEDs, activate actuators, or send a message to the cloud whenever you say a keyword or detect some motion!
Impulses can be deployed as a C++ library. This packages all your signal processing blocks, configuration and learning blocks up into a single package. You can include this package in your own application to run the impulse locally. In this tutorial you'll export an impulse, and build an application in Simplicity Studio to classify sensor data on the development board.
Knowledge required
This tutorial assumes that you're familiar with compiling applications with Simplicity Studio. If you're unfamiliar with this you can build binaries directly for your development board from the Deployment page in the studio.
Note: Are you looking for an example that has all sensors included? The Edge Impulse firmware for the SiLabs Thunderboard Sense 2 has that.
Make sure you followed the Continuous motion recognition tutorial, and have a trained impulse. Also install the following software:
Simplicity Studio 5.
Python 3.6.8 or higher.
Java 64-bit JVM 11 or higher.
Alternatively you can build this application from the command line or through Docker; see the build instructions in the example repository.
We created an example repository which contains a small Simplicity Studio application, which takes the raw features as an argument, and prints out the final classification. You can either download the application or import this repository using Git:
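For example:

```
git clone https://github.com/edgeimpulse/example-standalone-inferencing-silabs-tb-sense-2
```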
Head over to your Edge Impulse project, and go to Deployment. From here you can create the full library which contains the impulse and all external required libraries. Select C++ library and click Build to create the library.
Download the .zip file and place the contents in the 'example-standalone-inferencing-silabs-tb-sense-2/ei-workspace/edgeimpulse' folder (which you downloaded above).
With the model downloaded you can import the project into Simplicity Studio.
Generate the Simplicity Studio project by opening a command prompt or terminal, navigating to the 'example-standalone-inferencing-silabs-tb-sense-2' folder and running:
Open Simplicity IDE and install the Gecko SDK 3.2.x.
Create a new project via File > New > Silicon Labs Project Wizard...
In the New Project Wizard select Simplicity Studio > Silicon Labs MCU Project and click Next
Under 'board' select Thunderboard Sense 2.
Select the correct SDK you installed in step 1 and click Next.
Select Empty C++ Program and click Next.
Name the project example-standalone-inferencing-silabs-tb-sense-2 (exactly this) and make sure Copy contents is selected before clicking Finish.
Under 'Project Explorer' select all files, except for Includes and delete them:
Then, navigate to the example-standalone-inferencing-silabs-tb-sense-2/ei-workspace folder (in this repository), and drag all files and folders into the 'Project explorer' window in Simplicity Studio. When prompted select Copy files and folders for this operation.
Then close, and reopen the project via: Project > Close Project, then Project > Open Project.
Double-click on example-standalone-inferencing-silabs-tb-sense-2.slcp to show the Simplicity Configurator.
Edit 'Project Generators' and disable 'IAR EMBEDDED WORKBENCH PROJECT' (if it's listed):
Click Force Generation to regenerate all links and include paths.
The project is now imported!
With the project ready it's time to verify that the application works. Head back to the studio and click on Live classification. Then load a validation sample, and click on a row under 'Detailed result'.
To verify that the local application classifies the same, we need the raw features for this timestamp. To do so click on the 'Copy to clipboard' button next to 'Raw features'. This will copy the raw values from this validation file, before any signal processing or inferencing happened.
In the example directory open main.cpp and paste the raw features inside the static const float features[] definition, for example:
With everything set up, you can now build the application using Simplicity Studio, the command line, or with Docker.
In Simplicity Studio v5, select Project > Build Project to build the firmware.
Then, right click on the development board in the Debug adapters section of Simplicity Studio and select Upload application.
Under Application image path select the GNU ARM v10.2.1 - Default/example-standalone-inferencing-silabs-tb-sense-2.bin file and click OK to flash.
(Alternatively, you can drag and drop the GNU ARM v10.2.1 - Default/example-standalone-inferencing-silabs-tb-sense-2.bin file onto the TB004 mass-storage device to flash the binary.)
To see the output of the impulse, connect to the development board over a serial port on baud rate 115,200 and reset the board. You can do this with your favourite serial monitor or with the Edge Impulse CLI:
This will run the signal processing pipeline, and then classify the output:
Which matches the values we just saw in the studio. You now have your impulse running on your Thunderboard Sense 2 development board!
Impulses can be deployed as a C++ library. This packages all your signal processing blocks, configuration and optimized learning blocks up into a single package. You can include this package in your own application to run the impulse locally. In this tutorial you'll export an impulse, and build it into a custom application using either GCC or ARMCLANG for your Ensemble device.
Knowledge required
This tutorial assumes that you're familiar with building applications using Alif development tools and drivers, as well as Makefile based projects. You will need make set up in your environment. If you're unfamiliar with these tools you can build binaries directly for your development board from the Deployment page in the studio.
Make sure you followed one of the listed tutorials, and have a trained impulse.
Clone the repository to your working directory.
Head over to your Edge Impulse project, and go to Deployment. From here you can create the full library which contains the impulse and all external required libraries. Select Ethos u55 Library and click Build to create the library. Then download and extract the .zip file.
To add the impulse to your firmware project, paste the edge-impulse-sdk/, model-parameters and tflite-model directories from the downloaded '.zip' file into the source/ directory of the repository. Make sure to overwrite any existing files in the source/ directory.
This standalone example project contains minimal code required to run the imported impulse on the E7 device. This code is located in ei_main.cpp. In this minimal code example, inference is run from a static buffer of input feature data. To verify that our embedded model achieves the exact same results as the model trained in Studio, we want to copy the same input features from Studio into the static buffer in ei_main.cpp.
To do this, first head back to the studio and click on the Live classification tab. Then load a validation sample, and click on a row under 'Detailed result'.
To verify that the local application classifies the same result, we need the raw features for this timestamp. To do so click on the 'Copy to clipboard' button next to 'Raw features'. This will copy the raw input values from this validation file, before any signal processing or inferencing happened.
In ei_main.cpp, paste the raw features inside the static const float features[] definition, for example:
Once built, the project will repeatedly run inference on this buffer of raw features. This shows that the inference result is identical to the Live classification tab in Studio. From this starting point, the example project is fully compatible with existing device drivers or custom firmware. Use new sensor data collected in real time on the device to fill a buffer. From there, follow the same code used in ei_main.cpp to run classification on live data.
There are three ways to build the project. The first uses the included Docker environment, pre-configured with the ARM GCC toolchain. The other options are to build the project locally with either GCC or ARMCLANG.
When building projects for the Ensemble E7 kit, you have the option to deploy to the 'high efficiency' or 'high performance' cores. For all build options, the core is selected via the -DTARGET_SUBSYSTEM parameter when building. The commands below all default to the high performance core, but you can easily switch cores by swapping any -DTARGET_SUBSYSTEM=HP parameter to -DTARGET_SUBSYSTEM=HE.
Build the application by running the following command inside the container:
Windows
Linux, macOS
The compiled app.axf will now be available in the build/bin directory.
With the ARMCLANG compiler set up, you can build the project via:
With the GCC set up, you can build the project via:
Grab the app.axf from the build/bin directory, and note whether you built the application for the high performance or high efficiency core.
Connect your flash programmer to your debugger of choice, and configure it to select the appropriate core. Flash and run app.axf.
To see the output of the impulse over UART2, connect to the development board over a serial port on baud rate 115,200 and reset the board. You can do this with your favourite serial monitor or with the Edge Impulse CLI:
This will run the signal processing pipeline, and then classify the output:
Timing calculations are performed in ei_classifier_porting.cpp and make use of an interrupt attached to SysTick.
An RTOS may take over this interrupt handler, in which case you should re-implement ei_read_timer_us and ei_read_timer_ms.
The default calculation is based on the default clock rates of the Alif dev kit (400 MHz for the HP core, 160 MHz for the HE core). If you change this, redefine EI_CORE_CLOCK_HZ.
Alif M55 processors have a private fast DTCM, and also access to a larger, but slower, chip global SRAM.
For armclang, the linker file attempts to place as much as possible in DTCM, and overflows into SRAM if needed.
With a debugger attached, the device may boot directly into Bus_Fault (or possibly another fault). This can especially happen if the device entered a Hard Fault before your last reset.
Power cycle your board and reload your program.
If you are building with Docker, you will need to have Docker installed.
From the repository directory, build the Docker image:
SimpleLink CC13x2/CC26x2 SDK, version simplelink_cc13x2_26x2_sdk_5.20.00.52
GNU ARM Embedded Toolchain, version 9-2019-q4-major
Alternatively, the gcc/build/edge-impulse-standalone.out binary file may be flashed to the LaunchPad using the UniFlash GUI or web-app. See the UniFlash documentation for more info.
Replace the firmware.ino.bin file from our default firmware.
Replace the firmware.ino.elf file from our default firmware.
If you are building with Docker, you will need to have Docker installed.
From the repository directory, build the Docker image:
If you see errors when building, read through the troubleshooting section below.
Connect the board to your computer. Refer back to the earlier instructions for how to do this.
If you are developing your application in an IDE that ships with ARMCLANG, you may have an ARMCLANG license and wish to develop in that environment. To build this makefile project with ARMCLANG, first make sure you have followed the vendor's steps to enable and authenticate your compiler.
The compiled app.axf will now be available in the build/bin directory, and you can flash and run it as described above.
If you see errors when building, first check that your ARMCLANG compiler is properly set up and authenticated, and then read through the section below.
To build locally with GCC, first download the GNU ARM Embedded Toolchain, version 10.2 (2020-q4) or later. Follow the installation instructions and make sure this is the primary arm-gcc compiler in your PATH.
If you see errors when building, first check that the ARM GCC compiler is correctly added to your PATH, and then read through the troubleshooting section below.
The compiled app.axf will now be available in build/bin, and you can flash and run it as described above.
For other programming or debugging tools, see the Alif instructions.
For your IDE or debugger, create a new project with the following device settings. Make sure to choose the correct core based on your build settings:
Alternatively, Alif provides a Secure Enclave to manage secure firmware storage and bootup in production environments. Alif provides documentation on converting .axf files for use with their secure enclave, and then programming the resulting binary regions to the secure enclave.
For gcc, the linker is unable to auto-place based on size. If you get an error during link, see the linker file and un-comment the line that places the model in SRAM (instead of DTCM). This will only slow down DSP, as the U55 has to use the SRAM bus to access the model regardless of placement.
When your entire program can't fit into DTCM, sometimes customizing placement of objects can improve performance. See the linker file for example placement commands.
While Edge Impulse supports a number of boards that make gathering data and deploying your model easy, we know that many people will want to run edge machine learning on their own board. This tutorial will show you how to export an impulse and how to include it as a C++ library.
An impulse is a combination of any preprocessing code necessary to extract features from your raw data along with inference using your trained machine learning model. You provide the library with raw data from your sensor, and it will return the output of your model. It performs feature extraction and inference, just as you configured in the Edge Impulse Studio!
We recommend working through the steps in this guide to see how to run an impulse on a full operating system (macOS, Linux, or Windows) first. Once you understand how to include the impulse as a C++ library, you can port it to any build system or integrated development environment (IDE) you wish.
Knowledge required
This guide assumes you have some familiarity with C and the GNU Make build system. We will demonstrate how to run an impulse (e.g. inference) on Linux, macOS, or Windows using a C program and Make. We want to give you a starting point for porting the C++ library to your own build system.
This API Reference details the available macros, structs, variables, and functions for the C++ SDK library.
A working demonstration of this project can be found here.
You will need a C compiler, a C++ compiler, and Make installed on your computer.
Install gcc, g++, and GNU Make. If you are using a Debian-based system, this can be done with:
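One way to do this (build-essential bundles gcc, g++, and Make):

```
sudo apt update
sudo apt install build-essential
```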
Install LLVM and GNU Make. If you are using Homebrew, you can run the following commands:
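```
brew install llvm
brew install make
```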
Install MinGW-w64, which comes with GNU Make and the necessary compilers. You will need to add the mingw64\bin folder to your Path.
You are welcome to download a C++ library from your own project, but you can also follow along using this public project. If you use the public project, you will need to click Clone this project in the upper-right corner to clone the project to your own account.
Head to the Deployment page for your project. Select C++ library. Scroll down, and click Build. Note that you must have a fully trained model in order to download any of the deployment options.
Your impulse will download as a C++ library in a .zip file.
The easiest way to test the impulse library is to use raw features from one of your test set samples. When you run your program, it should print out the class probabilities that match those of the test sample in the Studio.
Create a directory to hold your project (e.g. my-motion). Unzip the C++ library file into the project directory. Your directory structure should look like the following:
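Assuming a project directory named my-motion, the unzipped library gives roughly:

```
my-motion/
├── CMakeLists.txt
├── edge-impulse-sdk/
├── model-parameters/
└── tflite-model/
```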
Note: You can write in C or C++ for your main application. Because portions of the impulse library are written in C++, you must use a C++ compiler for your main application (see this FAQ for more information). A more advanced option would be to use bindings for your language of choice (e.g. calling C++ functions from Python). We will stick to C for this demonstration. We highly recommend keeping your main file as a .cpp or .cc file so that it will compile as C++ code.
The CMakeLists.txt file is used as part of the CMake build system generation process. We won’t use CMake in this demonstration, but see here for such an example.
edge-impulse-sdk/ contains the full software development kit (SDK) required to run your impulse along with various optimizations (e.g. ARM’s CMSIS) for supported platforms. edge-impulse-sdk/classifier/ei_run_classifier.h contains the important public functions that you will want to call. Of the functions listed in that file, you will likely only need a few:
run_classifier() - Basic inference: we pass it the raw features and it returns the classification results.
run_classifier_init() - Initializes necessary static variables prior to running continuous inference. You must call this function prior to calling run_classifier_continuous().
run_classifier_continuous() - Retains a sliding window of features so that inference may be performed on a continuous stream of data. We will not explore this option in this tutorial.
Both run_classifier() and run_classifier_continuous() expect raw data to be passed in through a signal_t struct. The definition of signal_t can be found in edge-impulse-sdk/dsp/numpy_types.h. This struct has two properties:
total_length - total number of values, which should be equal to EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE (from model-parameters/model_metadata.h). For example, if you have an accelerometer with 3 axes sampling at 100 Hz for 2 seconds, total_length would be 600.
get_data - a callback function that retrieves slices of data as required by the preprocessing (DSP) step. Some DSP algorithms (e.g. computing MFCCs for keyword spotting) page raw features in one slice at a time to save memory. This function allows you to store the raw data in other locations (e.g. internal RAM, external RAM, flash) and page it in when required. We will show how to configure this callback function later in the tutorial.
If you already have your data in RAM, you can use the C++ function numpy::signal_from_buffer() (found in edge-impulse-sdk/dsp/numpy.h) to construct the signal_t for you.
model-parameters/ contains the settings for preprocessing your data (in dsp_blocks.h) and for running the trained machine learning model. In that directory, model_metadata.h defines the many settings needed by the impulse. In particular, you’ll probably care about the following:
EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE - Number of raw elements in the array expected by the pre-processor input
EI_CLASSIFIER_FREQUENCY - Sampling frequency of the sensor
EI_CLASSIFIER_LABEL_COUNT - Number of classifier labels
model_variables.h holds some additional information about the model and preprocessing steps. Most importantly, you might want ei_classifier_inferencing_categories[] if you need the labels for your categories in string form.
tflite-model/ contains the actual trained model stored in an array. You should not need to access any of the variables or functions in these files, as inference is handled by the impulse library.
Raw data being passed to run_classifier() or run_classifier_continuous() is known as a "signal" and is passed in through a signal_t struct. Signals are always a flat buffer, so you must flatten any sensor data to a 1-dimensional array.
Time-series data with multiple axes are flattened so that the value from each axis is listed from each time step before moving on to the next time step. For example, here is how sensor data with 3 axes would be flattened:
Image data is flattened by listing row 1, row 2, etc. Each pixel is given in HEX format (0xRRGGBB). For example:
It's possible to convert other image formats into this expected signal format. See here for an example that converts RGB565 into a flat signal buffer.
By default, the trained model resides mostly in ROM and is only pulled into RAM as needed. You can force a static allocation of the model by defining:
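```cpp
// EI_CLASSIFIER_ALLOCATION_STATIC is the macro used in recent SDK versions; define it
// before including the SDK header (or pass -DEI_CLASSIFIER_ALLOCATION_STATIC to the compiler)
#define EI_CLASSIFIER_ALLOCATION_STATIC 1
#include "edge-impulse-sdk/classifier/ei_run_classifier.h"
```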
If you are doing image classification with a quantized model, the data is automatically quantized when read from the signal. This is automatically enabled when you call run_classifier(). If you want to adjust the size of the buffer that is used to read from the signal in this case, you can set EI_DSP_IMAGE_BUFFER_STATIC_SIZE, which also allocates the buffer statically. For example, you might set:
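```cpp
// Size of the static buffer used to page image data in from the signal
// (128 is just an example value)
#define EI_DSP_IMAGE_BUFFER_STATIC_SIZE 128
```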
Open main.cpp in your editor of choice. Paste in the following code:
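```cpp
// A minimal sketch reconstructed from the walkthrough below; the main.cpp in the
// example repository may differ slightly. The callback name get_signal_data is
// the one we'll use throughout, but it is our choice, not an SDK requirement.
#include <stdio.h>
#include <string.h>

#include "edge-impulse-sdk/classifier/ei_run_classifier.h"

// Raw features copied from a test sample in the Studio (see below).
// Replace the placeholder with EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE values.
static const float input_buf[] = {
    0.0f /* paste raw features here */
};

// Callback: fill the provided buffer with a slice of our input buffer
static int get_signal_data(size_t offset, size_t length, float *out_ptr) {
    memcpy(out_ptr, input_buf + offset, length * sizeof(float));
    return EIDSP_OK;
}

int main() {
    signal_t signal;             // wrapper for the raw input buffer
    ei_impulse_result_t result;  // inference results end up here

    signal.total_length = EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE;
    signal.get_data = &get_signal_data;

    // Run preprocessing (DSP) and inference; debug output disabled
    EI_IMPULSE_ERROR res = run_classifier(&signal, &result, false);
    if (res != EI_IMPULSE_OK) {
        printf("ERR: run_classifier returned %d\n", (int)res);
        return 1;
    }

    // Timing information for each stage
    printf("Timing: DSP %d ms, inference %d ms, anomaly %d ms\n",
           result.timing.dsp, result.timing.classification, result.timing.anomaly);

    // Print the prediction for each label
    for (size_t i = 0; i < EI_CLASSIFIER_LABEL_COUNT; i++) {
        printf("%s: %.5f\n", ei_classifier_inferencing_categories[i],
               result.classification[i].value);
    }

#if EI_CLASSIFIER_HAS_ANOMALY == 1
    printf("Anomaly: %.3f\n", result.anomaly);
#endif

    return 0;
}
```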
We’re going to copy raw features from one of our test samples. This process allows us to test that preprocessing and inference works without needing to connect a real sensor to our computer or board.
Go back to your project in the Edge Impulse Studio. Click on Model testing. Find a sample (I’ll use one of the samples labeled “wave”), click the three dots (kebab menu) next to the sample, and click Show classification.
A new tab will open, and you can see a visualization of the sample along with the raw features, expected outcome (ground truth label), and inference results. Feel free to slide the window to any point in the test sample to get the raw features from that window. The raw features are the actual values that are sent to the impulse for preprocessing and inference.
I’ll leave the window at the front of the sample for this example. Click the Copy features button next to the Raw features. This will copy only the raw features under the given window to your clipboard. Additionally, make a note of the highlighted Detailed result. We will want to compare our local inference output to these values (e.g. wave should be close to 0.99 and the other labels should be close to 0.0). Some rounding error is expected.
Paste the list of raw feature values into the input_buf array. Note that this buffer is constant for this particular program; however, it demonstrates how you can fill an array with floating point values from a sensor to pass to the impulse SDK library. For performing inference live, you would want to fill the input_buf[] array with values from a connected sensor.
Important! Make sure that the length of the array matches the expected length for the preprocessing block in the impulse library. This value is given by EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE (which is 200 values * 3 axes = 600 total values for the given window in our case). Also note how the values are stored: {x0, y0, z0, x1, y1, z1, …}. You will need to construct a similar array if you are sampling live data from a sensor.
Save your main.cpp.
Before moving on to the Makefile, let’s take a look at the important code sections in our application.
To use the C++ library, we really only need to include one header file to use the impulse SDK:
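```cpp
#include "edge-impulse-sdk/classifier/ei_run_classifier.h"
```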
The ei_run_classifier.h file includes any other files we might need from the library and gives us access to the necessary functions.
The run_classifier() function expects a signal_t struct as an input. So, we set the members here:
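```cpp
// Wrap the input buffer in a signal_t
signal_t signal;
signal.total_length = EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE;
signal.get_data = &get_signal_data;
```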
signal.total_length is the number of array elements in the input buffer. For our case, it should match the expected total number of elements (EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE).

signal.get_data must be set to a callback function. run_classifier() will use this callback function to grab data from our buffer as needed. It is up to you to create this function. Let's take a look at the simplest form of this callback:
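```cpp
// Copy the requested slice of the input buffer into the caller's array
static int get_signal_data(size_t offset, size_t length, float *out_ptr) {
    memcpy(out_ptr, input_buf + offset, length * sizeof(float));
    return EIDSP_OK;
}
```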
This function copies data from our input buffer (a static global array), starting at the requested memory offset, into an array provided by the caller. We don't know exactly what offset and length will be for any given call, but we must be ready with valid data. We do know that this function will not attempt to index beyond the provided signal.total_length.
The callback structure is used here so that data can be paged in from any location (e.g. RAM or ROM), which means we don't necessarily need to save the entire sample in RAM. This process helps save precious RAM space on resource-constrained devices.
With our signal_t struct configured, we can call our inference function:
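```cpp
// Run preprocessing and inference; predictions land in result
ei_impulse_result_t result;
EI_IMPULSE_ERROR res = run_classifier(&signal, &result, false);
```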
run_classifier() will perform any necessary preprocessing steps (such as computing the power spectral density) prior to running inference. The inference results are stored in the second argument (of type ei_impulse_result_t). The third parameter is debug, which is used to print out internal states of the preprocessing and inference steps; we leave debugging disabled in this example. The function should return a value equal to EI_IMPULSE_OK if everything ran without error.
We print out the time it took (in milliseconds) to perform preprocessing (“dsp”), classification, and any anomaly detection we had enabled:
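```cpp
// Exact formatting is illustrative; the timing fields come from the result struct
printf("Timing: DSP %d ms, inference %d ms, anomaly %d ms\n",
       result.timing.dsp, result.timing.classification, result.timing.anomaly);
```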
Finally, we print inference results to the console:
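```cpp
// One prediction (0.0..1.0) per label, in alphabetical label order
for (size_t i = 0; i < EI_CLASSIFIER_LABEL_COUNT; i++) {
    printf("%s: %.5f\n", ei_classifier_inferencing_categories[i],
           result.classification[i].value);
}
```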
We can access the individual classification results for each class with result.classification[i].value, where i is the index of our label. Labels are stored in alphabetical order in ei_classifier_inferencing_categories[]. Each prediction value will be between 0.0 and 1.0. Additionally, thanks to the softmax function at the end of our neural network, all of the predictions should add up to 1.0.
If your model has anomaly detection enabled, EI_CLASSIFIER_HAS_ANOMALY will be set to 1, and we can access the anomaly value via result.anomaly. Additionally, if you are using an object detection impulse, EI_CLASSIFIER_OBJECT_DETECTION will be set to 1, and bounding box information will be stored in the result.bounding_boxes[] array.
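For example, a sketch of reading both (the field names shown are those exposed by recent SDK versions and may differ slightly in yours):

```cpp
#if EI_CLASSIFIER_HAS_ANOMALY == 1
    printf("Anomaly: %.3f\n", result.anomaly);
#endif

#if EI_CLASSIFIER_OBJECT_DETECTION == 1
    for (uint32_t i = 0; i < result.bounding_boxes_count; i++) {
        ei_impulse_result_bounding_box_t bb = result.bounding_boxes[i];
        if (bb.value == 0) continue;  // skip empty slots
        printf("%s (%.2f) [x: %u, y: %u, w: %u, h: %u]\n",
               bb.label, bb.value, bb.x, bb.y, bb.width, bb.height);
    }
#endif
```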
The C++ Inference SDK library relies on several functions to allocate memory, delay the processor, read current execution time, and print out debugging information. The SDK library provides the necessary declarations in edge-impulse-sdk/porting/ei_classifier_porting.h.
Throughout the library, you will find these functions being called. However, no definitions are provided because every platform is different in how these functions are implemented. For example, you may want to print debugging information to a console (stdout) or over a UART serial port.
By default, Edge Impulse defines these functions for several popular platforms and operating systems, which you can see here. In the example throughout this guide, we include the definitions for POSIX and MinGW (refer to the Makefile section to see how these definitions are included in the build process).
If you were to try to build this project for another platform (e.g. a microcontroller), the process would fail, as you are missing these definitions. If your platform is supported by the Edge Impulse C++ Inference SDK, you may include that folder in your C++ sources. A Makefile example of including support for TI implementations might be:
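```makefile
# Pull in the TI porting layer instead of POSIX/MinGW
# (folder names may vary by SDK version)
CXXSOURCES += $(wildcard edge-impulse-sdk/porting/ti/*.cpp)
```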
If your platform is not supported or you would like to create custom definitions, you may do so in your own code. The following functions must be defined for your platform (the reference guide linked to by each function provides several examples on possible implementations):
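```cpp
// Declarations from edge-impulse-sdk/porting/ei_classifier_porting.h
// (signatures may differ slightly between SDK versions)
EI_IMPULSE_ERROR ei_run_impulse_check_canceled();
EI_IMPULSE_ERROR ei_sleep(int32_t time_ms);
uint64_t ei_read_timer_ms();
uint64_t ei_read_timer_us();
void ei_printf(const char *format, ...);
void ei_printf_float(float f);
void *ei_malloc(size_t size);
void *ei_calloc(size_t nitems, size_t size);
void ei_free(void *ptr);
```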
Due to the number of files we must include from the library, it can be quite difficult to call the compiler and linker manually. As a result, we will use a Makefile script and the Make tool to compile all the necessary source code, link the object files, and generate a single executable file for us.
Copy the following into your Makefile:
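```makefile
# A sketch of the example Makefile, reconstructed from the walkthrough below.
# Prefer the Makefile shipped with the example repository where they differ.
# (Recipe lines must be indented with tabs.)
NAME = app
BUILDDIR = build

CC ?= gcc
CXX ?= g++

# Header search path: the folder holding edge-impulse-sdk/, model-parameters/, tflite-model/
CFLAGS += -I.

# Flags shared by the C and C++ compilers
CFLAGS += -Wall   # enable most warnings
CFLAGS += -g      # include debug symbols
CFLAGS += -Os     # optimize for size

# Lambdas in the SDK require C++11 or later; C++14 is recommended
CXXFLAGS += $(CFLAGS) -std=c++14

# The SDK relies on the math and standard C++ libraries
LDFLAGS += -lm -lstdc++

# C sources: CMSIS-DSP plus the porting layer for our platform
CSOURCES = $(wildcard edge-impulse-sdk/CMSIS/DSP/Source/TransformFunctions/*.c) \
           $(wildcard edge-impulse-sdk/CMSIS/DSP/Source/CommonTables/*.c) \
           $(wildcard edge-impulse-sdk/CMSIS/DSP/Source/BasicMathFunctions/*.c) \
           $(wildcard edge-impulse-sdk/porting/posix/*.c) \
           $(wildcard edge-impulse-sdk/porting/mingw32/*.c)

# C++ sources: our application, the model wrapper, and the DSP code
CXXSOURCES = main.cpp \
             $(wildcard tflite-model/*.cpp) \
             $(wildcard edge-impulse-sdk/dsp/kissfft/*.cpp) \
             $(wildcard edge-impulse-sdk/dsp/dct/*.cpp) \
             $(wildcard edge-impulse-sdk/porting/posix/*.cpp) \
             $(wildcard edge-impulse-sdk/porting/mingw32/*.cpp)

# .cc sources: TensorFlow Lite for Microcontrollers (TFLM)
CFLAGS += -DTF_LITE_DISABLE_X86_NEON
CCSOURCES = $(wildcard edge-impulse-sdk/tensorflow/lite/micro/*.cc) \
            $(wildcard edge-impulse-sdk/tensorflow/lite/micro/kernels/*.cc) \
            $(wildcard edge-impulse-sdk/tensorflow/lite/micro/memory_planner/*.cc) \
            $(wildcard edge-impulse-sdk/tensorflow/lite/kernels/*.cc) \
            $(wildcard edge-impulse-sdk/tensorflow/lite/kernels/internal/*.cc) \
            $(wildcard edge-impulse-sdk/tensorflow/lite/core/api/*.cc)

OBJECTS = $(patsubst %.c,$(BUILDDIR)/%.o,$(CSOURCES)) \
          $(patsubst %.cpp,$(BUILDDIR)/%.o,$(CXXSOURCES)) \
          $(patsubst %.cc,$(BUILDDIR)/%.o,$(CCSOURCES))

all: $(BUILDDIR)/$(NAME)

# Compile each source file into an object file under build/
$(BUILDDIR)/%.o: %.c
	@mkdir -p $(dir $@)
	$(CC) $(CFLAGS) -c $< -o $@

$(BUILDDIR)/%.o: %.cpp
	@mkdir -p $(dir $@)
	$(CXX) $(CXXFLAGS) -c $< -o $@

$(BUILDDIR)/%.o: %.cc
	@mkdir -p $(dir $@)
	$(CXX) $(CXXFLAGS) -c $< -o $@

# Link everything into a single executable: build/app
$(BUILDDIR)/$(NAME): $(OBJECTS)
	$(CXX) $(OBJECTS) $(LDFLAGS) -o $@

clean:
	rm -rf $(BUILDDIR)
```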
Save your Makefile. Ensure that it is in the top level directory (for this particular project).
This Makefile should serve as an example of how to import and compile the impulse SDK library. The particular build system or IDE for your platform may not use Make, so I recommend reading the next section to see what files and flags must be included. You can use this information to configure your own build system.
We’ll look at the important lines in our example Makefile. If you are not familiar with Make, we recommend taking a look at this guide. It will walk you through the basics of creating a Makefile and what many of the commands do.
Near the top, we define where the compiler(s) can find the necessary header files:
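```makefile
# All Edge Impulse header paths are relative to the project root
CFLAGS += -I.
```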
We need to point this -I flag to the directory that holds edge-impulse-sdk/, model-parameters/, and tflite-model/ so that the build system can find the required header files. If you unzipped your C++ library into a lib/ folder, for example, this flag should be -Ilib/.
We then define a number of compiler flags that are set by both the C and the C++ compiler. What each of these do has been commented in the script:
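```makefile
# Shared C/C++ flags (flag choices here are illustrative)
CFLAGS += -Wall   # enable most warnings
CFLAGS += -g      # include debug symbols
CFLAGS += -Os     # optimize for size
```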
Some of the functions in the library use lambda functions. As a result, we must support C++11 or later. The C++14 standard is recommended, so we set that in our C++ flags:
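```makefile
# Lambdas in the SDK require C++11 or later; C++14 is recommended
CXXFLAGS += $(CFLAGS) -std=c++14
```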
The SDK relies on the math and stdc++ libraries, which come with most GNU C/C++ installations. We need to tell the linker to include them from the standard libraries on our system:
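```makefile
# Link against the math and standard C++ libraries
LDFLAGS += -lm -lstdc++
```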
In addition to including the header files, we also need to tell the compiler(s) where to find source code. To do that, we create separate lists of all the .c, .cpp, and .cc files:
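```makefile
# Excerpt (see the full Makefile above for the complete lists)
CSOURCES   = $(wildcard edge-impulse-sdk/CMSIS/DSP/Source/TransformFunctions/*.c) \
             $(wildcard edge-impulse-sdk/porting/posix/*.c) \
             $(wildcard edge-impulse-sdk/porting/mingw32/*.c)
CXXSOURCES = main.cpp \
             $(wildcard tflite-model/*.cpp) \
             $(wildcard edge-impulse-sdk/dsp/kissfft/*.cpp) \
             $(wildcard edge-impulse-sdk/porting/posix/*.cpp) \
             $(wildcard edge-impulse-sdk/porting/mingw32/*.cpp)
CCSOURCES  = $(wildcard edge-impulse-sdk/tensorflow/lite/micro/*.cc) \
             $(wildcard edge-impulse-sdk/tensorflow/lite/micro/kernels/*.cc)
```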
edge-impulse-sdk/porting/posix/*.c* and edge-impulse-sdk/porting/mingw32/*.c* point to C++ files that provide implementations for the Functions That Require Definition. If you are using something other than a POSIX-based system or MinGW, you will want to change these files to one of the other supported platforms or to your own custom definitions for those functions.
Note the directory locations given in these lists. Many IDEs will ask you for the location of source files to include in the build process. You will want to include these directories (such as edge-impulse-sdk/CMSIS/DSP/Source/TransformFunctions/, etc.).
If you unzipped the C++ library into a different location (e.g. into a separate lib/ directory), then all of these source locations should be updated to reflect that. For example, tflite-model/*.cpp would become lib/tflite-model/*.cpp.
To use pure C++ for inference on almost any target with the SDK library, we can use TensorFlow Lite for Microcontrollers (TFLM). TFLM comes bundled with the downloaded library. All we need to do is include it. Once again, note the compiler flag and source files that are added to the lists:
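```makefile
# TensorFlow Lite for Microcontrollers (the exact flag may vary by platform)
CFLAGS += -DTF_LITE_DISABLE_X86_NEON
CCSOURCES += $(wildcard edge-impulse-sdk/tensorflow/lite/micro/*.cc) \
             $(wildcard edge-impulse-sdk/tensorflow/lite/micro/kernels/*.cc) \
             $(wildcard edge-impulse-sdk/tensorflow/lite/kernels/*.cc) \
             $(wildcard edge-impulse-sdk/tensorflow/lite/core/api/*.cc)
```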
TFLM is efficient and works with almost any microcontroller or microprocessor target. However, it does not include all of the features and functions found in TensorFlow Lite (TFLite). If you are deploying to a single board computer, smartphone, etc. with TFLite support and you wish to use such functionality, you can enable full TFLite support in the build (as opposed to TFLM).
While TFLM is a great generic package for many target platforms, it is not as efficient as TFLite on some of them, such as Linux and Android. As a result, you will likely see a performance boost if you use TFLite (instead of TFLM) on Linux.
You can also use TensorRT to optimize inference for NVIDIA GPUs on boards such as the NVIDIA Jetson Nano.
To enable either TFLite or TensorRT (instead of TFLM), see this Makefile. You will need to include different source files and flags. Note that for TensorRT, you will need to install a third-party library from NVIDIA.
The rest of the Makefile compiles each of the source files to object files (.o) before combining and linking them into a standalone executable file. This particular Makefile places the executable (app) in the build/ directory.
At this point, you're ready to build your application and run it! Open a terminal (MinGW Shell, if you're on Windows), navigate to your project directory, and run the make command. You can use the -j [jobs] option to have Make use multiple threads to speed up the build process (especially if you have multiple cores in your CPU):
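```bash
make -j 4
```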
This may take a few minutes, so be patient. When the build process is done, run your application:
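```bash
./build/app
```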
Note that this may be build/app.exe on Windows.
Take a look at the output predictions: they should match the predictions we saw earlier in the Edge Impulse Studio!
This guide should serve as a starting point for using your trained machine learning models on nearly any platform (as long as you have access to C and C++ compilers).
The easiest method of running live inference is to fill input_buf[] with your raw sensor data, ensure it's the correct length and format (e.g. float), and call run_classifier(). However, we did not cover use cases where you might need to run inference on a sliding window of data. Instead of retaining a large window in memory and calling run_classifier() for each new slice of data (which will re-compute features for the whole window), you can use run_classifier_continuous(). This function remembers features from one call to the next, so you only need to provide the new data. See this tutorial for a demonstration on how to run your impulse continuously.
We recognize that the embedded world is full of different build systems and IDEs. While we can’t support every single IDE, we hope that this guide showed how to include the required header and source files to build your project. Additionally, here are some IDE-specific guides for popular platforms to help you run your impulse locally.