Knowledge required
This tutorial assumes that you know how to build C++ applications, and works on macOS, Linux and Windows. If you're unfamiliar with these tools you can build binaries directly for your development board from the Deployment page in the studio.
Prerequisites
Make sure you followed the Continuous motion recognition tutorial, and have a trained impulse. Also install the following software:

macOS, Linux:
- GNU Make - to build the application. make should be in your PATH.
- A modern C++ compiler. The default LLVM version on macOS works, but on Linux upgrade to LLVM 9 (installation instructions).

Windows:
- MinGW-W64, which includes both GNU Make and a compiler. Make sure mingw32-make is in your PATH.
Cloning the base repository
We created an example repository which contains a Makefile and a small CLI example application that takes the raw features as an argument and prints out the final classification. Clone or download this repository at example-standalone-inferencing.

Deploying your impulse
Head over to your Edge Impulse project and go to Deployment. From here you can create the full library, which contains the impulse and all required external libraries. Select C++ library and click Build to create the library. Download the .zip file and place the contents in the 'example-standalone-inferencing' folder (which you downloaded above). Your final folder structure should look like this:
Add data sample to main.cpp
To get inference to work, we need to add raw data from one of our samples to main.cpp. Head back to the studio and click on Live classification. Then load a validation sample, and click on a row under ‘Detailed result’. Make a note of the classification results, as we want our local application to produce the same numbers from inference.
Selecting the row with timestamp '320' under 'Detailed result'.

Copying the raw features.
Open main.cpp in the 'example-standalone-inferencing' folder, and paste the raw features into the features[] array:

```cpp
static const float features[] = {
    // Copy raw features here
};
```

(Note that the above won't compile; it just demonstrates where the features would go.)
In a real application, you would want to make the features[] buffer non-const, fill it with samples from your sensor(s), and call run_classifier() or run_classifier_continuous(). See the deploy your model as a C++ library tutorial for more information.
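To illustrate that pattern, here is a rough sketch of filling a non-const buffer and invoking the classifier. It assumes the headers generated by the C++ library export (the ei_run_classifier.h include path and the EI_CLASSIFIER_* constants come from that export), so it only compiles inside a deployed project:

```cpp
#include "edge-impulse-sdk/classifier/ei_run_classifier.h"

// Non-const buffer, filled from your sensor(s) instead of pasted values
static float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE];

int run_inference() {
    // Wrap the buffer in a signal_t so the SDK can stream it into the DSP pipeline
    signal_t signal;
    int err = numpy::signal_from_buffer(features, EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE, &signal);
    if (err != 0) {
        return err;
    }

    // Run the DSP pipeline and the classifier over the buffer
    ei_impulse_result_t result = { 0 };
    EI_IMPULSE_ERROR res = run_classifier(&signal, &result, false /* debug */);
    if (res != EI_IMPULSE_OK) {
        return res;
    }

    // Print one score per label
    for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
        ei_printf("%s: %.5f\n", result.classification[ix].label,
                  result.classification[ix].value);
    }
    return 0;
}
```
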
Save and exit.
Running the impulse
Open a terminal or command prompt, and build the project with GNU Make (macOS, Linux) or mingw32-make (Windows). Then run the resulting application; it will classify the data in the features[] buffer and then give you the classification output.
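The CLI application receives the raw features as a single comma-separated argument. The repository contains its own parsing code; purely as an illustration, a helper that turns such a string into a float buffer (the name parse_features is hypothetical) could look like:

```cpp
#include <cstdlib>
#include <sstream>
#include <string>
#include <vector>

// Parse a comma-separated feature string, e.g. "-0.8125,2.3281,8.9453",
// as copied from the studio's 'Live classification' page, into a float buffer.
std::vector<float> parse_features(const std::string &input) {
    std::vector<float> features;
    std::stringstream ss(input);
    std::string token;
    while (std::getline(ss, token, ',')) {
        if (!token.empty()) {
            features.push_back(std::strtof(token.c_str(), nullptr));
        }
    }
    return features;
}
```

You would then call something like parse_features(argv[1]) before handing the buffer to the classifier.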
Hardware acceleration
If you have a device with a GPU, you can enable hardware acceleration, which speeds up inferencing significantly; see the example-standalone-inferencing-linux repository for an example of how to do this.
Using the library from C
Even though the impulse is deployed as a C++ application, you can link to it from C applications. To do so, compile the impulse as a shared library with the EIDSP_SIGNAL_C_FN_POINTER=1 and EI_C_LINKAGE=1 macros, then link to it from your C application. The run_classifier function can then be invoked from your code. An end-to-end application that demonstrates this, and that can be used with this tutorial, is under example-standalone-inferencing-c.
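As a rough sketch only (the real file lists, compiler flags, and build setup are in the example-standalone-inferencing-c repository; <sdk-and-model-sources> below is a placeholder for the SDK and generated model files), the build could look like:

```shell
# Build the impulse as a shared library with C-compatible linkage
g++ -shared -fPIC \
    -DEIDSP_SIGNAL_C_FN_POINTER=1 -DEI_C_LINKAGE=1 \
    <sdk-and-model-sources> \
    -o libei.so

# Link a plain C application against the shared library,
# which can then call run_classifier
gcc main.c -L. -lei -o app
```
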