As a generic C++ library
While Edge Impulse supports a number of boards that make gathering data and deploying your model easy, we know that many people will want to run edge machine learning on their own board. This tutorial will show you how to export an impulse and how to include it as a C++ library.
An impulse is a combination of any preprocessing code necessary to extract features from your raw data along with inference using your trained machine learning model. You provide the library with raw data from your sensor, and it will return the output of your model. It performs feature extraction and inference, just as you configured in the Edge Impulse Studio!
We recommend working through the steps in this guide to see how to run an impulse on a full operating system (macOS, Linux, or Windows) first. Once you understand how to include the impulse as a C++ library, you can port it to any build system or integrated development environment (IDE) you wish.
Knowledge required
This guide assumes you have some familiarity with C and the GNU Make build system. We will demonstrate how to run an impulse (i.e. perform inference) on Linux, macOS, or Windows using a C program and Make. We want to give you a starting point for porting the C++ library to your own build system.
This API Reference details the available macros, structs, variables, and functions for the C++ SDK library.
A working demonstration of this project can be found here.

Prerequisites

You will need a C compiler, a C++ compiler, and Make installed on your computer.

Linux

Install gcc, g++, and GNU Make. If you are using a Debian-based system, this can be done with:
```shell
sudo apt update
sudo apt install build-essential
```

macOS

Install LLVM and GNU Make. If you are using Homebrew, you can run the following commands:
```shell
brew install llvm
brew install make
```

Windows

Install MinGW-w64, which comes with GNU Make and the necessary compilers. You will need to add the mingw64\bin folder to your Path, which can be done by following the instructions on this page.

Download the C++ Library from Edge Impulse

You are welcome to download a C++ library from your own project, but you can also follow along using this public project. If you use the public project, you will need to click Clone this project in the upper-right corner to clone the project to your own account.
Head to the Deployment page for your project. Select C++ library. Scroll down, and click Build. Note that you must have a fully trained model in order to download any of the deployment options.
Your impulse will download as a C++ library in a .zip file.

Create a Project

The easiest way to test the impulse library is to use raw features from one of your test set samples. When you run your program, it should print out the class probabilities that match those of the test sample in the Studio.
Create a directory to hold your project (e.g. my-motion). Unzip the C++ library file into the project directory. Your directory structure should look like the following:
```
my-motion/
|-- edge-impulse-sdk/
|-- model-parameters/
|-- tflite-model/
|-- CMakeLists.txt
|-- Makefile
|-- main.cpp
```
Note: You can write in C or C++ for your main application. Because portions of the impulse library are written in C++, you must use a C++ compiler for your main application (see this FAQ for more information). A more advanced option would be to use bindings for your language of choice (e.g. calling C++ functions from Python). We will stick to C-style code for this demonstration. We highly recommend keeping your main file as a .cpp or .cc file so that it will compile as C++ code.

Explanation of C++ Library

The CMakeLists.txt file is used as part of the CMake build system generation process. We won’t use CMake in this demonstration, but see here for such an example.
edge-impulse-sdk/ contains the full software development kit (SDK) required to run your impulse along with various optimizations (e.g. ARM’s CMSIS) for supported platforms. edge-impulse-sdk/classifier/ei_run_classifier.h contains the important public functions that you will want to call. Of the functions listed in that file, you will likely only need a few:
  • run_classifier() - Basic inference: we pass it the raw features and it returns the classification results.
  • run_classifier_init() - Initializes necessary static variables prior to running continuous inference. You must call this function prior to calling run_classifier_continuous().
  • run_classifier_continuous() - Retains a sliding window of features so that inference may be performed on a continuous stream of data. We will not explore this option in this tutorial.
Both run_classifier() and run_classifier_continuous() expect raw data to be passed in through a signal_t struct. The definition of signal_t can be found in edge-impulse-sdk/dsp/numpy_types.h. This struct has two properties:
  • total_length - total number of values, which should be equal to EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE (from model-parameters/model_metadata.h). For example, if you have an accelerometer with 3 axes sampling at 100 Hz for 2 seconds, total_length would be 600.
  • get_data - a callback function that retrieves slices of data as required by the preprocessing (DSP) step. Some DSP algorithms (e.g. computing MFCCs for keyword spotting) page raw features in one slice at a time to save memory. This function allows you to store the raw data in other locations (e.g. internal RAM, external RAM, flash) and page it in when required. We will show how to configure this callback function later in the tutorial.
If you already have your data in RAM, you can use the C++ function numpy::signal_from_buffer() (found in edge-impulse-sdk/dsp/numpy.h) to construct the signal_t for you.
model-parameters/ contains the settings for preprocessing your data (in dsp_blocks.h) and for running the trained machine learning model. In that directory, model_metadata.h defines the many settings needed by the impulse; in particular, you’ll probably care about macros such as EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE (the expected number of raw input values) and EI_CLASSIFIER_LABEL_COUNT (the number of output classes), both of which appear later in this guide.
model_variables.h holds some additional information about the model and preprocessing steps. Most importantly, you might want ei_classifier_inferencing_categories[] if you need the labels for your categories in string form.
tflite-model/ contains the actual trained model stored in an array. You should not need to access any of the variables or functions in these files, as inference is handled by the impulse library.

Signal Structure

Raw data being passed to run_classifier() or run_classifier_continuous() is known as a "signal" and is passed in through a signal_t struct. Signals are always a flat buffer, so you must flatten any sensor data to a 1-dimensional array.
Time-series data with multiple axes are flattened so that the value from each axis is listed from each time step before moving on to the next time step. For example, here is how sensor data with 3 axes would be flattened:
```
Input data:
Axis 1: 9.8, 9.7, 9.6
Axis 2: 0.3, 0.4, 0.5
Axis 3: -4.5, -4.6, -4.8

Signal: 9.8, 0.3, -4.5, 9.7, 0.4, -4.6, 9.6, 0.5, -4.8
```
Image data is flattened by listing row 1, row 2, etc. Each pixel is given in HEX format (0xRRGGBB). For example:
```
Input data (3x2 pixel image):
BLACK RED  RED
GREEN BLUE WHITE

Signal: 0x000000, 0xFF0000, 0xFF0000, 0x00FF00, 0x0000FF, 0xFFFFFF
```
It's possible to convert other image formats into this expected signal format. See here for an example that converts RGB565 into a flat signal buffer.

Static Allocation

By default, the trained model resides mostly in ROM and is only pulled into RAM as needed. You can force a static allocation of the model by defining:
```
EI_CLASSIFIER_ALLOCATION_STATIC=1
```
If you are doing image classification with a quantized model, the data is automatically quantized when read from the signal; this is enabled when you call run_classifier(). If you want to adjust the size of the buffer used to read from the signal in this case, you can set EI_DSP_IMAGE_BUFFER_STATIC_SIZE, which also allocates the buffer statically. For example, you might set:
```
EI_DSP_IMAGE_BUFFER_STATIC_SIZE=1024
```
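These macros are ordinary preprocessor definitions, so one way to set them (assuming you are using the Makefile built later in this guide) is through compiler flags; the exact values here are illustrative:

```makefile
# Hypothetical additions to the Makefile's shared compiler flags
CFLAGS += -DEI_CLASSIFIER_ALLOCATION_STATIC=1
CFLAGS += -DEI_DSP_IMAGE_BUFFER_STATIC_SIZE=1024
```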

Create an Application

Open main.cpp in your editor of choice. Paste in the following code:
```cpp
#include <stdio.h>

#include "edge-impulse-sdk/classifier/ei_run_classifier.h"

// Callback function declaration
static int get_signal_data(size_t offset, size_t length, float *out_ptr);

// Raw features copied from test sample (Edge Impulse > Model testing)
static float input_buf[] = {
    /* Paste your raw features here! */
};

int main(int argc, char **argv) {

    signal_t signal;            // Wrapper for raw input buffer
    ei_impulse_result_t result; // Used to store inference output
    EI_IMPULSE_ERROR res;       // Return code from inference

    // Calculate the length of the buffer
    size_t buf_len = sizeof(input_buf) / sizeof(input_buf[0]);

    // Make sure that the length of the buffer matches expected input length
    if (buf_len != EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE) {
        printf("ERROR: The size of the input buffer is not correct.\r\n");
        printf("Expected %d items, but got %d\r\n",
               EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE,
               (int)buf_len);
        return 1;
    }

    // Assign callback function to fill buffer used for preprocessing/inference
    signal.total_length = EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE;
    signal.get_data = &get_signal_data;

    // Perform DSP pre-processing and inference
    res = run_classifier(&signal, &result, false);

    // Print return code and how long it took to perform inference
    printf("run_classifier returned: %d\r\n", res);
    printf("Timing: DSP %d ms, classification %d ms, anomaly %d ms\r\n",
           result.timing.dsp,
           result.timing.classification,
           result.timing.anomaly);

    // Print the prediction results (object detection)
#if EI_CLASSIFIER_OBJECT_DETECTION == 1
    printf("Object detection bounding boxes:\r\n");
    for (uint32_t i = 0; i < EI_CLASSIFIER_OBJECT_DETECTION_COUNT; i++) {
        ei_impulse_result_bounding_box_t bb = result.bounding_boxes[i];
        if (bb.value == 0) {
            continue;
        }
        printf(" %s (%f) [ x: %u, y: %u, width: %u, height: %u ]\r\n",
               bb.label,
               bb.value,
               bb.x,
               bb.y,
               bb.width,
               bb.height);
    }

    // Print the prediction results (classification)
#else
    printf("Predictions:\r\n");
    for (uint16_t i = 0; i < EI_CLASSIFIER_LABEL_COUNT; i++) {
        printf(" %s: ", ei_classifier_inferencing_categories[i]);
        printf("%.5f\r\n", result.classification[i].value);
    }
#endif

    // Print anomaly result (if it exists)
#if EI_CLASSIFIER_HAS_ANOMALY == 1
    printf("Anomaly prediction: %.3f\r\n", result.anomaly);
#endif

    return 0;
}

// Callback: fill a section of the out_ptr buffer when requested
static int get_signal_data(size_t offset, size_t length, float *out_ptr) {
    for (size_t i = 0; i < length; i++) {
        out_ptr[i] = (input_buf + offset)[i];
    }

    return EIDSP_OK;
}
```
We’re going to copy raw features from one of our test samples. This process allows us to test that preprocessing and inference works without needing to connect a real sensor to our computer or board.
Go back to your project in the Edge Impulse Studio. Click on Model testing. Find a sample (I’ll use one of the samples labeled “wave”), click the three dots (kebab menu) next to the sample, and click Show classification.
A new tab will open, and you can see a visualization of the sample along with the raw features, expected outcome (ground truth label), and inference results. Feel free to slide the window to any point in the test sample to get the raw features from that window. The raw features are the actual values that are sent to the impulse for preprocessing and inference.
I’ll leave the window at the front of the sample for this example. Click the Copy features button next to the Raw features. This will copy only the raw features under the given window to your clipboard. Additionally, make a note of the highlighted Detailed result. We will want to compare our local inference output to these values (e.g. wave should be close to 0.99 and the other labels should be close to 0.0). Some rounding error is expected.
Paste the list of raw feature values into the input_buf array. Note that this buffer is constant for this particular program. However, it demonstrates how you can fill an array with floating point values from a sensor to pass to the impulse SDK library.
For performing inference live, you would instead fill the input_buf[] array with values from a connected sensor.
Important! Make sure that the length of the array matches the expected length for the preprocessing block in the impulse library. This value is given by EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE (which is 200 values * 3 axes = 600 total values for the given window in our case). Also note how the values are stored: {x0, y0, z0, x1, y1, z1, …}. You will need to construct a similar array if you are sampling live data from a sensor.
Save your main.cpp.

Explanation of Main Application

Before moving on to the Makefile, let’s take a look at the important code sections in our application.
To use the C++ library, we really only need to include one header file to use the impulse SDK:
```cpp
#include "edge-impulse-sdk/classifier/ei_run_classifier.h"
```
The ei_run_classifier.h file includes any other files we might need from the library and gives us access to the necessary functions.
The run_classifier() function expects a signal_t struct as an input. So, we set the members here:
```cpp
// Assign callback function to fill buffer used for processing/inference
signal.total_length = EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE;
signal.get_data = &get_signal_data;
```
signal.total_length is the number of array elements in the input buffer. For our case, it should match the expected total number of elements (EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE).
signal.get_data must be set to a callback function. run_classifier() will use this callback function to grab data from our buffer as needed. It is up to you to create this function. Let’s take a look at the simplest form of this callback:
```cpp
// Callback: fill a section of the out_ptr buffer when requested
static int get_signal_data(size_t offset, size_t length, float *out_ptr) {
    for (size_t i = 0; i < length; i++) {
        out_ptr[i] = (input_buf + offset)[i];
    }
    return EIDSP_OK;
}
```
This function copies data from our input buffer (a static global array) plus a memory offset into an array provided by the caller. We don’t know exactly what offset and length will be for any given call, but we must be ready with valid data. We do know that this function will not attempt to index beyond the provided signal.total_length amount.
The callback structure is used here so that data can be paged in from any location (e.g. RAM or ROM), which means we don't necessarily need to save the entire sample in RAM. This process helps save precious RAM space on resource-constrained devices.
With our signal_t struct configured, we can call our inference function:
```cpp
// Perform DSP preprocessing and inference
res = run_classifier(&signal, &result, false);
```
run_classifier() will perform any necessary preprocessing steps (such as computing the power spectral density) prior to running inference. The inference results are stored in the second argument (of type ei_impulse_result_t). The third parameter is debug, which is used to print out internal states of the preprocessing and inference steps. We leave debugging disabled in this example. The function should return a value equal to EI_IMPULSE_OK if everything ran without error.
We print out the time it took (in milliseconds) to perform preprocessing (“dsp”), classification, and any anomaly detection we had enabled:
```cpp
printf("Timing: DSP %d ms, classification %d ms, anomaly %d ms\r\n",
       result.timing.dsp,
       result.timing.classification,
       result.timing.anomaly);
```
Finally, we print inference results to the console:
```cpp
printf("Predictions:\r\n");
for (uint16_t i = 0; i < EI_CLASSIFIER_LABEL_COUNT; i++) {
    printf(" %s: ", ei_classifier_inferencing_categories[i]);
    printf("%.5f\r\n", result.classification[i].value);
}
```
We can access the individual classification results for each class with result.classification[i].value, where i is the index of our label. Labels are stored in alphabetical order in ei_classifier_inferencing_categories[]. Each prediction value will be between 0.0 and 1.0. Additionally, thanks to the softmax function at the end of our neural network, all of the predictions should add up to 1.0.
If your model has anomaly detection enabled, EI_CLASSIFIER_HAS_ANOMALY will be set to 1. We can access the anomaly value via result.anomaly. Additionally, if you are using an object detection impulse, EI_CLASSIFIER_OBJECT_DETECTION will be set to 1, and bounding box information will be an array stored in result.bounding_boxes[].

Functions That Require Definition

The C++ Inference SDK library relies on several functions to allocate memory, delay the processor, read current execution time, and print out debugging information. The SDK library provides the necessary declarations in edge-impulse-sdk/porting/ei_classifier_porting.h.
Throughout the library, you will find these functions being called. However, no definitions are provided because every platform is different in how these functions are implemented. For example, you may want to print debugging information to a console (stdout) or over a UART serial port.
By default, Edge Impulse defines these functions for several popular platforms and operating systems, which you can see here. In the example throughout this guide, we include the definitions for POSIX and MinGW (refer to the Makefile section to see how these definitions are included in the build process).
If you were to try to build this project for another platform (e.g. a microcontroller), the process would fail, as you are missing these definitions. If your platform is supported by the Edge Impulse C++ Inference SDK, you may include that folder in your C++ sources. A Makefile example of including support for TI implementations might be:
```makefile
CXXSOURCES += $(wildcard edge-impulse-sdk/porting/ti/*.c*)
```
If your platform is not supported or you would like to create custom definitions, you may do so in your own code. The porting functions declared in ei_classifier_porting.h, including ei_printf(), ei_malloc(), ei_calloc(), ei_free(), ei_read_timer_ms(), ei_read_timer_us(), and ei_sleep(), must be defined for your platform (the API reference provides several examples of possible implementations for each function).

Create a Makefile

Due to the number of files we must include from the library, it can be quite difficult to call the compiler and linker manually. As a result, we will use a Makefile script and the Make tool to compile all the necessary source code, link the object files, and generate a single executable file for us.
Copy the following into your Makefile:
```makefile
# Tool macros
CC ?= gcc
CXX ?= g++

# Settings
NAME = app
BUILD_PATH = ./build

# Location of main.cpp (must use C++ compiler for main)
CXXSOURCES = main.cpp

# Search path for header files (current directory)
CFLAGS += -I.

# C and C++ Compiler flags
CFLAGS += -Wall                  # Include all warnings
CFLAGS += -g                     # Generate GDB debugger information
CFLAGS += -Wno-strict-aliasing   # Disable warnings about strict aliasing
CFLAGS += -Os                    # Optimize for size
CFLAGS += -DNDEBUG               # Disable assert() macro
CFLAGS += -DEI_CLASSIFIER_ENABLE_DETECTION_POSTPROCESS_OP # Add TFLite_Detection_PostProcess operation

# C++ only compiler flags
CXXFLAGS += -std=c++14           # Use C++14 standard

# Linker flags
LDFLAGS += -lm                   # Link the math library (libm)
LDFLAGS += -lstdc++              # Link the C++ standard library

# Include C source code for required libraries
CSOURCES += $(wildcard edge-impulse-sdk/CMSIS/DSP/Source/TransformFunctions/*.c) \
            $(wildcard edge-impulse-sdk/CMSIS/DSP/Source/CommonTables/*.c) \
            $(wildcard edge-impulse-sdk/CMSIS/DSP/Source/BasicMathFunctions/*.c) \
            $(wildcard edge-impulse-sdk/CMSIS/DSP/Source/ComplexMathFunctions/*.c) \
            $(wildcard edge-impulse-sdk/CMSIS/DSP/Source/FastMathFunctions/*.c) \
            $(wildcard edge-impulse-sdk/CMSIS/DSP/Source/SupportFunctions/*.c) \
            $(wildcard edge-impulse-sdk/CMSIS/DSP/Source/MatrixFunctions/*.c) \
            $(wildcard edge-impulse-sdk/CMSIS/DSP/Source/StatisticsFunctions/*.c)

# Include C++ source code for required libraries
CXXSOURCES += $(wildcard tflite-model/*.cpp) \
              $(wildcard edge-impulse-sdk/dsp/kissfft/*.cpp) \
              $(wildcard edge-impulse-sdk/dsp/dct/*.cpp) \
              $(wildcard edge-impulse-sdk/dsp/memory.cpp) \
              $(wildcard edge-impulse-sdk/porting/posix/*.c*) \
              $(wildcard edge-impulse-sdk/porting/mingw32/*.c*)
CCSOURCES +=

# Use TensorFlow Lite for Microcontrollers (TFLM)
CFLAGS += -DTF_LITE_DISABLE_X86_NEON=1
CSOURCES += edge-impulse-sdk/tensorflow/lite/c/common.c
CCSOURCES += $(wildcard edge-impulse-sdk/tensorflow/lite/kernels/*.cc) \
             $(wildcard edge-impulse-sdk/tensorflow/lite/kernels/internal/*.cc) \
             $(wildcard edge-impulse-sdk/tensorflow/lite/micro/kernels/*.cc) \
             $(wildcard edge-impulse-sdk/tensorflow/lite/micro/*.cc) \
             $(wildcard edge-impulse-sdk/tensorflow/lite/micro/memory_planner/*.cc) \
             $(wildcard edge-impulse-sdk/tensorflow/lite/core/api/*.cc)

# Include CMSIS-NN if compiling for an Arm target that supports it
ifeq (${CMSIS_NN}, 1)

# Include CMSIS-NN and CMSIS-DSP header files
CFLAGS += -Iedge-impulse-sdk/CMSIS/NN/Include/
CFLAGS += -Iedge-impulse-sdk/CMSIS/DSP/PrivateInclude/

# C and C++ compiler flags for CMSIS-NN and CMSIS-DSP
CFLAGS += -Wno-unknown-attributes                  # Disable warnings about unknown attributes
CFLAGS += -DEI_CLASSIFIER_TFLITE_ENABLE_CMSIS_NN=1 # Use CMSIS-NN functions in the SDK
CFLAGS += -D__ARM_FEATURE_DSP=1                    # Enable CMSIS-DSP optimized features
CFLAGS += -D__GNUC_PYTHON__=1                      # Enable CMSIS-DSP intrinsics (non-C features)

# Include C source code for required CMSIS libraries
CSOURCES += $(wildcard edge-impulse-sdk/CMSIS/NN/Source/ActivationFunctions/*.c) \
            $(wildcard edge-impulse-sdk/CMSIS/NN/Source/BasicMathFunctions/*.c) \
            $(wildcard edge-impulse-sdk/CMSIS/NN/Source/ConcatenationFunctions/*.c) \
            $(wildcard edge-impulse-sdk/CMSIS/NN/Source/ConvolutionFunctions/*.c) \
            $(wildcard edge-impulse-sdk/CMSIS/NN/Source/FullyConnectedFunctions/*.c) \
            $(wildcard edge-impulse-sdk/CMSIS/NN/Source/NNSupportFunctions/*.c) \
            $(wildcard edge-impulse-sdk/CMSIS/NN/Source/PoolingFunctions/*.c) \
            $(wildcard edge-impulse-sdk/CMSIS/NN/Source/ReshapeFunctions/*.c) \
            $(wildcard edge-impulse-sdk/CMSIS/NN/Source/SoftmaxFunctions/*.c) \
            $(wildcard edge-impulse-sdk/CMSIS/NN/Source/SVDFunctions/*.c)
endif

# Generate names for the output object files (*.o)
COBJECTS := $(patsubst %.c,%.o,$(CSOURCES))
CXXOBJECTS := $(patsubst %.cpp,%.o,$(CXXSOURCES))
CCOBJECTS := $(patsubst %.cc,%.o,$(CCSOURCES))

# Default rule
.PHONY: all
all: app

# Compile library source code into object files
$(COBJECTS) : %.o : %.c
$(CXXOBJECTS) : %.o : %.cpp
$(CCOBJECTS) : %.o : %.cc
%.o: %.c
	$(CC) $(CFLAGS) -c $^ -o $@
%.o: %.cc
	$(CXX) $(CFLAGS) $(CXXFLAGS) -c $^ -o $@
%.o: %.cpp
	$(CXX) $(CFLAGS) $(CXXFLAGS) -c $^ -o $@

# Build target (must use C++ compiler)
.PHONY: app
app: $(COBJECTS) $(CXXOBJECTS) $(CCOBJECTS)
ifeq ($(OS), Windows_NT)
	if not exist build mkdir build
else
	mkdir -p $(BUILD_PATH)
endif
	$(CXX) $(COBJECTS) $(CXXOBJECTS) $(CCOBJECTS) -o $(BUILD_PATH)/$(NAME) $(LDFLAGS)

# Remove compiled object files
.PHONY: clean
clean:
ifeq ($(OS), Windows_NT)
	del /Q $(subst /,\,$(patsubst %.c,%.o,$(CSOURCES))) >nul 2>&1 || exit 0
	del /Q $(subst /,\,$(patsubst %.cpp,%.o,$(CXXSOURCES))) >nul 2>&1 || exit 0
	del /Q $(subst /,\,$(patsubst %.cc,%.o,$(CCSOURCES))) >nul 2>&1 || exit 0
else
	rm -f $(COBJECTS)
	rm -f $(CCOBJECTS)
	rm -f $(CXXOBJECTS)
endif
```
Save your Makefile. Ensure that it is in the top level directory (for this particular project).
This Makefile should serve as an example of how to import and compile the impulse SDK library. The particular build system or IDE for your platform may not use Make, so I recommend reading the next section to see what files and flags must be included. You can use this information to configure your own build system.

Explanation of the Makefile

We’ll look at the important lines in our example Makefile. If you are not familiar with Make, we recommend taking a look at this guide. It will walk you through the basics of creating a Makefile and what many of the commands do.
Near the top, we define where the compiler(s) can find the necessary header files:
```makefile
# Search path for header files (current directory)
CFLAGS += -I.
```
We need to point this -I flag to the directory that holds edge-impulse-sdk/, model-parameters/, and tflite-model/ so that the build system can find the required header files. If you unzipped your C++ library into a lib/ folder, for example, this flag should be -Ilib/.
We then define a number of compiler flags that are used by both the C and the C++ compiler. What each of these does is commented in the script:
```makefile
# C and C++ Compiler flags
CFLAGS += -Wall                  # Include all warnings
CFLAGS += -g                     # Generate GDB debugger information
CFLAGS += -Wno-strict-aliasing   # Disable warnings about strict aliasing
CFLAGS += -Os                    # Optimize for size
CFLAGS += -DNDEBUG               # Disable assert() macro
CFLAGS += -DEI_CLASSIFIER_ENABLE_DETECTION_POSTPROCESS_OP # Add TFLite_Detection_PostProcess operation
```
Some of the functions in the library use lambda functions. As a result, we must support C++11 or later. The C++14 standard is recommended, so we set that in our C++ flags:
```makefile
# C++ only compiler flags
CXXFLAGS += -std=c++14           # Use C++14 standard
```
The SDK relies on the math and stdc++ libraries, which come with most GNU C/C++ installations. We need to tell the linker to include them from the standard libraries on our system:
```makefile
# Linker flags
LDFLAGS += -lm                   # Link the math library (libm)
LDFLAGS += -lstdc++              # Link the C++ standard library
```
In addition to including the header files, we also need to tell the compiler(s) where to find source code. To do that, we create separate lists of all the .c, .cpp, and .cc files:
```makefile
# Include C source code for required libraries
CSOURCES += $(wildcard edge-impulse-sdk/CMSIS/DSP/Source/TransformFunctions/*.c) \
            $(wildcard edge-impulse-sdk/CMSIS/DSP/Source/CommonTables/*.c) \
            $(wildcard edge-impulse-sdk/CMSIS/DSP/Source/BasicMathFunctions/*.c) \
            $(wildcard edge-impulse-sdk/CMSIS/DSP/Source/ComplexMathFunctions/*.c) \
            $(wildcard edge-impulse-sdk/CMSIS/DSP/Source/FastMathFunctions/*.c) \
            $(wildcard edge-impulse-sdk/CMSIS/DSP/Source/SupportFunctions/*.c) \
            $(wildcard edge-impulse-sdk/CMSIS/DSP/Source/MatrixFunctions/*.c) \
            $(wildcard edge-impulse-sdk/CMSIS/DSP/Source/StatisticsFunctions/*.c)

# Include C++ source code for required libraries
CXXSOURCES += $(wildcard tflite-model/*.cpp) \
              $(wildcard edge-impulse-sdk/dsp/kissfft/*.cpp) \
              $(wildcard edge-impulse-sdk/dsp/dct/*.cpp) \
              $(wildcard edge-impulse-sdk/dsp/memory.cpp) \
              $(wildcard edge-impulse-sdk/porting/posix/*.c*) \
              $(wildcard edge-impulse-sdk/porting/mingw32/*.c*)
CCSOURCES +=
```
edge-impulse-sdk/porting/posix/*.c* and edge-impulse-sdk/porting/mingw32/*.c* point to C++ files that provide implementations for the Functions That Require Definition. If you are using something other than a POSIX-based system or MinGW, you will want to change these files to one of the other supported platforms or to your own custom definitions for those functions.
Note the directory locations given in these lists. Many IDEs will ask you for the location of source files to include in the build process. You will want to include these directories (such as edge-impulse-sdk/CMSIS/DSP/Source/TransformFunctions/, etc.).
If you unzipped the C++ library into a different location (e.g. into a separate lib/ directory), then all of these source locations should be updated to reflect that. For example, tflite-model/*.cpp would become lib/tflite-model/*.cpp.
To use pure C++ for inference on almost any target with the SDK library, we can use TensorFlow Lite for Microcontrollers (TFLM). TFLM comes bundled with the downloaded library. All we need to do is include it. Once again, note the compiler flag and source files that are added to the lists:
```makefile
# Use TensorFlow Lite for Microcontrollers (TFLM)
CFLAGS += -DTF_LITE_DISABLE_X86_NEON=1
CSOURCES += edge-impulse-sdk/tensorflow/lite/c/common.c
CCSOURCES += $(wildcard edge-impulse-sdk/tensorflow/lite/kernels/*.cc) \
             $(wildcard edge-impulse-sdk/tensorflow/lite/kernels/internal/*.cc) \
             $(wildcard edge-impulse-sdk/tensorflow/lite/micro/kernels/*.cc) \
             $(wildcard edge-impulse-sdk/tensorflow/lite/micro/*.cc) \
             $(wildcard edge-impulse-sdk/tensorflow/lite/micro/memory_planner/*.cc) \
             $(wildcard edge-impulse-sdk/tensorflow/lite/core/api/*.cc)
```
TFLM is efficient and works with almost any microcontroller or microprocessor target. However, it does not include all of the features and functions found in TensorFlow Lite (TFLite). If you are deploying to a single board computer, smartphone, etc. with TFLite support and you wish to use such functionality, you can enable full TFLite support in the build (as opposed to TFLM).
While TFLM is a great generic package for many target platforms, it is not as efficient as TFLite for some, such as Linux and Android. As a result, you will likely see a performance boost if you use TFLite (instead of TFLM) on Linux.
You can also use TensorRT to optimize inference for NVIDIA GPUs on boards such as the NVIDIA Jetson Nano.
To enable either TFLite or TensorRT (instead of TFLM), see this Makefile. You will need to include different source files and flags. Note that for TensorRT, you will need to install a third-party library from NVIDIA.
The rest of the Makefile compiles each of the source files to object files (.o) before combining and linking them into a standalone executable file. This particular Makefile places the executable (app) in the build/ directory.

Build and Run

At this point, you’re ready to build your application and run it! Open a terminal (MinGW Shell, if you’re on Windows), navigate to your project directory, and run the make command. You can use the -j [jobs] flag to have Make use multiple threads to speed up the build process (especially if you have multiple cores in your CPU):

```shell
cd my-motion/
make -j 4
```
This may take a few minutes, so be patient. When the build process is done, run your application:
```shell
./build/app
```
Note that this may be build/app.exe on Windows.
Take a look at the output predictions: they should match the predictions we saw earlier in the Edge Impulse Studio!

Going Further

This guide should hopefully act as a starting point to use your trained machine learning models on nearly any platform (as long as you have access to C and C++ compilers).
The easiest method of running live inference is to fill input_buf[] with your raw sensor data, ensure it’s the correct length and format (e.g. float), and call run_classifier(). However, we did not cover use cases where you might need to run inference on a sliding window of data. Instead of retaining a large window in memory and calling run_classifier() for each new slice of data (which will re-compute features for the whole window), you can use run_classifier_continuous(). This function remembers features from one call to the next, so you only need to provide the new data. See this tutorial for a demonstration of how to run your impulse continuously.
We recognize that the embedded world is full of different build systems and IDEs. While we can’t support every single IDE, we hope that this guide showed how to include the required header and source files to build your project. Additionally, here are some IDE-specific guides for popular platforms to help you run your impulse locally.