In the advanced inferencing tutorials section, you will discover useful techniques for leveraging our inferencing libraries and for using the inference results in your application logic:
Once you successfully trained or imported a model, you can use Edge Impulse to download a C++ library that bundles both your signal processing and your machine learning model. Until recently, we could only run one impulse on MCUs.
Feature under development
Please note that this method is still being integrated into the Studio and has not yet been fully tested on all targets. This tutorial is for advanced users only, so we will provide limited support on the forum until the integration is completed. If you are interested in using it for an enterprise project, please sign up for our FREE Enterprise Trial and our solution engineers can work with you on the integration.
In this tutorial, we will see how to run multiple impulses using the downloaded C++ libraries of two different projects.
We have put together a custom deployment block that will automate all the processes and provide a C++ library that can be compiled and run as a standalone.
In this page, we explain the high-level concepts of how to merge two impulses. Feel free to look at the code to gain a deeper understanding. Alternatively, when we first wrote this tutorial, we explained how to merge two impulses manually; we have kept this process in the Manual procedure section, but due to recent changes in our C++ SDK, some files and functions may have been renamed.
Multi-impulse vs multi-model vs sensor fusion
Running multi-impulse refers to running two separate projects (different data, different DSP blocks and different models) on the same target. It will require modifying some files in the EI-generated SDKs.
Running multi-model refers to running two different models (same data, same DSP block but different tflite models) on the same target. See how to run a motion classifier model and an anomaly detection model on the same device in this tutorial.
Sensor fusion refers to the process of combining data from different types of sensors to give more information to the neural network. See how to use sensor fusion in this tutorial.
Also see this video (starting at minute 13):
Make sure you have at least two impulses fully trained.
As an example, we will build an intrusion detection system: a first model detects glass-breaking sounds and, if this sound is detected, we then classify an image to see whether a person is present. In this tutorial, we will use the following public projects:
The deployment block can be found here. To add it to your organization, head to this page: Edge Impulse Studio -> Organizations -> Custom blocks -> Deployment blocks.
Please note that the script works with EON-compiled projects only, and anomaly detection blocks have not been tested.
If you have a look at the generate.py script, you will see that it streamlines the process of generating a C++ library from multiple impulses through several steps:
Library Download and Extraction:
If the script detects that the necessary projects are not already present locally, it initiates the download of C++ libraries required for edge deployment. These libraries are fetched using API keys provided by the user.
Libraries are downloaded and extracted into a temporary directory. If the user specifies a custom temporary directory, it's used; otherwise, a temporary directory is created.
Customization of Files:
For each project's library, the script performs several modifications:
At the file name level:
It adds a project-specific suffix to certain patterns in compiled files within the `tflite-model` directory. This customization ensures that each project's files are unique.
Renamed files are then copied to a target directory, mainly the first project's directory.
At the function name level:
It edits `model_variables.h` functions by adding the project-specific suffix to various patterns. This step ensures that model parameters are correctly associated with each project.
Merging the projects
`model_variables.h` is merged into the first project's directory to consolidate model information.
The script saves the intersection of lines between the `trained_model_ops_define.h` files of the different projects, ensuring consistency.
Copying Templates:
The script copies template files from a `templates` directory to the target directory. The available template includes files with code structures and placeholders for customization. It is adapted from the example-standalone-inferencing example available on GitHub.
Generating Custom Code:
The script retrieves the impulse IDs from `model_variables.h` for each project; these IDs are used to reference each impulse at runtime.
Custom code is generated for each project, including functions to get signal data, define raw features, and run the classifier.
This custom code is inserted into the `main.cpp` file of each project at specific locations.
Archiving for Deployment:
Finally, the script archives the target directory, creating a zip file ready for deployment. This zip file contains all the customized files and code necessary for deploying machine learning models on edge devices.
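The key transformations above can be sketched in Python. This is a simplified illustration of the idea, not the actual `generate.py`; the helper names (`suffix_files`, `merge_ops_define`, `archive`) are hypothetical.

```python
import os
import shutil


def suffix_files(directory, suffix):
    """Rename model files by appending a project-specific suffix before the
    extension, e.g. tflite_model_compiled.cpp -> tflite_model_compiled_audio.cpp."""
    renamed = []
    for name in sorted(os.listdir(directory)):
        stem, ext = os.path.splitext(name)
        new_name = f"{stem}_{suffix}{ext}"
        os.rename(os.path.join(directory, name), os.path.join(directory, new_name))
        renamed.append(new_name)
    return renamed


def merge_ops_define(path_a, path_b, out_path):
    """Keep only the lines present in BOTH trained_model_ops_define.h files
    (the intersection), preserving the order of the first file."""
    with open(path_b) as f:
        common = set(f.read().splitlines())
    with open(path_a) as f:
        merged = [line for line in f.read().splitlines() if line in common]
    with open(out_path, "w") as f:
        f.write("\n".join(merged) + "\n")
    return merged


def archive(target_dir, zip_name):
    """Zip the merged library, ready for deployment."""
    return shutil.make_archive(zip_name, "zip", target_dir)
```

The intersection rule for `trained_model_ops_define.h` matters because a `DISABLE` macro left in place by only one project could disable an operator the other model needs.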
Now, to test the generated library:
Download and unzip your Edge Impulse C++ multi-impulse library into a directory.
Copy a test sample's raw features into the `features[]` array in `source/main.cpp`.
Enter `make -j` in this directory to compile the project. If you encounter an out-of-memory (OOM) error, try `make -j4` (replace 4 with the number of cores available).
Enter `./build/app` to run the application.
Compare the output predictions to the predictions of the test sample in the Edge Impulse Studio.
Want to add your own business logic?
You can change the template you want to use in step 4 to use another compilation method, implement your custom sampling strategy and how to handle the inference results in step 5 (apply post-processing, send results somewhere else, trigger actions, etc.).
Some files and function names have changed
The general concepts remain valid but due to recent changes in our C++ inferencing SDK, some files and function names have changed.
Head to your projects' deployment pages and download the C++ libraries:
Make sure to select the same model versions (EON-Compiled enabled/disabled and int8/float32) for your projects.
Extract the two archives in a directory (`multi-impulse` for example).
Rename the tflite model files:
Go to the `tflite-model` directory in your extracted archives and rename the following files by post-fixing them with the name of the project:
for EON-compiled projects: `tflite_model_compiled.cpp`/`tflite_model_compiled.h`.
for non-EON-compiled projects: `tflite-trained.cpp`/`tflite-trained.h`.
Original structure:
New structure after renaming the files:
Rename the variables (EON model functions, such as trained_model_input, etc., or tflite model array names) by post-fixing them with the name of the project.
e.g. change `trained_model_compiled_audio.h`
from:
to:
Tip: Use your IDE's "Find and replace" feature.
Here is a list of the files that need to be modified (the names may change if not compiled with the EON compiler):
tflite-model/trained_model_compiled_<project1|2>.h
tflite-model/trained_model_compiled_<project1|2>.cpp
model-parameter/model_variables.h
Be careful when using "find and replace" in your IDE: NOT all variables that look like `_model_` need to be replaced.
Example for the audio project:
Example for the image project:
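If you prefer to script the renaming instead of doing it in the IDE, the safe approach is to restrict the replacement to a whitelist of identifiers and match on word boundaries. This is a sketch under those assumptions; the `SYMBOLS` list and `suffix_symbols` helper are hypothetical and should be adapted to the exact symbols in your deployment.

```python
import re

# Whitelist of identifiers to rename; other names containing "_model_"
# (internal SDK symbols, for instance) must be left untouched.
SYMBOLS = ["trained_model_compiled", "trained_model_input", "trained_model_output"]


def suffix_symbols(source, suffix, symbols=SYMBOLS):
    """Append the project suffix to whitelisted identifiers only, using word
    boundaries so partial or unrelated matches are not rewritten."""
    for sym in symbols:
        source = re.sub(rf"\b{re.escape(sym)}\b", f"{sym}_{suffix}", source)
    return source
```

Because `_` counts as a word character, `\b` stops the pattern from matching inside longer identifiers that merely contain one of the whitelisted names.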
Create a new directory (`merged-impulse` for example). Copy the content of one project into this new directory (`audio` for example). Copy the content of the `tflite-model` directory from the other project (`image`) inside the newly created `merged-impulse/tflite-model`.
The structure of this new directory should look like the following:
Copy the necessary variables and structs from the previously updated `image/model_metadata.h` file to `merged-impulse/model_metadata.h`.
To do so, include both of these lines in the `#include` section:
The section that should be copied runs from `const char* ei_classifier_inferencing_categories...` to the line before `const ei_impulse_t ei_default_impulse = impulse_<ProjectID>_<version>`.
Make sure to leave only one `const ei_impulse_t ei_default_impulse = impulse_233502_3;` line; this defines which of your impulses is the default one.
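The copy step above can be scripted by slicing the file between the two marker strings. A minimal sketch, assuming the markers appear verbatim in `model_metadata.h`; the `extract_section` helper is hypothetical.

```python
def extract_section(lines, start_marker, end_marker):
    """Return the lines from the first line containing start_marker up to,
    but not including, the first line containing end_marker."""
    start = next(i for i, line in enumerate(lines) if start_marker in line)
    end = next(i for i, line in enumerate(lines) if end_marker in line)
    return lines[start:end]


# Usage: pass the file's lines and the markers named in the text above, e.g.
# extract_section(metadata_lines,
#                 "const char* ei_classifier_inferencing_categories",
#                 "ei_default_impulse")
```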
Make sure the macros `EI_TFLITE_DISABLE_...` are a COMBINATION of the ones present in the two deployments.
For EON-compiled projects:
E.g. if `#define EI_TFLITE_DISABLE_SOFTMAX_IN_U8 1` is present in one deployment and absent in the other, it should be ABSENT in the combined `trained_model_ops_define.h`.
For non-EON-Compiled projects:
E.g. if `resolver.AddFullyConnected();` is present in one deployment and absent in the other, it should be PRESENT in the combined `tflite-resolver.h`. Remember to change the length of the resolver array if necessary.
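The two merge rules can be stated compactly as set operations: intersection for the `DISABLE` macros (a macro may stay only if both deployments agree the op is unused) and union for the resolver ops (the resolver must register every op that either model needs). This is a conceptual sketch; the helper names are hypothetical, not part of the Edge Impulse tooling.

```python
def combine_disable_macros(macros_a, macros_b):
    """EON-compiled: a DISABLE macro survives only if BOTH deployments
    define it (intersection); disabling an op one model needs would break it."""
    return sorted(set(macros_a) & set(macros_b))


def combine_resolver_ops(ops_a, ops_b):
    """Non-EON: the resolver must register every op either model uses (union).
    len() of the result gives the new resolver array length."""
    return sorted(set(ops_a) | set(ops_b))
```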
In this example, here are the lines to be deleted:
Clone this repository: https://github.com/edgeimpulse/example-standalone-inferencing-multi-impulse
Copy the content of the `merged-impulse` directory to `example-standalone-inferencing-multi-impulse` (replace the files and directories sharing the same name).
Edit the `source/main.cpp` file and replace the callback function names and the features buffers.
Note: run_classifier takes the impulse pointer as its first argument.
Enter `make -j` in this directory to compile the project.
Enter `./build/app` to run the application.
Compare the output predictions to the predictions of the test sample in the Edge Impulse Studio.
Enter `rm -f build/app && make clean` to clean the project.
Congrats, you can now run multiple impulses!
The custom ML accelerator deployments are unlikely to work (TDA4VM, DRPAI, MemoryX, Brainchip).
The custom tflite kernels (ESP NN, Silabs MVP, Arc MLI) should work, but may require some additional work. E.g. for the ESP32, you may need to statically allocate the arena for the image model.
In general, running multiple impulses on an MCU can be challenging due to limited processing power, memory, and other hardware constraints. Make sure to thoroughly evaluate the capabilities and limitations of your specific MCU and consider the resource requirements of the impulses before attempting to run them concurrently.
If you see the following segmentation fault, make sure you have correctly combined the `trained_model_ops_define.h` or `tflite-resolver.h` files as described above.
When you are classifying audio - for example to detect keywords - you want to make sure that every piece of information is both captured and analyzed, to avoid missing events. This means that your device needs to capture audio samples and analyze them at the same time. In this tutorial you'll learn how to continuously capture audio data, and then use the continuous inferencing mode in the Edge Impulse SDK to classify the data.
This tutorial assumes that you've completed the tutorial, and have your impulse running on your device.
Continuous inference mode
Continuous inferencing is automatically enabled for any impulses that use audio. Build and flash a ready-to-go binary for your development board from the Deployment tab in the studio, then - from a command prompt or terminal window - run `edge-impulse-run-impulse --continuous`.
An Arduino sketch that demonstrates continuous audio sampling is part of the Arduino library deployment option. After importing the library into the Arduino IDE, look under 'Examples' for 'nano_ble33_sense_audio_continuous'.
In the normal (non-continuous) inference mode, you sample data until you have a full window (e.g. 1 second for a keyword spotting model; see the Create impulse tab in the studio), classify this window (using the `run_classifier` function), and get a prediction back. You then empty the buffer, sample new data, and run inference again. Naturally this has some caveats when deploying your model in the real world: 1) there is a delay between windows, because classifying the window takes some time during which you are not sampling, making it possible to miss events; 2) there is no overlap between windows, so if an event falls at the very end of a window, the full event might not be captured - leading to a wrong classification.
To mitigate this we have added several new features to the Edge Impulse SDK.
Using continuous inferencing, smaller sampling buffers (slices) are used and passed to the inferencing process. In the inferencing process, the buffers are time sequentially placed in a FIFO (First In First Out) buffer that matches the model size. After each iteration, the oldest slice is removed at the end of the buffer and a new slice is inserted at the beginning. On each slice now, the inference is run multiple times (depending on the number of slices used for a model). For example, a 1-second keyword model with 4 slices (each 250 ms), will infer each slice 4 times. So if now the keyword is on 2 edges of the slice buffers, they're glued back together in the FIFO buffer and the keyword will be classified correctly.
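The FIFO of slices can be sketched conceptually in Python: keep the last N slices, and let their concatenation form the model window that is classified on every new slice. The `SliceFifo` class is illustrative, not part of the Edge Impulse SDK (on device this is a fixed ring buffer in C).

```python
from collections import deque

SLICES_PER_WINDOW = 4  # e.g. 4 x 250 ms slices = one 1 s model window


class SliceFifo:
    """Keep the most recent N slices; their concatenation is the model window.
    Pushing a new slice automatically drops the oldest one."""

    def __init__(self, n_slices=SLICES_PER_WINDOW):
        self.slices = deque(maxlen=n_slices)

    def push(self, slice_samples):
        self.slices.append(slice_samples)

    def window(self):
        # Flatten the slices into one contiguous window of samples.
        return [s for sl in self.slices for s in sl]
```

An event split across two consecutive slices ends up contiguous inside `window()`, which is why it can still be classified correctly.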
Another advantage of this technique is that it filters out false positives. Take for instance a yes-no keyword spotting model. The word 'yesterday' should not be classified as a yes (or no). But if the 'yes-' is sampled in the first buffer and '-terday' in the next, there is a big chance that the inference step will classify the first buffer as a yes.
By running inference multiple times over the slices, continuous inferencing will filter out this false positive. When the 'yes' buffer enters the FIFO it will surely classify as a 'yes'. But as the rest of the word enters, the classified value for 'yes' will drop quickly. We just have to make sure that we don't react on peak values. Therefore a moving average filter averages the classified output and so flattens the peaks. To have a valid 'yes', we now need multiple high-rated classifications.
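The moving average filter described above can be sketched as follows; the class name and window length are illustrative, not SDK identifiers. A single high-scoring slice ('yes-' of 'yesterday') raises the average only briefly, while a sustained 'yes' keeps it high.

```python
from collections import deque


class MovingAverage:
    """Average the last n classification scores to flatten single-slice peaks."""

    def __init__(self, n=4):
        self.scores = deque(maxlen=n)

    def update(self, score):
        self.scores.append(score)
        return sum(self.scores) / len(self.scores)


# Only report a detection when the AVERAGED score clears the threshold:
# ma = MovingAverage(); detected = ma.update(raw_score) > 0.8
```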
In the standard way of running the impulse, the steps of collecting data and running the inference are run sequentially. First, the audio is sampled, filling a block the size of the model. This block is sent to the inferencing part, where first the features are extracted and then the inference is run. Finally, the classified output is used in your application (by default the output will be printed over the serial connection).
In the continuous sampling method, audio is sampled in parallel with the inferencing and output steps. So while inference is running, audio sampling continues on a background process.
The embedded target needs to support running multiple processes in parallel. This can be achieved either by an operating system, with one low-priority thread running inferencing and one high-priority thread collecting sample data, or by processor offloading, usually done by the audio peripheral or DMA (Direct Memory Access), where audio samples are collected in a buffer without involving the processor.
How do we know when new sample data is available? For this we use a double-buffering mechanism, with 2 sample buffers:
1 buffer for the audio sampling process, which is filled with new sample data
1 buffer for the inference process, from which sample data is taken to extract the features and run inference
At start, the sampling process starts filling a buffer with audio samples. Meanwhile, the inference process waits until the buffer is full. When that happens, the sampling process passes the buffer to the inference process and starts sampling on the second buffer. Each iteration, the buffers will be switched so that there is always an empty buffer for sampling and a full buffer of samples for inferencing.
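The buffer switch can be sketched as a single-threaded Python simulation (on a real target the sampler runs in an interrupt/DMA context). The `DoubleBuffer` class and its method names are illustrative, not part of the Edge Impulse SDK; note how it raises on exactly the overrun condition described below.

```python
class DoubleBuffer:
    """Two sample buffers: one being filled by the sampler, one being read
    by the inference process; they swap when the fill buffer is full."""

    def __init__(self, size):
        self.size = size
        self.bufs = [[], []]
        self.fill_select = 0   # index of the buffer currently being filled
        self.ready = False     # True while a full buffer awaits inference

    def add_sample(self, sample):
        buf = self.bufs[self.fill_select]
        buf.append(sample)
        if len(buf) == self.size:
            if self.ready:
                # Inference never consumed the previous full buffer in time.
                raise RuntimeError("buffer overrun: inference too slow")
            self.fill_select ^= 1              # switch buffers
            self.bufs[self.fill_select] = []   # start filling the other one
            self.ready = True

    def take_full_buffer(self):
        """Called by the inference process once a buffer is ready."""
        self.ready = False
        return self.bufs[self.fill_select ^ 1]
```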
There are 2 constraints in this story: timing and memory. When switching the buffers there must be a 100% guarantee that the inference process is finished when the sampling process passes a full buffer. If not, the sampling process overruns the buffer and sampled data will get lost. When that happens on the ST B-L475E-IOT01A or the Arduino Nano 33 BLE Sense target, running the impulse is aborted and the following error is returned:
The `EI_CLASSIFIER_SLICES_PER_MODEL_WINDOW` macro sets the number of slices that fill a complete model window. The more slices per model window, the smaller the slice size and the more inference cycles on the sampled data, leading to more accurate results. The sampling process uses this macro for the buffer size, where the following rule applies: the bigger the buffer, the longer the sampling cycle. So on targets with lower processing capabilities, we can increase this macro to meet the timing constraint.
Increasing the slice size doubles the volatile memory used (since we use double buffering). On a target with limited volatile memory this could be a problem; in this case you want the slice size to be small.
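The slice-size arithmetic is straightforward; here is a quick sketch, assuming a 1-second, 16 kHz, 16-bit audio window (these example numbers are not from the text above, and the helper name is hypothetical):

```python
def slice_buffer_bytes(window_samples, slices_per_window, bytes_per_sample=2):
    """Size in bytes of ONE slice buffer; double buffering needs two of these."""
    slice_samples = window_samples // slices_per_window
    return slice_samples * bytes_per_sample


# 1 second at 16 kHz, 16-bit samples, 4 slices per window:
one_buf = slice_buffer_bytes(16000, 4)  # 4000 samples * 2 bytes = 8000 bytes
total = 2 * one_buf                     # double buffering doubles the RAM use
```

Raising `slices_per_window` shrinks each buffer (and the RAM needed), at the cost of running the inference loop more often.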
On both the ST B-L475E-IOT01A and Arduino Nano 33 BLE Sense targets, the audio sampling process calls the `audio_buffer_inference_callback()` function when there is data. Here the samples (`inference.n_samples`) are stored in one of the buffers. When the buffer is full, the buffers are switched by toggling `inference.buf_select`. The inference process is signaled by setting the flag `inference.buf_ready`.
The inferencing process then sets the callback function on the `signal_t` structure to reference the selected buffer:
We've already implemented continuous audio sampling on the ST B-L475E-IOT01A and Arduino Nano 33 BLE Sense targets (the firmware for both targets is open source), but here's a guideline for implementing this on your own targets.
Then the classifier is called, which will take the slice of data, run the DSP pipeline over the data, stitch data together, and then classify the data.
The Edge Impulse object detection model (FOMO) is effective at classifying objects and very lightweight (it can run on MCUs). It does not, however, have any object persistence between frames. One common use of computer vision is object counting; to achieve this you will need to add some extra logic when deploying.
This notebook takes you through how to count objects using the Linux deployment block (and provides some pointers on how to achieve similar logic with other firmware deployment options).
Relevant links:
Raw python files for the linux deployment example: https://github.com/edgeimpulse/object-counting-demo
An end-to-end demo for on-device deployment of object counting: https://github.com/edgeimpulse/conveyor-counting-data-synthesis-demo
To run your model locally, you need to deploy to a Linux target in your project. First, enable all Linux targets: head to the deployment screen, click "Linux Boards", then in the following pop-up select "show all Linux deployment options on this page":
Then download the Linux/Mac target relevant to your machine:
Finally, follow the instructions shown as a pop-up to make your .eim file executable (for example for MacOS):
Open a terminal window and navigate to the folder where you downloaded this model.
Mark the model as executable: chmod +x path-to-model.eim
Remove the quarantine flag: xattr -d com.apple.quarantine ./path-to-model.eim
Ensure you have these libraries installed before starting:
(see next heading for running on a webcam)
This program will run object detection on an input video file and count the objects going upwards which pass a threshold (TOP_Y). The sensitivity can be tuned with the number of columns (NUM_COLS) and the DETECT_FACTOR which is the factor of width/height of the object used to determine object permanence between frames.
Ensure you have added the relevant paths to your model file and video file:
modelfile = '/path/to/modelfile.eim'
videofile = '/path/to/video.mp4'
This program will run object detection on a webcam port and count the objects going upwards which pass a threshold (TOP_Y). The sensitivity can be tuned with the number of columns (NUM_COLS) and the DETECT_FACTOR which is the factor of width/height of the object used to determine object permanence between frames.
Ensure you have added the relevant path to your model file (and optionally the camera port):
modelfile = '/path/to/modelfile.eim'
[OPTIONAL] camera_port = '/camera_port'
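The counting idea described above can be sketched independently of the camera pipeline: divide the frame into `NUM_COLS` columns, match detections between frames by column, and count a detection when its centroid crosses `TOP_Y` moving upward (image y decreases toward the top). This is a simplified stand-in for the demo's width/height `DETECT_FACTOR` matching; the function and constants here are illustrative, not the demo's actual code.

```python
TOP_Y = 50     # counting line, in pixels from the top of the frame
NUM_COLS = 5   # frame divided into vertical columns for crude matching
FRAME_W = 320  # assumed frame width in pixels


def count_upward_crossings(prev_centroids, curr_centroids):
    """Count objects whose centroid moved from below TOP_Y to at/above it
    between two consecutive frames, matching objects by column."""
    col_w = FRAME_W // NUM_COLS
    prev_by_col = {x // col_w: y for x, y in prev_centroids}
    count = 0
    for x, y in curr_centroids:
        prev_y = prev_by_col.get(x // col_w)
        # In image coordinates, smaller y is higher in the frame.
        if prev_y is not None and prev_y > TOP_Y >= y:
            count += 1
    return count
```

In the real demos this check runs on every frame and the per-frame counts are accumulated into a running total.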
While running object counting on Linux hardware is fairly simple, it would be more useful to deploy this to one of the firmware targets. The method varies per target, but broadly speaking it is simple to add the object counting logic into existing firmware.
Here are the main steps:
This can be found on our GitHub pages, e.g. https://github.com/edgeimpulse/firmware-arduino-nicla-vision
You'll need to replace the "edge-impulse-sdk", "model-parameters" and "tflite-model" folders within the cloned firmware with the ones you've just downloaded for your model.
This will be in a .h or similar file somewhere in the firmware. Likely in the ei_image_nn.h file. It can be found by searching for these lines:
The following lines must be added into the logic in these files (for the code itself see below; a diff is shown for clarity). First, these variables must be instantiated:
Then this logic must be inserted into the bounding box printing logic here:
Full code example for nicla vision (src/inference/ei_run_camera_impulse.cpp):
Follow the instructions in the README.md file for the firmware repo you have been working in.
Use the command below to see on-device inference (follow the local link to see bounding boxes and count output in the browser)