Multi-impulse
Once you have successfully trained or imported a model, you can use Edge Impulse to download a C++ library that bundles both your signal processing and your machine learning model. Until recently, we could only run one impulse on MCUs.
Feature under development
Please note that this method is still being integrated into the Studio and has several limitations. This tutorial is for advanced users only, so we will provide limited support on the forum until the integration is complete. If you have subscribed to an Enterprise Plan, you can contact our customer success or solution engineering team.
In this tutorial, we will see how to run multiple impulses using the downloaded C++ libraries of two different projects.
We have put together a custom deployment block that automates the whole process and provides a C++ library that can be compiled and run as a standalone application.
On this page, we will explain the high-level concepts of how to merge two impulses. Feel free to look at the code to gain a deeper understanding.
Multi-impulse vs multi-model vs sensor fusion
Running multi-impulse refers to running two separate projects (different data, different DSP blocks and different models) on the same target. It will require modifying some files in the EI-generated SDKs.
Running multi-model refers to running two different models (same data, same DSP block but different tflite models) on the same target. See how to run a motion classifier model and an anomaly detection model on the same device in this tutorial.
Sensor fusion refers to the process of combining data from different types of sensors to give more information to the neural network. See how to use sensor fusion in this tutorial.
Also see this video (starting at minute 13):
Make sure you have at least two impulses fully trained.
You can use one of the following examples:
The source code and the generator script can be found here.
By default, the quantized version is used when downloading the C++ libraries. To use float32, add the --float32 option as an argument.
Similarly, by default the EON-compiled model is used. If you want to use full TFLite instead, add the --full-tflite option and be sure to include a recent version of TensorFlow Lite, compiled for your device architecture, in the root of your project in a folder named tensorflow-lite.
If you need a mix of quantized and float32 models, you can look at the dzip.download_model function call in generate.py and change the code accordingly.
By default, the block downloads cached versions of the builds. You can force new builds using the --force-build option.
Retrieve the API keys of your projects and run the generate.py command as follows:
python generate.py --out-directory output --api-keys ei_0b0e...,ei_acde... --quantization-map <0/1>,<0/1>
Build the container: docker build -t multi-impulse .
Then run: docker run --rm -it -v $PWD:/home multi-impulse --api-keys ei_0b0e...,ei_acde...
Initialize the custom block - select Deployment block and Library when prompted: edge-impulse-blocks init
Push the block: edge-impulse-blocks push
Then go to your Organization and edit the deployment block with:
CLI arguments: --api-keys ei_0b0e...,ei_acde...
Privileged mode: Enabled
See Edge Impulse Studio -> Organizations -> Custom blocks -> Deployment blocks documentation for more details about custom deployment blocks.
If you have a look at the generate.py script, it streamlines the process of generating a C++ library from multiple impulses through several steps:
Library Download and Extraction:
If the script detects that the necessary projects are not already present locally, it initiates the download of C++ libraries required for edge deployment. These libraries are fetched using API keys provided by the user.
Libraries are downloaded and extracted into a temporary directory. If the user specifies a custom temporary directory, it's used; otherwise, a temporary directory is created.
Customization of Files:
For each project's library, the script performs several modifications:
At the file name level:
It adds a project-specific suffix to certain patterns in compiled files within the tflite-model directory. This customization ensures that each project's files are unique. The renamed files are then copied to the target directory, namely the first project's directory.
At the function name level:
It edits the functions in model_variables.h by adding the project-specific suffix to various patterns. This step ensures that model parameters are correctly associated with each project.
Merging the Projects:
model_variables.h is merged into the first project's directory to consolidate model information.
The script saves the intersection of lines between the trained_model_ops_define.h files for the different projects, ensuring consistency.
Copying Templates:
The script copies template files from a templates directory to the target directory. The templates include files with code structures and placeholders for customization; they are adapted from the example-standalone-inferencing example available on GitHub.
Generating Custom Code:
The script retrieves the impulse IDs from model_variables.h for each project; these IDs are used to reference each project's impulse in the generated code.
Custom code is generated for each project, including functions to get signal data, define raw features, and run the classifier.
This custom code is inserted into the main.cpp file of each project at specific locations (see the sketch after these steps).
Archiving for Deployment:
Finally, the script archives the target directory, creating a zip file ready for deployment. This zip file contains all the customized files and code necessary for deploying machine learning models on edge devices.
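To make step 5 more concrete, here is a rough sketch of the kind of per-project code the script inserts. All identifiers below are hypothetical (the real generator derives names from your project IDs), and the exact classifier entry point depends on the generated template:

```cpp
// Sketch only: hypothetical per-project helpers of the kind the script
// inserts into main.cpp. Real identifiers carry a project-specific suffix.
#include <cstring>

#include "edge-impulse-sdk/classifier/ei_run_classifier.h"

// Raw features for one project, pasted from a test sample in the Studio.
static const float features_project_a[] = {
    0.0f /* paste the test sample's raw features here */
};

// Callback that hands slices of the raw feature buffer to the SDK.
static int get_signal_data_project_a(size_t offset, size_t length, float *out_ptr) {
    memcpy(out_ptr, features_project_a + offset, length * sizeof(float));
    return EIDSP_OK;
}

static void run_impulse_project_a(void) {
    signal_t signal;
    signal.total_length = sizeof(features_project_a) / sizeof(features_project_a[0]);
    signal.get_data = &get_signal_data_project_a;

    ei_impulse_result_t result = { 0 };
    // The merged library invokes the classifier once per impulse; the exact
    // call generated for each project may differ from this plain run_classifier.
    EI_IMPULSE_ERROR err = run_classifier(&signal, &result, false);
    ei_printf("run_classifier for project A returned: %d\r\n", err);
}
```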
When changing between projects and running generate.py locally, you may need to include the --force-build option to ensure the correctness of the combined library.
Now, to test the generated library:
Download and unzip your Edge Impulse C++ multi-impulse library into a directory
Copy a test sample's raw features into the features[] array in source/main.cpp
Enter make -j in this directory to compile the project. If you encounter an out-of-memory error, try make -j4 (replace 4 with the number of cores available)
Enter ./build/app to run the application
Compare the output predictions to the predictions of the test sample in the Edge Impulse Studio
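If you want to inspect the output programmatically, a minimal helper like the one below prints each label and score for comparison with the Studio. This is a sketch assuming the standard ei_impulse_result_t fields and the EI_CLASSIFIER_LABEL_COUNT define from the merged metadata:

```cpp
// Minimal sketch: print each label and score so the output can be compared
// with the Studio's test results. Assumes a populated ei_impulse_result_t.
#include "edge-impulse-sdk/classifier/ei_run_classifier.h"

static void print_results(const ei_impulse_result_t *result) {
    for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
        ei_printf("  %s: %.5f\r\n",
                  result->classification[ix].label,
                  result->classification[ix].value);
    }
#if EI_CLASSIFIER_HAS_ANOMALY
    ei_printf("  anomaly score: %.3f\r\n", result->anomaly);
#endif
}
```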
Want to add your own business logic?
You can change the template used in step 4 to use another compilation method, implement your own sampling strategy, and customize how the inference results are handled in step 5 (apply post-processing, send results somewhere else, trigger actions, etc.).
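As an illustration, a post-processing hook could look like the sketch below. The "alarm" label, the 0.8 threshold, and the trigger_alarm() helper are placeholders, not part of the generated library:

```cpp
// Sketch: hypothetical post-processing of one impulse's result. Replace the
// label, threshold and trigger_alarm() with your own business logic.
#include <cstring>

#include "edge-impulse-sdk/classifier/ei_run_classifier.h"

static void trigger_alarm(void) {
    // e.g. toggle a GPIO, publish over MQTT, push to a message queue...
    ei_printf("ALARM!\r\n");
}

static void handle_result(const ei_impulse_result_t *result) {
    for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
        if (strcmp(result->classification[ix].label, "alarm") == 0 &&
            result->classification[ix].value > 0.8f) {
            trigger_alarm();
        }
    }
}
```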
General limitations:
The custom ML accelerator deployments are unlikely to work (TDA4VM, DRPAI, MemoryX, Brainchip).
The custom TFLite kernels (ESP NN, Silabs MVP, Arc MLI) should work, but may require some additional effort; for example, on the ESP32 you may need to statically allocate the arena for the image model.
In general, running multiple impulses on an MCU can be challenging due to limited processing power, memory, and other hardware constraints. Make sure to thoroughly evaluate the capabilities and limitations of your specific MCU and consider the resource requirements of the impulses before attempting to run them concurrently.
Use case specific limitations:
The model_metadata.h comes from the project whose API key is listed first. This means some #define statements might be missing or conflicting.
Object detection: If you want to run at least one object detection project, make sure to use that project's API key first! This will set #define EI_CLASSIFIER_OBJECT_DETECTION 1 and, where applicable, #define EI_HAS_FOMO 1. Note that you can overwrite them manually, but it requires an extra step.
Anomaly detection: If your anomaly detection project's API key is not in the first position, the model-parameter/anomaly_metadata.h file will not be included.
Visual anomaly detection AND time-series anomaly detection (K-Means or GMM): It is currently not possible to combine two different anomaly detection models. The #define EI_CLASSIFIER_HAS_ANOMALY statement expects ONLY one of the following arguments:
If you see the following segmentation fault, make sure the trained_model_ops_define.h or tflite_resolver.h files have been properly merged:
If you see an error like the following, you probably used the same API key twice:
Make sure you use distinct projects.
When we first wrote this tutorial, we explained how to merge two impulses manually. This process is now deprecated; due to recent changes in our C++ SDK, some files and functions may have been renamed.