Once you have successfully trained or imported a model, you can use Edge Impulse to download a C++ library that bundles both your signal processing and your machine learning model. Until recently, we could only run one impulse on MCUs.
Feature under development
Please note that this method is still under integration in the studio and has not yet been fully tested on all targets. This tutorial is for advanced users only. Thus, we will provide limited support on the forum until the integration is completed. If you are interested in using it for an enterprise project, please sign up for our FREE Enterprise Trial and our solution engineers can work with you on the integration.
In this tutorial, we will see how to run multiple impulses using the downloaded C++ libraries of two different projects.
We have put together a custom deployment block that will automate all the processes and provide a C++ library that can be compiled and run as a standalone.
On this page, we will explain the high-level concepts of how to merge two impulses. Feel free to look at the code to gain a deeper understanding. Alternatively, when we first wrote this tutorial, we explained how to merge two impulses manually; we have kept this process in the Manual procedure section, but due to recent changes in our C++ SDK, some files and functions may have been renamed.
Multimodal vs multi-impulse vs multi-model vs sensor fusion
Multimodal: When discussing multi-impulse, it's important to also understand multimodal models. These models integrate multiple types of data (modalities) such as text, images, audio, and video. By combining these diverse data sources, multimodal models can extract richer features and improve overall model performance. This is similar to sensor fusion but extends beyond sensor data to any type of data that can provide complementary information. This integration helps in creating more robust and versatile AI systems capable of understanding and predicting complex scenarios.
Running multi-impulse refers to running two separate projects (different data, different DSP blocks, and different models) on the same target. It requires modifying some files in the EI-generated SDKs. Since it involves running multiple separate projects with different data and models, it can handle different types of data, making it potentially multimodal. See the multi-impulse tutorial
Running multi-model refers to running two different models (same data, same DSP block but different tflite models) on the same target. It can become multimodal if the models are handling different types of data. See how to run a motion classifier model and an anomaly detection model on the same device in this tutorial.
Sensor fusion refers to the process of combining data from different types of sensors to give more information to the neural network. To extract meaningful information from this data, you can use the same DSP block (like in this tutorial), multiple DSP blocks, or neural network embeddings. Sensor fusion can be considered a form of multimodal integration because it involves combining data from different sensors, which can be seen as different modalities within the sensor data domain. See an example of sensor fusion in the sensor fusion using Embeddings tutorial.
Also see this video (starting at minute 13):
Prerequisites
Make sure you have at least two impulses fully trained.
As an example, we will build an intrusion detection system. We will use a first model to detect glass-breaking sounds; if this sound is detected, we will then classify an image to check whether a person is present. In this tutorial, we will use the following public projects:
Please note that the script works with EON-compiled projects only, and anomaly detection blocks have not been tested.
Modifying the generated libraries and merging them into a single library
If you have a look at the generate.py script, you can see that it streamlines the process of generating a C++ library from multiple impulses through several steps:
Library Download and Extraction:
If the script detects that the necessary projects are not already present locally, it initiates the download of C++ libraries required for edge deployment. These libraries are fetched using API keys provided by the user.
Libraries are downloaded and extracted into a temporary directory. If the user specifies a custom temporary directory, it's used; otherwise, a temporary directory is created.
Customization of Files:
For each project's library, the script performs several modifications:
At the file name level:
It adds a project-specific suffix to certain patterns in compiled files within the tflite-model directory. This customization ensures that each project's files are unique.
Renamed files are then copied to a target directory, mainly the first project's directory.
At the function name level:
It edits the functions in model_variables.h by adding the project-specific suffix to various patterns. This step ensures that model parameters are correctly associated with each project.
Merging the projects
model_variables.h is merged into the first project's directory to consolidate model information.
The script saves the intersection of lines between trained_model_ops_define.h files for different projects, ensuring consistency.
Copying Templates:
The script copies template files from a templates directory to the target directory. These templates provide the code structure and placeholders for customization, adapted from the example-standalone-inferencing example available on GitHub.
Generating Custom Code:
The script retrieves the impulse IDs from model_variables.h for each project; these IDs distinguish the impulses in the generated code.
Custom code is generated for each project, including functions to get signal data, define raw features, and run the classifier.
This custom code is inserted into the main.cpp file of each project at specific locations.
Archiving for Deployment:
Finally, the script archives the target directory, creating a zip file ready for deployment. This zip file contains all the customized files and code necessary for deploying machine learning models on edge devices.
Compiling and running the multi-impulse library
Now, to test the generated library:
Download and unzip your Edge Impulse C++ multi-impulse library into a directory
Copy a test sample's raw features into the features[] array in source/main.cpp
Enter make -j in this directory to compile the project. If you encounter an out-of-memory (OOM) error, try make -j4 (replace 4 with the number of cores available)
Enter ./build/app to run the application
Compare the output predictions to the predictions of the test sample in the Edge Impulse Studio
Want to add your own business logic?
You can change the template you want to use in step 4 to use another compilation method, implement your custom sampling strategy and how to handle the inference results in step 5 (apply post-processing, send results somewhere else, trigger actions, etc.).
Manual procedure
Some files and function names have changed
The general concepts remain valid but due to recent changes in our C++ inferencing SDK, some files and function names have changed.
Download the impulses from your projects
Head to your projects' deployment pages and download the C++ libraries:
Make sure to select the same model versions (EON-Compiled enabled/disabled and int8/float32) for your projects.
Extract the two archives into a directory (multi-impulse for example).
Rename the tflite model files
Go to the tflite-model directory in your extracted archives and rename the following files by post-fixing them with the name of the project:
for EON compiled projects: tflite_model_compiled.cpp/tflite_model_compiled.h.
for non-EON-compiled projects: tflite-trained.cpp/tflite-trained.h.
Rename the variables in the tflite-model directory
Rename the variables (EON model functions, such as trained_model_input etc or tflite model array names) by post-fixing them with the name of the project.
e.g., change the trained_model_compiled_audio.h from:
#ifndef trained_model_GEN_H
#define trained_model_GEN_H
#include "edge-impulse-sdk/tensorflow/lite/c/common.h"
// Sets up the model with init and prepare steps.
TfLiteStatus trained_model_init( void*(*alloc_fnc)(size_t,size_t) );
// Returns the input tensor with the given index.
TfLiteStatus trained_model_input(int index, TfLiteTensor* tensor);
// Returns the output tensor with the given index.
TfLiteStatus trained_model_output(int index, TfLiteTensor* tensor);
// Runs inference for the model.
TfLiteStatus trained_model_invoke();
//Frees memory allocated
TfLiteStatus trained_model_reset( void (*free)(void* ptr) );
// Returns the number of input tensors.
inline size_t trained_model_inputs() {
return 1;
}
// Returns the number of output tensors.
inline size_t trained_model_outputs() {
return 1;
}
#endif
to:
#ifndef trained_model_audio_GEN_H
#define trained_model_audio_GEN_H
#include "edge-impulse-sdk/tensorflow/lite/c/common.h"
// Sets up the model with init and prepare steps.
TfLiteStatus trained_model_audio_init( void*(*alloc_fnc)(size_t,size_t) );
// Returns the input tensor with the given index.
TfLiteStatus trained_model_audio_input(int index, TfLiteTensor* tensor);
// Returns the output tensor with the given index.
TfLiteStatus trained_model_audio_output(int index, TfLiteTensor* tensor);
// Runs inference for the model.
TfLiteStatus trained_model_audio_invoke();
//Frees memory allocated
TfLiteStatus trained_model_audio_reset( void (*free)(void* ptr) );
// Returns the number of input tensors.
inline size_t trained_model_audio_inputs() {
return 1;
}
// Returns the number of output tensors.
inline size_t trained_model_audio_outputs() {
return 1;
}
#endif
Tip: Use your IDE's "Find and replace" feature.
Here is a list of the files that need to be modified (the names may change if not compiled with the EON compiler):
Create a new directory (merged-impulse for example). Copy the content of one project into this new directory (audio for example). Copy the content of the tflite-model directory from the other project (image) inside the newly created merged-impulse/tflite-model.
The structure of this new directory should look like the following:
The section that should be copied is from const char* ei_classifier_inferencing_categories... to the line before const ei_impulse_t ei_default_impulse = impulse_<ProjectID>_<version>.
Make sure to leave only one const ei_impulse_t ei_default_impulse = impulse_233502_3; line; this defines which of your impulses is the default one.
Merge the trained_model_ops_define.h or tflite-resolver.h files
Make sure the macros EI_TFLITE_DISABLE_... are a COMBINATION of the ones present in two deployments.
For EON-compiled projects:
E.g. if #define EI_TFLITE_DISABLE_SOFTMAX_IN_U8 1 is present in one deployment and absent in the other, it should be ABSENT in the combined trained_model_ops_define.h.
For non-EON-Compiled projects:
E.g. if resolver.AddFullyConnected(); is present in one deployment and absent in the other, it should be PRESENT in the combined tflite-resolver.h. Remember to change the length of the resolver array if necessary.
In this example, here are the lines to delete:
Prepare the c++ application
Clone this repository: https://github.com/edgeimpulse/example-standalone-inferencing-multi-impulse
Copy the content of the merged-impulse directory to example-standalone-inferencing-multi-impulse (replace the files and directories sharing the same name).
Rename the variables in source/main.cpp
Edit the source/main.cpp file and replace the callback function names and the features buffers.
Note: run_classifier takes the impulse pointer as its first argument
Copy the raw features from the studio Live Classification page.
Compile and run
Enter make -j in this directory to compile the project.
Enter ./build/app to run the application.
Compare the output predictions to the predictions of the test sample in the Edge Impulse Studio.
Enter rm -f build/app && make clean to clean the project.
Congrats, you can now run multiple impulses!
Limitations
The custom ML accelerator deployments are unlikely to work (TDA4VM, DRPAI, MemoryX, Brainchip).
The custom tflite kernels (ESP NN, Silabs MVP, Arc MLI) should work, but may require some additional work. E.g., for ESP32 you may need to statically allocate the arena for the image model.
In general, running multiple impulses on an MCU can be challenging due to limited processing power, memory, and other hardware constraints. Make sure to thoroughly evaluate the capabilities and limitations of your specific MCU and consider the resource requirements of the impulses before attempting to run them concurrently.