Multi-impulse
Once you have successfully trained or imported a model, you can use Edge Impulse to download a C++ library that bundles both your signal processing and your machine learning model. Until recently, we could only run one impulse on MCUs.
Feature under development
Please note that this method is still being integrated into the Studio and has not yet been fully tested on all targets. This tutorial is for advanced users only, so we will provide limited support on the forum until the integration is completed. If you are interested in using it for an enterprise project, please check our pricing page and contact us directly; our solution engineers can work with you on the integration.
In this tutorial, we will see how to run multiple impulses using the downloaded C++ libraries of two different projects.
As an example, we will build an intrusion detection system: a first model detects glass-breaking sounds, and if that sound is detected, a second model then classifies an image to check whether a person is present.
You can have a look at this GitHub repository to make sure your directory structure, files, and variables are correct.
Multi-impulse vs multi-model vs sensor fusion
Running multi-impulse refers to running two separate projects (different data, different DSP blocks and different models) on the same target. It will require modifying some files in the EI-generated SDKs.
Running multi-model refers to running two different models (same data, same DSP block but different tflite models) on the same target. See how to run a motion classifier model and an anomaly detection model on the same device in this tutorial.
Sensor fusion refers to the process of combining data from different types of sensors to give more information to the neural network. See how to use sensor fusion in this tutorial.
Make sure you have two impulses fully trained. In this tutorial, we will use the following public projects:
Head to your projects' deployment pages and download the C++ libraries:

Deployment page of the glass-breaking project
Make sure to select the same model versions (EON-Compiled enabled/disabled and int8/float32) for your projects.
Extract the two archives into a directory (multi-impulse, for example).

Rename the TFLite model files: go to the tflite-model directory in each extracted archive and rename the following files by post-fixing them with the name of the project:
- for EON-compiled projects: trained_model_compiled.cpp / trained_model_compiled.h
- for non-EON-compiled projects: tflite-trained.cpp / tflite-trained.h
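As a sketch, these renames can be scripted. The audio and image directory names match this tutorial's two projects, but the setup lines below only create empty stand-in files for illustration; run the mv loop against your real extracted archives:

```shell
# Illustrative setup only: empty stand-ins for the two extracted (EON-compiled) archives.
mkdir -p audio/tflite-model image/tflite-model
touch audio/tflite-model/trained_model_compiled.cpp audio/tflite-model/trained_model_compiled.h
touch image/tflite-model/trained_model_compiled.cpp image/tflite-model/trained_model_compiled.h

# Post-fix each model source file with its project name.
for proj in audio image; do
  for ext in cpp h; do
    mv "$proj/tflite-model/trained_model_compiled.$ext" \
       "$proj/tflite-model/trained_model_compiled_$proj.$ext"
  done
done
ls audio/tflite-model image/tflite-model
```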
Original structure:
> multi-impulse % tree -L 3
.
├── audio
│ ├── CMakeLists.txt
│ ├── README.txt
│ ├── edge-impulse-sdk
│ │ ├── CMSIS
│ │ ├── LICENSE
│ │ ├── LICENSE-apache-2.0.txt
│ │ ├── README.md
│ │ ├── classifier
│ │ ├── cmake
│ │ ├── dsp
│ │ ├── porting
│ │ ├── sources.txt
│ │ ├── tensorflow
│ │ └── third_party
│ ├── model-parameters
│ │ ├── model_metadata.h
│ │ └── model_variables.h
│ └── tflite-model
│ ├── trained_model_compiled.cpp
│ ├── trained_model_compiled.h
│ └── trained_model_ops_define.h
└── image
├── CMakeLists.txt
├── README.txt
├── edge-impulse-sdk
│ ├── CMSIS
│ ├── LICENSE
│ ├── LICENSE-apache-2.0.txt
│ ├── README.md
│ ├── classifier
│ ├── cmake
│ ├── dsp
│ ├── porting
│ ├── sources.txt
│ ├── tensorflow
│ └── third_party
├── model-parameters
│ ├── model_metadata.h
│ └── model_variables.h
└── tflite-model
├── trained_model_compiled.cpp
├── trained_model_compiled.h
└── trained_model_ops_define.h
22 directories, 22 files
New structure after renaming the files:
> multi-impulse % tree -L 3
.
├── audio
│ ├── CMakeLists.txt
│ ├── README.txt
│ ├── edge-impulse-sdk
│ │ ├── CMSIS
│ │ ├── LICENSE
│ │ ├── LICENSE-apache-2.0.txt
│ │ ├── README.md
│ │ ├── classifier
│ │ ├── cmake
│ │ ├── dsp
│ │ ├── porting
│ │ ├── sources.txt
│ │ ├── tensorflow
│ │ └── third_party
│ ├── model-parameters
│ │ ├── model_metadata.h
│ │ └── model_variables.h
│ └── tflite-model
│ ├── trained_model_compiled_audio.cpp
│ ├── trained_model_compiled_audio.h
│ └── trained_model_ops_define.h
└── image
├── CMakeLists.txt
├── README.txt
├── edge-impulse-sdk
│ ├── CMSIS
│ ├── LICENSE
│ ├── LICENSE-apache-2.0.txt
│ ├── README.md
│ ├── classifier
│ ├── cmake
│ ├── dsp
│ ├── porting
│ ├── sources.txt
│ ├── tensorflow
│ └── third_party
├── model-parameters
│ ├── model_metadata.h
│ └── model_variables.h
└── tflite-model
├── trained_model_compiled_image.cpp
├── trained_model_compiled_image.h
└── trained_model_ops_define.h
22 directories, 22 files
Rename the variables (the EON model functions, such as trained_model_input, or the TFLite model array names) by post-fixing them with the name of the project.

e.g.: Change trained_model_compiled_audio.h from:

#ifndef trained_model_GEN_H
#define trained_model_GEN_H
#include "edge-impulse-sdk/tensorflow/lite/c/common.h"
// Sets up the model with init and prepare steps.
TfLiteStatus trained_model_init( void*(*alloc_fnc)(size_t,size_t) );
// Returns the input tensor with the given index.
TfLiteStatus trained_model_input(int index, TfLiteTensor* tensor);
// Returns the output tensor with the given index.
TfLiteStatus trained_model_output(int index, TfLiteTensor* tensor);
// Runs inference for the model.
TfLiteStatus trained_model_invoke();
//Frees memory allocated
TfLiteStatus trained_model_reset( void (*free)(void* ptr) );
// Returns the number of input tensors.
inline size_t trained_model_inputs() {
return 1;
}
// Returns the number of output tensors.
inline size_t trained_model_outputs() {
return 1;
}
#endif
to:

#ifndef trained_model_audio_GEN_H
#define trained_model_audio_GEN_H
#include "edge-impulse-sdk/tensorflow/lite/c/common.h"
// Sets up the model with init and prepare steps.
TfLiteStatus trained_model_audio_init( void*(*alloc_fnc)(size_t,size_t) );
// Returns the input tensor with the given index.
TfLiteStatus trained_model_audio_input(int index, TfLiteTensor* tensor);
// Returns the output tensor with the given index.
TfLiteStatus trained_model_audio_output(int index, TfLiteTensor* tensor);
// Runs inference for the model.
TfLiteStatus trained_model_audio_invoke();
//Frees memory allocated
TfLiteStatus trained_model_audio_reset( void (*free)(void* ptr) );
// Returns the number of input tensors.
inline size_t trained_model_audio_inputs() {
return 1;
}
// Returns the number of output tensors.
inline size_t trained_model_audio_outputs() {
return 1;
}
#endif
Tip: Use your IDE's "find and replace" feature.
Here is a list of the files that need to be modified (the names may change if not compiled with the EON compiler):
tflite-model/trained_model_compiled_<project1|2>.h
tflite-model/trained_model_compiled_<project1|2>.cpp
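If you prefer the command line over an IDE, sed can post-fix the symbols in those model sources. The demo_model.h file below is a made-up stand-in, and this assumes every trained_model_ symbol in the two renamed files should get the post-fix:

```shell
# Demo stand-in for tflite-model/trained_model_compiled_audio.h (not the real file).
printf 'TfLiteStatus trained_model_init( void*(*alloc_fnc)(size_t,size_t) );\n' > demo_model.h
printf 'TfLiteStatus trained_model_invoke();\n' >> demo_model.h

# Post-fix every trained_model_ symbol with the project name.
sed -i 's/trained_model_/trained_model_audio_/g' demo_model.h
cat demo_model.h
```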

Visual Studio find and replace
Be careful when using "find and replace" in your IDE: NOT all variables that look like _model_ need to be replaced.

Example for the audio project:
#ifndef _EI_CLASSIFIER_MODEL_VARIABLES_H_
#define _EI_CLASSIFIER_MODEL_VARIABLES_H_
#include <stdint.h>
#include "model_metadata.h"
#include "tflite-model/trained_model_compiled_audio.h"
#include "edge-impulse-sdk/classifier/ei_model_types.h"
#include "edge-impulse-sdk/classifier/inferencing_engines/engines.h"
const char* ei_classifier_inferencing_categories_audio[] = { "Background", "Glass_Breaking" };
uint8_t ei_dsp_config_3_axes_audio[] = { 0 };
const uint32_t ei_dsp_config_3_axes_size_audio = 1;
ei_dsp_config_mfe_t ei_dsp_config_3_audio = {
3, // uint32_t blockId
3, // int implementationVersion
1, // int length of axes
0.02f, // float frame_length
0.01f, // float frame_stride
40, // int num_filters
256, // int fft_length
300, // int low_frequency
0, // int high_frequency
101, // int win_size
-52 // int noise_floor_db
};
const size_t ei_dsp_blocks_size_audio = 1;
ei_model_dsp_t ei_dsp_blocks_audio[ei_dsp_blocks_size_audio] = {
{ // DSP block 3
3960,
&extract_mfe_features,
(void*)&ei_dsp_config_3_audio,
ei_dsp_config_3_axes_audio,
ei_dsp_config_3_axes_size_audio
}
};
const ei_config_tflite_eon_graph_t ei_config_tflite_graph_audio_0 = {
.implementation_version = 1,
.model_init = &trained_model_audio_init,
.model_invoke = &trained_model_audio_invoke,
.model_reset = &trained_model_audio_reset,
.model_input = &trained_model_audio_input,
.model_output = &trained_model_audio_output,
};
const ei_learning_block_config_tflite_graph_t ei_learning_block_config_audio_0 = {
.implementation_version = 1,
.block_id = 0,
.object_detection = 0,
.object_detection_last_layer = EI_CLASSIFIER_LAST_LAYER_UNKNOWN,
.output_data_tensor = 0,
.output_labels_tensor = 1,
.output_score_tensor = 2,
.graph_config = (void*)&ei_config_tflite_graph_audio_0
};
const size_t ei_learning_blocks_size_audio = 1;
const ei_learning_block_t ei_learning_blocks_audio[ei_learning_blocks_size_audio] = {
{
&run_nn_inference,
(void*)&ei_learning_block_config_audio_0,
},
};
const ei_model_performance_calibration_t ei_calibration_audio = {
1, /* integer version number */
false, /* has configured performance calibration */
(int32_t)(EI_CLASSIFIER_RAW_SAMPLE_COUNT / ((EI_CLASSIFIER_FREQUENCY > 0) ? EI_CLASSIFIER_FREQUENCY : 1)) * 1000, /* Model window */
0.8f, /* Default threshold */
(int32_t)(EI_CLASSIFIER_RAW_SAMPLE_COUNT / ((EI_CLASSIFIER_FREQUENCY > 0) ? EI_CLASSIFIER_FREQUENCY : 1)) * 500, /* Half of model window */
0 /* Don't use flags */
};
const ei_impulse_t impulse_233502_3 = {
.project_id = 233502,
.project_owner = "Edge Impulse Inc.",
.project_name = "Glass breaking - audio classification",
.deploy_version = 3,
.nn_input_frame_size = 3960,
.raw_sample_count = 16000,
.raw_samples_per_frame = 1,
.dsp_input_frame_size = 16000 * 1,
.input_width = 0,
.input_height = 0,
.input_frames = 0,
.interval_ms = 0.0625,
.frequency = 16000,
.dsp_blocks_size = ei_dsp_blocks_size_audio,
.dsp_blocks = ei_dsp_blocks_audio,
.object_detection = 0,
.object_detection_count = 0,
.object_detection_threshold = 0,
.object_detection_last_layer = EI_CLASSIFIER_LAST_LAYER_UNKNOWN,
.fomo_output_size = 0,
.tflite_output_features_count = 2,
.learning_blocks_size = ei_learning_blocks_size_audio,
.learning_blocks = ei_learning_blocks_audio,
.inferencing_engine = EI_CLASSIFIER_TFLITE,
.quantized = 1,
.compiled = 1,
.sensor = EI_CLASSIFIER_SENSOR_MICROPHONE,
.fusion_string = "audio",
.slice_size = (16000/4),
.slices_per_model_window = 4,
.has_anomaly = 0,
.label_count = 2,
.calibration = ei_calibration_audio,
.categories = ei_classifier_inferencing_categories_audio
};
const ei_impulse_t ei_default_impulse = impulse_233502_3;
#endif // _EI_CLASSIFIER_MODEL_VARIABLES_H_
Example for the image project:
#ifndef _EI_CLASSIFIER_MODEL_VARIABLES_H_
#define _EI_CLASSIFIER_MODEL_VARIABLES_H_
#include <stdint.h>
#include "model_metadata.h"
#include "tflite-model/trained_model_compiled_image.h"
#include "edge-impulse-sdk/classifier/ei_model_types.h"
#include "edge-impulse-sdk/classifier/inferencing_engines/engines.h"
const char* ei_classifier_inferencing_categories_image[] = { "person", "unknown" };
uint8_t ei_dsp_config_3_axes_image[] = { 0 };
const uint32_t ei_dsp_config_3_axes_size_image = 1;
ei_dsp_config_image_t ei_dsp_config_3_image = {
3, // uint32_t blockId
1, // int implementationVersion
1, // int length of axes
"RGB" // select channels
};
const size_t ei_dsp_blocks_size_image = 1;
ei_model_dsp_t ei_dsp_blocks_image[ei_dsp_blocks_size_image] = {
{ // DSP block 3
27648,
&extract_image_features,
(void*)&ei_dsp_config_3_image,
ei_dsp_config_3_axes_image,
ei_dsp_config_3_axes_size_image
}
};
const ei_config_tflite_eon_graph_t ei_config_tflite_graph_image_0 = {
.implementation_version = 1,
.model_init = &trained_model_image_init,
.model_invoke = &trained_model_image_invoke,
.model_reset = &trained_model_image_reset,
.model_input = &trained_model_image_input,
.model_output = &trained_model_image_output,
};
const ei_learning_block_config_tflite_graph_t ei_learning_block_config_image_0 = {
.implementation_version = 1,
.block_id = 0,
.object_detection = 0,
.object_detection_last_layer = EI_CLASSIFIER_LAST_LAYER_UNKNOWN,
.output_data_tensor = 0,
.output_labels_tensor = 1,
.output_score_tensor = 2,
.graph_config = (void*)&ei_config_tflite_graph_image_0
};
const size_t ei_learning_blocks_size_image = 1;
const ei_learning_block_t ei_learning_blocks_image[ei_learning_blocks_size_image] = {
{
&run_nn_inference,
(void*)&ei_learning_block_config_image_0,
},
};
const ei_model_performance_calibration_t ei_calibration_image = {
1, /* integer version number */
false, /* has configured performance calibration */
(int32_t)(EI_CLASSIFIER_RAW_SAMPLE_COUNT / ((EI_CLASSIFIER_FREQUENCY > 0) ? EI_CLASSIFIER_FREQUENCY : 1)) * 1000, /* Model window */
0.8f, /* Default threshold */
(int32_t)(EI_CLASSIFIER_RAW_SAMPLE_COUNT / ((EI_CLASSIFIER_FREQUENCY > 0) ? EI_CLASSIFIER_FREQUENCY : 1)) * 500, /* Half of model window */
0 /* Don't use flags */
};
const ei_impulse_t impulse_233515_5 = {
.project_id = 233515,
.project_owner = "Edge Impulse Inc.",
.project_name = "Person vs unknown - image classification",
.deploy_version = 5,
.nn_input_frame_size = 27648,
.raw_sample_count = 9216,
.raw_samples_per_frame = 1,
.dsp_input_frame_size = 9216 * 1,
.input_width = 96,
.input_height = 96,
.input_frames = 1,
.interval_ms = 1,
.frequency = 0,
.dsp_blocks_size = ei_dsp_blocks_size_image,
.dsp_blocks = ei_dsp_blocks_image,
.object_detection = 0,
.object_detection_count = 0,
.object_detection_threshold = 0,
.object_detection_last_layer = EI_CLASSIFIER_LAST_LAYER_UNKNOWN,
.fomo_output_size = 0,
.tflite_output_features_count = 2,
.learning_blocks_size = ei_learning_blocks_size_image,
.learning_blocks = ei_learning_blocks_image,
.inferencing_engine = EI_CLASSIFIER_TFLITE,
.quantized = 1,
.compiled = 1,
.sensor = EI_CLASSIFIER_SENSOR_CAMERA,
.fusion_string = "image",
.slice_size = (9216/4),
.slices_per_model_window = 4,
.has_anomaly = 0,
.label_count = 2,
.calibration = ei_calibration_image,
.categories = ei_classifier_inferencing_categories_image
};
const ei_impulse_t ei_default_impulse = impulse_233515_5;
#endif // _EI_CLASSIFIER_MODEL_VARIABLES_H_
Create a new directory (merged-impulse, for example). Copy the content of one project into this new directory (audio, for example). Then copy the content of the tflite-model directory from the other project (image) into the newly created merged-impulse/tflite-model. Post-fix the two trained_model_ops_define.h files with the project names as well; their contents will be combined in a later step.

The structure of this new directory should look like the following:
> merged-impulse % tree -L 2
.
├── CMakeLists.txt
├── README.txt
├── edge-impulse-sdk
│ ├── CMSIS
│ ├── LICENSE
│ ├── LICENSE-apache-2.0.txt
│ ├── README.md
│ ├── classifier
│ ├── cmake
│ ├── dsp
│ ├── porting
│ ├── sources.txt
│ ├── tensorflow
│ └── third_party
├── model-parameters
│ ├── model_metadata.h
│ └── model_variables.h
└── tflite-model
├── trained_model_compiled_audio.cpp
├── trained_model_compiled_audio.h
├── trained_model_compiled_image.cpp
├── trained_model_compiled_image.h
├── trained_model_ops_define_audio.h
└── trained_model_ops_define_image.h
10 directories, 14 files
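The copy steps above can be sketched as a script. The directory names follow this tutorial, but the setup lines only create a few empty placeholder files for illustration; run the mkdir/cp lines against your real projects:

```shell
# Illustrative setup only: minimal stand-ins for the two renamed projects.
mkdir -p audio/tflite-model image/tflite-model
touch audio/CMakeLists.txt
touch audio/tflite-model/trained_model_compiled_audio.cpp audio/tflite-model/trained_model_ops_define_audio.h
touch image/tflite-model/trained_model_compiled_image.cpp image/tflite-model/trained_model_ops_define_image.h

# Start from one project, then pull in the other project's model files.
mkdir -p merged-impulse
cp -r audio/. merged-impulse/
cp -r image/tflite-model/. merged-impulse/tflite-model/
ls merged-impulse/tflite-model
```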
Copy the necessary variables and structs from the previously updated image/model-parameters/model_variables.h into merged-impulse/model-parameters/model_variables.h (note that, despite the file name, the ei_impulse_t definitions shown above live in model_variables.h).

To do so, include both of these lines in the #include section:

#include "tflite-model/trained_model_compiled_audio.h"
#include "tflite-model/trained_model_compiled_image.h"

The section that should be copied runs from const char* ei_classifier_inferencing_categories... to the line before const ei_impulse_t ei_default_impulse = impulse_<ProjectID>_<version>.

Make sure to leave only one const ei_impulse_t ei_default_impulse = impulse_233502_3; this defines which of your impulses is the default one.

Make sure the EI_TFLITE_DISABLE_... macros are a COMBINATION of the ones present in the two deployments.

For EON-compiled projects: e.g. if #define EI_TFLITE_DISABLE_SOFTMAX_IN_U8 1 is present in one deployment and absent in the other, it should be ABSENT in the combined trained_model_ops_define.h (a kernel may only stay disabled if neither model needs it).

For non-EON-compiled projects: e.g. if resolver.AddFullyConnected(); is present in one deployment and absent in the other, it should be PRESENT in the combined tflite-resolver.h. Remember to increase the length of the resolver array if necessary.

In this example, here are the lines to delete:

diff trained_model_ops_define.h
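To illustrate the combination rule, a hypothetical merged trained_model_ops_define.h fragment could look like the following (EI_TFLITE_DISABLE_SOFTMAX_IN_U8 comes from the example above; the other macro name is invented for this sketch):

```cpp
// Hypothetical combined trained_model_ops_define.h fragment.
// Keep only the disables that appear in BOTH deployments: a kernel may
// stay disabled only if neither model needs it.
#define EI_TFLITE_DISABLE_PAD_IN_F32 1        // defined in both files: keep
// #define EI_TFLITE_DISABLE_SOFTMAX_IN_U8 1  // defined in only one file: remove
```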
Clone this repository: https://github.com/edgeimpulse/example-standalone-inferencing-multi-impulse

git clone git@github.com:edgeimpulse/example-standalone-inferencing-multi-impulse.git

Copy the content of the merged-impulse directory into example-standalone-inferencing-multi-impulse (replacing the files and directories that share the same name).

Edit the source/main.cpp file and replace the callback function names and the features buffers.

Note: run_classifier takes the impulse pointer as its first argument.
Enter make -j in this directory to compile the project.

Enter ./build/app to run the application.

Compare the output predictions to the predictions of the test sample in the Edge Impulse Studio.

> example-standalone-inferencing-multi-impulse % ./build/app
run_classifier with audio impulse returned: 0
Timing: DSP 0 ms, inference 0 ms, anomaly 0 ms
Predictions:
Background: 0.00000
Glass_Breaking: 0.99609
run_classifier with image impulse returned: 0
Timing: DSP 0 ms, inference 10 ms, anomaly 0 ms
Predictions:
person: 0.99609
unknown: 0.00000
Enter rm -f build/app && make clean to clean the project.

Congrats, you can now run multiple impulses!
- The custom ML accelerator deployments are unlikely to work (TDA4VM, DRPAI, MemoryX, Brainchip).
- The custom TFLite kernels (ESP NN, Silabs MVP, Arc MLI) should work, but may require some additional work; e.g. for ESP32 you may need to statically allocate the arena for the image model.
- In general, running multiple impulses on an MCU can be challenging due to limited processing power, memory, and other hardware constraints. Make sure to thoroughly evaluate the capabilities and limitations of your specific MCU and consider the resource requirements of the impulses before attempting to run them concurrently.
If you see the following segmentation fault, make sure you have correctly combined the trained_model_ops_define.h or tflite-resolver.h files as described above.
./build/app
run_classifier with audio impulse returned: 0
Timing: DSP 0 ms, inference 0 ms, anomaly 0 ms
Predictions:
Background: 0.00000
Glass_Breaking: 0.99609
zsh: segmentation fault ./build/app