Only available on the Enterprise plan
This feature is only available on the Enterprise plan. Review our plans and pricing or sign up for our free expert-led trial today.
Multi-project deployment enables you to bundle and run multiple Edge Impulse projects and impulses within a single deployment. This unlocks more advanced on-device pipelines, such as model cascades or running multiple independent models on the same MCU or Linux device — all using the standard Edge Impulse SDK. Key benefits:
  • Multiple impulses in one SDK build
  • Mix and match projects
    • Different projects
    • Different impulses
    • Quantized and non-quantized models
    • Object detection and non–object detection models together
  • No additional RAM or flash overhead when deploying only a single impulse
  • Works across all Edge Impulse supported development platforms
Some examples:
  • Model cascades:
    • An object detection model to locate text
    • Followed by an OCR or classification model on the detected regions
  • Multiple independent models on one device:
    • Running several sensor-based models side-by-side on a single MCU
    • Supporting different modalities or use cases simultaneously

Multi-project deployment

Multi-project deployment is enabled at the organization level.
  1. Go to Multi-project deployments in your organization
  2. Add one or more entries, each specifying:
    • Project ID
    • Impulse ID
    • Variant (e.g. quantized or non-quantized, int8 or float32)
For example, with the following tutorial projects added to your organization, copy and paste the example configuration below (changing each projectId to the ID of a project in your organization):
[
    { "projectId": 14225, "impulseId": 2, "variant": "int8" },
    { "projectId": 14227, "impulseId": 1, "variant": "int8" },
    { "projectId": 25483, "impulseId": 1, "variant": "float32" }
]

Multi-project deployment with impulses configuration

Inference runtime

After defining your projects and impulses, choose your inference runtime:
  • EON Compiler
  • EON Compiler (RAM Optimized)
  • TensorFlow Lite

Deployment build configuration and runtime selection

After building, Edge Impulse generates a ZIP file, similar to a standard C++ deployment. You can load a previous multi-project deployment configuration into the ‘Impulses’ field by selecting the button to the right of the Job ID:

Load previous multi-project deployment configurations

Deployment package structure

The generated .tar.gz looks like a normal C++ deployment, with one key difference: it now contains multiple models, one per configured impulse.

Example multi-project deployment structure

Each deployed model has:
  • Its own model file
  • Its corresponding model parameters

Example multi-project deployment structure for 3 deployed models

New orchestration entry point

Multi-project deployments ship with an updated main.cpp example file. This file contains all orchestration logic required to run multiple deployed impulses in your application.

Working with multiple impulses in code

Features arrays

Instead of a single features[] array, you now have one features array per impulse. For example, with three impulses:
  • Impulse 1: Quantized (int8) classification / anomaly detection
  • Impulse 2: Free-form impulse
  • Impulse 3: Object detection (e.g. FOMO, float32)
Each impulse:
  • Has its own input features
  • Produces its own output tensors

Running inference

At runtime, inference is straightforward:
  1. Initialize each impulse
  2. Populate the corresponding features array
  3. Call run_classifier() once per impulse, using the appropriate impulse handle
  4. Read and interpret the results per model
All remaining logic is standard Edge Impulse inference orchestration.

Example output

In a single application run, you might see:
  • Impulse 1
    • Classification result: wave ≈ 0.1
    • Anomaly score: ≈ 1.1
  • Impulse 2
    • Free-form outputs
    • Multiple output tensors
  • Impulse 3
    • Object detection result
    • Single detected object class (e.g. person, or object of interest)

Troubleshooting

No common issues have been identified thus far. If you encounter an issue, please reach out on the forum or, if you are on the Enterprise plan, through your support channels.

Additional resources