C++ library


C++ Libraries

Impulses can be deployed as a C++ library. The library has no external dependencies and can be built with any C++11 compiler; see Running your impulse as a C++ library. The provided methods package all your signal processing blocks, configuration, and learning blocks into a single package, which you can include in your own application to run the impulse locally.

We have end-to-end guides for:

  • Running your impulse as a C++ library
  • Running your impulse on your desktop
  • Running your impulse on Zephyr on a Nordic Semiconductor development board

We also have tutorials for:

  • Running your impulse in Simplicity Studio on the TB Sense 2
  • Running your impulse on STM32 using STM32Cube.MX
  • Running your impulse on the Himax WE-I Plus
  • Running your impulse on the Espressif ESP-EYE (ESP32)
  • Running your impulse on the Raspberry Pi RP2040
  • Running your impulse on the Sony Spresense
  • Running your impulse on the Syntiant TinyML Board
  • Running your impulse on the TI LaunchPad using GCC and the SimpleLink SDK
  • Using Arduino IDE: Running your impulse in an Arduino sketch
  • Using OpenMV IDE: Running your impulse on the OpenMV Cam H7 Plus
  • On Linux-based devices: Running your impulse on a Linux system with our C++, Node.js, Python or Go SDKs
  • Using the DRP-AI library: Running your impulse on the Renesas RZ/V2L
  • Using WebAssembly: Running your impulse in Node.js and in the browser

These tutorials show you how to run your impulse, but you'll need to hook in your sensor data yourself. We have a number of examples on how to do that in the Data forwarder documentation, or you can use the full firmware for any of the fully supported development boards as a starting point - they have everything (including sensor integration) already hooked up. Or keep reading for documentation about the sensor format and inputs that we expect.

Did you know?

You can build binaries for supported development boards straight from the Studio. These will include your full impulse. See Edge Impulse Firmwares.

Input to the run_classifier function

The input to the run_classifier function is always a signal_t structure with raw sensor values. This structure has two properties:

  • total_length - the total number of values. This should be equal to EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE (from model_metadata.h). E.g. if you have 3 sensor axes, 100 Hz sensor data, and 2 seconds of data, this should be 600 (3 axes × 100 Hz × 2 s).

  • get_data - a function that retrieves slices of data required by the DSP process. Some DSP algorithms (such as all audio-based ones) use this to page in the required data, which saves memory: you can, for example, store the raw data in flash or external RAM and page it in only when required.

For example, this is how you would page in data from flash:

// this is placed in flash
const float features[300] = { 0 };

// function that pages the data in
int raw_feature_get_data(size_t offset, size_t length, float *out_ptr) {
    memcpy(out_ptr, features + offset, length * sizeof(float));
    return 0;
}

int main() {
    // construct the signal
    signal_t signal;
    signal.total_length = 300;
    signal.get_data = &raw_feature_get_data;
    // ... rest of the application
}
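
Once the signal is constructed you can pass it to the classifier. A minimal sketch of the call (the result fields shown assume a classification impulse; adjust for object detection or regression):

ei_impulse_result_t result = { 0 };

// runs the DSP pipeline and the model; the last argument enables debug output
EI_IMPULSE_ERROR err = run_classifier(&signal, &result, false);
if (err != EI_IMPULSE_OK) {
    printf("run_classifier failed (%d)\n", err);
    return 1;
}

// print the score for each label
for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
    printf("%s: %.5f\n", result.classification[ix].label,
           result.classification[ix].value);
}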

If you have your data already in RAM, you can use the signal_from_buffer function to construct the signal:

float features[30] = { 0 };
signal_t signal;
numpy::signal_from_buffer(features, 30, &signal);
// ... rest of the application

The get_data function expects floats to be returned, but you can use the int8_to_float and int16_to_float helper functions if your own buffers are int8_t or int16_t (useful to save memory). E.g.:

const int16_t features[300] = { 0 };

int raw_feature_get_data(size_t offset, size_t length, float *out_ptr) {
    return numpy::int16_to_float(features + offset, out_ptr, length);
}

int main() {
    signal_t signal;
    signal.total_length = 300;
    signal.get_data = &raw_feature_get_data;
    // ... rest of the application
}

Signal layout for time-series data

Signals are always a flat buffer, so if you have data from multiple sensor axes you'll need to flatten (interleave) it. E.g. for sensor data with three axes:

Input data:
Axis 1:  9.8,  9.7,  9.6
Axis 2:  0.3,  0.4,  0.5
Axis 3: -4.5, -4.6, -4.8

Signal: 9.8, 0.3, -4.5, 9.7, 0.4, -4.6, 9.6, 0.5, -4.8
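
If your samples arrive in separate per-axis buffers, a minimal sketch of this flattening step (the function and buffer names here are illustrative, not part of the SDK):

#include <stddef.h>

// Interleave per-axis buffers into the flat layout the signal expects:
// axis1[0], axis2[0], axis3[0], axis1[1], axis2[1], axis3[1], ...
void interleave_axes(const float *axis1, const float *axis2, const float *axis3,
                     size_t samples_per_axis, float *out) {
    for (size_t i = 0; i < samples_per_axis; i++) {
        out[i * 3 + 0] = axis1[i];
        out[i * 3 + 1] = axis2[i];
        out[i * 3 + 2] = axis3[i];
    }
}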

Signal layout for image data

The signal for image data is also flattened, row by row (row 1 first, then row 2, and so on), and every pixel is a single value in HEX format (0xRRGGBB). E.g.:

Input data (3x2 pixel image):
BLACK RED  RED
GREEN BLUE WHITE

Signal: 0x000000, 0xFF0000, 0xFF0000, 0x00FF00, 0x0000FF, 0xFFFFFF

We do have an end-to-end example on constructing a signal from a frame buffer in RGB565 format, which is easily adaptable to other image formats; see example-signal-from-rgb565-frame-buffer.
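
As a minimal sketch (assuming an RGB888 source frame; the function and buffer names are illustrative, not part of the SDK), packing pixels into this layout looks like:

#include <stddef.h>
#include <stdint.h>

// Pack a row-major RGB888 frame (3 bytes per pixel) into the flat
// signal layout: one float per pixel, holding the 0xRRGGBB value.
void pack_rgb888(const uint8_t *rgb, size_t pixel_count, float *out) {
    for (size_t i = 0; i < pixel_count; i++) {
        uint32_t r = rgb[i * 3 + 0];
        uint32_t g = rgb[i * 3 + 1];
        uint32_t b = rgb[i * 3 + 2];
        out[i] = (float)((r << 16) | (g << 8) | b);
    }
}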

Directly quantize image data

If you're doing image classification and have a quantized model, the data is automatically quantized when it is read from the signal, to save memory. This is enabled automatically when you call run_impulse. To control the size of the buffer used to read from the signal in this case, set the EI_DSP_IMAGE_BUFFER_STATIC_SIZE macro (which also allocates the buffer statically).

Static allocation

To statically allocate the neural network model, set this macro:

  • EI_CLASSIFIER_ALLOCATION_STATIC=1

You can control where the tensor arena is allocated by defining the EI_TENSOR_ARENA_LOCATION macro. This is particularly useful for large size requirements and when the target has external RAM.

For example:

  • EI_TENSOR_ARENA_LOCATION="<.where_to_allocate>" - where <.where_to_allocate> is a memory region such as ".sram", depending on your target's linker file.

Additionally, we support full static allocation for quantized image models. To do so, set this macro:

  • EI_DSP_IMAGE_BUFFER_STATIC_SIZE=1024

Static allocation is not supported for other DSP blocks at the moment.
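
As a sketch, these macros could be defined before the Edge Impulse header is included (most builds pass them as -D compiler flags instead, and the header path may differ per export):

#define EI_CLASSIFIER_ALLOCATION_STATIC 1        // statically allocate the neural network model
#define EI_TENSOR_ARENA_LOCATION ".sram"         // memory region name from your linker file
#define EI_DSP_IMAGE_BUFFER_STATIC_SIZE 1024     // static signal read buffer for quantized image models

#include "edge-impulse-sdk/classifier/ei_run_classifier.h"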
