C++ library

The exported C++ library packages all of your signal processing blocks, configuration and learning blocks into a single package. You can include this package in your own application to run the impulse locally.

C++ Libraries

Impulses can be deployed as a C++ library. The library does not have any external dependencies and can be built with any C++11 compiler; see Running your impulse as a C++ library.

We have end-to-end guides and tutorials for:

  • Using Arduino IDE

  • Using OpenMV IDE

  • On Linux-based devices

  • Using WebAssembly

These tutorials show you how to run your impulse, but you'll need to hook in your sensor data yourself. We have a number of examples on how to do that in the Data forwarder documentation, or you can use the full firmware for any of the fully supported development boards as a starting point - they have everything (including sensor integration) already hooked up. Or keep reading for documentation about the sensor format and inputs that we expect.

Did you know?

You can build binaries for supported development boards straight from the Studio. These will include your full impulse. See Edge Impulse Firmwares.

Input to the run_classifier function

The input to the run_classifier function is always a signal_t structure with raw sensor values. This structure has two properties:

  • total_length - the total number of values. This should be equal to EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE (from model_metadata.h). E.g. if you have 3 sensor axes, 100 Hz sensor data, and 2 seconds of data, this should be 600.

  • get_data - a function that retrieves slices of data required by the DSP process. This is used by some DSP algorithms (like all audio-based ones) to page in the required data, and thus saves memory. Using this function you can, for example, store the raw data in flash or external RAM, and page it in only when required.

For example, this is how you would page in data from flash:

#include <string.h>
// Edge Impulse SDK header that declares signal_t (and run_classifier)
#include "edge-impulse-sdk/classifier/ei_run_classifier.h"

// this is placed in flash
const float features[300] = { 0 };

// function that pages the data in
int raw_feature_get_data(size_t offset, size_t length, float *out_ptr) {
    memcpy(out_ptr, features + offset, length * sizeof(float));
    return 0;
}

int main() {
    // construct the signal
    signal_t signal;
    signal.total_length = 300;
    signal.get_data = &raw_feature_get_data;
    // ... rest of the application
}

If you have your data already in RAM, you can use the signal_from_buffer function to construct the signal:

// raw features, already in RAM
float features[30] = { 0 };
signal_t signal;
numpy::signal_from_buffer(features, 30, &signal);
// ... rest of the application

The get_data function expects floats to be returned, but you can use the int8_to_float and int16_to_float helper functions if your own buffers are int8_t or int16_t (useful to save memory). E.g.:

const int16_t features[300] = { 0 };

int raw_feature_get_data(size_t offset, size_t length, float *out_ptr) {
    return numpy::int16_to_float(features + offset, out_ptr, length);
}

int main() {
    signal_t signal;
    signal.total_length = 300;
    signal.get_data = &raw_feature_get_data;
    // ... rest of the application
}
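
For completeness, here is a minimal sketch of actually invoking the classifier once the signal is set up. It assumes the run_classifier(&signal, &result, debug) entry point, the ei_impulse_result_t result type, and the EI_CLASSIFIER_LABEL_COUNT / result.classification[] fields as used in the Edge Impulse SDK examples; check the headers generated for your project for the exact names.

#include <stdio.h>
#include "edge-impulse-sdk/classifier/ei_run_classifier.h"

int main() {
    // construct the signal, reusing the get_data callback from the examples above
    signal_t signal;
    signal.total_length = EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE;
    signal.get_data = &raw_feature_get_data;

    // run the DSP pipeline and the learn block; the last argument enables debug output
    ei_impulse_result_t result = { 0 };
    EI_IMPULSE_ERROR err = run_classifier(&signal, &result, false);
    if (err != EI_IMPULSE_OK) {
        printf("run_classifier failed (%d)\n", err);
        return 1;
    }

    // print the score for every label
    for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
        printf("%s: %.5f\n", result.classification[ix].label,
               result.classification[ix].value);
    }
    return 0;
}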

Signal layout for time-series data

Signals are always a flat buffer, so if you have data from multiple sensor axes you'll need to flatten (interleave) it, as shown below. E.g. for sensor data with three axes:

Input data:
Axis 1:  9.8,  9.7,  9.6
Axis 2:  0.3,  0.4,  0.5
Axis 3: -4.5, -4.6, -4.8

Signal: 9.8, 0.3, -4.5, 9.7, 0.4, -4.6, 9.6, 0.5, -4.8
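
For illustration, this is one way to build that interleaved buffer from three separate per-axis arrays before wrapping it with signal_from_buffer (the axis1/axis2/axis3 buffers are just for this example):

// per-axis buffers, 3 samples each (the values from the example above)
float axis1[3] = {  9.8f,  9.7f,  9.6f };
float axis2[3] = {  0.3f,  0.4f,  0.5f };
float axis3[3] = { -4.5f, -4.6f, -4.8f };

// interleave into one flat buffer: axis1[0], axis2[0], axis3[0], axis1[1], ...
float features[3 * 3];
for (size_t ix = 0; ix < 3; ix++) {
    features[ix * 3 + 0] = axis1[ix];
    features[ix * 3 + 1] = axis2[ix];
    features[ix * 3 + 2] = axis3[ix];
}

signal_t signal;
numpy::signal_from_buffer(features, 3 * 3, &signal);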

Signal layout for image data

The signal for image data is also flattened, starting with row 1, then row 2, and so on. Every pixel is a single value in hexadecimal format (0xRRGGBB). E.g.:

Input data (3x2 pixel image):
BLACK RED  RED
GREEN BLUE WHITE

Signal: 0x000000, 0xFF0000, 0xFF0000, 0x00FF00, 0x0000FF, 0xFFFFFF

We also have an end-to-end example of constructing a signal from a frame buffer in RGB565 format, which is easily adaptable to other image formats; see example-signal-from-rgb565-frame-buffer.
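
As a rough sketch of that packing for a plain 8-bit-per-channel RGB buffer (the frame_buffer and camera_get_data names below are illustrative, not part of the SDK; EI_CLASSIFIER_INPUT_WIDTH and EI_CLASSIFIER_INPUT_HEIGHT come from model_metadata.h):

// illustrative RGB888 frame buffer: 3 bytes per pixel, rows stored top to bottom
const uint8_t frame_buffer[EI_CLASSIFIER_INPUT_WIDTH * EI_CLASSIFIER_INPUT_HEIGHT * 3] = { 0 };

// get_data callback that packs every pixel into a single 0xRRGGBB value
int camera_get_data(size_t offset, size_t length, float *out_ptr) {
    for (size_t ix = 0; ix < length; ix++) {
        size_t px = (offset + ix) * 3;
        uint32_t packed = ((uint32_t)frame_buffer[px + 0] << 16) |
                          ((uint32_t)frame_buffer[px + 1] << 8) |
                           (uint32_t)frame_buffer[px + 2];
        out_ptr[ix] = (float)packed;
    }
    return 0;
}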

Directly quantize image data

If you're doing image classification with a quantized model, the data is automatically quantized when it is read from the signal, to save memory. This is enabled automatically when you call run_classifier. To control the size of the buffer that's used to read from the signal in this case, you can set the EI_DSP_IMAGE_BUFFER_STATIC_SIZE macro (which also allocates the buffer statically).

Static allocation

To statically allocate the neural network model, set this macro:

  • EI_CLASSIFIER_ALLOCATION_STATIC=1

Additionally, we support full static allocation for quantized image models. To do so, set this macro:

  • EI_DSP_IMAGE_BUFFER_STATIC_SIZE=1024

Static allocation is not supported for other DSP blocks at the moment.
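
Both macros are ordinary compile-time defines, so they are typically passed to the compiler (or set in your Makefile or CMake configuration) so that every translation unit of the library sees the same values. As an illustration only, with g++ (file names and the full list of SDK sources will differ in your project):

g++ -std=c++11 \
    -DEI_CLASSIFIER_ALLOCATION_STATIC=1 \
    -DEI_DSP_IMAGE_BUFFER_STATIC_SIZE=1024 \
    main.cpp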
