The C++ inferencing SDK is a portable library for digital signal processing and machine learning inferencing, and it contains native implementations for both processing and learning blocks in Edge Impulse. It is written in C++11 with all dependencies bundled, and can be built on both desktop systems and on microcontrollers. The SDK is located on GitHub: edgeimpulse/inferencing-sdk-cpp.
The easiest way of developing against the SDK is to use the Deployment page in the Edge Impulse studio. Deploying your impulse bundles all blocks, configuration and the SDK into a single package. To run the deployed package on your machine or embedded device, see the Running your impulse locally tutorials.
The SDK contains an implementation of all algorithms in software, but you can optionally output hardware-optimized code. For example, on Cortex-M microcontrollers we leverage CMSIS-DSP to optimize certain vector operations. These optimizations are selected at compile time in config.hpp, and mostly live in numpy.hpp. If you want to add optimizations for a new target, this is a good place to start. We welcome contributions!
Public-facing structs for Edge Impulse C++ SDK.
Holds the output of inference, anomaly results, and timing information.
ei_impulse_result_t holds the output of run_classifier(). If object detection is enabled, the results are stored as a pointer to an array of bounding boxes of size bounding_boxes_count, as given by ei_impulse_result_bounding_box_t. Otherwise, results are stored as an array of classification scores, as given by ei_impulse_result_classification_t.
If anomaly detection is enabled (i.e. EI_CLASSIFIER_HAS_ANOMALY == 1), the anomaly score is stored as a floating point value in anomaly.
Timing information is stored in an ei_impulse_result_timing_t struct.
Source: classifier/ei_classifier_types.h
Example: standalone inferencing main.cpp
Holds the output of visual anomaly detection (FOMO-AD).
If visual anomaly detection is enabled (i.e. EI_CLASSIFIER_HAS_VISUAL_ANOMALY == 1), the output results will be a pointer to an array of grid cells of size visual_ad_count, as given by ei_impulse_result_bounding_box_t.
The visual anomaly detection result is stored in visual_ad_result, which contains the mean and max values of the grid cells.
Source: classifier/ei_classifier_types.h
Example: standalone inferencing main.cpp
Holds information for a single bounding box.
If object detection is enabled (i.e. EI_CLASSIFIER_OBJECT_DETECTION == 1), then inference results will be one or more bounding boxes. The bounding boxes with the highest confidence scores (assuming those scores, given by the value member, are equal to or greater than EI_CLASSIFIER_OBJECT_DETECTION_THRESHOLD) are returned from inference. The total number of bounding boxes returned will be at least EI_CLASSIFIER_OBJECT_DETECTION_COUNT. The exact number of bounding boxes is stored in the bounding_boxes_count field of ei_impulse_result_t.
A bounding box is a rectangle that ideally surrounds the identified object. The (x, y) coordinates in the struct identify the top-left corner of the box. label is the predicted class with the highest confidence score. value is the confidence score, between 0.0 and 1.0, of the given label.
Source: classifier/ei_classifier_types.h
Example: standalone inferencing main.cpp
Holds timing information about the processing (DSP) and inference blocks.
Records timing information during the execution of the preprocessing (DSP) and inference blocks. Can be used to determine if inference will meet timing requirements on your particular platform.
Source: classifier/ei_classifier_types.h
Example: standalone inferencing main.cpp
Holds the callback pointer for retrieving raw data and the length of data to be retrieved.
Holds the callback function get_data(size_t offset, size_t length, float *out_ptr). This callback should be implemented by the user and fills the memory location given by out_ptr with raw features. Features must be flattened to a 1-dimensional vector, as described in this guide.
get_data() may be called multiple times during preprocessing or inference (e.g. during execution of run_classifier() or run_classifier_continuous()). The offset argument will update to point to new data, and length values must be copied into the location specified by out_ptr. This scheme allows raw features to be stored in RAM or flash memory and paged in as necessary.
Note that get_data() (even after multiple calls during a single execution of run_classifier() or run_classifier_continuous()) will never request more than the total number of features given by total_length.
Source: dsp/numpy_types.h
Example: standalone inferencing main.cpp
Members of ei_impulse_result_classification_t:

Member | Description |
---|---|
public const char * label | Label of the detected object |
public float value | Value (confidence score) of the detected object |

Members of ei_impulse_visual_ad_result_t:

Member | Description |
---|---|
public float mean_value | Mean value of the grid cells |
public float max_value | Max value of the grid cells |

Members of ei_impulse_result_bounding_box_t:

Member | Description |
---|---|
public const char * label | Pointer to a character array describing the associated class of the given bounding box. Taken from one of the elements of ei_classifier_inferencing_categories[]. |
public uint32_t x | x coordinate of the top-left corner of the bounding box |
public uint32_t y | y coordinate of the top-left corner of the bounding box |
public uint32_t width | Width of the bounding box |
public uint32_t height | Height of the bounding box |
public float value | Confidence score of the label describing the bounding box |

Members of ei_impulse_result_timing_t:

Member | Description |
---|---|
public int sampling | If using run_impulse() to perform sampling and inference, the amount of time (in milliseconds) it took to fetch raw samples. Not used for run_classifier(). |
public int dsp | Amount of time (in milliseconds) it took to run the preprocessing (DSP) block |
public int classification | Amount of time (in milliseconds) it took to run the inference block |
public int anomaly | Amount of time (in milliseconds) it took to run anomaly detection. Valid only if EI_CLASSIFIER_HAS_ANOMALY == 1. |
public int64_t dsp_us | Amount of time (in microseconds) it took to run the preprocessing (DSP) block |
public int64_t classification_us | Amount of time (in microseconds) it took to run the inference block |
public int64_t anomaly_us | Amount of time (in microseconds) it took to run anomaly detection. Valid only if EI_CLASSIFIER_HAS_ANOMALY == 1. |

Members of ei_impulse_result_t:

Member | Description |
---|---|
public ei_impulse_result_bounding_box_t * bounding_boxes | Array of bounding boxes of the detected objects, if object detection is enabled. |
public uint32_t bounding_boxes_count | Number of bounding boxes detected. If object detection is not enabled, this will be 0. |
public ei_impulse_result_classification_t classification | Array of classification results. If object detection is enabled, this will be empty. |
public float anomaly | Anomaly score. If anomaly detection is not enabled, this will be 0. A higher anomaly score indicates a greater likelihood of an anomalous sample (e.g. it is farther away from its cluster). |
public ei_impulse_result_timing_t timing | Timing information for the processing (DSP) and inference blocks. |
public bool copy_output | Whether to copy the output data to a buffer. If set to false, the output data is returned as a pointer to the internal buffer. If set to true, the output data is copied to the buffer provided in ei_impulse_output_t. |
public ei_impulse_result_bounding_box_t * visual_ad_grid_cells | Array of grid cells of the detected visual anomalies, if visual anomaly detection is enabled. |
public uint32_t visual_ad_count | Number of grid cells detected as visual anomalies, if visual anomaly detection is enabled. |
public ei_impulse_visual_ad_result_t visual_ad_result | Visual anomaly detection result (mean and max grid-cell values), if visual anomaly detection is enabled. |

Members of signal_t:

Member | Description |
---|---|
public std::function< int(size_t offset, size_t length, float *out_ptr)> get_data | Callback function to be implemented by the user. Parameters are given as get_data(size_t offset, size_t length, float *out_ptr), and the callback should return an int (e.g. EIDSP_OK if copying completed successfully). No bytes will be requested outside of total_length. Callback parameters: offset: the offset in the signal; length: the number of samples to write into out_ptr; out_ptr: an output buffer for the signal data. |
public size_t total_length | Total number of samples the user will provide via get_data(). This value should match either the total number of raw features required for a full window (i.e. the window size set in Studio, in samples) or, if using run_classifier_continuous(), the number of samples in a single slice. |
Global variables accessible outside of the C++ SDK library.
Brief: Array of class label strings
Source: Can be found in model-parameters/model_variables.h if you deploy your impulse as a C++ library.
Description: Global variable containing a static array of class labels in alphabetical order. Each label is a string. The number of labels is equal to EI_CLASSIFIER_LABEL_COUNT.
Example: You can print out all of the available labels with, for example:
Public-facing functions for running inference using the Edge Impulse C++ library.
Source: classifier/ei_run_classifier.h
Brief: Initialize static variables for running preprocessing and inference continuously.
Description:
Initializes and clears any internal static variables needed by run_classifier_continuous(), including the moving average filter (MAF). This function should be called prior to calling run_classifier_continuous().
Blocking: yes
Example: nano_ble33_sense_microphone_continuous.ino
Brief: Initialize static variables for running preprocessing and inference continuously.
Description:
Initializes and clears any internal static variables needed by run_classifier_continuous(), including the moving average filter (MAF). This function should be called prior to calling run_classifier_continuous().
Blocking: yes
Example: nano_ble33_sense_microphone_continuous.ino
handle
struct with information about model and DSP
Brief: Deletes static variables when running preprocessing and inference continuously.
Description:
Deletes internal static variables used by run_classifier_continuous(), including the moving average filter (MAF). This function should be called when you are done running continuous classification.
Blocking: yes
Example: ei_run_audio_impulse.cpp
Brief: Run preprocessing (DSP) on new slice of raw features. Add output features to rolling matrix and run inference on full sample.
Description:
Accepts a new slice of features given by the callback defined in the signal parameter. It performs preprocessing (DSP) on this new slice of features and appends the output to a sliding window of pre-processed features (stored in a static features matrix). The matrix stores the new slice and as many old slices as necessary to make up one full sample for performing inference.
run_classifier_init() must be called before making any calls to run_classifier_continuous().
For example, if you are doing keyword spotting on 1-second windows of audio and you want to perform inference 4 times per second (given by EI_CLASSIFIER_SLICES_PER_MODEL_WINDOW), you would collect 0.25 seconds of audio and call run_classifier_continuous(). The function would compute the Mel-frequency cepstral coefficients (MFCCs) for that 0.25-second slice of audio, drop the oldest 0.25 seconds' worth of MFCCs from its internal matrix, and append the newest slice of MFCCs. This process allows the library to keep track of the pre-processed features (e.g. MFCCs) in the window instead of the entire set of raw features (e.g. raw audio data), which can potentially save a lot of RAM. After updating the static matrix, inference is performed using the whole matrix, which acts as a sliding window of pre-processed features.
Additionally, a moving average filter (MAF) can be enabled for run_classifier_continuous(), which averages (arithmetic mean) the last n inference results for each class, where n is EI_CLASSIFIER_SLICES_PER_MODEL_WINDOW / 2. In the example above, with the MAF enabled, the values in result would contain predictions averaged over the previous 2 inferences.
To learn more about run_classifier_continuous(), see this guide on continuous audio sampling. While the guide is written for audio signals, the concepts of continuous sampling and inference can be extrapolated to any time-series data.
Blocking: yes
Example: nano_ble33_sense_microphone_continuous.ino
signal
Pointer to a signal_t struct that contains the number of elements in the slice of raw features (e.g. EI_CLASSIFIER_SLICE_SIZE
) and a pointer to a callback that reads in the slice of raw features.
result
Pointer to an ei_impulse_result_t struct that contains the various output results from inference after run_classifier_continuous() returns.
debug
Print internal preprocessing and inference debugging information via ei_printf()
.
enable_maf
Enable the moving average filter (MAF) for the classifier.
Error code as defined by EI_IMPULSE_ERROR
enum. Will be EI_IMPULSE_OK
if inference completed successfully.
Brief: Run preprocessing (DSP) on new slice of raw features. Add output features to rolling matrix and run inference on full sample.
Description:
Accepts a new slice of features given by the callback defined in the signal parameter. It performs preprocessing (DSP) on this new slice of features and appends the output to a sliding window of pre-processed features (stored in a static features matrix). The matrix stores the new slice and as many old slices as necessary to make up one full sample for performing inference.
run_classifier_init() must be called before making any calls to run_classifier_continuous().
For example, if you are doing keyword spotting on 1-second windows of audio and you want to perform inference 4 times per second (given by EI_CLASSIFIER_SLICES_PER_MODEL_WINDOW), you would collect 0.25 seconds of audio and call run_classifier_continuous(). The function would compute the Mel-frequency cepstral coefficients (MFCCs) for that 0.25-second slice of audio, drop the oldest 0.25 seconds' worth of MFCCs from its internal matrix, and append the newest slice of MFCCs. This process allows the library to keep track of the pre-processed features (e.g. MFCCs) in the window instead of the entire set of raw features (e.g. raw audio data), which can potentially save a lot of RAM. After updating the static matrix, inference is performed using the whole matrix, which acts as a sliding window of pre-processed features.
Additionally, a moving average filter (MAF) can be enabled for run_classifier_continuous(), which averages (arithmetic mean) the last n inference results for each class, where n is EI_CLASSIFIER_SLICES_PER_MODEL_WINDOW / 2. In the example above, with the MAF enabled, the values in result would contain predictions averaged over the previous 2 inferences.
To learn more about run_classifier_continuous(), see this guide on continuous audio sampling. While the guide is written for audio signals, the concepts of continuous sampling and inference can be extrapolated to any time-series data.
Blocking: yes
Example: nano_ble33_sense_microphone_continuous.ino
impulse
ei_impulse_handle_t
struct with information about preprocessing and model.
signal
Pointer to a signal_t struct that contains the number of elements in the slice of raw features (e.g. EI_CLASSIFIER_SLICE_SIZE
) and a pointer to a callback that reads in the slice of raw features.
result
Pointer to an ei_impulse_result_t struct that contains the various output results from inference after run_classifier_continuous() returns.
debug
Print internal preprocessing and inference debugging information via ei_printf()
.
enable_maf
Enable the moving average filter (MAF) for the classifier.
Error code as defined by EI_IMPULSE_ERROR
enum. Will be EI_IMPULSE_OK
if inference completed successfully.
Brief: Run the classifier over a raw features array.
Description: Overloaded function run_classifier() that defaults to the single impulse.
Blocking: yes
signal
Pointer to a signal_t
struct that contains the total length of the raw feature array, which must match EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE, and a pointer to a callback that reads in the raw features.
result
Pointer to an ei_impulse_result_t struct that will contain the various output results from inference after run_classifier()
returns.
debug
Print internal preprocessing and inference debugging information via ei_printf()
.
Error code as defined by EI_IMPULSE_ERROR
enum. Will be EI_IMPULSE_OK
if inference completed successfully.
Brief: Run the classifier over a raw features array.
Description:
Accepts a signal_t input struct pointing to a callback that reads in pages of raw features. run_classifier() performs any necessary preprocessing on the raw features (e.g. DSP, cropping of images) before performing inference. Results from inference are stored in an ei_impulse_result_t struct.
Blocking: yes
Example: standalone inferencing main.cpp
impulse
Pointer to an ei_impulse_handle_t
struct that contains the model and preprocessing information.
signal
Pointer to a signal_t
struct that contains the total length of the raw feature array, which must match EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE, and a pointer to a callback that reads in the raw features.
result
Pointer to an ei_impulse_result_t struct that will contain the various output results from inference after run_classifier()
returns.
debug
Print internal preprocessing and inference debugging information via ei_printf()
.
Error code as defined by EI_IMPULSE_ERROR
enum. Will be EI_IMPULSE_OK
if inference completed successfully.
The following list gives information about the most important #define macros found in model-parameters/model_metadata.h. Note that not all macros are listed, just the ones you'll probably care about.
Source: Can be found in model-parameters/model_metadata.h if you deploy your impulse as a C++ library.
Important! model_metadata.h is automatically generated by the Edge Impulse Studio. You should not modify it.
Examples: The following examples demonstrate possible implementations of this function for various platforms. Note the __attribute__((weak))
in most of the definitions, which means that a user could override the implementation elsewhere in the program:
Brief: Cancellable sleep, can be triggered with signal from other thread.
Description: Allows the processor or thread to sleep or block for the given time. The sleep can be cancelled by a signal from another thread.
time_ms
Time in milliseconds to sleep
EI_IMPULSE_OK
if successful, error code otherwise
Brief: Read the millisecond timer.
Description: This function should return the number of milliseconds that have passed since the start of the program. If you do not need to determine the run times for the DSP and inference blocks, you can simply return 0 from this function. Your impulse will still work correctly without timing information.
The number of milliseconds that have passed since the start of the program
Brief: Read the microsecond timer.
Description: This function should return the number of microseconds that have passed since the start of the program. If you do not need to determine the run times for the DSP and inference blocks, you can simply return 0 from this function. Your impulse will still work correctly without timing information.
The number of microseconds that have passed since the start of the program
Brief: Send a single character to the serial port.
Description:
c
The character to send
Brief: Read a single character from the serial port.
Description:
The character read from the serial port
Brief: Print wrapper around printf()
Description:
ei_printf()
is declared internally to the Edge Impulse SDK library so that debugging information (e.g. during inference) can be printed out. However, the function must be defined by the user, as printing methods can change depending on the platform and use case. For example, you may want to print debugging information to stdout in Linux or over a UART serial port on a microcontroller.
format
Pointer to a character array or string that should be printed
...
Other optional arguments may be passed as necessary (e.g. handle to a UART object). Note that any calls to ei_printf()
from within the edge-impulse-sdk library do not pass anything other than the format
argument.
Brief: Used to print floating point numbers.
Description:
Some platforms cannot directly print floating point numbers (e.g. to a console or over a serial port). If your platform cannot directly print floats, provide an implementation of this function to print them as needed (for example, construct a string using scientific notation with integers and call ei_printf()).
If your platform can print floating point values, the easiest implementation of this function is as follows:

```cpp
__attribute__((weak)) void ei_printf_float(float f) {
    printf("%f", f);
}
```
f
The floating point number to print
Brief: Wrapper around malloc.
Description:
This function should allocate size bytes and return a pointer to the allocated memory. In bare-metal implementations, it can simply be a wrapper for malloc(). For example:

```cpp
__attribute__((weak)) void *ei_malloc(size_t size) {
    return malloc(size);
}
```

If you intend to run your impulse in a multi-threaded environment, you will need to ensure that your implementation of ei_malloc() is thread-safe. For example, if you are using FreeRTOS, here is one possible implementation:

```cpp
__attribute__((weak)) void *ei_malloc(size_t size) {
    return pvPortMalloc(size);
}
```
size
The number of bytes to allocate
Brief: Wrapper around calloc.
Description:
This function should allocate nitems * size bytes and initialize all bytes in the allocated memory to 0. It should return a pointer to the allocated memory. In bare-metal implementations, it can simply be a wrapper for calloc(). For example:

```cpp
__attribute__((weak)) void *ei_calloc(size_t nitems, size_t size) {
    return calloc(nitems, size);
}
```

If you intend to run your impulse in a multi-threaded environment, you will need to ensure that your implementation of ei_calloc() is thread-safe. For example, if you are using FreeRTOS, here is one possible implementation:

```cpp
__attribute__((weak)) void *ei_calloc(size_t nitems, size_t size) {
    void *ptr = NULL;
    if (size > 0) {
        ptr = pvPortMalloc(nitems * size);
        if (ptr) {
            memset(ptr, 0, nitems * size);
        }
    }
    return ptr;
}
```
nitems
Number of blocks to allocate and clear
size
Size (in bytes) of each block
Brief: Wrapper around free.
Description:
This function should free the memory space pointed to by ptr. If ptr is NULL, no operation should be performed. In bare-metal implementations, it can simply be a wrapper for free(). For example:

```cpp
__attribute__((weak)) void ei_free(void *ptr) {
    free(ptr);
}
```

If you intend to run your impulse in a multi-threaded environment, you will need to ensure that your implementation of ei_free() is thread-safe. For example, if you are using FreeRTOS, here is one possible implementation:

```cpp
__attribute__((weak)) void ei_free(void *ptr) {
    vPortFree(ptr);
}
```
ptr
Pointer to the memory to free
These functions are required to be implemented by the user for the target platform. They are declared internally in the Edge Impulse C++ SDK library, and they must be defined by the user.

Macro | Description |
---|---|
EI_CLASSIFIER_NN_INPUT_FRAME_SIZE | Number of inputs (words) to the machine learning model block. Should match the number of outputs of the preprocessing (DSP) block. For example, if the DSP block outputs a 320x320 image with 1 word for each color channel (RGB), then EI_CLASSIFIER_NN_INPUT_FRAME_SIZE will be 320 * 320 * 3 = 307200. The trained machine learning model will expect this number of inputs. |
EI_CLASSIFIER_RAW_SAMPLE_COUNT | Number of sample frames expected by the DSP block. For example, if your window size is set to 2000 ms with a 100 Hz sampling rate, EI_CLASSIFIER_RAW_SAMPLE_COUNT will equal 2 s * 100 Hz = 200 sample frames. For image data, this is the total number of pixels in the input image, which is equal to EI_CLASSIFIER_INPUT_WIDTH * EI_CLASSIFIER_INPUT_HEIGHT. |
EI_CLASSIFIER_RAW_SAMPLES_PER_FRAME | Number of numerical samples in each frame. For example, if you are using a 3-axis accelerometer, EI_CLASSIFIER_RAW_SAMPLES_PER_FRAME is 3. |
EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE | Total number of values expected at the DSP block input. Equal to EI_CLASSIFIER_RAW_SAMPLE_COUNT * EI_CLASSIFIER_RAW_SAMPLES_PER_FRAME. |
EI_CLASSIFIER_INPUT_WIDTH | Image data will be resized so that the width matches this value, using the Resize mode method set in the Edge Impulse Studio. Set to 0 for non-image data. |
EI_CLASSIFIER_INPUT_HEIGHT | Image data will be resized so that the height matches this value, using the Resize mode method set in the Edge Impulse Studio. Set to 0 for non-image data. |
EI_CLASSIFIER_INPUT_FRAMES | Number of image frames used as input to an impulse. Set to 1 for image classification and object detection tasks. Set to 0 for non-image data. |
EI_CLASSIFIER_INTERVAL_MS | Number of milliseconds between sensor samples. For non-image data, this is equal to 1000 / EI_CLASSIFIER_FREQUENCY. Set to 1 for image data. |
EI_CLASSIFIER_LABEL_COUNT | Number of labels in ei_classifier_inferencing_categories[], which is the number of classes that can be predicted by the classification model. |
EI_CLASSIFIER_HAS_ANOMALY | Set to 1 if there is an anomaly block in the impulse, 0 otherwise. |
EI_CLASSIFIER_FREQUENCY | Sampling frequency of the sensor(s). For non-image data, this is equal to 1000 / EI_CLASSIFIER_INTERVAL_MS. Set to 0 for image data. |
EI_CLASSIFIER_HAS_MODEL_VARIABLES | Set to 1 if model-parameters/model_variables.h is present in the library, 0 otherwise. |
EI_CLASSIFIER_OBJECT_DETECTION | Set to 1 if the impulse is configured for object detection, 0 otherwise. |
EI_CLASSIFIER_OBJECT_DETECTION_COUNT | Defined only if EI_CLASSIFIER_OBJECT_DETECTION is set to 1. Maximum number of objects that will be detected in each input image. |
EI_CLASSIFIER_OBJECT_DETECTION_THRESHOLD | Defined only if EI_CLASSIFIER_OBJECT_DETECTION is set to 1. Only bounding boxes with confidence scores equal to or above this value will be returned from inference. |
EI_CLASSIFIER_OBJECT_DETECTION_CONSTRAINED | Defined only if EI_CLASSIFIER_OBJECT_DETECTION is set to 1. Set to 1 if a constrained object detection model is used, 0 otherwise. |
EI_CLASSIFIER_INFERENCING_ENGINE | The inference engine to be used. Possible values: EI_CLASSIFIER_NONE, EI_CLASSIFIER_UTENSOR, EI_CLASSIFIER_TFLITE, EI_CLASSIFIER_CUBEAI, EI_CLASSIFIER_TFLITE_FULL, EI_CLASSIFIER_TENSAIFLOW, EI_CLASSIFIER_TENSORRT. Default is EI_CLASSIFIER_TFLITE, which uses TensorFlow Lite for Microcontrollers (TFLM) as the inference engine. |
EI_CLASSIFIER_SLICES_PER_MODEL_WINDOW | Number of slices to gather per window. For example, if you want run_classifier_continuous() to be called every 0.25 s and you have a window size of 1 s, EI_CLASSIFIER_SLICES_PER_MODEL_WINDOW should be set to 4 (the default). You can override this value in your main code if you #define this macro prior to including the SDK. See this guide to learn more about continuous sampling. |
EI_CLASSIFIER_SLICE_SIZE | Number of samples in a slice. Equal to EI_CLASSIFIER_RAW_SAMPLE_COUNT / EI_CLASSIFIER_SLICES_PER_MODEL_WINDOW. For run_classifier_continuous() applications, you can usually set signal.total_length to EI_CLASSIFIER_SLICE_SIZE. |
EI_CLASSIFIER_USE_FULL_TFLITE | Can be defined and set to 1 by the user when using full TensorFlow Lite. Note that setting this to 1 while EI_CLASSIFIER_INFERENCING_ENGINE is set to EI_CLASSIFIER_TFLITE will force EI_CLASSIFIER_INFERENCING_ENGINE to EI_CLASSIFIER_TFLITE_FULL. Not compatible with the EON Compiler. |
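As noted for EI_CLASSIFIER_SLICES_PER_MODEL_WINDOW, the default can be overridden by defining the macro before including the SDK header. A minimal sketch (the value 3 is arbitrary, and the include path assumes a C++ library deployment):

```cpp
// Override the default of 4 slices per window; this must appear before the
// SDK include to take effect.
#define EI_CLASSIFIER_SLICES_PER_MODEL_WINDOW 3

#include "edge-impulse-sdk/classifier/ei_run_classifier.h"
```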
Value | Description |
---|---|
EI_IMPULSE_OK | Success |
EI_IMPULSE_ERROR_SHAPES_DONT_MATCH | The shape of the data does not match the shape of the input layer. |
EI_IMPULSE_CANCELED | Impulse execution was cancelled by the user. |
EI_IMPULSE_TFLITE_ERROR | Error in the TensorFlow Lite inference engine. |
EI_IMPULSE_DSP_ERROR | Error in the processing portion of the impulse. |
EI_IMPULSE_TFLITE_ARENA_ALLOC_FAILED | Failed to allocate memory in the TensorFlow Lite arena, often caused by a lack of available heap memory. |
EI_IMPULSE_CUBEAI_ERROR | Error in the CubeAI inference engine (STM32). |
EI_IMPULSE_ALLOC_FAILED | Memory allocation failed. Could be caused by a fragmented heap. Try to increase the heap size. |
EI_IMPULSE_ONLY_SUPPORTED_FOR_IMAGES | This function is only supported for impulses with an image input. |
EI_IMPULSE_UNSUPPORTED_INFERENCING_ENGINE | The chosen inference engine (e.g. in Studio) is incapable of running this impulse. |
EI_IMPULSE_OUT_OF_MEMORY | Out of memory. Could be caused by a fragmented heap. Try to increase the heap size. |
EI_IMPULSE_INPUT_TENSOR_WAS_NULL | Input tensor was null. |
EI_IMPULSE_OUTPUT_TENSOR_WAS_NULL | Output tensor was null. |
EI_IMPULSE_SCORE_TENSOR_WAS_NULL | Score tensor was null (for SSD object detection models). |
EI_IMPULSE_LABEL_TENSOR_WAS_NULL | Label tensor was null (for SSD object detection models). |
EI_IMPULSE_TENSORRT_INIT_FAILED | TensorRT (NVIDIA) initialization failed. |
EI_IMPULSE_DRPAI_INIT_FAILED | DRP-AI (Renesas) initialization failed. |
EI_IMPULSE_DRPAI_RUNTIME_FAILED | DRP-AI (Renesas) runtime failed. |
EI_IMPULSE_DEPRECATED_MODEL | The model is deprecated and cannot be used. You should re-export the impulse from Studio. |
EI_IMPULSE_LAST_LAYER_NOT_AVAILABLE | The last layer is not available in the model. |
EI_IMPULSE_INFERENCE_ERROR | Error during inference. |
EI_IMPULSE_AKIDA_ERROR | Error in the Akida inference engine (BrainChip). |
EI_IMPULSE_INVALID_SIZE | The shape of the data does not match the shape of the input layer. |
EI_IMPULSE_ONNX_ERROR | Error in the ONNX inference engine. |
EI_IMPULSE_MEMRYX_ERROR | Error in the MemryX inference engine. |