The following list gives information about the most important #define
macros found in model-parameters/model_metadata.h. Note that not all macros are listed, just the ones you'll probably care about.
Source: Can be found in model-parameters/model_metadata.h if you deploy your impulse as a C++ library.
Important! model_metadata.h is automatically generated by the Edge Impulse Studio. You should not modify it.
**EI_CLASSIFIER_NN_INPUT_FRAME_SIZE**
Number of inputs (words) to the machine learning model block. Should match the number of outputs of the preprocessing (DSP) block. For example, if the DSP block outputs a 320x320 image with 1 word for each color (RGB), then EI_CLASSIFIER_NN_INPUT_FRAME_SIZE will be 320 * 320 * 3 = 307200. The trained machine learning model will expect this number of inputs.
**EI_CLASSIFIER_RAW_SAMPLE_COUNT**
Number of sample frames expected by the DSP block. For example, if your window size is set to 2000 ms with a 100 Hz sampling rate, EI_CLASSIFIER_RAW_SAMPLE_COUNT will equal 2 s * 100 Hz = 200 sample frames. For image data, this is the total number of pixels in the input image, which is equal to EI_CLASSIFIER_INPUT_WIDTH * EI_CLASSIFIER_INPUT_HEIGHT.
**EI_CLASSIFIER_RAW_SAMPLES_PER_FRAME**
Number of numerical samples in each frame. For example, if you are using a 3-axis accelerometer, EI_CLASSIFIER_RAW_SAMPLES_PER_FRAME is 3.
**EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE**
Total number of values expected by the DSP block input. It is equal to EI_CLASSIFIER_RAW_SAMPLE_COUNT * EI_CLASSIFIER_RAW_SAMPLES_PER_FRAME.
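To make these sizing macros concrete, here is a minimal sketch of handing one window of raw data to the impulse. It assumes the standard C++ deployment and its ei_run_classifier.h entry point; the buffer-filling code and the function name run_one_window() are illustrative.

```cpp
#include <string.h>

#include "edge-impulse-sdk/classifier/ei_run_classifier.h"

// One window of raw data: EI_CLASSIFIER_RAW_SAMPLE_COUNT frames of
// EI_CLASSIFIER_RAW_SAMPLES_PER_FRAME values each.
static float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE];

// Callback used by the SDK to read chunks of the feature buffer.
static int raw_feature_get_data(size_t offset, size_t length, float *out_ptr) {
    memcpy(out_ptr, features + offset, length * sizeof(float));
    return 0;
}

void run_one_window() {
    signal_t signal;
    signal.total_length = EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE;
    signal.get_data = &raw_feature_get_data;

    ei_impulse_result_t result = { 0 };
    EI_IMPULSE_ERROR err = run_classifier(&signal, &result, false);
    (void)err; // EI_IMPULSE_OK on success; result holds the per-class scores
}
```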
**EI_CLASSIFIER_INPUT_WIDTH**
Image data will be resized so that the width matches this value, using the Resize mode set in the Edge Impulse Studio. Set to 0 for non-image data.
**EI_CLASSIFIER_INPUT_HEIGHT**
Image data will be resized so that the height matches this value, using the Resize mode set in the Edge Impulse Studio. Set to 0 for non-image data.
**EI_CLASSIFIER_INPUT_FRAMES**
Number of image frames used as input to an impulse. Set to 1 for image classification and object detection tasks. Set to 0 for non-image data.
**EI_CLASSIFIER_INTERVAL_MS**
Number of milliseconds between sampling the sensor. For non-image data, this is equal to 1000 / EI_CLASSIFIER_FREQUENCY. Set to 1 for image data.
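For time-series data, this macro is often used to pace sensor reads when filling a window. The sketch below is illustrative only: read_sensor_frame() is a hypothetical stand-in for your sensor driver, and ei_sleep() is the SDK's portable delay.

```cpp
#include <stdint.h>

#include "edge-impulse-sdk/classifier/ei_run_classifier.h"

// Hypothetical sensor read: fills one frame (e.g. x, y, z for a 3-axis accelerometer).
extern void read_sensor_frame(float *frame);

static float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE];

void sample_one_window() {
    for (size_t i = 0; i < EI_CLASSIFIER_RAW_SAMPLE_COUNT; i++) {
        read_sensor_frame(&features[i * EI_CLASSIFIER_RAW_SAMPLES_PER_FRAME]);

        // Crude pacing: wait one sampling interval before the next read.
        // A real application would also account for the time spent reading.
        ei_sleep((int32_t)EI_CLASSIFIER_INTERVAL_MS);
    }
}
```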
**EI_CLASSIFIER_LABEL_COUNT**
Number of labels in ei_classifier_inferencing_categories[], which is the number of classes that can be predicted by the classification model.
**EI_CLASSIFIER_HAS_ANOMALY**
Set to 1 if there is an anomaly block in the impulse, 0 otherwise.
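As a small illustration of how these two macros typically appear in application code, the sketch below prints one score per class and, when an anomaly block is present, the anomaly score. It assumes result was filled in by run_classifier() as in the earlier example; the function name print_result() is illustrative.

```cpp
#include "edge-impulse-sdk/classifier/ei_run_classifier.h"

static void print_result(const ei_impulse_result_t &result) {
    // One entry per class, in the same order as the trained labels.
    for (size_t i = 0; i < EI_CLASSIFIER_LABEL_COUNT; i++) {
        ei_printf("%s: %.5f\n",
                  ei_classifier_inferencing_categories[i],
                  result.classification[i].value);
    }
#if EI_CLASSIFIER_HAS_ANOMALY == 1
    ei_printf("anomaly score: %.3f\n", result.anomaly);
#endif
}
```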
**EI_CLASSIFIER_FREQUENCY**
Sampling frequency of the sensor(s). For non-image data, this is equal to 1000 / EI_CLASSIFIER_INTERVAL_MS. Set to 0 for image data.
**EI_CLASSIFIER_HAS_MODEL_VARIABLES**
Set to 1 if model-parameters/model_variables.h is present in the library, 0 otherwise.
**EI_CLASSIFIER_OBJECT_DETECTION**
Set to 1 if the impulse is configured for object detection, 0 otherwise.
**EI_CLASSIFIER_OBJECT_DETECTION_COUNT**
If EI_CLASSIFIER_OBJECT_DETECTION is set to 1, this macro is defined. Maximum number of objects that will be detected in each input image.
**EI_CLASSIFIER_OBJECT_DETECTION_THRESHOLD**
If EI_CLASSIFIER_OBJECT_DETECTION is set to 1, this macro is defined. Only bounding boxes with confidence scores equal to or above this value will be returned from inference.
**EI_CLASSIFIER_OBJECT_DETECTION_CONSTRAINED**
If EI_CLASSIFIER_OBJECT_DETECTION is set to 1, this macro is defined. Set to 1 if a constrained object detection model is used, 0 otherwise.
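When object detection is enabled, results come back as bounding boxes rather than a single set of class scores. Here is a minimal sketch of iterating them; it assumes result came from run_classifier(), and the function name print_bounding_boxes() is illustrative (field names follow the SDK's ei_impulse_result_bounding_box_t struct).

```cpp
#include "edge-impulse-sdk/classifier/ei_run_classifier.h"

#if EI_CLASSIFIER_OBJECT_DETECTION == 1
static void print_bounding_boxes(const ei_impulse_result_t &result) {
    for (size_t i = 0; i < EI_CLASSIFIER_OBJECT_DETECTION_COUNT; i++) {
        const ei_impulse_result_bounding_box_t &bb = result.bounding_boxes[i];
        if (bb.value == 0) {
            continue; // unused slot: no detection above the confidence threshold
        }
        ei_printf("%s (%.2f) at x=%u, y=%u, width=%u, height=%u\n",
                  bb.label, bb.value,
                  (unsigned int)bb.x, (unsigned int)bb.y,
                  (unsigned int)bb.width, (unsigned int)bb.height);
    }
}
#endif
```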
**EI_CLASSIFIER_INFERENCING_ENGINE**
The inferencing engine to be used. The default is EI_CLASSIFIER_TFLITE, which uses TensorFlow Lite for Microcontrollers (TFLM) as the inference engine. This macro can have the following values:
- EI_CLASSIFIER_NONE
- EI_CLASSIFIER_UTENSOR
- EI_CLASSIFIER_TFLITE
- EI_CLASSIFIER_CUBEAI
- EI_CLASSIFIER_TFLITE_FULL
- EI_CLASSIFIER_TENSAIFLOW
- EI_CLASSIFIER_TENSORRT
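Because the engine is fixed at compile time, application code can branch on this macro with the preprocessor. A minimal sketch (the function name print_engine() is illustrative):

```cpp
#include "edge-impulse-sdk/classifier/ei_run_classifier.h"

void print_engine() {
#if EI_CLASSIFIER_INFERENCING_ENGINE == EI_CLASSIFIER_TFLITE
    ei_printf("Inference engine: TensorFlow Lite for Microcontrollers (default)\n");
#elif EI_CLASSIFIER_INFERENCING_ENGINE == EI_CLASSIFIER_TFLITE_FULL
    ei_printf("Inference engine: full TensorFlow Lite\n");
#else
    ei_printf("Inference engine: other (see EI_CLASSIFIER_INFERENCING_ENGINE)\n");
#endif
}
```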
**EI_CLASSIFIER_SLICES_PER_MODEL_WINDOW**
Number of slices to gather per window. For example, if you want run_classifier_continuous() to be called every 0.25 s and you have a window size of 1 s, EI_CLASSIFIER_SLICES_PER_MODEL_WINDOW should be set to 4. It is set to 4 by default. Note that you can override this value in your main code if you #define this macro prior to including the SDK. For example:
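The include path and the value of 3 below are illustrative; the only requirement is that the #define appears before the SDK headers are pulled in.

```cpp
// Override the default of 4 slices per window; this must come before any
// Edge Impulse SDK include so the SDK picks up the custom value.
#define EI_CLASSIFIER_SLICES_PER_MODEL_WINDOW 3

#include "edge-impulse-sdk/classifier/ei_run_classifier.h"
```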
See this guide to learn more about continuous sampling. You can see an example here that shows how to set the number of slices per window to something other than 4 prior to including the Edge Impulse C++ SDK library.
**EI_CLASSIFIER_SLICE_SIZE**
Number of samples in a slice. Equal to EI_CLASSIFIER_RAW_SAMPLE_COUNT / EI_CLASSIFIER_SLICES_PER_MODEL_WINDOW. For run_classifier_continuous() applications, you can usually set signal.total_length to EI_CLASSIFIER_SLICE_SIZE. See this example.
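A rough sketch of that pattern, assuming a single-value-per-frame impulse (such as audio) so that one slice is exactly EI_CLASSIFIER_SLICE_SIZE values; classify_slice() and the buffer-filling code are illustrative.

```cpp
#include <string.h>

#include "edge-impulse-sdk/classifier/ei_run_classifier.h"

// One slice of samples, filled by your sampling code (e.g. a microphone DMA buffer).
static float slice_buffer[EI_CLASSIFIER_SLICE_SIZE];

static int get_slice_data(size_t offset, size_t length, float *out_ptr) {
    memcpy(out_ptr, slice_buffer + offset, length * sizeof(float));
    return 0;
}

void classify_slice() {
    // Note: run_classifier_init() must be called once before the first slice.
    signal_t signal;
    signal.total_length = EI_CLASSIFIER_SLICE_SIZE;
    signal.get_data = &get_slice_data;

    ei_impulse_result_t result = { 0 };
    // Called once per slice; the result covers a full window once
    // EI_CLASSIFIER_SLICES_PER_MODEL_WINDOW slices have been fed in.
    EI_IMPULSE_ERROR err = run_classifier_continuous(&signal, &result, false);
    (void)err;
}
```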
**EI_CLASSIFIER_USE_FULL_TFLITE**
This can be defined and set to 1 by the user if using full TensorFlow Lite. Note that setting this to 1 while EI_CLASSIFIER_INFERENCING_ENGINE is set to EI_CLASSIFIER_TFLITE will force EI_CLASSIFIER_INFERENCING_ENGINE to be set to EI_CLASSIFIER_TFLITE_FULL. Not compatible with the EON Compiler.