Object tracking
Object Tracking is a new postprocessing layer that lets you track bounding boxes across inference runs, turning raw bounding boxes into stable "tracked" detections. This can significantly reduce jitter and provide continuity of object labels across frames. Object tracking works both for models trained in Edge Impulse and for models brought in through Bring your own model (BYOM).

Configuring object tracking
1. Enabling object tracking
To enable the Object Tracking feature:
Open your Edge Impulse project.
Go to the Dashboard.
Scroll down to 'Administrative zone' and enable Post-processing / object tracking.
Click Save experiments.
You'll now have a new entry in the left navigation bar called "Post-processing".

2. Uploading data
To configure object tracking you'll need to upload one or more video files of representative scenes, e.g. a video of your production line if you want to track products on a conveyor belt, or a video of people walking around if you're building an intruder detection system. This data does not need to be labeled.
To upload this data, go to Data acquisition > Post-processing, and click the upload icon. We support video files in the most common formats, but there's a 100 MB file size limit.

3. Tuning object tracking parameters
After you've uploaded your data, go to Post-processing (in the left navigation bar). This UI allows you to quickly iterate over all object tracking parameters to find the right configuration for your specific use case. You can also use it to see raw bounding box predictions overlaid onto your videos, which is a great way to assess model performance.

Configuring the object tracking parameters to identify people.
When you're happy with the results, click Save to store the configuration.
4. Deploying your model
Once you've configured object tracking, all deployments (Linux, Mobile, EON Compiler, C++ library, WebAssembly, etc.) that contain an Object Detection block will automatically include the object tracking postprocessing layer. 🚀
Re-configuring object tracking thresholds at runtime

When you have built a model that includes Object Tracking postprocessing, you can dynamically configure the tracking thresholds.
Linux CLI
Use edge-impulse-linux-runner --model-file <model.eim>. The runner's interactive console (and web UI via http://localhost:PORT) now includes configurable tracking thresholds (click the 'gear' icon).
Mobile client
If you’re running your impulse in the Edge Impulse mobile client, you can configure thresholds in the UI as well (click the 'gear' icon).
Node.js SDK
In the Node.js SDK, there is a new function to set these thresholds at runtime:
// classifier is an instance of EdgeImpulseClassifier
classifier.setLearnBlockThreshold({
    keepGrace: 5,
    maxObservations: 5,
    iouThreshold: 0.5,
});
Code deployments (C++)
For C++ library deployments, you can configure thresholds in model-parameters/model_variables.h (the name may vary based on your project's generated files). A typical configuration might look like:
const ei_object_tracking_config_t ei_posprocessing_config_9 = {
    1,       /* implementation_version */
    5,       /* keep_grace */
    5,       /* max_observations */
    0.5000f, /* iou_threshold */
    true     /* use_iou */
};
You can update the configuration at runtime through set_threshold_postprocessing.
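The exact prototype of set_threshold_postprocessing depends on your SDK version, so the snippet below is only a sketch: it assumes the function takes the default impulse handle (ei_default_impulse) plus a filled-in ei_object_tracking_config_t, both of which are assumptions you should verify against ei_run_classifier.h and model-parameters/model_variables.h in your export.
#include "edge-impulse-sdk/classifier/ei_run_classifier.h"

// Sketch only: loosen the tracking thresholds at runtime.
// ASSUMPTIONS: the handle name (ei_default_impulse) and the exact arguments of
// set_threshold_postprocessing may differ in your generated SDK sources; some
// versions may also expect the postprocessing block id/index.
void relax_tracking_thresholds() {
    ei_object_tracking_config_t new_config = {
        1,      /* implementation_version */
        10,     /* keep_grace: keep lost traces alive for more frames */
        5,      /* max_observations */
        0.35f,  /* iou_threshold: looser overlap requirement when matching boxes */
        true    /* use_iou */
    };
    set_threshold_postprocessing(&ei_default_impulse, &new_config);
}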
Comparing object tracking vs. standard object detection
A simple way to see the difference between raw bounding boxes and tracked bounding boxes:

Terminal 1:
PORT=9200 edge-impulse-linux-runner --model-file ./model-with-object-tracking.eim
Terminal 2:
PORT=9201 edge-impulse-linux-runner --model-file ./model-without-object-tracking.eim
Open http://localhost:9200 and http://localhost:9201 in two separate browser windows and observe the difference in bounding box stability. You'll see smoother, more persistent bounding boxes with object tracking enabled.
Accessing tracked objects in the inference output
C++ libraries
ei_impulse_result_t result;
// ... run inference ...
for (uint32_t i = 0; i < result.postprocessed_output.object_tracking_output.open_traces_count; i++) {
    ei_object_tracking_trace_t trace = result.postprocessed_output.object_tracking_output.open_traces[i];
    // Access tracked object data:
    // trace.id, trace.label, trace.x, trace.y, trace.width, trace.height
}
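As a usage sketch (assuming ei_printf is available and that the trace id and box fields are numeric while label is a C string; verify the field types in the SDK headers of your export), you could log every open trace after each inference:
// Sketch only: print the currently tracked objects after running inference.
// ASSUMPTION: fields are cast to int for printing; adjust the format
// specifiers to match the actual types in your SDK version.
for (uint32_t i = 0; i < result.postprocessed_output.object_tracking_output.open_traces_count; i++) {
    ei_object_tracking_trace_t trace = result.postprocessed_output.object_tracking_output.open_traces[i];
    ei_printf("trace %d (%s): x=%d y=%d w=%d h=%d\n",
              (int)trace.id, trace.label,
              (int)trace.x, (int)trace.y,
              (int)trace.width, (int)trace.height);
}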
WebAssembly
let result = classifier.classify(/* frame or image data */);
console.log(result.object_tracking_results);
EIM files
When reading inference metadata from an EIM file, look under the object_tracking field to retrieve tracked objects.
Advanced usage
Looking for a more complex example? Check out the Model Cascade approach, which chains an Object Tracking model together with an LLM (e.g., GPT-4).
Troubleshooting
If you encounter any issues with object tracking, please reach out to your solutions engineer for assistance.