Schematic diagram
The model (`model.eim`) will be deployed with the TensorRT library, configured with GPU optimizations and integrated through the Linux C++ SDK. The model is then called from our Python code to perform cumulative object counting: the algorithm compares the coordinates of objects detected in the current frame with those from previous frames, so that only newly appeared objects are counted and duplicates are avoided.
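The following is a minimal sketch of this counting idea, not the project's exact code: it assumes FOMO returns one centroid per detected object and treats a detection as new only if no centroid in the previous frame lies within a distance threshold (`MATCH_DIST` is an illustrative value).

```python
import math

MATCH_DIST = 40  # max pixel distance to treat two centroids as the same object

def update_count(prev_centroids, curr_centroids, total):
    """Count detections in the current frame with no nearby match in the previous one."""
    for cx, cy in curr_centroids:
        matched = any(
            math.hypot(cx - px, cy - py) < MATCH_DIST
            for px, py in prev_centroids
        )
        if not matched:
            total += 1  # no centroid was here last frame -> a new object entered
    return total

# Example: one object moves right while a second appears in the third frame
total, prev = 0, []
for frame in [[(100, 50)], [(130, 52)], [(160, 55), (90, 48)]]:
    total = update_count(prev, frame, total)
    prev = frame
print(total)  # 2
```

In practice the threshold has to account for belt speed and frame rate: too small and a fast-moving object is double counted, too large and two nearby objects merge into one.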
Jetson Nano, camera, and conveyor belt
Data variation
Upload data
Auto-labeling
Label cluster
Manual labeling
Balance ratio 80/20
Blocks
Save parameters
Generate features
Result
Test
Download the `.zip` file, and then we're ready for model deployment with the Edge Impulse C++ SDK directly on the NVIDIA Jetson Nano.
TensorRT build library
Connect to the Jetson Nano via `ssh` from a PC or laptop over Ethernet and set up the Edge Impulse firmware in the terminal:
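At the time of writing, the Edge Impulse documentation installs the Linux CLI tools and dependencies on a Jetson Nano with the following setup script (check the current docs in case the URL has changed):

wget -q -O - https://cdn.edgeimpulse.com/firmware/linux/jetson.sh | bash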
Once the build finishes, the compiled model is located at `/build/model.eim`.
If your Jetson Nano runs from a dedicated power supply (as opposed to a battery), you can maximize its performance with this command:
sudo /usr/bin/jetson_clocks
Now the model is ready to be invoked from a high-level language, such as the Python program in the next step. To make sure the model works, we can start the Edge Impulse Runner with the camera mounted on the Jetson Nano and the conveyor belt running. You can then see the camera stream in your browser (the IP address is shown when the Edge Impulse Runner first starts up). Run this command:
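Assuming the model was built to the path above, the runner can be pointed at it with the `--model-file` option:

edge-impulse-linux-runner --model-file /build/model.eim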
Video stream from your browser
Deploy to CPU
CPU vs GPU
Our program is a modification of `classify.py` in `examples/image` from the linux-python-sdk directory. We turned it into an object tracking program by solving a bipartite matching problem, so the same object can be tracked across different frames and is not counted twice. For more detail, you can download and check the Python program at https://github.com/Jallson/High_res_hi_speed_object_counting_FOMO_720x720
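As an illustration of the bipartite matching step, the sketch below pairs centroids from consecutive frames by minimizing total distance with SciPy's Hungarian-algorithm solver; this is an assumed formulation for clarity, and `count_moving_bolt.py` may implement the matching differently.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

MAX_DIST = 40  # reject assignments farther apart than this (illustrative value)

def match_tracks(prev_centroids, curr_centroids):
    """Return (matches, new_indices): matched (prev_i, curr_j) pairs and
    indices of current detections with no counterpart in the previous frame."""
    if not prev_centroids or not curr_centroids:
        return [], list(range(len(curr_centroids)))
    prev = np.asarray(prev_centroids, dtype=float)
    curr = np.asarray(curr_centroids, dtype=float)
    # Cost matrix: Euclidean distance between every prev/curr centroid pair
    cost = np.linalg.norm(prev[:, None, :] - curr[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)  # minimum-cost bipartite matching
    matches, matched_curr = [], set()
    for i, j in zip(rows, cols):
        if cost[i, j] < MAX_DIST:  # distant pairs are not the same object
            matches.append((i, j))
            matched_curr.add(j)
    new_indices = [j for j in range(len(curr)) if j not in matched_curr]
    return matches, new_indices
```

Unmatched current detections (`new_indices`) are the ones added to the cumulative count, which is what prevents an object from being counted again on every frame it stays in view.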
Then run `count_moving_bolt.py`, pointing it to where `model.eim` is located:
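Assuming the script keeps `classify.py`'s convention of taking the model path as its first argument, the invocation would look like this (adjust the path to wherever your `model.eim` lives):

python3 count_moving_bolt.py /build/model.eim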