Hardware
Note: When collecting data samples, it’s important to remember that the images of vehicles (trucks or cars) to be labeled should not be too small, as the model we’re building can only recognize objects with a minimum size of 32x32 pixels.
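If your labels come from a COCO JSON export (as in the upload step below), a quick way to catch undersized boxes before uploading is to scan the annotation file. The following is a minimal sketch, assuming a hypothetical COCO-format file named annotations.json and the 32-pixel threshold from the note above:

```python
import json

MIN_SIZE = 32  # minimum object size the model can recognize, per the note above

# "annotations.json" is a hypothetical COCO-format label file
with open("annotations.json") as f:
    coco = json.load(f)

# COCO bounding boxes are [x, y, width, height]
too_small = [a for a in coco["annotations"]
             if a["bbox"][2] < MIN_SIZE or a["bbox"][3] < MIN_SIZE]
print(f"{len(too_small)} of {len(coco['annotations'])} boxes "
      f"are smaller than {MIN_SIZE}x{MIN_SIZE} px")
```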
(Figures: Collect data, Upload COCO JSON, Upload video, Split into images, Labeling with YOLO, Train and test, Learning blocks, Save parameters, Generate features, NN settings and results, Live classification, Model test)
TensorRT build library
First, install Cython:

pip3 install Cython

Then install the Linux Python SDK:

pip3 install pyaudio edge_impulse_linux

You'll also need to clone the examples:

git clone https://github.com/edgeimpulse/linux-sdk-python
Next, build and download the model. Install Clang:

sudo apt install -y clang
Clone the following repository and initialize its submodules:
git clone https://github.com/edgeimpulse/example-standalone-inferencing-linux
cd example-standalone-inferencing-linux && git submodule update --init --recursive
Then install OpenCV:
sh build-opencv-linux.sh
Now make sure the contents of the TensorRT folder from the Edge Impulse Studio .zip file download have been unzipped and moved to the example-standalone-inferencing-linux directory.
Build the model, targeting the Orin Nano GPU with TensorRT:
APP_EIM=1 TARGET_JETSON_ORIN=1 make -j
The resulting file will be in ./build/model.eim
Now run:

edge-impulse-linux-runner --clean

This will prompt you to log in to your account and choose your project, then download the model.eim file, which is built specifically with the TensorRT library targeting the Orin Nano GPU. During the process, the console will display the path where the model.eim file has been downloaded. For example, in the image below, it shows the file located at /home/orin/.ei-linux-runner/models/310628/v15.
For convenience, you can copy this file to the same directory as the Python program you'll be creating in the next steps. For instance, use the following command to copy it to the home directory:

cp -v model.eim /home/orin
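To confirm the downloaded model loads correctly from Python, you can run a quick test with the Linux Python SDK. The following is a minimal sketch based on the SDK's image classification example; the model path (/home/orin/model.eim) and the camera index 0 are assumptions:

```python
import cv2
from edge_impulse_linux.image import ImageImpulseRunner

MODEL_PATH = "/home/orin/model.eim"  # assumes the copy step above

with ImageImpulseRunner(MODEL_PATH) as runner:
    model_info = runner.init()  # loads the model and returns its metadata
    print("Loaded model:", model_info["project"]["name"])

    cap = cv2.VideoCapture(0)  # first USB camera (index assumed)
    ret, frame = cap.read()
    if ret:
        img = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # the SDK expects RGB
        features, cropped = runner.get_features_from_image(img)
        res = runner.classify(features)
        for bb in res["result"]["bounding_boxes"]:
            print(f"{bb['label']} ({bb['value']:.2f}) at x={bb['x']} y={bb['y']}")
    cap.release()
```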
Check progress

To verify the model, run it with the Linux runner, pointing to the downloaded file:

edge-impulse-linux-runner --model-file <path to directory>/model.eim
(Figure: Live stream)
Our program is adapted from the classify.py script in Edge Impulse's examples in the linux-sdk-python directory. We turned it into an object tracking program by integrating a tracking library, which assigns an ID to each moving object to determine whether a detection is the same vehicle or a different one. This prevents miscounts and double counts.
For speed calculation, we also use this tracking library, adding two horizontal lines on the screen. We measure the actual distance between these lines and divide it by the time it takes an object to pass from one line to the other. The direction is determined by the order in which the lines are crossed; for example, A → B is IN, while B → A is OUT.
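The counting and speed logic can be sketched as follows. This is a simplified illustration, not the actual program: the 8 m line spacing and the on_line_cross hook (which a tracker would call when a tracked centroid crosses a line) are hypothetical.

```python
import time

# Hypothetical speed trap: line A and line B are drawn on screen, and the
# real-world distance between them was measured as 8 m (value assumed).
DISTANCE_M = 8.0

crossings = {}                  # track_id -> (first line crossed, timestamp)
counts = {"IN": 0, "OUT": 0}    # vehicles per direction

def on_line_cross(track_id, line):
    """Hypothetical hook: called when a tracked object crosses line "A" or "B"."""
    now = time.time()
    if track_id not in crossings:
        crossings[track_id] = (line, now)   # first crossing: remember it
        return None
    first_line, t0 = crossings.pop(track_id)
    if first_line == line:
        return None                         # same line twice; ignore
    direction = "IN" if (first_line, line) == ("A", "B") else "OUT"
    counts[direction] += 1
    speed_kmh = DISTANCE_M / (now - t0) * 3.6   # m/s converted to km/h
    return direction, speed_kmh
```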
In the first code example, we use a USB camera connected to the Orin Nano and run the program with the following command:
Note: The video/camera capture display cannot be viewed with a headless method from a PC/laptop. Instead, connect a monitor directly to the Orin Nano to view the visuals, including the lines, labeled bounding boxes, IN and OUT counts, and vehicle speeds.
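For reference, the kind of overlay described above can be drawn with OpenCV before each frame is shown. A minimal sketch; the line positions and counter values are placeholders. Note that cv2.imshow needs an attached display, which is why the note above applies:

```python
import cv2

def draw_overlay(frame, counts, line_a_y=200, line_b_y=400):
    """Draw the two counting lines and the IN/OUT totals on a frame."""
    w = frame.shape[1]
    cv2.line(frame, (0, line_a_y), (w, line_a_y), (0, 255, 0), 2)   # line A
    cv2.line(frame, (0, line_b_y), (w, line_b_y), (0, 0, 255), 2)   # line B
    cv2.putText(frame, f"IN: {counts['IN']}  OUT: {counts['OUT']}",
                (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 2)
    return frame
```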
(Figures: Python code, Camera feed)