model.eim) will be deployed with the TensorRT library, which will be compiled with optimizations for the GPU and set up via the Linux C++ SDK. Once the model can identify different pizza toppings, an additional Python program will be added to check each pizza for a standard quantity of pepperoni, mushrooms, and paprika. This project is a proof-of-concept that can be widely applied in the product manufacturing and food production industries to perform quality checks based on the required quantity of parts in a product.
.zip file, then we're ready for model deployment with the Edge Impulse C++ SDK on the Jetson Nano side.
ssh from your PC/laptop and install the Edge Impulse tooling via the terminal:
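The exact commands are in Edge Impulse's Jetson Nano setup guide; at the time of writing the installation is a single setup script, roughly as below (the URL is an assumption, so check the current documentation):

wget -q -O - https://cdn.edgeimpulse.com/firmware/linux/jetson.sh | bash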
Clang as a C++ compiler:
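On stock JetPack (Ubuntu-based), Clang is available as a regular package; a minimal sketch, assuming the default apt repositories:

sudo apt install -y clang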
sh build-opencv-linux.sh
Now make sure the contents of the TensorRT folder you downloaded from the Edge Impulse Studio have been unzipped and moved to the example-standalone-inferencing-linux directory. For FOMO, we need to edit the variables in the source/eim.cpp file with:
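After editing, the model has to be rebuilt as an .eim before it can be run. A minimal sketch, assuming the Makefile targets documented in the example-standalone-inferencing-linux README (the exact flags may differ in the current repository):

APP_EIM=1 TARGET_JETSON_NANO=1 make -j

Then run the resulting binary: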
./build/model.eim
If your Jetson Nano is powered by a dedicated power supply, its performance can be maximized with this command:
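On stock JetPack, maximum performance is normally set with NVIDIA's nvpmodel and jetson_clocks utilities (assumed here; your power-mode numbering may differ):

sudo nvpmodel -m 0
sudo jetson_clocks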
edge-impulse-linux-runner --download modelfile.eim, then running it with the same command as above.
git clone https://github.com/edgeimpulse/linux-sdk-python as well, so that you have the samples locally.
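You also need the Edge Impulse Linux Python SDK itself, which is published on PyPI (package name assumed to still be edge_impulse_linux):

pip3 install edge_impulse_linux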
The program I made (topping.py) is a modification of Edge Impulse's classify.py in the examples/image folder of the linux-sdk-python directory.
model.eim), for example: 0 0 2 3 3 1 0 1 3 3 3 2 0 0 0 2 3 3 2 0 0 2 5 5 1 0 0 2 3 3 1 0 0 1 2 2 0 0, will record 0 as the sequence separator and record the peak value in each sequence. As an example, if the correct number of toppings on a pizza (per quality control standards) is 3, and we know that a 0 is a separator and anything other than 3 is bad, then 0 3 0 3 0 3 0 5 0 3 0 2 0 is: OK OK OK BAD OK BAD
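The separator/peak logic itself is small. Below is a minimal sketch of that idea in Python (a hypothetical helper, not the actual topping.py, which also handles the camera stream and inference), assuming you already have one topping count per frame:

# Hypothetical helper: split the count stream on 0 and grade each pizza by its peak count
def check_pizzas(counts, expected=3):
    results = []
    peak = 0
    for c in counts:
        if c == 0:  # 0 = no pizza in view, acts as the sequence separator
            if peak > 0:
                results.append("OK" if peak == expected else "BAD")
            peak = 0
        else:
            peak = max(peak, c)  # keep the highest count seen while this pizza passes by
    if peak > 0:  # handle a trailing pizza with no closing separator
        results.append("OK" if peak == expected else "BAD")
    return results

# The example from the text prints ['OK', 'OK', 'OK', 'BAD', 'OK', 'BAD']
print(check_pizzas([0, 3, 0, 3, 0, 3, 0, 5, 0, 3, 0, 2, 0]))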
My Python program (topping.py) can be downloaded at this link: https://github.com/Jallson/PizzaQC_Conveyor_Belt/blob/main/topping.py
To run the program, use the command along with the path where the model.eim file is located. Be sure to use the one built for the GPU, in case you still have both on the Nano: