Project Demos
About this Project
In this project I take advantage of Edge Impulse’s FOMO (Faster Objects, More Objects) algorithm, which is fast and efficient at object detection. The algorithm is well suited to recognizing different types of objects placed on a cashier table without barcodes, and can output the total price of the items. Even a 96x96 pixel image with grayscale color depth provides enough data to make this project work. The model is exported and run from a Python program deployed to a Raspberry Pi, so it runs locally with no cloud connectivity needed once deployed. By running the machine learning model on the edge, this device uses less energy, requires less human labour, and can cut overall hardware costs. This proof of concept can be developed further with more data variation, camera angles, and different cashier environments and lighting conditions to improve its accuracy in a real-world application.
Hardware Components
- Raspberry Pi 4 Model B
- USB webcam
- LCD Display 16 x 2
Software/Apps & Online Services
- Edge Impulse Studio
- Raspberry Pi OS
- Terminal
Others
- 3D printed case for Pi4
Steps
Preparation
Prepare the Raspberry Pi: connect via SSH, then install the dependencies and the Edge Impulse for Linux CLI. Follow this guide for extra details.
Data Collection
For the image collection, I took some pictures using the USB webcam attached to the Raspberry Pi, which is connected to Edge Impulse Studio; other pictures were taken with a smartphone camera. The position and orientation of the items are shifted between pictures to help the ML model recognize the objects later in the process.
Data Labeling
Label the images using the Bounding-Boxes method, and choose Raspberry Pi 4 as the target device for latency calculations.

Train and Build Model
Create an Impulse with a 160x160 pixel image size and Grayscale color depth, and choose the Image and Object Detection blocks. Choose FOMO (MobileNet V2 0.35), which produces a model with 8 output classes (Cadbury, Mentos, Indomie, KitKat, etc.) with fairly accurate results. After testing is done, we can check the video stream from the Raspberry Pi in a browser by running edge-impulse-linux-runner. If the camera is performing as expected, the model is ready to be deployed to the Raspberry Pi 4.
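The model can also be exercised directly from Python with the Edge Impulse Linux SDK before wiring up the LCD. Here is a minimal sanity-check sketch, assuming the model has been downloaded to the Pi as modelfile.eim (the filename is an assumption) and the edge_impulse_linux package is installed:

```python
# Minimal check that the downloaded .eim model detects objects from the webcam.
# Assumes: pip install edge_impulse_linux, and modelfile.eim in the working directory.
from edge_impulse_linux.image import ImageImpulseRunner

MODEL_PATH = "modelfile.eim"  # assumed path to the exported model

with ImageImpulseRunner(MODEL_PATH) as runner:
    runner.init()  # loads the model and starts the .eim process
    # classifier() grabs frames from the camera (device 0) and yields results
    for res, _frame in runner.classifier(0):
        for box in res["result"].get("bounding_boxes", []):
            print(f"{box['label']} ({box['value']:.2f}) at x={box['x']}, y={box['y']}")
```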
Deploy the Python Program to the Raspberry Pi 4, Output to the 16 x 2 LCD
The Python program I created utilizes the .eim file from the training result, transforming the detected objects into a total price and an item count. The program also displays this output on the 16 x 2 LCD.
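A condensed sketch of that idea is shown below. The price table, the confidence threshold of 0.5, the model path, and the LCD driver (RPLCD with a PCF8574 I2C backpack at address 0x27) are all assumptions for illustration; the actual program may differ:

```python
#!/usr/bin/env python3
# Hypothetical sketch: count FOMO detections from the .eim model and show
# the item count and total price on a 16x2 character LCD.
# Assumptions: pip install edge_impulse_linux RPLCD, an I2C LCD backpack
# at address 0x27, and made-up prices for the trained labels.
import sys
from collections import Counter

from edge_impulse_linux.image import ImageImpulseRunner
from RPLCD.i2c import CharLCD

MODEL_PATH = "modelfile.eim"   # assumed path to the exported model
CAMERA_ID = 0                  # first USB webcam
PRICES = {                     # hypothetical prices per detected label
    "Cadbury": 1.50,
    "Mentos": 0.80,
    "Indomie": 0.50,
    "KitKat": 1.20,
}

def main():
    # assumes a PCF8574 I2C expander at 0x27 driving the 16x2 LCD
    lcd = CharLCD("PCF8574", 0x27, cols=16, rows=2)
    with ImageImpulseRunner(MODEL_PATH) as runner:
        runner.init()
        # classifier() grabs webcam frames and yields detection results
        for res, _frame in runner.classifier(CAMERA_ID):
            boxes = res["result"].get("bounding_boxes", [])
            # count detections per label above a 0.5 confidence threshold
            counts = Counter(b["label"] for b in boxes if b["value"] > 0.5)
            total = sum(PRICES.get(label, 0.0) * n for label, n in counts.items())
            # first LCD row: item count; second row: total price
            lcd.clear()
            lcd.write_string(f"Items: {sum(counts.values())}")
            lcd.cursor_pos = (1, 0)
            lcd.write_string(f"Total: ${total:.2f}")

if __name__ == "__main__":
    try:
        main()
    except KeyboardInterrupt:
        sys.exit(0)
```

Clearing and rewriting the LCD on every frame is kept here for brevity; a real program would only update the display when the counts change, to avoid flicker.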



