Inventory Stock Tracker - FOMO - BrainChip Akida
Manage the availability and location of your products in the warehouse using the BrainChip AKD1000 for fast and seamless detection with Machine Vision.
Created By: Christopher Mendez
Public Project Link: https://studio.edgeimpulse.com/public/425288/live
Industries, stores, workshops, and many other professional environments have to manage an inventory. Whether of products or tools, this need is normally addressed with a limited digital or manual solution. This project aims to address that need with a smart approach that lets you know the quantity of products/tools and their exact location in the rack, box, or drawer.
The system will constantly track the terminal blocks on a tray, counting them and streaming a live view to a web server. In addition, you will have real-time location feedback on an LED matrix.
To develop this project we will use the following hardware:
It should be noted that the AKD1000 Neuromorphic Hardware Accelerator is the main component of this project, thanks to characteristics that make it ideal for this use case.
Considering that our project will end up being deployed in industrial and commercial environments, it's crucial that it can do its job efficiently and with very low energy consumption. This is where BrainChip's technology makes sense. The Akida™ neuromorphic processor mimics the human brain to analyze only essential sensor inputs at the point of acquisition, processing data with unparalleled performance, precision, and economy of energy.
To develop the project model we are going to use:
To fully assemble the project:
Stack the PCIe Slot Extension Adapter Board under the Raspberry Pi and connect the flat cable accordingly (dedicated instructions).
Screw the 3D-printed arm to the Raspberry Pi using the available spacer threads.
Screw the MIPI camera to the 3D-printed arm and connect the flat cable from the camera to the CAM0 slot on the Raspberry Pi.
Stack the Grove Base Hat on the Raspberry Pi's 40-pin header.
Connect the Grove cable from the LED Matrix to an I2C connector on the Grove Base Hat.
Screw the cooling fan holder to the PCIe Slot Extension Adapter Board and connect the fan to +5V and GND on the 40-pin header (optional).
With the Raspberry Pi Imager, flash a micro-SD card with Raspberry Pi OS Lite (64-bit). Enter the OS Customisation menu by pressing Ctrl + Shift + X, add your login credentials, enable the wireless LAN by adding your WiFi credentials, and verify that the SSH connection is enabled in the Services settings.
Once the micro-SD card is flashed and verified, eject it and install it in your Raspberry Pi 5.
Once the system is powered up and connected to the internet (I used WiFi), you can access it over an SSH connection. You will need to know the device's local IP address; in my case, I got it from my router's list of connected devices.
To start setting up the device for a custom model deployment, let's verify we have installed all the packages we need.
I am using Putty for the SSH connection. Log in using the credentials you set; in this case, the username is raspberrypi and the password is raspberrypi.
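If you prefer a plain terminal to Putty, an equivalent OpenSSH login looks like this (the IP address is a placeholder for the one you found on your router):

```bash
# Log in to the Raspberry Pi over SSH (replace the IP with your device's address)
ssh raspberrypi@192.168.1.50
```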
Once in, verify that the Akida PCIe board is detected:
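The exact device string can vary, but one common check is to list the PCIe bus and look for the BrainChip co-processor entry:

```bash
# List PCIe devices and filter for the Akida co-processor entry
lspci | grep -i co-processor
```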
Create a virtual environment:
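A minimal sketch using Python's built-in venv module (the environment name venv is arbitrary):

```bash
# Create and activate a virtual environment for the Akida tooling
python3 -m venv venv
source venv/bin/activate
```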
Install the Akida driver:
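The steps below are a sketch based on BrainChip's akida_dw_edma PCIe driver repository; check the repository's README for the authoritative instructions:

```bash
# Install the kernel headers and build tools needed to compile the driver
sudo apt install -y build-essential linux-headers-$(uname -r)
# Clone the Akida PCIe driver sources
git clone https://github.com/Brainchip-Inc/akida_dw_edma
```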
With the driver modules already mounted and the tools ready, install the Akida driver:
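Assuming the driver sources were cloned as above, the repository ships an install script:

```bash
# Build and install the Akida PCIe kernel driver
cd akida_dw_edma
sudo ./install.sh
```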
Once installed, verify that the driver is working correctly and that it detects the mounted AKD1000 PCIe card.
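With the virtual environment active, the akida Python package provides a small CLI that can list detected devices; something like:

```bash
# Install the Akida Python package inside the virtual environment
pip install akida
# List the Akida devices visible to the runtime; the AKD1000 PCIe card should appear
akida devices
```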
Install some specific project dependencies:
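The exact list depends on the scripts you will run later; as an illustration, the vision and streaming parts of this project would typically need packages like the ones below. Treat these names as assumptions and prefer the repository's requirements file if it has one:

```bash
# Illustrative dependencies: OpenCV for image handling, Flask for the web stream
pip install opencv-python flask
```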
If you'd like, you can clone the public Edge Impulse project from this link.
First, we need to create an Edge Impulse Studio account if we haven't yet, and create a new project:
For the creation of our model's dataset, we have several options: uploading the images from the Raspberry Pi with a USB camera, or using our computer or phone. In this case, I chose to take them from the phone using its camera.
The dataset consists of a single class in which we capture the "piece", a terminal block in this case, from several angles and perspectives. Use the Labeling queue to easily label all the pieces in each frame.
Taking at least 95 pictures of the piece class will let you create a robust enough model.
After having the dataset ready, it is time to define the structure of the model.
In the left side menu, we navigate to Impulse design > Create impulse and define the following settings for each block, respectively:
Image width: 224
Image height: 224
Resize mode: Fit shortest axis
Add an Image processing block since this project will work with images as inputs.
We are going to use an Object Detection learning block developed for Brainchip Akida hardware.
Finally, we save the Impulse design, it should end up looking like this:
After having designed the impulse, it's time to set the processing and learning blocks.
In the Image processing block, we set the "Color depth" parameter to RGB, click on Save parameters and then Generate features.
In the Object Detection learning block, define the following settings:
Number of training cycles: 60
Learning rate: 0.0005
In the Neural network architecture, select the Akida FOMO AkidaNet(alpha=0.5 @224x224x3).
Click on the Start training button and wait for the model to be trained and the confusion matrix to show up.
The results in the confusion matrix can be improved by adding more samples to the dataset. After some trial and error testing different models, I was able to get one stable and robust enough for the application.
To be able to run the project, we need to go back to our SSH connection with the device and clone the project from the GitHub repository. For this, use the following command:
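The repository URL below is a placeholder; substitute the actual URL from the project page:

```bash
# Clone the project repository (placeholder URL)
git clone https://github.com/<user>/<project-repo>.git
```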
Enter the repository directory:
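For example, with the placeholder name from the previous step:

```bash
# Move into the cloned repository (placeholder directory name)
cd <project-repo>
```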
We will go through the content in detail later.
It is recommended that you install Edge Impulse for Linux following this link or the steps below:
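At the time of writing, the Edge Impulse docs for Raspberry Pi install the CLI through Node.js roughly as follows (versions may have moved on; the linked guide is authoritative):

```bash
# Install Node.js and the build tools the CLI needs
curl -sL https://deb.nodesource.com/setup_20.x | sudo bash -
sudo apt install -y gcc g++ make build-essential nodejs
# Install the Edge Impulse Linux CLI globally
sudo npm install edge-impulse-linux -g --unsafe-perm
```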
Then, update the npm packages:
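One way to do that with npm itself, and then confirm which version ended up installed:

```bash
# Update the globally installed Edge Impulse CLI package
sudo npm update -g edge-impulse-linux
# Print the installed package version
npm list -g edge-impulse-linux
```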
It should show you the installed version (1.8.0 at the time of writing).
To activate the MIPI camera support run the following command:
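On Raspberry Pi OS, the interface options live in the raspi-config tool:

```bash
# Open the Raspberry Pi configuration tool
sudo raspi-config
```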
Use the cursor keys to select and open Interfacing Options, then select Camera, and follow the prompt to enable the camera. Reboot the Raspberry Pi.
If you want to test the model as it is without any modification, jump to the Run Inferencing section.
Once the project is cloned locally on the Raspberry Pi, you can download the project model from Edge Impulse Studio by navigating to the Dashboard section and downloading the MetaTF .fbz file.
Once downloaded, open a new terminal in the model's download directory and copy the model to the Raspberry Pi using the scp command as follows:
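The model filename and IP address below are placeholders for the .fbz you downloaded and your device's address:

```bash
# Copy the model from your computer to the Pi's home directory (placeholder names)
scp akd1000-model.fbz raspberrypi@<raspberry-pi-ip>:/home/raspberrypi
```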
You will be asked for your Raspberry Pi login password.
Now the model is in the Raspberry Pi's local storage (/home/raspberrypi), and you can verify this by listing the directory contents using ls.
Move the model to the project directory with the following command (from /home/raspberrypi):
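Assuming the placeholder names from the previous steps:

```bash
# Move the model into the cloned project directory (placeholder names)
mv akd1000-model.fbz <project-repo>/
```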
With the model now in the project directory, everything is ready to run.
In the project directory, there are several script options with the following characteristics:
inventory.py: the original program; it uses a MIPI camera feed to run the inference.
stock.py: an optimized version of the original program; it also uses a MIPI camera, but the object markers are bigger.
low-power.py: a lower-power program with half the energy consumption; it also uses a MIPI camera.
usb-inference.py: a version that uses a USB camera instead of a MIPI camera (no Matrix control).
There are other auxiliary scripts for testing purposes:
mipi_inference.py: runs the FOMO model without controlling the LED Matrix.
matrix_test.py: tests the LED matrix by displaying colors and patterns.
To run the project, type the following command:
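For example, to start the main script inside the virtual environment (swap in stock.py, low-power.py, or usb-inference.py per the list above):

```bash
# Run the main inventory tracker script
python3 inventory.py
```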
The .fbz model is hard-coded in the script, so if you want to use the custom one you downloaded, update the "model_file" variable in the Python script.
The project will start running, streaming a live view of the camera feed and showing the location of detected objects on the LED matrix, alongside the FOMO inference results, object count, frames per second, and energy consumption. To watch a preview of the camera feed, open your favorite browser and enter http://<Raspberry Pi IP>:8080.
Here I show you the whole project working and running.
This project leverages the BrainChip Akida Neuromorphic Hardware Accelerator to propose an innovative solution for inventory stock tracking. It showed very good performance, running at 56 FPS with less than 100 mW of power consumption while tracking many pieces at a time.