
Inventory Stock Tracker - FOMO - BrainChip Akida

Manage the availability and location of your products in the warehouse using the Brainchip AKD1000 for fast and seamless detection using Machine Vision.


Created By: Christopher Mendez

Public Project Link: https://studio.edgeimpulse.com/public/425288/live

Introduction

Industries, stores, workshops, and many other professional environments have to manage an inventory. Whether it holds products or tools, this need is usually addressed with a limited digital or manual solution. This project contributes a smarter approach: a system that tells you the quantity of your products or tools and their exact location in a rack, box, or drawer.

The system will constantly track the terminal blocks on a tray, count them, and stream a live view to a web server. In addition, you will get real-time location feedback on an LED matrix.

Hardware and Software Requirements

To develop this project we will use the following hardware:

  • Akida™ PCIe Board

  • PCIe Slot For Raspberry Pi 5 Extension Adapter Board

  • Raspberry Pi 5

  • Camera Module 3 - IMX708

  • RGB LED Matrix

  • Grove Base Hat for Raspberry Pi (Optional)

  • Custom 3D parts

Akida™ PCIe Board

It should be noted that the AKD1000 Neuromorphic Hardware Accelerator is the main component of this project, thanks to some characteristics that make it ideal for this use case.

Considering that our project will end up deployed in industrial and commercial environments, it's crucial that it does its job efficiently and with very low energy consumption. This is where BrainChip's technology shines: the Akida™ neuromorphic processor mimics the human brain to analyze only essential sensor inputs at the point of acquisition, processing data with unparalleled performance, precision, and economy of energy.

Software

To develop the project's model we are going to use:

  • Edge Impulse Studio

Hardware Setup

To fully assemble the project:

  • Stack the PCIe Slot Extension Adapter Board under the Raspberry Pi and connect the flat cable accordingly (see the dedicated instructions).

  • Screw the 3D-printed arm to the Raspberry Pi using the threads of the available spacers.

  • Screw the MIPI camera to the 3D-printed arm and connect the flat cable from the camera to the CAM0 slot on the Raspberry Pi.

  • Stack the Grove Base Hat on the Raspberry Pi's 40-pin header.

  • Connect the Grove cable from the LED Matrix to an I2C connector on the Grove Base Hat.

  • Screw the cooling fan holder onto the PCIe Slot Extension Adapter Board and connect the fan to +5V and GND on the 40-pin header (optional).

Raspberry Pi 5 Setup

With the Raspberry Pi Imager, flash a micro-SD card with Raspberry Pi OS Lite (64-bit). Open the OS Customisation menu by typing Ctrl + Shift + X, add your login credentials, enable the wireless LAN by adding your WiFi credentials, and verify that the SSH connection is enabled in the Services settings.

Once the micro-SD card is flashed and verified, eject it and install it in your Raspberry Pi 5.

Setting up the Development Environment

Once the system is powered up and connected to the internet (I used WiFi), you can access it over an SSH connection. You will need the device's local IP address; in my case, I got it from my router's list of connected devices.

To start setting up the device for a custom model deployment, let's verify we have installed all the packages we need.

I am using PuTTY for the SSH connection. Log in using the credentials you set; in this case, the username is raspberrypi and the password is raspberrypi.

Once in, verify that the Akida PCIe board is detected:

lspci | grep Co-processor # will check if the PCIe card is plugged in correctly.

Create a virtual environment:

python3 -m venv .venv --system-site-packages # create the virtual environment
source .venv/bin/activate # enter the virtual environment

Install the Akida driver:

sudo apt-get install -y git # install git to be able to clone the driver repository
git clone https://github.com/Brainchip-Inc/akida_dw_edma # clone the repository
sudo apt install -y build-essential linux-headers-$(uname -r) # install system dependencies
cd akida_dw_edma # enter the repository
sudo ./install.sh # run the driver installation script
sudo apt-get install -y python3-pip # install the pip tool

With the driver modules mounted and the tools ready, install the Akida Python package:

python3 -m pip install akida

Once installed, verify the installation and check whether it detects the mounted AKD1000 PCIe card:

pip show akida # prints out the driver version
akida devices # search for compatible Akida devices
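
You can run the same sanity check from Python inside the virtual environment. This is a convenience snippet, not part of the project, and it assumes the akida package installed above:

# Optional sanity check from Python inside the virtual environment
# (assumes the akida package installed above and a mounted AKD1000)
import akida

print(akida.__version__)  # should match the version reported by `pip show akida`
print(akida.devices())    # should list the AKD1000 PCIe device, like `akida devices`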

Install some specific project dependencies:

python3 -m pip install scipy
python3 -m pip install --upgrade pip setuptools wheel
pip install h5py --only-binary h5py
python3 -m pip install tensorflow
python3 -m pip install matplotlib
python3 -m pip install imageio
python3 -m pip install IPython
python3 -m pip install opencv-python
python3 -m pip install Flask
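
If you want to confirm that everything installed cleanly, a short import check like this one (a convenience snippet, not part of the project) will surface any missing package immediately:

# Import check for the dependencies installed above
import importlib

for mod in ("scipy", "h5py", "tensorflow", "matplotlib",
            "imageio", "IPython", "cv2", "flask"):
    importlib.import_module(mod)
    print(f"{mod}: OK")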

Data Collection

First, we need to create an Edge Impulse account if we haven't yet, and create a new project. You can also clone the public Edge Impulse project if you'd like, from the Public Project Link above.

To create the dataset for our model, we have several options: uploading images from the Raspberry Pi with a USB camera, or using our computer or phone. In this case, I chose to take them with my phone's camera.

The dataset consists of a single class in which we capture the "piece", a terminal block in this case, from several angles and perspectives. Use the Labeling queue to easily label all the pieces in each frame.

Taking at least 95 pictures of the piece class will let you create a robust enough model.

Impulse Design

After having the dataset ready, it is time to define the structure of the model.

In the left side menu, we navigate to Impulse design > Create impulse and define the following settings for each block, respectively:

Input block (Image data):

  • Image width: 224

  • Image height: 224

  • Resize mode: Fit shortest axis (see the sketch below)
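
For reference, the sketch below approximates what "Fit shortest axis" does: scale the image so its shortest side reaches 224 pixels, then center-crop the longer side. This is an illustration of the setting, not Edge Impulse's internal code:

# Approximation of the "Fit shortest axis" resize mode
import cv2

def fit_shortest_axis(img, size=224):
    h, w = img.shape[:2]
    scale = size / min(h, w)  # scale so the shortest side becomes `size`
    img = cv2.resize(img, (round(w * scale), round(h * scale)))
    h, w = img.shape[:2]
    top, left = (h - size) // 2, (w - size) // 2  # center-crop the longer side
    return img[top:top + size, left:left + size]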

Processing block (Image):

Add an Image processing block since this project will work with images as inputs.

Learning block (BrainChip Akida)

We are going to use an Object Detection learning block developed for Brainchip Akida hardware.

Finally, we save the Impulse design; it should end up looking like this:

Model Training

After having designed the impulse, it's time to set the processing and learning blocks.

In the Image processing block, we set the "Color depth" parameter to RGB, click on Save parameters and then Generate features.

In the Object Detection learning block, define the following settings:

  • Number of training cycles: 60

  • Learning rate: 0.0005

In the Neural network architecture, select the Akida FOMO AkidaNet(alpha=0.5 @224x224x3).

Click on the Start training button and wait for the model to be trained and the confusion matrix to show up.

Confusion Matrix

The results of the confusion matrix can be improved by adding more samples to the dataset. After some trial and error testing different models, I was able to get one that is stable and robust enough for the application.

Project Setup

To be able to run the project, go back to the SSH connection with the device and clone the project from the Github repository; for this, use the following command:

git clone https://github.com/mcmchris/brainchip-inventory-check.git

Enter the repository directory:

cd brainchip-inventory-check

We will go through its contents in detail later. If you want to test the model as it is, without any modification, jump to the Run Inferencing section.

It is recommended that you install Edge Impulse for Linux by following its documentation, or the steps below:

sudo apt update
curl -sL https://deb.nodesource.com/setup_20.x | sudo bash -
sudo apt install -y gcc g++ make build-essential nodejs sox gstreamer1.0-tools gstreamer1.0-plugins-good gstreamer1.0-plugins-base gstreamer1.0-plugins-base-apps
sudo npm install edge-impulse-linux -g --unsafe-perm

Then, update npm and verify the installed CLI version:

sudo npm install -g npm@10.8.1
edge-impulse-linux --version

It should show you the installed version (1.8.0 at the time of writing).

To activate the MIPI camera support, run the following command:

sudo raspi-config

Use the cursor keys to select and open Interfacing Options, then select Camera, and follow the prompt to enable the camera. Reboot the Raspberry Pi.

Deployment

Once the project is cloned locally on the Raspberry Pi, you can download the project model from Edge Impulse Studio by navigating to the Dashboard section and downloading the MetaTF .fbz file.

Once downloaded, open a new terminal in the model's download directory and copy the model to the Raspberry Pi using the scp command as follows:

scp <model file>.fbz raspberrypi@<Device IP>:~ # command format
scp akida_model_2.fbz raspberrypi@10.0.0.207:~ # actual command in my case

You will be asked for your Raspberry Pi login password.

Now, the model is on the Raspberry Pi's local storage (/home/raspberrypi), and you can verify it by listing the directory content using ls.

Move the model to the project directory with the following command (from /home/raspberrypi):

mv akida_model_2.fbz ~/brainchip-inventory-check/model

Now the model is in the project directory, and everything is ready to run.

Run Inferencing

In the project directory, there are several script options with the following characteristics:

  • inventory.py: the original program; it uses a MIPI camera feed to run the inference.

  • stock.py: an optimized version of the original program; it also uses a MIPI camera, but the object markers are bigger.

  • low-power.py: a lower-power version that halves the energy consumption; it also uses a MIPI camera.

  • usb-inference.py: a version that uses a USB camera instead of a MIPI camera (no LED matrix control).

There are other auxiliary scripts for testing purposes:

  • mipi_inference.py: this program runs the FOMO model without controlling the LED Matrix.

  • matrix_test.py: this program tests the LED matrix displaying colors and patterns.

To run the project, type the following command:

python3 <your preferred program>
# to run the original program:
python3 inventory.py

The .fbz model path is hard-coded in each script, so if you want to use the custom model you downloaded, update the "model_file" variable in the Python script.
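
For orientation, here is a much-simplified sketch of the flow these scripts implement, assuming the akida package's Model API and OpenCV capture. The channel index and threshold are illustrative; the real scripts add the LED matrix control, the web stream, and proper FOMO post-processing:

# Simplified sketch of the inference loop (illustrative only; see the
# repository scripts for the real logic, LED matrix control, and streaming)
import cv2
import numpy as np
from akida import Model

model_file = "model/akida_model_2.fbz"  # the hard-coded path mentioned above
model = Model(model_file)

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Match the 224x224 RGB input defined in the impulse
    img = cv2.cvtColor(cv2.resize(frame, (224, 224)), cv2.COLOR_BGR2RGB)
    # Akida expects uint8 batches of shape (n, height, width, channels)
    heatmap = model.forward(img[np.newaxis, ...].astype(np.uint8))[0]
    # FOMO outputs a coarse grid of activations; cells above a threshold
    # mark object centroids (channel index and threshold are illustrative)
    cells = np.argwhere(heatmap[..., -1] > 100)
    print(f"pieces detected: {len(cells)}")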

The project will start running, streaming a live view of the camera feed, and showing the location of detected objects on the LED matrix, alongside the FOMO inference results: object count, frames per second, and energy consumption. To watch a preview of the camera feed, open your favorite browser and go to http://<Raspberry Pi IP>:8080.
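
The live view itself follows the standard Flask MJPEG-streaming pattern. The sketch below is a generic version of that pattern (not the repository's exact code) so you can see how the feed ends up on port 8080:

# Generic Flask MJPEG streaming pattern, similar to what the scripts use
import cv2
from flask import Flask, Response

app = Flask(__name__)
cap = cv2.VideoCapture(0)  # the project scripts use the MIPI camera instead

def frames():
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        ok, jpg = cv2.imencode(".jpg", frame)
        if not ok:
            continue
        yield (b"--frame\r\n"
               b"Content-Type: image/jpeg\r\n\r\n" + jpg.tobytes() + b"\r\n")

@app.route("/")
def index():
    return Response(frames(), mimetype="multipart/x-mixed-replace; boundary=frame")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)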

Demo

Here I show you the whole project working and running.

Conclusion

This project leverages the BrainChip Akida Neuromorphic Hardware Accelerator to propose an innovative solution for inventory stock tracking. It performed very well, running at 56 FPS with less than 100 mW of power consumption while tracking many pieces at a time.
