Container Counting - Arduino Nicla Vision

Use an Arduino Nicla Vision to identify containers with TinyML, and send a count of the detected objects to a cloud dashboard.

Created By: Zalmotek

Public Project Link: https://studio.edgeimpulse.com/studio/122050

Introduction

Accurate inventory management is critical for any business that relies on the sale of physical goods. Inventories can represent a significant investment of capital, and even a small error in inventory levels can have a major impact on a company's bottom line. Furthermore, customers expect to find the products they need when they want them, and out-of-stock items can lead to lost sales. To manage their inventories properly, businesses need to track both the level of stock on hand and the rate at which stock is being sold. By using this information to forecast future demand, businesses can avoid both overstock and out-of-stock events. In today's competitive marketplace, effective inventory management can be the difference between success and failure.

Machine Learning algorithms power inventory tracking systems that automatically detect and classify objects in images, even as items are moved around. This helps keep inventory levels accurate, which is essential for businesses to run smoothly. Machine Learning can also be used to count items in containers, such as boxes on a shelf, reducing the time employees spend counting inventory by hand. As a result, automatic inventory tracking can save businesses both time and money.

Our Solution

In this tutorial we'll show you how to use Computer Vision and Machine Learning to count the number of containers that enter a warehouse in real-time. We will use the Arduino Nicla Vision camera to capture the training data and run the ML model, and the Edge Impulse platform to build, train, and deploy an object detection model. We'll explain how to use FOMO (Faster Objects, More Objects), a novel machine learning algorithm that brings object detection to highly constrained devices such as the Arduino Nicla Vision.

The Arduino Nicla Vision is a perfect match for this use case: it has a powerful processor with a 2MP color camera that supports TinyML, and it integrates easily with Edge Impulse. It also offers WiFi and Bluetooth Low Energy connectivity, so you can send your data to the cloud without needing another development board. All of these features, packed onto a really tiny board for around $100, make it an attractive package for an edge computing mini computer.

Hardware requirements

  • Arduino Nicla Vision

  • Micro USB cable

Software requirements

  • Edge Impulse account

  • OpenMV IDE

  • Adafruit IO account

Hardware Setup

We've 3D printed an enclosure for the Arduino Nicla Vision board (you can download it from here) and mounted it high enough to get a bird's-eye view that captures all the important objects in the scene (the entire storage area and the containers on it). The case provides an extra layer of protection against environmental factors.

The lid is secured with M3 bolts and threaded inserts that were placed into the base piece using a soldering iron. This ensures a good fit of the lid over the base and allows the case to be opened and closed repeatedly without risking damage to a 3D-printed thread. We also opted for a GoPro mount on the lid, a common pick in the open-source hardware community, which makes this design compatible with the numerous other mounts available online, so you can adapt it to your application.

Depending on your setup, ambient lighting may vary throughout the day, so you can also add a light source to ensure constant illumination. This is crucial for the performance of the ML model, as lighting heavily influences the detected features.

Software Setup

Start by installing the OpenMV IDE and creating an Edge Impulse account, if you haven't already.

Data Collection

First, we must gather some images to train the model. We've used two types of containers, so we'll define two corresponding classes (Gray and Beige). If your containers come in all sorts of sizes and colors, you could instead add a marker to each one and train the model to recognize the marker rather than the entire container.

Connect the Nicla Vision board to your laptop, open OpenMV, and click the Connect button in the bottom left corner. If the board does not connect on the first try, double-click the button on the Nicla Vision to put it into bootloader mode (the green LED should blink). Then go to Tools -> Dataset Editor -> New Dataset and choose a folder where you want to save the images.

Click on New Class Folder and give it a name; do this for each class. Select a class by clicking on it, then click the image button under New Class Folder to capture an image. Rotate the container each time you take a picture to make sure you capture all possible angles. Take around 40 pictures for each class.
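
If you prefer to script the capture instead of clicking through the Dataset Editor, a minimal MicroPython sketch like the one below can be run from the OpenMV IDE. The folder names and image count are assumptions matching the classes above, and the class folders must already exist on the board's storage.

import sensor, time

sensor.reset()                        # Reset and initialize the sensor.
sensor.set_pixformat(sensor.RGB565)   # Color images.
sensor.set_framesize(sensor.QVGA)     # 320x240.
sensor.skip_frames(time=2000)         # Let the camera adjust.

CLASS = "Gray"                        # Change to "Beige" for the second class.
for i in range(40):                   # Roughly 40 pictures per class.
    img = sensor.snapshot()
    img.save("/%s/%s_%03d.jpg" % (CLASS, CLASS.lower(), i))
    print("Saved image %d, rotate the container..." % i)
    time.sleep_ms(1000)               # One second to reposition between shots.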

Now that we have a proper dataset, we can create a new Edge Impulse Project. Once logged in to your Edge Impulse account, you will be greeted by the Project Creation screen. Click on Create new project, give it a meaningful name and select Developer as your desired project type. Afterward, select Images as the type of data you wish to use. Next, choose Object Detection and go to the Data acquisition menu. Click on Upload existing data and upload all the images you’ve just taken for both classes.
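
As an alternative to the Upload existing data button, the images can also be pushed to the project through the Edge Impulse ingestion API. The sketch below is a hedged example that runs on your computer, assuming the two class folders sit next to the script and that you substitute your own project API key.

import os
import requests

API_KEY = "ei_..."  # Your project API key, found in the Studio dashboard.

for label in ("Gray", "Beige"):
    for name in os.listdir(label):
        with open(os.path.join(label, name), "rb") as f:
            r = requests.post(
                "https://ingestion.edgeimpulse.com/api/training/files",
                headers={"x-api-key": API_KEY, "x-label": label},
                files={"data": (name, f, "image/jpeg")},
            )
        print(name, r.status_code)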

Go to the Labeling queue and draw a bounding box around the container. You won't have to do this manually for every image: Edge Impulse tracks the object across images and suggests bounding boxes for the rest of the dataset, which greatly speeds up the process. Just make sure you go through all of the images and adjust the bounding boxes where needed.

Creating the Impulse

Now we can create the Impulse. Go to Impulse Design, set the image size to 96x96px, and add an Image processing block and an Object Detection learning block. We chose 96x96px images because the Nicla Vision board only has 1MB of RAM and 2MB of Flash available. We use Object Detection as the project type because we want to detect multiple objects in an image.

The output features will be our categories, meaning the labels we've previously defined (Gray and Beige).

Generating Features

Now go to the Image section of the Impulse Design menu. Select Grayscale as the Color depth (the FOMO algorithm only works with grayscale, not RGB) and click on Save Parameters, then Generate Features. This will resize all the images to 96x96px and convert their color depth. You'll also be able to visualize the generated features in the Feature Explorer, clustered by similarity. A good rule of thumb is that clusters which are well separated in the Feature Explorer will be easier for the machine learning model to learn.

Training the Model

Now that we have the features, we can start training the neural network in the Object Detection menu. When choosing the model we have to consider the memory constraints of the Nicla Vision board (1MB RAM and 2MB Flash). The FOMO (Faster Objects, More Objects) model is perfect for this use case, as it uses 30x less processing power and memory than MobileNet SSD or YOLOv5 for real-time object tracking. Specifically, we've used the FOMO MobileNetV2 0.35 model. You can select the model and check its memory requirements by clicking Choose a different model.
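
To build some intuition for why FOMO is so light: instead of regressing bounding boxes, its final layer outputs a coarse per-class probability grid (for a 96x96 input, a 12x12 grid), and each cluster of activated cells is treated as one object centroid. The sketch below is an illustration only, not Edge Impulse code; it shows how counting reduces to thresholding that grid. In the real model, adjacent activated cells are merged into a single detection.

def count_activations(heatmap, threshold=0.5):
    # Count grid cells whose class probability exceeds the threshold.
    count = 0
    for row in heatmap:
        for p in row:
            if p > threshold:
                count += 1
    return count

# A 4x4 excerpt of a hypothetical heatmap with two strong activations:
demo = [
    [0.1, 0.0, 0.0, 0.0],
    [0.0, 0.9, 0.0, 0.0],  # one container centroid
    [0.0, 0.0, 0.0, 0.7],  # another container centroid
    [0.0, 0.0, 0.0, 0.0],
]
print(count_activations(demo))  # -> 2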

Make sure the Learning rate is set to 0.001; the rest of the settings can be left at their default values. After training the model you can come back and tweak them to obtain better accuracy. There is no single best configuration, as it varies from case to case, but you can experiment with different values while making sure to avoid underfitting and overfitting.

Testing the Model

To test the model, go to Model testing and select Test all. The model will classify all of the test set samples and provide you with an overall accuracy score for the model.

Deploying the Model on the Edge

We've created, trained, and validated our model, so now it's time to deploy it to the Nicla Vision board. Go to Deployment in the Edge Impulse menu, select OpenMV Firmware, and click the Build button at the bottom of the page. This will generate an OpenMV firmware image and download it as a ZIP file. Unzip it; among the files inside, we are interested in edge_impulse_firmware_arduino_nicla_vision.bin and ei_object_detection.py.

The next step is loading the downloaded firmware, which contains the ML model, onto the Nicla Vision board. Go back to OpenMV, select Tools -> Run Bootloader (Load Firmware), pick the .bin file, and click Run. Then go to File -> Open File and select the Python file from the archive.

To save memory, you can adjust the frame size and the window size:

sensor.set_framesize(sensor.QQVGA)     # Set frame size to QQVGA (160x120) or QVGA (320x240)
sensor.set_windowing((240, 240))       # Set 240x240 window (keep the window no larger than the frame, e.g. (120, 120) for QQVGA).

If you hover over set_framesize you’ll find all the available options. In our case, both QVGA and QQVGA work well.

Finally, click Connect and Run and you can test the model!

Sending Data to the Cloud Using Adafruit IO

Adafruit IO is a cloud-based platform that allows you to easily interact with embedded devices. For example, you can use Adafruit IO to collect data from sensors and control actuators in real-time. You can also use Adafruit IO to create interactive interfaces, such as dashboards and data visualizations. It is easy to get started with Adafruit IO, and there is a large community of users who can provide support. In addition, Adafruit IO is compatible with a wide range of hardware platforms, making it a versatile tool for IoT applications.

For this use case, we've used Adafruit IO to visualize the number of detected containers in real-time through a dashboard. To set this up, first go to io.adafruit.com and define a feed for each detected class.

We've modified the previous MicroPython code by adding a few lines to connect to WiFi and publish data to Adafruit IO over MQTT. You have to adjust the WiFi credentials in the following code, as well as the client ID, Adafruit IO username, and Adafruit IO key in the MQTTClient definition. Also adjust the feed names in the client.publish() calls.

# Edge Impulse - OpenMV Object Detection (FOMO) Example

import sensor, tf, math, network, time
from mqtt import MQTTClient

SSID = '<WIFI_SSID>'  # Network SSID
KEY = '<WIFI_PASS>'   # Network key

sensor.reset()                         # Reset and initialize the sensor.
sensor.set_pixformat(sensor.RGB565)    # Set pixel format to RGB565 (or GRAYSCALE)
sensor.set_framesize(sensor.QVGA)      # Set frame size to QVGA (320x240)
sensor.set_windowing((240, 240))       # Set 240x240 window.
sensor.skip_frames(time=2000)          # Let the camera adjust.

# Init wlan module and connect to network
print("Trying to connect... (may take a while)...")

wlan = network.WLAN(network.STA_IF)
wlan.active(True)
wlan.connect(SSID, KEY)

while not wlan.isconnected():
    print(".")
    time.sleep_ms(500)                 # Avoid busy-waiting while the link comes up.

# We should have a valid IP now via DHCP
print(wlan.ifconfig())

def sub_cb(topic, msg):
    # Callback for subscribed topics (unused in this example).
    print((topic, msg))

client = MQTTClient('<CLIENT_ID>', 'io.adafruit.com', 1883, '<ADAFRUITIO_USERNAME>', '<ADAFRUITIO_KEY>', keepalive=30)
client.connect()

net = None
labels = None
min_confidence = 0.5

try:
    # Load the model and labels built into the OpenMV firmware.
    labels, net = tf.load_builtin_model('trained')
except Exception as e:
    raise Exception(e)

colors = [  # Add more colors if you are detecting more than 7 types of classes at once.
    (255,   0,   0),
    (  0, 255,   0),
    (255, 255,   0),
    (  0,   0, 255),
    (255,   0, 255),
    (  0, 255, 255),
    (255, 255, 255),
]

while True:
    img = sensor.snapshot()

    # detect() returns all objects found in the image (split out per class already);
    # we skip class index 0, as that is the background, and then draw circles at the
    # centers of our objects.

    count = [0, 0]  # Per-frame counters: [Beige, Gray].

    for i, detection_list in enumerate(net.detect(img, thresholds=[(math.ceil(min_confidence * 255), 255)])):
        if i == 0: continue                    # background class
        if len(detection_list) == 0: continue  # no detections for this class

        print("********** %s **********" % labels[i])
        for d in detection_list:
            [x, y, w, h] = d.rect()
            center_x = math.floor(x + (w / 2))
            center_y = math.floor(y + (h / 2))
            print('x %d\ty %d' % (center_x, center_y))
            img.draw_circle((center_x, center_y, 12), color=colors[i], thickness=2)
            count[i - 1] += 1

    time.sleep(5)                              # Publish at most every 5 seconds.
    print("Beige: " + str(count[0]) + " Gray: " + str(count[1]))
    client.publish('alexandra182/feeds/beigeContainers', str(count[0]))
    client.publish('alexandra182/feeds/grayContainers', str(count[1]))
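
One practical caveat: a long-lived MQTT session to io.adafruit.com can drop when idle, and a failed publish typically raises an OSError that would stop the script. As a minimal hardening sketch (our assumption, not part of the original code), you could wrap the publish calls in a reconnect-and-retry helper; the feed names are the ones used above.

def safe_publish(client, feed, value, retries=3):
    # Try to publish; on a socket error, reconnect and try again.
    for _ in range(retries):
        try:
            client.publish(feed, value)
            return True
        except OSError:
            try:
                client.connect()  # Re-establish the MQTT session.
            except OSError:
                pass
    return False

# Usage, replacing the direct client.publish() calls above:
# safe_publish(client, 'alexandra182/feeds/beigeContainers', str(count[0]))
# safe_publish(client, 'alexandra182/feeds/grayContainers', str(count[1]))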

Create a new dashboard, and from Dashboard Settings -> Create a new block choose Line Chart. Connect both feeds, check Stepped Line, and you should now be able to visualize the data in real-time!

Conclusion

The Arduino Nicla Vision is an innovative computer vision platform suited to a wide variety of uses: it can detect and track objects in real-time, even under challenging conditions, and it is highly customizable, so it can be tailored to a specific application. Whether you need to monitor traffic flows or count containers in real-time as in our example, the Arduino Nicla Vision is a great choice. Combined with Edge Impulse's new FOMO algorithm, it lets you run a slim but powerful computer vision system at the edge.

If you need assistance in deploying your own solutions, or more information about the tutorial above, please reach out to us!
