Recyclable Materials Sorter - Nvidia Jetson Nano

Use computer vision and an Nvidia Jetson Nano to improve the accuracy of a Recyclable Materials Sorter.


Created By: Zalmotek

Public Project Link: https://studio.edgeimpulse.com/public/89338/latest/

Introduction

A reverse vending machine (RVM) is a machine that lets a person insert a used or empty glass bottle, plastic bottle, or aluminum can in exchange for a reward. You might have seen them around large stores, gas stations, restaurants, and malls. These devices are the first step in the long journey of reusable packaging: getting empty containers back from people once they have been used.

Some models accept only one type of recyclable container, such as aluminum cans or bottles; others accept all types and sort them into dedicated, larger bins inside the machine.

We at Zalmotek have built a prototype of such a machine that can automatically process and sort glass bottles, PET bottles, and aluminum cans.

You might not know this, but there is no such thing as a sensor for plastic or glass. While some capacitive sensors can reliably distinguish metal and glass from plastic, beyond that you have to combine other, indirect sensor readings to narrow down what type of item is in the detection area. For example, reflective light sensors shine through glass and some plastic bottles, but not through metal, and inductive sensors are great for detecting aluminum cans, but not plastic or glass. We ended up using a combination of sensors on our prototype (one inductive, one reflective, and two capacitive sensors calibrated at different thresholds), and our success rate sits somewhere around 70%. Not bad for a prototype, but the main problem is that it will stay at that level forever unless we employ machine learning and computer vision to improve it.
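To make that limitation concrete, a rule-based fusion of these readings might look something like the sketch below. The sensor names, thresholds, and decision order here are hypothetical; they illustrate the approach, not our prototype's exact logic.

# sensor_fusion.py - illustrative rule-based fusion of the prototype's sensors.
# Names, thresholds, and decision order are hypothetical, for illustration only.

def classify_container(inductive_hit: bool, light_passes: bool,
                       cap_low: float, cap_high: float) -> str:
    """Guess the container material from indirect sensor readings."""
    if inductive_hit:
        return "aluminum"      # inductive sensors reliably detect cans
    if light_passes and cap_high > 0.8:
        return "glass"         # light shines through, strong capacitive response
    if light_passes and cap_low > 0.3:
        return "plastic"       # light shines through, weaker capacitive response
    return "unknown"           # ambiguous cases (dirty or deformed containers,
                               # etc.) are the gap computer vision should fill

print(classify_container(False, True, 0.4, 0.2))  # -> "plastic" (illustrative)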

The Solution

We are aiming to improve the detection rate for liquid containers by using Edge Impulse to add a layer of artificial intelligence, in the form of computer vision, on top of the existing sensor network. This layer will also serve as a log of what the machine detects: we can review each decision afterward, with the sensor readings paired with a picture of the object.

The items are transported from the user on conveyor belts, on which they are sorted, so it's important to take the pictures on the same surface and under the same illumination conditions; consistency here is a very important factor in ensuring good detection rates. We are using conveyor belts made for the food industry because they are easy to maintain and sanitize.

Hardware requirements

  • Jetson Nano 2GB Developer Kit

  • microSD card (64GB UHS-1 recommended)

  • Display

  • USB keyboard and mouse

  • Raspberry Pi Camera Module V2 (or another external CSI or USB camera)

  • CSI/USB cable

Software requirements

  • Edge Impulse account

Hardware Setup

Setting up the NVIDIA Jetson

The NVIDIA Jetson Nano 2GB Developer Kit has a quick get-started guide (here) that, based on your operating system, will help you write the OS to the microSD card and boot the system. We also recommend an enclosure for the Jetson to protect it from all sorts of nefarious events; in this tutorial, we have found the reComputer case to be a good fit.

After the experimental tests, the Jetson will be placed inside the reverse vending machine itself in a designated enclosure, where it will also drive the user interface on a small HDMI screen.

Software Setup

Installing the dependencies to run Edge Impulse

Register for a free account on the Edge Impulse platform here, then power up the Jetson and connect the display, keyboard, and mouse. Start a terminal and run the setup script below to install the Edge Impulse Linux runner and its dependencies:

wget -q -O - https://cdn.edgeimpulse.com/firmware/linux/jetson.sh | bash

For more in-depth details about the Jetson setup you can check this link, although the command above is enough to move on to the next step.

Building the TinyML Model

Creating an Edge Impulse Project

The first step towards building your TinyML model is creating a new Edge Impulse project. Choose Images as the type of data you will use, then choose Image Classification, as we only need to detect a single plastic, aluminum, or glass container per image.

Connecting the device

To connect the Jetson Nano to the Edge Impulse project, run the following command in the terminal:

edge-impulse-linux --disable-microphone

If you have previously used your device for other Edge Impulse projects, run the following command instead to reassign the device to a new project:

edge-impulse-linux --disable-microphone --clean

If you have only one active project, the device will be assigned to it automatically. If you have multiple Edge Impulse projects, select the desired one in the terminal.

Give a recognizable name to your board and press enter.

Your board is now connected to the Edge Impulse project and you can see it in the connected devices panel.

Collecting and preparing the dataset

There are multiple ways to go about gathering your dataset:

  1. Manually taking a bunch of photos, aka data points, using an application like “Cheese!” that comes preinstalled on the NVIDIA Jetson.

  2. Recording a video and extracting one frame per second using a Python script (see the sketch below).

  3. Taking photos using the Data Acquisition panel in Edge Impulse Studio.

For the sake of this tutorial, we have decided to go with the third option, as the Data Acquisition panel is well suited to a use case that requires a rather small number of photos.
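If you prefer the second option, a minimal frame-extraction sketch using OpenCV might look like this (assuming opencv-python is installed; the file names are placeholders):

# extract_frames.py - save one frame per second from a recorded video.
# Assumes opencv-python is installed; file names are placeholders.
import cv2

video = cv2.VideoCapture("recording.mp4")
fps = int(video.get(cv2.CAP_PROP_FPS)) or 30   # fall back if metadata is missing

frame_idx = saved = 0
while True:
    ok, frame = video.read()
    if not ok:
        break                                  # end of video
    if frame_idx % fps == 0:                   # one frame per second of footage
        cv2.imwrite(f"frame_{saved:04d}.jpg", frame)
        saved += 1
    frame_idx += 1

video.release()
print(f"Saved {saved} frames")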

Go to Data Acquisition in Edge Impulse Studio and make sure the NVIDIA Jetson is connected to Edge Impulse by running the command mentioned previously:

edge-impulse-linux --disable-microphone

In the Record New Data menu, specify the label and click Start sampling to take a picture. Make sure you capture all the possible positions and rotations of the container, to ensure the model performs well in non-ideal scenarios. Once you’re done with a category, adjust the label and start sampling again. We used 4 types of containers for each category (aluminum, plastic, and glass), but you can use more if you want to build a more robust model.

You should also perform a train/test split so the dataset is balanced between training and testing data.

Creating the impulse

Now we can create the impulse. Go to Impulse Design, set the image size to 160x160px, and add an Image processing block and a Transfer Learning block. We won’t train a model from scratch; instead, we make use of a pre-trained model and retrain its final layers on our dataset, saving a lot of precious time and resources. The only constraint of this method is that we must resize the images in our dataset to the size the model was originally trained on (either 96x96 or 160x160). The output features will be our categories, meaning the labels we previously defined (aluminum, plastic, and glass).

Generating features

Now go to Image in the Impulse Design menu and click Save Parameters and Generate Features. This will resize all the images to 160x160px and optionally change the color depth to either RGB or Grayscale. We kept the default, RGB, as color is an important feature of the recyclable containers we want to detect. You’ll also be able to visualize the generated features in the Feature explorer, clustered by similarity. A good rule of thumb is that clusters that are well separated in the Feature explorer will be easier for the machine learning model to learn.

Training the model

Now that we have the features, we can start training the neural network. Leave the default settings and choose the MobileNetV2 96x96 0.35 model, which is a fairly lightweight one. Since we’re running on an NVIDIA Jetson we could also pick any of the more powerful models listed, but if you’re running on a dev board with fewer resources, stick with a lighter model.

Validating the model

Time to test our trained model! Go to Model testing and click Classify all. The Model testing results tab shows how the model performed on our testing data; we obtained an accuracy of 98.7%, which is pretty good. You can also take a look at the Confusion matrix to identify the model’s weak spots and see which labels are most likely to be misclassified. Based on this, you can add more training samples for those classes.

Deploying the model on the edge

To run the inference on the target, use the following command:

edge-impulse-linux-runner --clean

and select the project containing the model you wish to deploy.

Once the model has downloaded, access the URL printed in the terminal to watch the video feed in a browser.

We’ve tested it using the containers we trained it on, and it works well.

Keep in mind that the more samples are correctly labeled and photographed from multiple angles, the better the results will be over time, so it’s useful to keep images of all detected objects for further reference so you can improve the model.
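If you want to feed those saved images back into the Studio dataset, one option (assuming the Edge Impulse CLI is installed; the label and file pattern below are placeholders) is the edge-impulse-uploader tool:

edge-impulse-uploader --category training --label aluminum saved-images/aluminum-*.jpg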

Conclusion

Not all recycling is created equal. Selective recycling, also known as "source separation," has several advantages over traditional recycling methods:

  • it helps to ensure that recyclable materials are actually recycled instead of being sent to landfill.

  • it reduces the need for sorting and cleaning at recycling facilities, which can save time and money.

  • it can help to increase the overall quality of recycled materials.

For these reasons, selective recycling is a useful way to reduce waste and promote sustainability, and automating it is the only efficient way to do it at scale.

The existing software stack running on the reverse vending machine includes many other components needed to control the conveyor belts, monitor the sensors, handle user interaction via the screen interface, and send alerts based on the fill level of the bins. Integrating this added computer vision layer will probably be done via the Edge Impulse Python SDK, as sketched below; this might differ in your actual real-world use case.
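As a rough illustration of that integration, here is a minimal sketch that classifies camera frames with the Edge Impulse Linux Python SDK. It assumes the edge_impulse_linux package is installed and that the .eim model file has already been downloaded to the Jetson (for example with edge-impulse-linux-runner --download); the model path and camera ID are placeholders.

# rvm_vision.py - minimal sketch: classify camera frames with the Edge Impulse
# Linux Python SDK (pip install edge_impulse_linux). Paths are placeholders.
from edge_impulse_linux.image import ImageImpulseRunner

MODEL_PATH = "modelfile.eim"   # downloaded beforehand onto the Jetson
CAMERA_ID = 0                  # first attached camera

with ImageImpulseRunner(MODEL_PATH) as runner:
    model_info = runner.init()
    labels = model_info["model_parameters"]["labels"]
    print("Loaded model, labels:", labels)

    # classifier() grabs frames from the camera and yields results continuously
    for result, frame in runner.classifier(CAMERA_ID):
        scores = result["result"]["classification"]
        best = max(scores, key=scores.get)
        print(f"{best}: {scores[best]:.2f}")
        # a real integration would combine `best` with the inductive /
        # reflective / capacitive readings and drive the sorting actuators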

If you need assistance in deploying your own solutions, or want more information about the tutorial above, please reach out to us.
