SiLabs xG24 Plus Arducam - Sorting Objects with Computer Vision and Robotics - Part 2

Identify waste for recycling with computer vision and TinyML, then sort it with a Dobot Magician robot arm.

Created By: Thomas Vikstrom

Public Project Link: https://studio.edgeimpulse.com/public/193509/latest

Introduction - Playing Poker at the Edge, Part 2 of 2

In this tutorial you will learn how you can sort both poker cards and solid waste using a real robot arm and TinyML (Tiny Machine Learning) running on a development board officially supported by Edge Impulse, namely the SiLabs xG24 Development Kit and Arducam.

In Part 1 you learned how to classify the playing cards themselves according to their colour, so this tutorial will focus more on interpreting and utilising the signals provided by the xG24 board to control the robot. It is thus recommended to at least browse through Part 1 before reading Part 2.

In this second part you will also learn how to adapt the card sorting solution to sorting waste into different piles. This is not a new idea: Google recently announced having performed a large-scale waste sorting experiment with a fleet of 23 robots using reinforcement learning. Another approach, quite similar to the one used in this tutorial, was taken by Peter Ma in 2018. He also used the Dobot Magician robot arm, but with a Raspberry Pi 3 and an Intel Movidius Neural Compute Stick.

The hardware used in this tutorial is the aforementioned SiLabs xG24 Dev Kit and Arducam, as well as the Dobot Magician robot arm. However, any other Python-programmable robot arm can most probably be adapted to work with the steps in this tutorial.

Use-case Explanation

Sorting cards with a robot arm - in this case somewhat slowly - might not be that useful in practice. Nevertheless, it serves as a starting point into TinyML and robotics, and binds them together with quite straightforward Python programming. Poker playing cards were chosen as they are lightweight and uniform in size, making them optimal to start with.

Due to the serious issues climate change is causing our planet, we need to take action to mitigate, or at least reduce, the effects of our overconsumption of resources. One of these actions is to sort and recycle as much as possible at the source, but also to sort the inevitable remaining waste into metals, plastics, bio-waste, etc. for proper recycling or transformation into energy.

Obviously a robot arm for educational use cannot be used for industrial purposes, but the general ideas learned through these two tutorials can be applied to scale up sorting (e.g. non-defective and defective products on a conveyor belt, unripe and ripe fruits, waste, etc.).

Components and Hardware Configuration

Software Used

  • Edge Impulse Studio & CLI (Command-Line Interface)

  • Python, any recent 3.x version should be ok

    • pydobot library, install with pip install pydobot

    • pyserial library, install with pip install pyserial

  • Python programs that I wrote to sort cards and waste with, using the Dobot robot arm: PyDobot_sorting_cards.py and PyDobot_sorting_waste.py

Hardware Used:

  • SiLabs xG24-DK2601B EFR32xG24 Dev Kit

  • Arducam B0067 2MP OV2640 SPI Camera for Arduino

  • Pin Header 2.54mm 1x20 Pin for soldering to the SiLabs board

  • Dobot Magician robot arm

  • 3D-printer to print protective case and stand (Optional)

  • Playing cards, poker, UNO, etc.

  • Household solid waste of metal, plastic, paper, cardboard, etc.

Configure the Hardware

  • No special configuration is needed for the robot arm; just ensure it is calibrated. This tutorial assumes the suction cup is installed.

  • The devices are connected through USB cables and communicate over serial: Robot arm <==> Computer <==> SiLabs & Arducam (the sketch below shows how the robot side of this chain is driven from Python).

  • For details about configuring the SiLabs board and Arducam, check Part 1.
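
If you have not used the pydobot library before, the following is a minimal, hedged sketch of the pattern the sorting programs build on; the serial-port index, the 20 mm test move, and the coordinates are placeholder assumptions, not values from this project:

        # Hedged sketch: connect to the Dobot Magician over its serial port, read the
        # current pose, make a small test move, and toggle the suction cup.
        from serial.tools import list_ports
        import pydobot

        port = list_ports.comports()[0].device       # assumes the robot is the first serial device
        robot = pydobot.Dobot(port=port, verbose=False)

        (x, y, z, r, j1, j2, j3, j4) = robot.pose()  # current Cartesian position and joint angles
        robot.move_to(x + 20, y, z, r, wait=True)    # small test move, blocking until finished
        robot.suck(True)                             # enable the suction cup
        robot.suck(False)                            # release again
        robot.move_to(x, y, z, r, wait=True)         # return to the starting pose
        robot.close()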

3D-printing the Stand and the Case

  • STL files for the stand and the case are found in the GitHub repo.

  • Print with high quality; I printed with 0.15 mm accuracy, and the three larger parts took over two hours each to print on my budget-friendly but slow printer.

  • No support is needed if the parts are rotated properly in the slicing software. As I don't have a heated bed, I printed with a raft.

  • I used only 10 % infill as the equipment is very lightweight.

  • There are holes for screws, but apart from firmly attaching the camera front to the back, screws are not strictly needed. In the photo below I have not used screws, hence the skewness.

Data Collection Process

As this project is partly a continuation of Part 1, please see the Data Collection Process there for how the poker card data was collected. For the waste data the same principles were used: I used a mobile phone camera for most of the images, and the device itself (xG24 + Arducam) for some additional ones.

Software and Hardware Used to Capture Data:

  • SiLabs xG24 was used for ~10 % of the data; to use it with Edge Impulse, you first need to flash the Edge Impulse firmware, with detailed steps found in the documentation

  • Mobile phone camera (iPhone 12) was used for ~90 % of the data

Steps to Reproduce

  • Please see Part 1 for detailed steps on how to collect images when using a mobile phone and when using the xG24 and Arducam.

  • Also here I noticed that the initial model performance in real situations, when using the xG24 device, was far from perfect

Collecting Images of Nonuniform Waste Material

It is relatively easy to develop a very robust ML model when using poker playing cards, as they are uniform in size and have very good contrast. The same does, however, not apply to waste, which comes in many different forms, colours, and sizes. For example, metal can be shiny and reflective, but when painted it might look very similar to plastic. When looking at these pictures, perhaps you can guess which lids are made of metal and which of plastic?

Answer: The first one is made of metal, the others of plastic.

Also, poker cards barely cause any shadows at all, but most waste materials produce shadows (depending on the light conditions), which can confuse an ML model. For example, how can you know whether the ML model really is "seeing" the object itself, or only focusing on the shadows it casts? For this reason, I tried to vary the light conditions when collecting images by using different artificial light sources. In addition, I also collected some images using daylight (not easy in Finland in February with very short days...).

The objects used in this project were chosen so that they could be lifted with the robot's suction cup, so they could not be of arbitrary size, shape, or weight. I decided to collect images of four types of objects: paper, cardboard, metal, and plastic. In addition, I also collected images where none of the objects were present, in practice mainly of the table I'd put the objects on. For sure, sorting solid waste into only four classes might not be enough in a real scenario; obviously this is dependent on the country and city where you live. At my workplace, for example, we sort into six bins: plastic, glass, metal, paper, cardboard, and biowaste.

I ended up with a total of 1353 images, very evenly divided into the five classes. Of these, only a few dozen were taken with the xG24 device and Arducam, as that takes much more time than using a mobile phone camera. But even these few images made a difference, making the final model perform better!

Building, Training, and Testing the Model

After you've collected some data, you need to build and train the model. The main steps in this process are to create an Impulse, extract features, and finally train the model. Again, with image classification and when using Edge Impulse, this is often pretty straightforward.

Steps to Reproduce

The steps to build, train, and test the waste classification model are close to identical to the ones in Part 1, with the following comments:

  • Also here, I knew beforehand that the 256 kB of RAM would put constraints on what model configuration to use. Following that, I chose an image size of 96x96 pixels when creating the impulse, and MobileNetV1 when later training the model.

  • Instead of using Resize mode: Squash as with the poker cards, I used the default Fit shortest axis, although I doubt it matters much in this type of project.

  • After having trained with a few different configurations, I found that MobileNetV1 96x96 0.25 (final layer: 64 neurons, 0.1 dropout) gave the most bang for the buck.

  • In this project I later used the EON Tuner to search for a more optimal model, but as RAM memory is the main constraint when running ML on the xG24, I could not use any of the suggested MobileNetV2 models.

  • The trained model has an accuracy of 97.8%, which is actually quite good given the relatively sparse data. When looking at the few images it classified incorrectly, one can also understand why some of them are challenging to predict.

  • It might be a coincidence, but all mispredicted images were taken using the Arducam. It has lower image quality than a modern mobile phone camera and also needs more light to produce decent images, so this is something to consider when using the Arducam.

The incorrectly classified images, with their true label and the predicted class:

  • Label: paper, Predicted: nothing (= table)

  • Label: cardboard, Predicted: paper

  • Label: plastic, Predicted: metal

  • Label: cardboard, Predicted: metal

  • The estimated inference time on the xG24 device is quite similar to that in Part 1.

Testing the Model

Before deploying the model to the device itself, you should check how well it works on data it has not seen before. This is where the 15% Test data that was put aside comes into play. If the model performs poorly on test data, you can expect real performance to be even worse. But even a 100% accuracy on Test data does not guarantee success in real life.

If the training performance is very good, but the test performance is poor, the reason might be that your model is overfitting on the training data. In that case you might need to collect more data, change the model or reduce its complexity. Now would be a good time to try the EON Tuner mentioned earlier.

  • To test the model from within Edge Impulse, just click on Model testing and then Classify all

  • In my case, the testing resulted in 100% accuracy, thus even better than the training performance. However, as only very few of the images taken with the Arducam might have ended up in the test set, and those images might be more challenging to predict, I wanted to confirm the model's performance in real situations before taking the 100% for granted!

Model Deployment

Regardless of whether you are using the robot to sort cards or solid waste, the deployment steps are identical. For deploying the ML model to the xG24 kit, please use the same steps as in Part 1 to flash the firmware.

Go-live and Results

Preparations

  • If sorting cards, use the same or similar cards as when collecting images

  • If sorting waste, use the same objects as when collecting images, but try to also find similar, though not identical, waste for later testing.

  • Connect your computer to Dobot and to the xG24 + Arducam

  • Place the Arducam so it points to the place where the robot will pick up objects

Dry Run

  • Reset or reconnect the xG24 device

  • Open a command prompt and run edge-impulse-run-impulse --debug; this will show inference results as a running log, and you can also see live camera output in a browser window.

    • Once you are satisfied with the performance, abort the operation with Ctrl-C or by closing the command prompt. If you forget this, the serial port stays busy and can't be used by the Python program in the next step.

    • If you already now notice that the model is not performing as expected, go back and collect more images or tune the model.

Live Run

  • Connect the Dobot arm and power it on

  • Put the first object where it should be for picking it up with the suction cup

  • Open the Python program PyDobot_sorting_cards.py or PyDobot_sorting_waste.py with an IDE or text editor

    • If you want to do a dry run using this program instead of the command prompt, please change from only_inference = False to only_inference = True in the main function:

        def main():
            global x, y, z, r, j1, j2, j3, j4

            # Set to True for a dry run: only print inference results, never move the robot
            only_inference = True
            while only_inference == True:
                label = inference()
  • If you have used different labels than in these tutorials, please adjust the labels in the program: labels = ["back:", "black:", "no_card:", "red:"]. Remember they have to be in alphabetical order, and remember to put the colon symbol (:) at the end of each label!

  • Both programs parse the serial stream coming from the xG24 device and strip away everything apart from the prediction and probability. This part was more challenging to program than I'd thought, partly because the program can start "midstream", and partly due to the speed of the serial transmission. Here's what the serial stream might look like, i.e. the same as when running edge-impulse-run-impulse from a command prompt:

        Predictions (DSP: 11 ms., Classification: 216 ms., Anomaly: 0 ms.):
        back: 0.00000
        black: 0.00000
        no_card: 0.00391
        red: 0.99609
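
For illustration, here is a minimal, hedged sketch of how such a stream could be parsed with pyserial. It is not the code from PyDobot_sorting_cards.py or PyDobot_sorting_waste.py, and the port name and baud rate are assumptions you will likely need to adjust:

        # Hedged sketch: wait for the start of a "Predictions (...)" block so we never start
        # mid-stream, collect one probability per label, and return the most likely class.
        import re
        import serial

        LABELS = ["back", "black", "no_card", "red"]             # adjust to your own model

        def inference(port="COM5", baudrate=115200):
            pattern = re.compile(r"^(\w+):\s+([01]\.\d+)$")
            with serial.Serial(port, baudrate, timeout=2) as ser:
                while b"Predictions" not in ser.readline():      # skip to the next block header
                    pass
                scores = {}
                while len(scores) < len(LABELS):
                    line = ser.readline().decode("utf-8", errors="ignore").strip()
                    match = pattern.match(line)
                    if match and match.group(1) in LABELS:
                        scores[match.group(1)] = float(match.group(2))
            return max(scores.items(), key=lambda item: item[1]) # (label, probability)
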
  • Run the program, either from within an IDE or from a command prompt

    • The external pydobot library I'm using sometimes has trouble connecting to the robot, so you might need to press the robot's Reset button once or twice to get a connection from Python (a retry-and-sort sketch is shown after this list). The error messages in these cases are typically either IndexError: index out of range or AttributeError: 'NoneType' object has no attribute 'params'

    • The program shows the inferencing results in the terminal/output window and how many items it has sorted.

    • Watch and be amazed when the robot (hopefully) sorts your waste into four different piles!

    • "Feed" the waste eating robot with more waste!

Results

Both the card sorting and waste sorting models work quite well in practice; only a few times have objects been misclassified. The main challenges have been with lighting conditions, as the Arducam seems to need much more light than a mobile phone camera. Other challenges have been more physical: the robot's suction cup can't lift porous materials, and while it's a versatile robot, it has a limited range of movement.

Conclusion

This was a series of two tutorials: Part 1 covered getting up and running with the xG24 Dev Kit and Arducam together with Edge Impulse, and Part 2 introduced how to sort cards or solid waste with a robot arm. I encourage you to replicate one or both of the projects; feel free to clone my Edge Impulse public projects. Why not try developing an ML model that can classify cards into suits, and not only colours! Or develop an ML model classifying household waste typical for your place of living. While it is more tangible, and often also more fun, to use a physical robot arm or similar device to sort objects, I recommend you start with smaller steps, e.g. by connecting the edge device to a display and showing the inferencing results there.

Especially the waste sorting model can be further scaled up to really make an impact. To do this wisely, one should try to use existing waste image datasets instead of tediously collecting one's own images. I looked into this and found a few free databases online, one of them at Kaggle. I uploaded these >15,000 images to Edge Impulse and have tried a few ML models, as well as the EON Tuner, to find an optimal one. This dataset consists of 12 classes, including bio-waste, clothes, shoes, three types of glass, etc.
