
Introduction
A reverse vending machine (RVM) is a machine that allows a person to insert a used or empty glass bottle, plastic bottle, or aluminum can in exchange for a reward. You might have seen them around large stores, gas stations, restaurants, and malls. These devices are the first link in the long journey of recyclable packaging: getting empty containers back from people once they are used. Some models accept only one type of recyclable container, such as aluminum cans or bottles, while others accept all types and sort them inside the machine into dedicated larger containers. We at Zalmotek have built a prototype for such a machine that can automatically process and sort glass bottles, PETs, and aluminum cans.


The Solution
We aim to improve the detection rate of liquid containers by using Edge Impulse to add a layer of artificial intelligence, based on computer vision, on top of the existing sensor network. This layer also acts as a log of what the machine detects, which we can review afterward by pairing the sensor data with a picture of each object. The items are transported from the user on conveyor belts, where they are sorted, so it is important to take the pictures on the same surface and under the same illumination conditions. This is a key factor in ensuring good detection rates. We use conveyor belts designed for the food industry because they are easy to maintain and sanitize.
Hardware requirements
- Jetson Nano 2GB Developer Kit
- microSD card (64GB UHS-1 recommended)
- Display
- USB keyboard and mouse
- Raspberry Pi Camera Module V2 (or another external CSI or USB camera)
- CSI/USB cable
Software requirements
- Edge Impulse account


Hardware Setup
Setting up the NVIDIA Jetson
The NVIDIA Jetson Nano 2GB Developer Kit has a quick get-started guide here that, based on your operating system, will help you flash the OS onto an SD card and boot the system. We also recommend an enclosure for the Jetson to protect it from all sorts of nefarious events. In this tutorial, we found the reComputer case to be a good fit. After the experimental tests, the Jetson will be placed inside the reverse vending machine itself, in a designated enclosure, and will also drive the user interface on a small HDMI screen.


Software Setup
Installing the dependencies to run Edge Impulse
Register for a free account on the Edge Impulse platform here, then power up the Jetson and connect the display, keyboard, and mouse to install the Linux runner. Start a terminal and run the setup script for the Jetson Nano, which per the Edge Impulse documentation is:

`wget -q -O - https://cdn.edgeimpulse.com/firmware/linux/jetson.sh | sudo bash`


Building the TinyML Model
Creating an Edge Impulse Project
The first step towards building your TinyML model is creating a new Edge Impulse project. Choose Images as the type of data you will use, then choose Image Classification, as we only have to detect one plastic, aluminum, or glass container per image.
Connecting the device
To connect the Jetson Nano to the Edge Impulse project, run the following command in the terminal and log in when prompted:

`edge-impulse-linux`

Collecting and preparing the dataset
There are multiple ways to go about gathering your dataset:
- Manually take a number of photos (data points) using an application like “Cheese!”, which comes preinstalled on the NVIDIA Jetson.
- Record a video and extract a frame every second using a Python script.
- Take photos using the Data Acquisition panel in Edge Impulse Studio.
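The video-to-frames approach above can be sketched with OpenCV. This is a minimal sketch, not our exact script; the file and folder names are placeholders.

```python
import os

def frame_indices(fps: float, total_frames: int, interval_s: float = 1.0) -> list:
    """Indices of the frames to keep: one frame every `interval_s` seconds."""
    step = max(1, round(fps * interval_s))
    return list(range(0, total_frames, step))

def extract_frames(video_path: str, out_dir: str, interval_s: float = 1.0) -> int:
    """Save one JPEG every `interval_s` seconds of video; returns the number saved."""
    import cv2  # opencv-python; imported here so the helper above stays dependency-free
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    saved = 0
    for idx in frame_indices(fps, total, interval_s):
        cap.set(cv2.CAP_PROP_POS_FRAMES, idx)
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imwrite(os.path.join(out_dir, f"frame_{idx:06d}.jpg"), frame)
        saved += 1
    cap.release()
    return saved
```

Calling `extract_frames("bottles.mp4", "dataset/raw")` (hypothetical paths) produces JPEGs you can then upload through the Studio's data uploader.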


Creating the impulse
Now we can create the impulse. Go to Impulse Design, set the image size to 160x160px, and add an Image processing block and a Transfer Learning block. We won’t train a model from scratch, but rather make use of a pre-trained model and retrain its final layers on our dataset, saving a lot of precious time and resources. The only constraint of this method is that we have to resize the images in our dataset to the size of the images the model was initially trained on (so either 96x96 or 160x160). The output features will be our categories, meaning the labels we’ve previously defined (aluminum, plastic, and glass).
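The resize constraint mentioned above can be illustrated with Pillow. Edge Impulse Studio does this step for you (and offers crop/fit options); this sketch just squashes a frame to the model's expected input size.

```python
from PIL import Image

# Input size expected by the 160x160 pre-trained model variant
TARGET = (160, 160)

def prepare(img: Image.Image, size: tuple = TARGET) -> Image.Image:
    """Resize an arbitrary photo to the model's expected input size."""
    return img.resize(size)

# A dummy 640x480 "photo" standing in for a real camera frame
photo = Image.new("RGB", (640, 480), color=(200, 180, 40))
resized = prepare(photo)
print(resized.size)  # (160, 160)
```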
Generating features
Now go to Image in the Impulse Design menu and click Save Parameters and Generate Features. This will resize all the images to 160x160px and optionally change the color depth to either RGB or Grayscale. We chose the default, RGB, as color is an important feature for the recyclable containers we want to detect. You’ll also be able to visualize the generated features in the Feature explorer, clustered by similarity. A good rule of thumb is that clusters that are well separated in the Feature explorer will be easier for the machine learning model to learn.

Training the model
Now that we have the features, we can start training the neural network. Leave the default settings and choose the MobileNetV2 96x96 0.35 model, which is a fairly light model. Since we’re running on an NVIDIA Jetson, we could also pick any of the more powerful models listed, but if you’re running on a dev board with fewer resources you should use a lighter model.
Validating the model
Time to test our trained model! Go to Model testing and click Classify all. The Model testing results tab shows how the model performed on our testing data. We obtained an accuracy of 98.7%, which is pretty good! You can also inspect the Confusion matrix to identify the model’s weak spots and see which labels are most likely to be misclassified. Based on this, you can add more items to the training dataset for those classes.
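The accuracy and confusion matrix that Studio reports can be reproduced from raw predictions. A minimal sketch, using made-up labels rather than our real test results:

```python
from collections import Counter

def confusion_matrix(y_true, y_pred):
    """Counts of (actual, predicted) pairs over the test set."""
    return Counter(zip(y_true, y_pred))

def accuracy(y_true, y_pred):
    """Fraction of samples whose predicted label matches the actual one."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

# Toy test-set results (hypothetical, not our real data)
actual    = ["glass", "glass", "plastic", "aluminum", "plastic"]
predicted = ["glass", "plastic", "plastic", "aluminum", "plastic"]

cm = confusion_matrix(actual, predicted)
print(accuracy(actual, predicted))  # 0.8
print(cm[("glass", "plastic")])     # 1 -- one glass bottle misread as plastic
```

Off-diagonal entries like `("glass", "plastic")` are exactly the weak spots worth covering with more training samples.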
Deploying the model on the edge
To run the inference on the target, use the following command:
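Assuming the Edge Impulse Linux CLI was installed during the earlier setup step, the documented runner command downloads the compiled model from your project and starts live inference with the connected camera:

```shell
# Fetches the latest model build from your Edge Impulse project
# (log in when prompted) and runs continuous classification
# on the connected camera feed.
edge-impulse-linux-runner
```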



Conclusion
Not all recycling is created equal. Selective recycling, also known as “source separation”, has several advantages over traditional recycling methods:
- It helps ensure that recyclable materials are actually recycled instead of being sent to landfill.
- It reduces the need for sorting and cleaning at recycling facilities, which can save time and money.
- It can increase the overall quality of recycled materials.