
Background
As ecommerce continues to expand worldwide, many people prefer to shop online and have their purchases delivered to their homes. With more parcels being delivered, package theft has risen as well: according to a SafeWise analysis, an estimated 210 million packages were stolen in 2021. In some cases, thieves follow delivery trucks and take a package as soon as it has been dropped off. There are ways to prevent package theft, such as having packages delivered to the Post Office or granting the courier remote access to your home, but many individuals still prefer door deliveries. However, we may not always be around to collect our packages, and thieves may get to them first!

Monitor delivered packages with TinyML
There are a couple of ways to approach package-theft prevention, but we’ll focus on parcels delivered to our front porches or mailboxes. We’ll use Edge Impulse to create a Machine Learning model that recognizes parcels. The model will then be deployed to a low-cost, low-power device, the ESP-EYE development board. This board has a 2MP camera that we will use to capture a live video feed of our shipments. To develop our Machine Learning model, we will use FOMO (Faster Objects, More Objects), an algorithm developed by Edge Impulse that enables real-time object detection, tracking, and counting on microcontrollers. FOMO is 30x faster than MobileNet SSD and runs in under 200K of RAM. On a Raspberry Pi 4, live classification with FOMO achieved ~27.7 frames per second, while SSD MobileNetV2 managed only ~1.56 fps.

Things used in this project
Hardware components
- ESP-EYE Board
Software
- Edge Impulse Studio
Quick Start
You can find the public project here: Parcel Detection - FOMO. To add this project to your account, click “Clone this project” at the top of the page. Next, go to the “Deploying to ESP-EYE board” section below to learn how to deploy the model to the ESP-EYE board. Alternatively, to create a similar project from scratch, follow the steps below after creating a new Edge Impulse project.

Data Acquisition
First, on the Project Dashboard, set the Labeling method to “Bounding boxes (object detection)”. We want our Machine Learning model to detect parcels in an image, and to do this we need pictures of parcels! Note that our dataset only includes box parcels, not envelopes or poly-mailer bags. In total, the dataset has 275 images with an 80/20 split between training and test data. If you want to add more images, Edge Impulse provides an uploader that supports several ways of adding data to your project. Afterwards, make sure to perform a Train/test split to re-balance your dataset. Finally, we annotate the images, labeling each parcel with a bounding box.
Impulse Design
We can now use our dataset to train our model. This requires two important blocks: a processing block and a learning block. Documentation on Impulse Design can be found here. We first click “Create Impulse”. Here, set the image width and height to 96x96 and the Resize mode to Squash. The Processing block is set to “Image” and the Learning block to “Object Detection (images)”. Click “Save Impulse” to use this configuration. Since the ESP-EYE is a resource-constrained device (4MB flash and 8MB PSRAM), we use a 96x96 image size to keep RAM usage low.
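As a rough illustration of what the Squash resize mode does, here is a minimal, hypothetical nearest-neighbour sketch (not Edge Impulse’s actual preprocessing code): both dimensions are scaled independently to 96x96, so the aspect ratio is not preserved and nothing is cropped or letterboxed.

```python
# Hypothetical sketch of "Squash" resize: scale width and height
# independently to the target size, ignoring aspect ratio.
def squash_resize(pixels, out_w=96, out_h=96):
    """Nearest-neighbour resize; pixels is a row-major 2D list."""
    in_h, in_w = len(pixels), len(pixels[0])
    return [
        [pixels[y * in_h // out_h][x * in_w // out_w] for x in range(out_w)]
        for y in range(out_h)
    ]

# A 320x240 frame (e.g. from a camera) becomes 96x96:
frame = [[0] * 320 for _ in range(240)]
small = squash_resize(frame)
print(len(small[0]), len(small))  # 96 96
```

The alternative, “Fit”, would preserve the aspect ratio and pad the shorter side; Squash instead distorts the image slightly, which FOMO tolerates well in practice.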

Model Testing
When training our model, we used 80% of the data in our dataset. The remaining 20% is used to test how accurately the model classifies unseen data; we need to verify that the model has not overfit. If your model performs poorly on the test set, it has likely overfit (memorized the training data). This can be resolved by adding more data, reconfiguring the processing and learning blocks, or enabling Data Augmentation. If you need to squeeze out a bit more performance, some tips and tricks can be found in this guide. Click “Model testing”, then “Classify all”. Our current model has an accuracy of 91%, which is pretty good.
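The test-set accuracy reported by Studio is conceptually just the fraction of held-out samples the model gets right. A minimal sketch, with made-up labels and predictions (these example values are not from the project’s dataset):

```python
# Hypothetical sketch of test-set accuracy: compare the model's
# prediction for each held-out sample against its ground-truth label.
def accuracy(labels, predictions):
    correct = sum(l == p for l, p in zip(labels, predictions))
    return correct / len(labels)

# Made-up results for five test samples:
labels      = ["parcel", "parcel", "none", "parcel", "none"]
predictions = ["parcel", "none",   "none", "parcel", "none"]
print(accuracy(labels, predictions))  # 0.8
```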

Deploying to ESP-EYE board
To deploy our model, first go to the “Deployment” section. Next, under “Build firmware”, select Espressif ESP-EYE (ESP32) from the options. To increase performance on the board, we enabled the EON Compiler and chose the “Quantized (int8)” optimization. With these settings, our model uses 243.9K of RAM and 77.5K of flash on the board. Choose “Build” and the firmware will be downloaded once the build finishes.

After the download completes, extract the archive and run the flashing script for your operating system (flash_windows.bat, flash_mac.command, or flash_linux.sh). Once the board has been flashed, start inference with a live debug view by running:

edge-impulse-run-impulse --debug
Next, enter the provided URL in a browser and you will see live feeds from the ESP-EYE.

Taking it one step further
We can use this model to monitor delivered parcels and take action, such as sounding an alarm or sending a text message, when no parcels, or fewer parcels, are detected. Deploying the model as a built library allows us to add custom code around the predictions: we can count how many parcels are detected, save that count, and then monitor it over time. If the count goes down, one or more parcels are missing, and we can raise an alarm from our development board or even signal other home automation devices such as security cameras or alarms. Edge Impulse has also developed a feature that sends an SMS based on inference results from our model. This feature works with development boards that support Edge Impulse for Linux, such as the Raspberry Pi. The repository and documentation can be found here.
Sending the SMS uses the Twilio service.
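The monitoring logic described above can be sketched in a few lines. This is a hypothetical, board-agnostic outline: the `ParcelMonitor` class is our own invention, and the detection counts would in practice come from the FOMO model’s inference results, with the alarm hooked up to a buzzer, SMS, or other home-automation trigger.

```python
# Hypothetical sketch: remember the highest parcel count seen so far,
# and flag an alarm when the current count drops below it.
class ParcelMonitor:
    def __init__(self):
        self.expected = 0  # most parcels seen so far

    def update(self, detected):
        """Feed in the latest detection count; returns True on alarm."""
        if detected > self.expected:
            self.expected = detected  # a new delivery arrived
            return False
        if detected < self.expected:
            return True  # a parcel has disappeared
        return False

monitor = ParcelMonitor()
print(monitor.update(1))  # False: first parcel delivered
print(monitor.update(2))  # False: second parcel delivered
print(monitor.update(1))  # True: one parcel missing -> raise alarm
```

A real deployment would also want to debounce this check over several frames, since a single missed detection (e.g. due to lighting) would otherwise trigger a false alarm.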