Object detection tasks take an image and output information about the objects it contains: their class, how many there are, their position, and (depending on the model) their size.
Edge Impulse provides, by default, two model architectures for object detection: MobileNetV2 SSD FPN-Lite, which uses bounding boxes (object location and size), and FOMO, which uses centroids (object location only).
Want to compare the two models?
Detect objects using MobileNet SSD (bounding boxes): can run on systems ranging from Linux CPUs up to powerful GPUs.
Object detection using FOMO (centroids): can run on high-end MCUs, Linux CPUs, and GPUs.
In this tutorial, you'll use machine learning to build a system that can recognize and track multiple objects in your house through a camera - a task known as object detection. Adding sight to your embedded devices can make them see the difference between poachers and elephants, count objects, find your lego bricks, and detect dangerous situations. You'll learn how to collect images for a well-balanced dataset, how to apply transfer learning to train a neural network, and how to deploy the system to an edge device.
At the end of this tutorial, you'll have a firm understanding of how to do object detection using Edge Impulse.
There is also a video version of this tutorial:
You can view the finished project, including all data, signal processing and machine learning blocks here: Tutorial: object detection.
Running on a microcontroller?
We recently released FOMO, a brand-new approach to performing object detection tasks on microcontrollers. If you are using a constrained device that does not have as much compute, RAM, and flash as Linux platforms, please head to this end-to-end tutorial: Detect objects using FOMO
Alternatively, if you only need to recognize a single object, you can follow our tutorial on Adding sight to your sensors - which performs image classification and hence limits you to a single object, but can also fit on microcontrollers.
For this tutorial, you'll need a supported device.
If you don't have any of these devices, you can also upload an existing dataset through the Uploader - including annotations. After this tutorial you can then deploy your trained machine learning model as a C++ library and run it on your device.
In this tutorial we'll build a model that can distinguish between two objects on your desk - we've used a lamp and a coffee cup, but feel free to pick two other objects. To teach your machine learning model to see, it's important that you capture a lot of example images of these objects. When training the model, these example images are used to teach it to distinguish between the two.
Capturing data
Capture the following amount of data - make sure you capture a wide variety of angles and zoom levels. It's fine if both objects are in the same frame. We'll be cropping the images to be square later, so make sure the objects are in the frame.
30 images of a lamp.
30 images of a coffee cup.
You can collect data from the following devices:
Collecting image data from the Studio - for the Raspberry Pi 4 and the Jetson Nano.
Or you can capture your images using another camera, and then upload them by going to Data acquisition and clicking the 'Upload' icon.
With the data collected we need to label this data. Go to Data acquisition, verify that you see your data, then click on the 'Labeling queue' to start labeling.
No labeling queue? Go to Dashboard, and under 'Project info > Labeling method' select 'Bounding boxes (object detection)'.
Labeling data
The labeling queue shows you all the unlabeled data in your dataset. Labeling your objects is as easy as dragging a box around the object and entering a label. To make your life a bit easier, we try to automate this process by running an object tracking algorithm in the background. If the same object appears in multiple photos, we can move the boxes for you, and you just need to confirm the new box. After dragging the boxes, click Save labels and repeat this until your whole dataset is labeled.
AI-Assisted Labeling
Use AI-Assisted Labeling for your object detection project! For more information, check out our blog post.
Afterwards you should have a well-balanced dataset listed under Data acquisition in your Edge Impulse project.
Rebalancing your dataset
To validate whether a model works well you want to keep some data (typically 20%) aside, and don't use it to build your model, but only to validate the model. This is called the 'test set'. You can switch between your training and test sets with the two buttons above the 'Data collected' widget. If you've collected data on your development board there might be no data in the testing set yet. You can fix this by going to Dashboard > Perform train/test split.
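The split itself can be sketched in a few lines of Python. This is only an illustration of the 80/20 principle; the exact shuffling strategy behind 'Perform train/test split' is an Edge Impulse implementation detail, and the file names here are made up:

```python
import random

def train_test_split(samples, test_ratio=0.2, seed=42):
    """Shuffle samples and hold out `test_ratio` of them for testing."""
    items = list(samples)
    random.Random(seed).shuffle(items)
    n_test = int(len(items) * test_ratio)
    return items[n_test:], items[:n_test]  # (train, test)

# Hypothetical file names matching the 30 + 30 images collected above:
filenames = [f"lamp_{i:02d}.jpg" for i in range(30)] + \
            [f"coffee_{i:02d}.jpg" for i in range(30)]
train, test = train_test_split(filenames)
print(len(train), len(test))  # 48 12
```

Seeding the shuffle keeps the split reproducible, which matters when you re-train and want to compare accuracy numbers fairly.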
With the training set in place you can design an impulse. An impulse takes the raw data, adjusts the image size, uses a preprocessing block to manipulate the image, and then uses a learning block to classify new data. Preprocessing blocks always return the same values for the same input (e.g. convert a color image into a grayscale one), while learning blocks learn from past experiences.
For this tutorial we'll use the 'Images' preprocessing block. This block takes in the color image, optionally makes the image grayscale, and then turns the data into a features array. If you want to do more interesting preprocessing steps - like finding faces in a photo before feeding the image into the network - see the Building custom processing blocks tutorial. Then we'll use a 'Transfer Learning' learning block, which takes all the images in and learns to distinguish between the two ('coffee', 'lamp') classes.
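The optional grayscale step is a good example of a deterministic preprocessing block: the same input always yields the same output. A minimal sketch, assuming the common ITU-R BT.601 luma weights (the exact coefficients the Images block uses are an implementation detail):

```python
def rgb_to_grayscale(pixels):
    """Convert a flat list of (r, g, b) tuples (0-255) to grayscale
    values using ITU-R BT.601 luma weights - an assumption here; the
    coefficients inside the Images block may differ."""
    return [round(0.299 * r + 0.587 * g + 0.114 * b) for r, g, b in pixels]

# Deterministic: white stays 255, black stays 0, pure red dims to 76.
print(rgb_to_grayscale([(255, 255, 255), (0, 0, 0), (255, 0, 0)]))
```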
In the studio go to Create impulse, set the image width and image height to 320, the 'resize mode' to Fit shortest axis, and add the 'Images' and 'Object Detection (Images)' blocks. Then click Save impulse.
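To make the resize step concrete, here is a sketch of how a 'Fit shortest axis' mode can work: scale the image so its shortest side matches the target, then center-crop the longer side to a square. This reflects a plain reading of the mode's name, not Edge Impulse's exact implementation:

```python
def fit_shortest_axis(width, height, target=320):
    """Scale so the shortest side equals `target`, then return the
    scaled size and the center-crop box producing a square frame."""
    scale = target / min(width, height)
    new_w, new_h = round(width * scale), round(height * scale)
    # Center-crop the longer axis down to `target` pixels.
    crop_x = (new_w - target) // 2
    crop_y = (new_h - target) // 2
    return (new_w, new_h), (crop_x, crop_y, crop_x + target, crop_y + target)

# A 640x480 capture scales to 427x320, then loses ~53px from each side:
print(fit_shortest_axis(640, 480))  # ((427, 320), (53, 0, 373, 320))
```

This is why the instructions above say to keep your objects framed: anything near the long edges gets cropped away.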
Configuring the processing block
To configure your processing block, click Images in the menu on the left. This will show you the raw data on top of the screen (you can select other files via the drop down menu), and the results of the processing step on the right. You can use the options to switch between 'RGB' and 'Grayscale' mode, but for now leave the color depth on 'RGB' and click Save parameters.
This will send you to the 'Feature generation' screen. In here you'll:
Resize all the data.
Apply the processing block on all this data.
Create a 3D visualization of your complete dataset.
Click Generate features to start the process.
Afterwards the 'Feature explorer' will load. This is a plot of all the data in your dataset. Because images have a lot of dimensions (here: 320x320x3=307,200 features) we run a process called 'dimensionality reduction' on the dataset before visualizing this. Here the 307,200 features are compressed down to just 3, and then clustered based on similarity. Even though we have little data you can already see the clusters forming (lamp images are all on the left, coffee all on the right), and can click on the dots to see which image belongs to which dot.
Configuring the transfer learning model
With all data processed it's time to start training a neural network. Neural networks are a set of algorithms, modeled loosely after the human brain, that are designed to recognize patterns. The network that we're training here will take the image data as an input, and try to map this to one of the two classes.
It's very hard to build a good working computer vision model from scratch, as you need a wide variety of input data to make the model generalize well, and training such models can take days on a GPU. To make this easier and faster we are using transfer learning. This lets you piggyback on a well-trained model, only retraining the upper layers of a neural network, leading to much more reliable models that train in a fraction of the time and work with substantially smaller datasets.
To configure the transfer learning model, click Object detection in the menu on the left. Here you can select the base model (the one selected by default will work, but you can change this based on your size requirements), and set the rate at which the network learns.
Leave all settings as-is, and click Start training. After the model is done you'll see accuracy numbers below the training output. You have now trained your model!
With the model trained let's try it out on some test data. When collecting the data we split the data up between a training and a testing dataset. The model was trained only on the training data, and thus we can use the data in the testing dataset to validate how well the model will work in the real world. This will help us ensure the model has not learned to overfit the training data, which is a common occurrence.
To validate your model, go to Model testing and select Classify all. Here we hit 92.31% precision, which is great for a model with so little data.
To see a classification in detail, click the three dots next to an item, and select Show classification. This brings you to the Live classification screen with much more details on the file (you can also capture new data directly from your development board from here). This screen can help you determine why items were misclassified.
Live Classification Result
This view is particularly useful for a direct comparison between the raw image and the model's interpretation. Each object detected in the image is highlighted with a bounding box. Alongside these boxes, you'll find labels and confidence scores, indicating what the model thinks each object is and how sure it is about its prediction. This mode is ideal for understanding the model's performance in terms of object localization and classification accuracy.
Overlay Mode for the Live Classification Result
In this view, bounding boxes are drawn around the detected objects, with labels and confidence scores displayed within the image context. This approach offers a clearer view of how the bounding boxes align with the objects in the image, making it easier to assess the precision of object localization. The overlay view is particularly useful for examining the model's ability to accurately detect and outline objects within a complex visual scene.
Summary Table
NAME: This field displays the name of the sample file analyzed by the model. For instance, 'sample.jpg.22l74u4f' is the file name in this case.
CATEGORY: Lists the types of objects that the model has been trained to detect. In this example, two categories are shown: 'coffee' and 'lamp'.
COUNT: Indicates the number of times each category was detected in the sample file. In this case, both 'coffee' and 'lamp' have a count of 1, meaning each object was detected once in the sample.
INFO: This column provides additional information about the model's performance. It displays the 'Precision score', which, in this example, is 95.00%. This score is the mean Average Precision (mAP): the model's precision in making correct predictions, averaged over a range of Intersection over Union (IoU) thresholds.
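Since precision at an IoU threshold underpins the mAP score, it helps to see how IoU itself is computed. A minimal sketch using the same (x, y, width, height) box format used elsewhere in this tutorial:

```python
def iou(box_a, box_b):
    """Intersection over Union of two (x, y, width, height) boxes."""
    ax1, ay1, aw, ah = box_a
    bx1, by1, bw, bh = box_b
    ax2, ay2 = ax1 + aw, ay1 + ah
    bx2, by2 = bx1 + bw, by1 + bh
    # Overlap rectangle (zero if the boxes are disjoint).
    inter_w = max(0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

# Two 100x100 boxes offset by 50px overlap in a 50x50 region:
print(iou((0, 0, 100, 100), (50, 50, 100, 100)))  # ~0.143
```

A prediction typically counts as a true positive when its IoU with a ground-truth box exceeds a threshold such as 0.5.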
With the impulse designed, trained and verified you can deploy this model back to your device. This makes the model run without an internet connection, minimizes latency, and runs with minimum power consumption. Edge Impulse can package up the complete impulse - including the preprocessing steps, neural network weights, and classification code - in a single C++ library or model file that you can include in your embedded software.
Running the impulse on your Raspberry Pi 4 or Jetson Nano
From the terminal just run edge-impulse-linux-runner. This will build and download your model, and then run it on your development board. If you're on the same network you can get a view of the camera, and the classification results directly from your dev board. You'll see a line like:
Open this URL in a browser to see your impulse running!
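If you'd rather consume the results in your own code, the Edge Impulse Linux Python SDK (`edge_impulse_linux`) returns each classification as a dictionary. The result shape assumed below is based on the SDK's object detection examples, where results carry a 'bounding_boxes' list; the helper just filters boxes by confidence:

```python
def extract_detections(result, min_confidence=0.5):
    """Pull (label, confidence, box) tuples out of a runner result dict.
    The dict layout here is an assumption based on the SDK's examples."""
    boxes = result.get("result", {}).get("bounding_boxes", [])
    return [
        (b["label"], b["value"], (b["x"], b["y"], b["width"], b["height"]))
        for b in boxes
        if b["value"] >= min_confidence
    ]

# A mocked result in the assumed format:
sample = {"result": {"bounding_boxes": [
    {"label": "coffee", "value": 0.93, "x": 32, "y": 48, "width": 60, "height": 64},
    {"label": "lamp", "value": 0.41, "x": 200, "y": 16, "width": 48, "height": 96},
]}}
print(extract_detections(sample))  # only the 0.93 'coffee' box passes
```

In a real application you would feed `extract_detections` the dict produced by the SDK's classify call instead of the mocked `sample`.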
Running the impulse on your mobile phone
On your mobile phone just click Switch to classification mode at the bottom of your phone screen. Point it at an object and press 'Capture'.
Integrating the model in your own application
Congratulations! You've added object detection to your sensors. Now that you've trained your model you can integrate your impulse in the firmware of your own edge device, see the Edge Impulse for Linux documentation for the Node.js, Python, Go and C++ SDKs that let you do this in a few lines of code and make this model run on any device. Here's an example of sending a text message through Twilio when an object is seen.
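As a sketch of what such an integration can look like, here is a small notifier that rate-limits alerts so you don't send a text on every video frame. The Twilio call itself is left as a comment, and the credential names (ACCOUNT_SID, MY_PHONE, etc.) are placeholders you'd fill in:

```python
import time

class DetectionNotifier:
    """Send at most one alert per `cooldown` seconds per label.
    The Twilio call is only sketched in a comment below; the credential
    and phone-number names are hypothetical placeholders."""

    def __init__(self, cooldown=60.0, clock=time.monotonic):
        self.cooldown = cooldown
        self.clock = clock
        self._last_sent = {}

    def object_seen(self, label):
        now = self.clock()
        last = self._last_sent.get(label)
        if last is not None and now - last < self.cooldown:
            return False  # still within the cooldown window
        self._last_sent[label] = now
        # from twilio.rest import Client
        # Client(ACCOUNT_SID, AUTH_TOKEN).messages.create(
        #     to=MY_PHONE, from_=TWILIO_PHONE, body=f"Saw a {label}!")
        return True

# With an injected fake clock we can see the cooldown in action:
t = [0.0]
n = DetectionNotifier(cooldown=60, clock=lambda: t[0])
print(n.object_seen("coffee"))  # True  (first sighting)
print(n.object_seen("coffee"))  # False (cooldown active)
t[0] = 61.0
print(n.object_seen("coffee"))  # True  (cooldown expired)
```

Injecting the clock keeps the rate-limiting logic testable without real waiting.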
Or if you're interested in more, see our tutorials on Continuous motion recognition or Recognize sounds from audio. If you have a great idea for a different project, that's fine too. Edge Impulse lets you capture data from any sensor, build custom processing blocks to extract features, and you have full flexibility in your Machine Learning pipeline with the learning blocks.
We can't wait to see what you'll build! 🚀
FOMO (Faster Objects, More Objects) is a brand-new approach to run object detection models on constrained devices. FOMO is a ground-breaking algorithm that brings real-time object detection, tracking and counting to microcontrollers for the first time. FOMO is 30x faster than MobileNet SSD and can run in <200K of RAM.
In this tutorial, we will explain how to count cars to estimate parking occupancy using FOMO.
View the finished project, including all data, signal processing and machine learning blocks here: .
Limitations of FOMO
FOMO does not output bounding boxes but will give you the object's location using centroids. Hence the size of the object is not available.
FOMO works better if the objects have a similar size.
Objects shouldn't be too close to each other, although this can be mitigated by increasing the image input resolution.
If you need the size of the objects for your project, head to the default . tutorial.
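The centroid output is simply the center point of where a bounding box would be, with the size discarded. As a tiny illustration:

```python
def bbox_to_centroid(x, y, width, height):
    """Reduce a bounding box to the centroid FOMO would report:
    the box's center point, discarding its size."""
    return (x + width / 2, y + height / 2)

print(bbox_to_centroid(40, 60, 20, 10))  # (50.0, 65.0)
```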
For this tutorial, you'll need a .
If you don't have any of these devices, you can also upload an existing dataset through the or use your to connect your device to Edge Impulse. After this tutorial, you can then deploy your trained machine learning model as a C++ library or as a WebAssembly package and run it on your device.
You can collect data from the following devices:
- for the Raspberry Pi 4 and the Jetson Nano.
Collecting image data from any of the that have a camera.
With the data collected, we need to label this data. Go to Data acquisition, verify that you see your data, then click on the 'Labeling queue' to start labeling.
Why use bounding box inputs?
To keep interoperability with other models, your training images are annotated with bounding boxes, although the inference process outputs centroids. As such, FOMO translates between bounding boxes and segmentation maps behind the scenes in various parts of the end-to-end flow, including comparing the bounding boxes and segmentation maps when running profiling and scoring.
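As an illustration of that translation, here is a sketch that rasterizes bounding boxes onto a coarse segmentation grid. The 8-pixel cell size is an assumption chosen for illustration, not necessarily the granularity Edge Impulse uses internally:

```python
def boxes_to_segmentation_map(boxes, image_size=96, cell_size=8):
    """Mark each grid cell whose center falls inside a
    (x, y, w, h, label) bounding box. Cell size is illustrative."""
    cells = image_size // cell_size
    grid = [[None] * cells for _ in range(cells)]
    for (x, y, w, h, label) in boxes:
        for row in range(cells):
            for col in range(cells):
                cx = col * cell_size + cell_size / 2
                cy = row * cell_size + cell_size / 2
                if x <= cx < x + w and y <= cy < y + h:
                    grid[row][col] = label
    return grid

# A 16x16 'car' box in the top-left corner covers a 2x2 block of cells:
grid = boxes_to_segmentation_map([(0, 0, 16, 16, "car")])
print(grid[0][0], grid[0][1], grid[2][2])  # car car None
```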
Using your own trained model - Useful when you already have a trained model with classes similar to your new task.
Using Object tracking - Useful when you have objects that are similar in size and common between images/frames.
For our case, since the 'car' object is part of the COCO dataset, we will use the YOLOv5 pre-trained model to accelerate this process. To enable this feature, we will first click the Label suggestions dropdown, then select "Classify using YOLOv5."
From the image above, the YOLOv5 model can already help us annotate more than 90% of the cars without us having to do it manually.
To validate whether a model works well you want to keep some data (typically 20%) aside, and don't use it to build your model, but only to validate the model. This is called the 'test set'. You can switch between your training and test sets with the two buttons above the 'Data collected' widget. If you've collected data on your development board there might be no data in the testing set yet. You can fix this by going to Dashboard > Perform train/test split.
To configure this, go to Create impulse, set the image width and image height to 96, the 'resize mode' to Fit shortest axis and add the 'Images' and 'Object Detection (Images)' blocks. Then click Save Impulse.
To configure your processing block, click Images in the menu on the left. This will show you the raw data on top of the screen (you can select other files via the drop-down menu), and the results of the processing step on the right. You can use the options to switch between RGB and Grayscale modes. Finally, click on Save parameters.
This will send you to the 'Feature generation' screen. In here you'll:
Resize all the data.
Apply the processing block on all this data.
Create a 3D visualization of your complete dataset.
Click Generate features to start the process.
Afterward, the Feature explorer will load. This is a plot of all the data in your dataset. Because images have a lot of dimensions (here: 96x96x1=9216 features for grayscale) we run a process called 'dimensionality reduction' on the dataset before visualizing this. Here the 9216 features are compressed down to 2, and then clustered based on similarity as shown in the feature explorer below.
With all data processed it's time to start training our FOMO model. The model will take an image as input and output objects detected using centroids. For our case, it will show centroids of cars detected on the images.
FOMO is fully compatible with any MobileNetV2 model, and depending on where the model needs to run you can pick a model with a higher or lower alpha. Transfer learning also works (although you need to train your base models specifically with FOMO in mind). Another advantage of FOMO is that it has very few parameters to learn compared to normal SSD networks, making the network much smaller and faster to train. Together this gives FOMO the ability to scale from the smallest microcontrollers to full gateways or GPUs.
To configure FOMO, head over to the ‘Object detection’ section, and select 'Choose a different model' then select one of the FOMO models as shown in the image below.
Make sure to start with a learning rate of 0.001, then click Start training. After the model is done you'll see accuracy numbers below the training output. You have now trained your FOMO object detection model!
As you may have noticed from the training results above, FOMO uses F1 score as its base evaluation metric, as compared to SSD MobileNetV2 which uses mean Average Precision (mAP). Using mAP as the sole evaluation metric can sometimes give limited insight into the model's performance. This is particularly true when dealing with datasets with imbalanced classes, as it only measures how accurate the predictions are without taking into account how well the model performs for each class. The combination of the F1 score and a confusion matrix gives us both the balance between precision and recall of our model and a view of how the model performs for each class.
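The F1 score itself is easy to compute from raw detection counts. A minimal sketch, using counts consistent with the summary-table example later in this tutorial (7 correct detections, 0 false alarms, 4 misses):

```python
def precision_recall_f1(tp, fp, fn):
    """Compute precision, recall, and F1 from detection counts:
    tp = correct detections, fp = spurious detections, fn = missed objects."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

p, r, f1 = precision_recall_f1(tp=7, fp=0, fn=4)
print(f"{p:.2%} {r:.2%} {f1:.2%}")  # 100.00% 63.64% 77.78%
```

F1 is the harmonic mean of precision and recall, so a model that never misses but constantly false-alarms (or vice versa) scores poorly.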
With the model trained let's try it out on some test data. When collecting the data we split the data up between a training and a testing dataset. The model was trained only on the training data, and thus we can use the data in the testing dataset to validate how well the model will work in the real world. This will help us ensure the model has not learned to overfit the training data, which is a common occurrence. To validate our model, we will go to Model Testing and select Classify all.
Given the little training data we had and the few cycles we trained on, we got an accuracy of 84.62%, which can be improved further. To see a classification in detail, head to Live classification and select one image from the test sample. Click the three dots next to an item, and select Show classification. You can also capture new data directly from your development board from here.
Live Classification Result
From the test image above, our model was able to detect 16 cars out of the actual 18, which is good performance. The result is shown side by side with the raw image by default, but you can also switch to overlay mode to see the model's predictions drawn over the actual image content.
Overlay Mode for the Live Classification Result
A display option where the original image and the model's detections overlap, providing a clear juxtaposition of the model's predictions against the actual image content.
Summary Table
The summary table for a FOMO classification result provides a concise overview of the model's performance on a specific sample file, such as 'Parking_data_2283.png.2tk8c1on'. This table is organized as follows:
CATEGORY: The object category or class label, e.g., 'car'.
COUNT: The number of times each category was detected, e.g., 'car' detected 7 times.
INFO: Provides performance metrics definitions, including F1 Score, Precision, and Recall, which offer insights into the model's accuracy and efficacy in detection:
Table metrics:
F1 Score (77.78%): Balances precision and recall.
Precision (100.00%): Accuracy of correct predictions.
Recall (63.64%): Proportion of actual objects detected.
Viewing Options
Bottom-right controls adjust the visibility of ground truth labels and model predictions, enhancing the analysis of the model's performance:
Prediction Controls: Customize the display of model predictions, including:
Show All: Show all detections and confidence scores.
Show Correct Only: Focus on accurate model predictions.
Show Incorrect Only: Pinpoint incorrect model predictions.
Ground Truth Controls: Toggle the visibility of original labels for direct comparison with model predictions.
Show All: Display all ground truth labels.
Hide All: Conceal all ground truth labels.
Show detected only: Highlight ground truth labels detected by the model.
Show undetected only: Identify ground truth labels missed by the model.
With the impulse designed, trained and verified you can deploy this model back to your device. This makes the model run without an internet connection, minimizes latency, and runs with minimum power consumption. Edge Impulse can package up the complete impulse - including the preprocessing steps, neural network weights, and classification code - in a single C++ library or model file that you can include in your embedded software.
From the terminal just run edge-impulse-linux-runner. This will build and download your model, and then run it on your development board. If you're on the same network you can get a view of the camera, and the classification results directly from your dev board. You'll see a line like:
Open this URL in a browser to see your impulse running!
Go to the Deployment tab, and in the Build firmware section select the board-compatible firmware to download it.
Follow the instructions provided to flash the firmware to your board, then head over to your terminal and run the edge-impulse-run-impulse --debug command:
You'll also see a URL you can use to view the image stream and results in your browser:
We can't wait to see what you'll build! 🚀
Alternatively, you can capture your images using another camera, and then upload them directly from the Studio by going to Data acquisition and clicking the 'Upload' icon, or using the Edge Impulse CLI .
All our collected images will be staged for annotation in the 'labeling queue'. Labeling your objects is as easy as dragging a box around the object and entering a label. However, when you have a lot of images, this manual annotation method can become tiresome and time-consuming. To make this task even easier, Edge Impulse provides methods that can help you save time and energy. The AI-assisted labeling techniques include:
Using YOLOv5 - Useful when your objects are part of the common objects in the .
One of the beauties of FOMO is its fully convolutional nature, which means that only the aspect ratio is fixed. Thus, it gives you more flexibility in its usage compared to the classical . method. For this tutorial, we have been using 96x96 images, but FOMO will accept other resolutions as long as the images are square.
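To illustrate that flexibility, here is a sketch of how a fully convolutional output grid scales with the input. The 1/8 reduction factor and the divisibility check are assumptions for illustration, not a statement of FOMO's exact internals:

```python
def fomo_output_grid(width, height, reduction=8):
    """Size of the output heat map for a square input; the 1/8
    reduction factor is assumed here for illustration."""
    if width != height:
        raise ValueError("FOMO expects square inputs")
    if width % reduction:
        raise ValueError(f"input size must be a multiple of {reduction}")
    return width // reduction, height // reduction

def cell_to_pixel_centroid(row, col, reduction=8):
    """Map an output grid cell back to the pixel-space centroid it covers."""
    return (col * reduction + reduction / 2, row * reduction + reduction / 2)

print(fomo_output_grid(96, 96))      # (12, 12)
print(cell_to_pixel_centroid(2, 5))  # (44.0, 20.0)
```

Feeding a larger square input simply yields a proportionally larger grid, which is why higher input resolutions help separate objects that sit close together.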
To run using an Arduino library, go to the Studio Deployment tab, and in the Create library section select Arduino Library to download your custom Arduino library. Go to your Arduino IDE, then click on Sketch >> Include Library >> Add .ZIP Library (your downloaded Arduino library). Make sure to follow the instructions provided on . Open Examples >> Examples from custom library and select your library. Upload the 'Portenta_H7_camera' sketch to your Portenta, then open your serial monitor to view results.
Congratulations! You've added object detection using FOMO to your sensors. Now that you've trained your model you can integrate your impulse in the firmware of your own edge device, see or the documentation for the Node.js, Python, Go and C++ SDKs that let you do this in a few lines of code and make this model run on any device.
when an object is seen.
Or if you're interested in more, see our tutorials on or . If you have a great idea for a different project, that's fine too. Edge Impulse lets you capture data from any sensor, build to extract features, and you have full flexibility in your Machine Learning pipeline with the learning blocks.