Detect objects with FOMO

FOMO (Faster Objects, More Objects) is a brand-new approach to running object detection models on constrained devices. FOMO is a ground-breaking algorithm that brings real-time object detection, tracking and counting to microcontrollers for the first time. FOMO is 30x faster than MobileNet SSD and can run in less than 200 KB of RAM.

In this tutorial, we will explain how to count cars to estimate parking occupancy using FOMO.

View the finished project, including all data, signal processing and machine learning blocks, here: Car Parking Occupancy Detection - FOMO.

Limitations of FOMO

  • FOMO does not output bounding boxes; it gives you the object's location as a centroid, so the size of the object is not available.

  • FOMO works better if the objects have a similar size.

  • Objects shouldn't be too close to each other, although this can be mitigated by increasing the image input resolution.

If you need the size of the objects for your project, head to the default object detection tutorial.

1. Prerequisites

For this tutorial, you'll need a supported device.

If you don't have any of these devices, you can also upload an existing dataset through the Uploader or use your mobile phone to connect your device to Edge Impulse. After this tutorial, you can then deploy your trained machine learning model as a C++ library or as a WebAssembly package and run it on your device.

2. Building a dataset

Capturing data

You can collect data from the following devices:

  • Collecting image data from the Studio - for the Raspberry Pi 4 and the Jetson Nano.

  • Collecting image data with your mobile phone.

  • Collecting image data from any of the fully-supported development boards that have a camera.

Alternatively, you can capture your images using another camera and then upload them directly from the Studio by going to Data acquisition and clicking the 'Upload' icon, or by using the Edge Impulse CLI Uploader.

With the data collected, we need to label this data. Go to Data acquisition, verify that you see your data, then click on the 'Labeling queue' to start labeling.

Labeling data

Why use bounding box inputs?

To keep interoperability with other models, your training images are labeled with bounding boxes, even though FOMO outputs centroids at inference time. Behind the scenes, FOMO translates between bounding boxes and segmentation maps in various parts of the end-to-end flow, including comparing the two when profiling and scoring the model.
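
To make that translation concrete, here is a minimal sketch (not Edge Impulse's internal code) of how a labeled bounding box can be reduced to a centroid and placed on the coarse grid a FOMO model predicts. The 96x96 input, the input/8 output grid, and the example box coordinates are assumptions for illustration.

```python
# Illustrative only: reduce a labeled bounding box to a centroid on FOMO's
# coarse output grid. Assumes a 96x96 input and a 96/8 = 12x12 output grid.
import numpy as np

INPUT_SIZE = 96
GRID_SIZE = INPUT_SIZE // 8  # assumed output resolution: input / 8

def bbox_to_centroid_cell(x, y, width, height):
    """Return the (row, col) grid cell that contains the box centroid."""
    cx = x + width / 2.0
    cy = y + height / 2.0
    col = int(cx / INPUT_SIZE * GRID_SIZE)
    row = int(cy / INPUT_SIZE * GRID_SIZE)
    return row, col

# One 'car' box labeled in the Studio (hypothetical pixel coordinates)
segmentation_map = np.zeros((GRID_SIZE, GRID_SIZE), dtype=int)
row, col = bbox_to_centroid_cell(x=40, y=16, width=24, height=20)
segmentation_map[row, col] = 1  # 1 = 'car', 0 = background

print(segmentation_map)
```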

All our collected images will be staged for annotation in the 'labeling queue'. Labeling your objects is as easy as dragging a box around the object and entering a label. However, when you have a lot of images, this manual annotation can become tiresome and time-consuming. To make the task easier, Edge Impulse provides AI-assisted labeling techniques that can save you time and energy:

  • Using YoloV5 - Useful when your objects are part of the common objects in the COCO dataset.

  • Using your own trained model - Useful when you already have a trained model with classes similar to your new task.

  • Using Object tracking - Useful when you have objects that are similar in size and common between images/frames.

For our case, since the 'car' object is part of the COCO dataset, we will use the YoloV5 pre-trained model to accelerate this process. To enable this feature, first click the Label suggestions dropdown, then select “Classify using YOLOv5.”

From the image above, the YOLOv5 model can already help us annotate more than 90% of the cars without us having to do it manually.

Rebalancing your dataset

To validate whether a model works well, you want to keep some data aside (typically 20%) and not use it to build your model, only to validate it. This is called the 'test set'. You can switch between your training and test sets with the two buttons above the 'Data collected' widget. If you've collected data on your development board, there might be no data in the testing set yet. You can fix this by going to Dashboard > Perform train/test split.
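
Conceptually, the split is nothing more than a random 80/20 partition of your samples. The sketch below illustrates the idea on a list of hypothetical filenames; in practice, the Dashboard's Perform train/test split does this for you.

```python
# Conceptual illustration of an 80/20 train/test split (hypothetical filenames).
import random

samples = [f"parking_{i:04d}.png" for i in range(100)]  # stand-in dataset
random.seed(42)
random.shuffle(samples)

split = int(0.8 * len(samples))
train_set, test_set = samples[:split], samples[split:]
print(len(train_set), "training samples,", len(test_set), "test samples")
```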

3. Designing an impulse

One of the beauties of FOMO is its fully convolutional nature: only the ratio between input and output resolution is fixed, which gives you more flexibility in its usage compared to the classical object detection method. For this tutorial we use 96x96 images, but FOMO accepts other resolutions as long as the images are square.

To configure this, go to Create impulse, set the image width and image height to 96, the 'resize mode' to Fit shortest axis, and add the 'Images' and 'Object Detection (Images)' blocks. Then click Save Impulse.

Configuring the processing block

To configure your processing block, click Images in the menu on the left. This will show you the raw data on top of the screen (you can select other files via the drop-down menu), and the results of the processing step on the right. You can use the options to switch between RGB and Grayscale modes. Finally, click on Save parameters.

This will send you to the 'Feature generation' screen, where you'll:

  • Resize all the data.

  • Apply the processing block on all this data.

  • Create a 3D visualization of your complete dataset.

  • Click Generate features to start the process.

Afterward, the Feature explorer will load. This is a plot of all the data in your dataset. Because images have a lot of dimensions (here: 96x96x1 = 9,216 features for grayscale), we run a process called 'dimensionality reduction' on the dataset before visualizing it. Here the 9,216 features are compressed down to 2 and then clustered based on similarity, as shown in the feature explorer below.
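
As a rough stand-in for what the feature explorer does, the sketch below flattens a batch of 96x96 grayscale images into 9,216-dimensional vectors and projects them to two dimensions with PCA. The Studio's actual reduction technique may differ, and the random data here merely takes the place of your real processed features.

```python
# Rough stand-in for the feature explorer's dimensionality reduction:
# 96x96x1 = 9216 features per image, projected down to 2D for plotting.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
images = rng.random((200, 96, 96, 1))       # 200 fake grayscale images
features = images.reshape(len(images), -1)  # shape: (200, 9216)

projection = PCA(n_components=2).fit_transform(features)
print(projection.shape)                     # (200, 2) -> one point per image
```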

Configuring the object detection model with FOMO

With all data processed it's time to start training our FOMO model. The model will take an image as input and output objects detected using centroids. For our case, it will show centroids of cars detected on the images.

FOMO is fully compatible with any MobileNetV2 model, and depending on where the model needs to run, you can pick a model with a higher or lower alpha. Transfer learning also works (although you need to train your base models specifically with FOMO in mind). Another advantage of FOMO is that it has far fewer parameters to learn than normal SSD networks, making the network much smaller and faster to train. Together this gives FOMO the ability to scale from the smallest microcontrollers to full gateways or GPUs.
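
To give a feel for the architecture, here is an illustrative Keras sketch of a FOMO-style network: a MobileNetV2 backbone cut at an early, stride-8 layer, followed by a tiny fully convolutional head that predicts one set of class scores per grid cell. The cut layer, alpha, and head width are assumptions for illustration, not Edge Impulse's exact implementation.

```python
# Illustrative FOMO-style architecture (not Edge Impulse's exact code):
# a MobileNetV2 backbone truncated at a stride-8 layer plus a tiny fully
# convolutional head, giving a 12x12 grid of per-cell class scores for a
# 96x96 input (background + 'car' = 2 classes).
import tensorflow as tf

NUM_CLASSES = 2  # background + car

backbone = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), alpha=0.35, include_top=False, weights=None
)
# 'block_6_expand_relu' is still at stride 8 (12x12 for a 96x96 input).
cut_point = backbone.get_layer("block_6_expand_relu").output

head = tf.keras.layers.Conv2D(32, 1, activation="relu")(cut_point)
logits = tf.keras.layers.Conv2D(NUM_CLASSES, 1, activation="softmax")(head)

model = tf.keras.Model(inputs=backbone.input, outputs=logits)
model.summary()  # final output shape: (None, 12, 12, 2)
```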

To configure FOMO, head over to the ‘Object detection’ section, and select 'Choose a different model' then select one of the FOMO models as shown in the image below.

Make sure to start with a learning rate of 0.001, then click Start training. After the model is done, you'll see accuracy numbers below the training output. You have now trained your FOMO object detection model!

As you may have noticed from the training results above, FOMO uses the F1 score as its base evaluation metric, whereas SSD MobileNetV2 uses mean average precision (mAP). Using mAP as the sole evaluation metric can give limited insight into the model's performance, particularly with imbalanced classes, because it only measures how accurate the predictions are without accounting for how well the model performs on each class. The combination of the F1 score and a confusion matrix gives us both the balance between precision and recall and a view of how the model performs for each class.
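
For reference, precision, recall, and the F1 score are all derived from the counts of true positives (TP), false positives (FP), and false negatives (FN). The helper below is a generic sketch of these standard formulas (the counts in the example are made up), not Edge Impulse code.

```python
# Standard precision / recall / F1 definitions used to score object detectors.
def precision_recall_f1(tp: int, fp: int, fn: int):
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1

# Hypothetical counts: 9 correct detections, 1 false alarm, 3 missed objects
precision, recall, f1 = precision_recall_f1(tp=9, fp=1, fn=3)
print(f"precision={precision:.2%} recall={recall:.2%} f1={f1:.2%}")
```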

4. Validating your model

With the model trained, let's try it out on some test data. When collecting the data we split it up between a training and a testing dataset. The model was trained only on the training data, so we can use the data in the testing dataset to validate how well the model will work in the real world. This helps ensure the model has not overfit the training data, which is a common occurrence. To validate our model, go to Model testing and select Classify all.

Given the little training data we had and the few cycles we trained for, we got an accuracy of 84.62%, which can be improved further. To see the classification in detail, head to Live classification and select one image from the test samples. Click the three dots next to an item and select Show classification. You can also capture new data directly from your development board from here.

Live Classification Result

From the test image above, our model was able to detect 16 of the 18 cars actually present, which is good performance. The results are shown side by side by default, but you can also switch to overlay mode to see the model's predictions against the actual image content.

Overlay Mode for the Live Classification Result

A display option where the original image and the model's detections overlap, providing a clear juxtaposition of the model's predictions against the actual image content.

Summary Table

The summary table for a FOMO classification result provides a concise overview of the model's performance on a specific sample file, such as 'Parking_data_2283.png.2tk8c1on'. This table is organized as follows:

  • CATEGORY: The object category or class label, e.g., car.

  • COUNT: How many times the object was detected, e.g., car detected 7 times.

  • INFO: Performance metric definitions for the detections, which offer insight into the model's accuracy and efficacy:

    • F1 Score (77.78%): Balances precision and recall.

    • Precision (100.00%): Proportion of predictions that were correct.

    • Recall (63.64%): Proportion of actual objects that were detected.
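
As a quick sanity check (our own arithmetic, not something the Studio displays), the F1 score shown is exactly the harmonic mean of the listed precision and recall:

```python
# Verify that the reported F1 score follows from the precision and recall above.
precision, recall = 1.0000, 0.6364
f1 = 2 * precision * recall / (precision + recall)
print(f"{f1:.2%}")  # 77.78%
```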

Viewing Options

Bottom-right controls adjust the visibility of ground truth labels and model predictions, enhancing the analysis of the model's performance:

Prediction Controls: Customize the display of model predictions, including:

  • Show All: Show all detections and confidence scores.

  • Show Correct Only: Focus on accurate model predictions.

  • Show Incorrect Only: Pinpoint incorrect model predictions.

Ground Truth Controls: Toggle the visibility of original labels for direct comparison with model predictions.

  • Show All: Display all ground truth labels.

  • Hide All: Conceal all ground truth labels.

  • Show detected only: Highlight ground truth labels detected by the model.

  • Show undetected only: Identify ground truth labels missed by the model.

5. Running the model on your device

With the impulse designed, trained and verified you can deploy this model back to your device. This makes the model run without an internet connection, minimizes latency, and runs with minimum power consumption. Edge Impulse can package up the complete impulse - including the preprocessing steps, neural network weights, and classification code - in a single C++ library or model file that you can include in your embedded software.

Running the impulse on a Linux-based device

From the terminal, just run edge-impulse-linux-runner. This will build and download your model, and then run it on your development board. If you're on the same network, you can get a view of the camera and the classification results directly from your dev board. You'll see a line like:

Want to see a feed of the camera and live classification in your browser? Go to http://192.168.8.117:4912

Open this URL in a browser to see your impulse running!
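
If you would rather drive the model from your own code than use the runner's web view, the Edge Impulse Linux Python SDK can load the same .eim model file that edge-impulse-linux-runner downloads. The sketch below counts 'car' centroids in a single image; treat it as a starting point, since the model and image paths are placeholders and result fields may vary between SDK versions.

```python
# Count FOMO 'car' detections in one image with the Edge Impulse Linux Python
# SDK (pip install edge_impulse_linux). The paths below are placeholders.
import cv2
from edge_impulse_linux.image import ImageImpulseRunner

MODEL_PATH = "modelfile.eim"    # your downloaded .eim model file
IMAGE_PATH = "parking_lot.jpg"  # a test image of the parking area

with ImageImpulseRunner(MODEL_PATH) as runner:
    runner.init()
    img = cv2.cvtColor(cv2.imread(IMAGE_PATH), cv2.COLOR_BGR2RGB)

    # Convert the image into the feature array the impulse expects.
    features, cropped = runner.get_features_from_image(img)
    res = runner.classify(features)

    # FOMO returns one centroid-sized bounding box per detected object.
    cars = [bb for bb in res["result"]["bounding_boxes"] if bb["label"] == "car"]
    print(f"Detected {len(cars)} car(s)")
    for bb in cars:
        print(f"  x={bb['x']} y={bb['y']} confidence={bb['value']:.2f}")
```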

Running the impulse on a fully supported MCU

Go to the Deployment tab, and under the Build firmware section select the firmware compatible with your board to download it.

Follow the instructions provided to flash the firmware to your board, then head over to your terminal and run the edge-impulse-run-impulse --debug command:

You'll also see a URL you can use to view the image stream and results in your browser:

Want to see a feed of the camera and live classification in your browser? Go to http://192.168.8.117:4912

Running the impulse using a generated Arduino Library

To run using an Arduino library, go to the Studio's Deployment tab and, under the Create library section, select Arduino library to download your custom Arduino library. In the Arduino IDE, click Sketch >> Include Library >> Add .ZIP Library and select your downloaded library. Make sure to follow the instructions provided in Arduino's Library usage. Open Examples >> Examples from custom library, select your library, and upload the 'Portenta_H7_camera' sketch to your Portenta, then open your serial monitor to view the results.

Integrating the model in your own application

Congratulations! You've added object detection using FOMO to your sensors. Now that you've trained your model, you can integrate your impulse in the firmware of your own edge device. See Deploy your model as a C++ library or the Edge Impulse for Linux documentation for the Node.js, Python, Go and C++ SDKs, which let you do this in a few lines of code and make this model run on any device. Here's an example of sending a text message through Twilio when an object is seen.

Or if you're interested in more, see our tutorials on Continuous motion recognition or Adding sight to your sensors. If you have a great idea for a different project, that's fine too. Edge Impulse lets you capture data from any sensor, build custom processing blocks to extract features, and you have full flexibility in your Machine Learning pipeline with the learning blocks.

We can't wait to see what you'll build! 🚀
