Visitor Heatmap with FOMO Object Detection - Jetson Orin Nano

Build a visitor heatmap application to understand visitor behaviors and traffic patterns in restaurants, shops, lobbies, and more.


Created By: Jallson Suryo

Public Project Link: https://studio.edgeimpulse.com/public/544247/latest

Problem Statement

Humans find it easier to understand information through visual representations, especially ones with clear color groupings, such as a heatmap. Heatmaps are widely used because they are simple to comprehend and can represent density and clustering at a glance. In contrast, the raw output of an object detection camera (typically the number of bounding boxes, their locations, and timestamps) is numerical and difficult to interpret, especially when trying to understand visitor behavior, flow, or favorite spots.

In this project, we will create a simulation of a scenario in a café, restaurant, or lobby. By integrating cameras, which are commonly already in use in such places, with an additional object detection system and overlaying it with a dynamic heatmap visualization, we can reveal insights about visitor flow, duration, or favorite spots. The goal is to understand visitor behavior patterns so that the interior layout or design can be adjusted to maximize or expand areas that are popular with visitors.

How it works:

This project explores Edge Impulse FOMO (Faster Objects, More Objects) object detection and combines it with a heatmap visualization. To do this, we use miniature figures in a café/restaurant setting, with an overhead video camera capturing training and testing data to simulate real-life conditions. The system counts visitors, measures the duration of their stay, and generates a dynamic heatmap using a Python program. By combining each bounding box's location with its timestamp, the dwell time of each object (visitor) can be determined. This method is expected to help analyze visitor habits and their preferred (or less favored) areas. A FOMO model deployed with the TensorRT library on the Jetson Orin Nano has proven to deliver high accuracy and fast inference.
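
To make that dwell-time calculation concrete, here is a minimal sketch (not the project's actual code, which is linked at the end of this article) of how detections could be accumulated into per-cell dwell times. The 20x20 grid over a 640x640 frame matches the setup described later; the detection format and the accumulate helper are illustrative assumptions:

import numpy as np

GRID = 20                   # heatmap resolution: 20x20 cells over a 640x640 frame
CELL = 640 // GRID          # pixel size of one grid cell (32 px)
dwell = np.zeros((GRID, GRID), dtype=np.float32)   # seconds accumulated per cell

def accumulate(bounding_boxes, dt):
    # bounding_boxes: list of dicts with 'x', 'y', 'width', 'height' in pixels,
    # as returned by the Edge Impulse Linux SDK; dt: seconds since the last frame
    for bb in bounding_boxes:
        cx = bb['x'] + bb['width'] // 2    # centroid of the detected visitor
        cy = bb['y'] + bb['height'] // 2
        col = min(cx // CELL, GRID - 1)
        row = min(cy // CELL, GRID - 1)
        dwell[row, col] += dt              # visitor spent dt seconds in this cell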

Hardware Components:

  • Miniature figures and interior set (cardboard)

  • NVIDIA Jetson Orin Nano Developer Kit (8GB)

  • USB camera/webcam (e.g., Logitech C270/C920)

  • DisplayPort to HDMI cable

  • Display/monitor

  • Standee/Tripod

  • Keyboard and mouse, or a PC/laptop connected via SSH

  • Orin Nano case (3D print file at https://thingiverse.com/thing:6068997, or available for purchase from a variety of sources)

Software & Online Services:

  • NVIDIA JetPack (5.1.2)

  • Edge Impulse Studio

  • Edge Impulse's Linux & Python SDK

  • Terminal

Steps

1. Collecting Data

In the initial stage of building a model in Edge Impulse Studio, we need to prepare the data. For ease of use, this project uses a USB camera connected to a PC/laptop, or a smartphone camera, to capture the images for data collection. Take pictures from above, at slightly different angles and under different lighting conditions, to ensure that the model can work under varied conditions (and to prevent overfitting). Object size is a crucial aspect for FOMO performance: keep the camera distance from the objects consistent, because a significant difference in object size will confuse the algorithm.

For those who are not familiar with Edge Impulse Studio, please follow these steps: navigate to https://studio.edgeimpulse.com, log in or create an account, then create a new project. Choose the Images project option, then Object detection. In Dashboard > Project Info, choose "Bounding Boxes" for the labeling method and "NVIDIA Jetson Orin Nano" for the target device. Then in Data acquisition, click on the Upload Data tab, choose your photo files, select auto split, then click Begin upload.

2. Labeling

The next step is labeling. Click on Data acquisition, open the "Labeling queue" tab, then label your "people" by dragging a box around each object and labeling it. Repeat these steps until all images are labeled.

Update: Alternatively, you can try Edge Impulse's new AI labeling feature to speed up this step.

After labeling, it's recommended to split the data into training and testing sets, at around an 80/20 ratio. If you haven't done this yet, click on Train / Test Split and proceed.

3. Train and Build Model

Once your labeled dataset is ready, go to Impulse Design > Create Impulse and set the image width and height to 640x640. Choose Fit shortest axis, select Image and Object Detection as the processing and learning blocks, and click Save Impulse. Next, navigate to the Image parameters section, select Grayscale as the color depth, and press Save parameters. Then click Generate features, navigate to the Object Detection section, leave the neural network training settings at their defaults (100 training cycles, learning rate 0.001), and choose FOMO (MobileNetV2 0.35). You can then begin training and watch the progress and results on the right side of the screen.

If everything looks good (an F1 score above 90%), we can test the model: go to the Model testing section and click Classify all. Our result there is also above 90%, so we can move on to the next step: deployment.

4. Deploy Model Targeting NVIDIA Orin Nano GPU

Click on the Deployment tab, then search for TensorRT, select (Unoptimized) Float32, and click Build. This will generate the NVIDIA TensorRT library for running inference on the Orin Nano's GPU. Once downloaded, unzip the file, and you'll be ready to deploy the model using the Edge Impulse SDK on the NVIDIA Jetson Orin Nano. You can follow the steps for using this TensorRT model here: https://docs.edgeimpulse.com/docs/tools/edge-impulse-for-linux/linux-cpp-sdk

Alternatively, you can skip those steps; there's an easier way. Simply ensure that the model has been built in Edge Impulse Studio; from there, you can test, download the model, and run everything directly from the Orin Nano.

On the Orin Nano side, there are several things that need to be done. Make sure the unit runs JetPack; we use JetPack 5.1.2, which is usually pre-installed on the SD card. Then open a terminal on the Orin Nano, or SSH in from your PC/laptop, and set up the Edge Impulse tooling:

wget -q -O - https://cdn.edgeimpulse.com/firmware/linux/orin.sh | bash

You also need to install the Linux Python SDK (Python >= 3.7, which is included in JetPack, is required), and you may need to install Cython first in order to build the numpy package: pip3 install Cython

Then install the SDK's dependencies and the SDK itself, and clone the examples:

sudo apt-get install libatlas-base-dev libportaudio2 libportaudiocpp0 portaudio19-dev python3-pip

pip3 install pyaudio edge_impulse_linux

git clone https://github.com/edgeimpulse/linux-sdk-python

Next, download the model. Open a terminal on the Orin Nano, or SSH in from your PC/laptop, and simply type edge-impulse-linux-runner (add --clean to allow you to select your project if needed). Log in to your account, then choose your project. This process downloads model.eim, which is built specifically with the TensorRT library targeting the Orin Nano GPU, and starts the Edge Impulse runner with the camera attached to the Orin Nano. You can watch a live stream in your browser at http://your-orin-ip-address:4912.

During this process, the console will display the path where model.eim has been downloaded. In our case, the file was located at /home/orin/.ei-linux-runner/models/240606/v17.

For convenience, you can copy this file to the same directory as the Python program you'll create in the next step. For instance, to copy it to the home directory:

cp -v model.eim /home/orin

The inference time is around 3-5 ms, which is incredibly fast for object detection.

Now the model is ready to run in a high-level language such as the Python program in the next step.
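
As a quick check that the model runs from Python before building the full heatmap, a minimal loop along these lines (a sketch adapted from the Linux Python SDK's image examples; the model path and camera index are assumptions) prints the bounding boxes FOMO returns for each camera frame:

import cv2
from edge_impulse_linux.image import ImageImpulseRunner

MODEL = '/home/orin/model.eim'     # wherever you copied model.eim (assumption)

with ImageImpulseRunner(MODEL) as runner:
    runner.init()                  # loads the model and prints its metadata
    cap = cv2.VideoCapture(0)      # first attached USB camera
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        img = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)   # the SDK expects RGB input
        features, cropped = runner.get_features_from_image(img)
        res = runner.classify(features)
        for bb in res['result']['bounding_boxes']:
            print(f"{bb['label']} ({bb['value']:.2f}): x={bb['x']} y={bb['y']}")
    cap.release()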

5. Build a Visitor Heatmap Program (Python)

With our FOMO model ready, we can now create a Visitor Heatmap program using Python. This program will utilize the bounding box locations of objects and their duration of presence. The heatmap will consist of semi-transparent color blocks overlaying the camera/video display, implemented using the OpenCV library.

The color transitions range from blue → green → yellow → orange → red, representing dwell times from 1 second up to more than 20 seconds. While real-world video input typically spans hours, for this simulation we use accelerated stop-motion video. The video/camera resolution for this simulation is set to 640x640, divided into a 20x20 grid of color blocks for the heatmap.
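
The complete program is in the repository linked below; as an illustration of the overlay technique, a sketch like the following maps each cell's accumulated dwell time onto the blue-to-red scale and blends the blocks semi-transparently over the frame. The exact thresholds and the 0.4 opacity are illustrative assumptions, not the project's tuned values:

import cv2
import numpy as np

# BGR colors for the dwell-time buckets: blue, green, yellow, orange, red
COLORS = [(255, 0, 0), (0, 255, 0), (0, 255, 255), (0, 165, 255), (0, 0, 255)]
THRESHOLDS = [1, 5, 10, 15, 20]    # seconds; 20 s and above maps to red

def draw_heatmap(frame, dwell, cell=32, alpha=0.4):
    # dwell: 2D array of seconds per grid cell, e.g. built up as sketched earlier
    overlay = frame.copy()
    for row in range(dwell.shape[0]):
        for col in range(dwell.shape[1]):
            color = None
            for t, c in zip(THRESHOLDS, COLORS):
                if dwell[row, col] >= t:
                    color = c      # keep the highest bucket reached
            if color is not None:
                x, y = col * cell, row * cell
                cv2.rectangle(overlay, (x, y), (x + cell, y + cell), color, -1)
    # blend the colored blocks over the camera frame so the scene stays visible
    return cv2.addWeighted(overlay, alpha, frame, 1 - alpha, 0)

Using cv2.addWeighted keeps the underlying video visible, so the blocks read as a translucent heat layer rather than hiding the scene.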

If we run the program with a video file as input (e.g., video.mp4), we pass the path to the video file when executing the program:

python3 heatmap.py <path to modelfile>/model.eim <path to videofile>/video.mp4

Note: For video/camera capture display, you cannot use the headless method from a PC/laptop. Instead, connect a display/monitor directly to the Jetson Orin Nano to view the heatmap visualization.

All code, images, and videos can be accessed at: https://github.com/Jallson/VisitorHeatmap

Check out our demo test video in the GitHub repository linked above.

Conclusion

After testing FOMO for visitor object detection and implementing the heatmap program, we successfully identified visitor flow patterns, how long visitors stay, and which areas are their favorites. Although this project used miniature figures and stop-motion video as input, it effectively simulates typical conditions in a café or restaurant.

The color patterns in the heatmap results can be used to adjust furniture layouts, redesign interiors, or quickly understand visitor behavior. In conclusion, this project was successfully executed with minimal setup, a straightforward training process, and implementation on a low-energy device without requiring an internet connection. This ensures better privacy and makes the system easy to apply in public spaces such as cafés, restaurants, libraries, lobbies, and stores.
