Rooftop Ice Detection with Things Network Visualization - Nvidia Omniverse Replicator

Rooftop ice buildup detection using Edge Impulse and The Things Network, with synthetic data created in NVIDIA Omniverse Replicator and Sun Study.


Created By: Eivind Holt

Public Project Link: https://studio.edgeimpulse.com/public/332581/live

GitHub Repo: https://github.com/eivholt/icicle-monitor

Introduction

The portable device created in this project monitors buildings and warns the responsible parties when potentially hazardous icicles are formed. In ideal conditions, icicles can form at a rate of more than 1 cm (0.39 in) per minute. In cold climates, many people are injured and killed each year by these solid projectiles, leading responsible building owners to often close sidewalks in the spring to minimize risk. This project demonstrates how an extra set of digital eyes can notify property owners that icicles are forming and need to be removed before they can cause harm.

Hardware used

  • Arduino Portenta H7 with Arduino Portenta Vision Shield w/LoRa Connectivity

  • Otii Arc from Qoitech

  • NVIDIA GeForce RTX

Software used

  • Edge Impulse Studio

  • NVIDIA Omniverse Code with Replicator

  • NVIDIA Isaac Sim

  • Edge Impulse extension for Isaac Sim

  • Visual Studio Code

  • Blender

Code and machine learning repository

Project Impulse: https://studio.edgeimpulse.com/public/332581/live and GitHub code repository: https://github.com/eivholt/icicle-monitor.

Working principle

Icicle formation is detected using a neural network (NN) designed to identify objects in images from the onboard camera. The NN is trained and tested exclusively on synthesized images, generated with realistic simulated lighting conditions. A small number of real images is later used to verify the model.

Challenges

The main challenge in detecting forming icicles is the translucent nature of ice and the natural variation of sunlight. Because of this we need a great number of images to train a model that captures enough features of the ice under varying lighting conditions. Capturing and annotating such a large dataset is incredibly labor-intensive. We can mitigate this problem by synthesizing images with varying lighting conditions in a realistic manner and having the objects of interest automatically labeled.

Mobility

A powerful platform combined with a high-resolution camera with a fish-eye lens would increase the ability to detect icicles. However, by deploying the object detection model to a small, power-efficient, but highly constrained device, options for device installation increase. Properly protected against moisture, this device can be mounted outdoors on walls or poles facing the roofs in question. LoRaWAN communication enables low battery consumption and long transmission range.

Object detection using a neural network

FOMO (Faster Objects, More Objects) is a novel machine learning algorithm that allows for visual object detection on highly constrained devices through training of a neural network with a number of convolutional layers.

Capturing training data and labeling objects

One of the most labor-intensive aspects of building any machine learning model is gathering the training data and labeling it. For an object detection model this means taking hundreds or thousands of images of the objects to detect, drawing rectangles around them, and choosing the correct label for each class. Recently, generating pre-labeled images has become feasible and has proven to give great results. This is referred to as synthetic data generation with domain randomization. In this project a model will be trained exclusively on synthetic data, and we will see how well it can detect the real-life counterparts.

Domain randomization using NVIDIA Omniverse Replicator

NVIDIA Omniverse Code is an IDE that allows us to compose 3D scenes and to write simple Python code to capture images. Further, the Replicator extension is a toolkit that allows us to label the objects in the images and to simplify common domain randomization tasks, such as scattering objects between images. For an in-depth walkthrough on getting started with Omniverse and Replicator, see the associated article.

Making a scene

It's possible to create an empty scene in Omniverse and add content programmatically. However, composing the initial objects by hand serves as a practical starting point. In this project a royalty-free 3D model of a house was used as a basis.

Icicle models

To represent the icicle, a high quality model pack was purchased at Turbo Squid.

To be able to import the models into Omniverse and Isaac Sim, all models have to be converted to the OpenUSD format. While USD is a great emerging standard for describing, composing, simulating, and collaborating within 3D worlds, it is not yet commonly supported in asset marketplaces. This article outlines considerations when performing the conversion from Blender to USD. Note that it is advisable to export each individual model and to choose a suitable origin/pivot point.

Blender change origin cheat sheet:

  • (Edit Mode) Select a vertex on the model, Shift+S > Cursor to Selected

  • (Object Mode) Select Hierarchy, Object > Set Origin > Origin to 3D Cursor

  • (Object Mode) Shift+S > Cursor to World Origin

Tips for export:

  • Selection Only

  • Convert Orientation:

    • Forward Axis: X

    • Up Axis: Y
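
For those who prefer scripting the export, Blender's Python API exposes the same USD exporter. The following is purely illustrative; the project used the export dialog, and operator parameters vary between Blender versions:

# Illustrative only: export the selected icicle model to USD from Blender's
# Python console. Parameter names can vary between Blender versions; axis
# conversion was done in the export dialog in this project.
import bpy

bpy.ops.wm.usd_export(
    filepath="C:/tmp/icicle_01.usd",
    selected_objects_only=True)  # mirrors the "Selection Only" option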

Setting semantic metadata on objects

To be able to produce images for training and include labels, we can use a feature of the Replicator toolbox found under the menu Replicator > Semantics Schema Editor.

Here we can select each top node representing an item for object detection and add a key-value pair. Choosing "class" as Semantic Type and "ice" as Semantic Data enables us to export this string as a label later.
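
The same metadata can also be set from Python when a scene is built programmatically. Below is a minimal sketch, assuming the omni.replicator.core API and an illustrative prim path; the project itself applies the semantics through the Semantics Schema Editor:

import omni.replicator.core as rep

# Illustrative only: tag a prim with the same "class"/"ice" pair from a script,
# so BasicWriter will export "ice" as its label. The prim path below is hypothetical.
icicle = rep.get.prims(path_pattern='/World/Icicle_01')
with icicle:
    rep.modify.semantics([("class", "ice")])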

Creating a program for domain randomization

With a basic 3D stage created and objects of interest labeled, we can continue with creating a program that will make sure we produce images with slight variations. Our program can be named anything, ending in .py, and is preferably placed close to the stage USD-file. Here is a sample of such a program, replicator_init.py:

To keep the items generated in our script separate from the manually created content, we start by creating a new layer in the 3D stage:

with rep.new_layer():

Next we specify that we want to use path-traced rendering for our image output. We create a camera and hard-code its position; we will point it at our icicles for each render later. Then we use our previously defined semantics data to get references to the icicles for easier manipulation. We also define references to a plane on which we want to scatter the icicles, and to a plane used for randomly placing the camera. Lastly we define our render output by selecting the camera and setting the desired resolution. Due to an issue in Omniverse where artifacts are produced at certain resolutions, e.g. 120x120 pixels, we set the output resolution to 128x128 pixels. Edge Impulse Studio will take care of scaling the images to the desired size should we use images of a different size than the configured model size.

rep.settings.set_render_pathtraced(samples_per_pixel=64)
cameraPlane = rep.get.prims(path_pattern='/World/CameraPlane')
icePlane = rep.get.prims(path_pattern='/World/IcePlane')
icicles = rep.get.prims(semantics=[("class", "ice")])

camera = rep.create.camera(position=(0, 0, 0))
render_product = rep.create.render_product(camera, (128, 128))

Due to the asynchronous nature of Replicator we need to define our randomization logic as callback methods, first registering them in the following fashion:

rep.randomizer.register(randomize_camera)
rep.randomizer.register(scatter_ice)

Before defining the logic of the randomization methods we define what will happen during each render:

with rep.trigger.on_frame(num_frames=10000, rt_subframes=50):
    rep.randomizer.scatter_ice(icicles)
    rep.randomizer.randomize_camera(icicles)

The parameter num_frames specifies the desired number of renders. The rt_subframes parameter allows the rendering process to advance a set number of frames before the result is captured and saved to disk. A higher setting enhances complex ray tracing effects like reflections and translucency by giving them more time to interact across surfaces, though it increases rendering time. Each randomization routine is invoked with the option to include specific parameters.

To save each image and its corresponding semantic data, we utilize a designated API. While customizing the writer was considered, attempts to do so using Replicator version 1.9.8 on Windows led to errors. Therefore, we are employing the "BasicWriter" and will develop an independent script to generate a label format that is compatible with Edge Impulse.

writer = rep.WriterRegistry.get("BasicWriter")
writer.initialize(
    output_dir="[set output]",
    rgb=True,
    bounding_box_2d_loose=True)

writer.attach([render_product])
asyncio.ensure_future(rep.orchestrator.step_async())

rgb indicates that we want to save images to disk as .png files. Note that labels are created by setting bounding_box_2d_loose. This is used instead of bounding_box_2d_tight, as the latter in some cases would not include the tip of the icicles in the resulting bounding box. It also creates labels from the previously defined semantics. The code ends by running a single iteration of the process in Omniverse Code, so we can preview the results.

The bounding boxes can be visualized by clicking the sensor widget, checking "BoundingBox2DLoose" and finally "Show Window".

Now we can implement the randomization logic. First we'll use a method that flips and scatters the icicles on a defined plane.

# Note: 'carb' and 'random' are imported at the top of the full script.
def scatter_ice(icicles):
    with icicles:
        carb.log_info(f'Scatter icicle {icicles}')
        ice_rotation = random.choice(
            [
                (-90, 90, 0),
                (-90, -90, 0),
            ]
        )
        rep.modify.pose(rotation=ice_rotation)
        rep.randomizer.scatter_2d(surface_prims=icePlane, check_for_collisions=True)
    return icicles.node

Next, a method that randomly places the camera on another defined plane, points it at the group of icicles, and randomizes the focal length.

def randomize_camera(targets):
    with camera:
        rep.randomizer.scatter_2d(surface_prims=cameraPlane)
        rep.modify.pose(look_at=targets)
        rep.modify.attribute("focalLength", rep.distribution.uniform(10.0, 40.0))
    return camera.node

We can define the methods in any order we like, but in rep.trigger.on_frame it is crucial that the icicles are placed before pointing the camera.

Running domain randomization

With a basic randomization program in place, we could run it from the embedded script editor (Window > Script Editor), but more robust Python language support can be achieved by developing in Visual Studio Code instead. To connect VS Code with Omniverse we can use the Visual Studio Code extension Embedded VS Code for NVIDIA Omniverse; see the extension repo for setup. When ready to run, go to Replicator > Start and check progress in the defined output folder.

Randomizing colors

The surface behind the icicles may vary greatly, both in color and texture. Using Replicator, randomizing the color of an object's material is easy.

In the scene in Omniverse, either manually create a plane behind the icicles, or create one programmatically.

In Code, define a function that takes a reference to the plane we want to randomize and randomizes its color using a uniform distribution spanning minimum and maximum RGB values:

def randomize_screen(screen):
    with screen:
        # Randomize each RGB channel for the whole color spectrum.
        rep.randomizer.color(colors=rep.distribution.uniform((0, 0, 0), (1, 1, 1)))
    return screen.node

Then get a reference to the plane:

screen = rep.get.prims(path_pattern='/World/Screen')

Lastly register the function and trigger it on each new frame:

rep.randomizer.register(randomize_screen)
with rep.trigger.on_frame(num_frames=2000, rt_subframes=50):  # rt_subframes=50
    # Other randomization functions...
    rep.randomizer.randomize_screen(screen)

Now each image will have a background with a random (deterministic, given the same starting seed) RGB color. Replicator takes care of creating a material with a shader for us. As you might remember, in an effort to reduce RAM usage our neural network reduces the RGB color channels to grayscale. In this project we could therefore simplify the color randomization to only pick grayscale colors. The example has been included as it would benefit projects where color information is not reduced. To only randomize in grayscale, we could change the randomization function to use the same value for R, G and B as follows:

def randomize_screen(screen):
    with screen:
        # Generate a single random value for grayscale
        gray_value = rep.distribution.uniform(0, 1)
        # Apply this value across all RGB channels to ensure the color is grayscale
        rep.randomizer.color(colors=gray_value)
    return screen.node

Randomizing textures

To further steer the training of the object detection model towards capturing features of the desired class, the icicles, and not features that appear due to shortcomings in the domain randomization, we can create images with the icicles in front of a large variety of background images. A simple way of achieving this is to use a large dataset of random images and randomly assign one of them to a background plane for each image generated.

import os

def randomize_screen(screen, texture_files):
    with screen:
        # Let Replicator pick a random texture from list of .jpg-files
        rep.randomizer.texture(textures=texture_files)
    return screen.node

# Define what folder to look for .jpg files in
folder_path = 'C:/Users/eivho/source/repos/icicle-monitor/val2017/testing/'
# Create a list of strings with complete path and .jpg file names
texture_files = [os.path.join(folder_path, f) for f in os.listdir(folder_path) if f.endswith('.jpg')]

# Register randomizer
rep.randomizer.register(randomize_screen)

# For each frame, call randomization function
with rep.trigger.on_frame(num_frames=2000, rt_subframes=50):
    # Other randomization functions...
    rep.randomizer.randomize_screen(screen, texture_files)

We could instead generate textures with random shapes and colors. Either way, the resulting renders will look weird, but they help the model training process weight features that are relevant for the icicles, not the background.

These are rather unsophisticated approaches. More realistic results would be achieved by changing the materials of the actual walls of the house used as background. Omniverse has a large selection of materials available in the NVIDIA Assets browser, allowing us to randomize a much wider range of aspects of the rendered results.
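
As a sketch of that idea, Replicator can create a pool of OmniPBR materials with randomized properties and assign one to the background prim on every frame; the prim path and parameter ranges below are illustrative, not taken from the project:

# Illustrative sketch: randomize the background material rather than a flat color.
# '/World/HouseWall' is a hypothetical prim path.
wall = rep.get.prims(path_pattern='/World/HouseWall')

# Pre-create a pool of materials with randomized color and roughness.
wall_materials = rep.create.material_omnipbr(
    diffuse=rep.distribution.uniform((0, 0, 0), (1, 1, 1)),
    roughness=rep.distribution.uniform(0.1, 0.9),
    count=50)

def randomize_wall(wall):
    with wall:
        # Assign one of the pre-created materials for each rendered frame.
        rep.randomizer.materials(wall_materials)
    return wall.node

rep.randomizer.register(randomize_wall)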

Creating realistic outdoor lighting conditions using Sun Study

In contrast to a controlled indoor environment, creating a robust object detection model intended for outdoor use needs training images with a wide range of realistic natural light. When generating synthetic images we can utilize the Sun Study extension, which approximates real-world sunlight based on sun studies.

The extension lets us set world location, date and time. We can also mix this with the Environment setting in Omniverse, allowing for a wide range of simulated cloud cover. As of March 2024 it is not easy to randomize these parameters in a script, but this is likely to change. In the meantime we can set the parameters, generate a few thousand images, change the time of day, generate more images, and so on.

Creating label file for Edge Impulse Studio

Edge Impulse Studio supports a wide range of image labeling formats for object detection. The output from Replicator's BasicWriter needs to be transformed so it can be uploaded, either through the web interface or via the Ingestion API.

Provided is a simple Python program, basic_writer_to_pascal_voc.py, to help get started. Documentation on the supported Edge Impulse label formats is available in the Edge Impulse documentation. Run the program from a terminal with:

python basic_writer_to_pascal_voc.py <input_folder>

or debug from Visual Studio Code by setting input folder in launch.json like this:

"args": ["../out"]

This will create a file bounding_boxes.labels that contains all labels and bounding boxes per image.
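
For reference, bounding_boxes.labels uses Edge Impulse's bounding box label format. A minimal sketch of writing a single entry from Python, with the file name and coordinates invented for illustration:

import json

# Minimal illustration of the Edge Impulse bounding box label format.
# Image name and pixel coordinates are made up for this example.
labels = {
    "version": 1,
    "type": "bounding-box-labels",
    "boundingBoxes": {
        "rgb_0000.png": [
            {"label": "ice", "x": 42, "y": 17, "width": 12, "height": 60}
        ]
    }
}

with open("bounding_boxes.labels", "w") as f:
    json.dump(labels, f)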

Creating an object detection project in Edge Impulse Studio

Look at the provided object detection Edge Impulse project, or follow a guide to create a new FOMO project.

Uploading images and labels using CLI edge-impulse-uploader

Since we have generated both synthetic images and labels, we can use the CLI tool from Edge Impulse to efficiently upload both. Use:

edge-impulse-uploader --category split --directory [folder]

to connect to your account and project, and upload the image files and labels in bounding_boxes.labels. To switch project if necessary, first run:

edge-impulse-uploader --clean

At any time we can find "Perform train/test split" under "Danger zone" in the project dashboard, to distribute images between training and testing in an 80/20 split.

Model training and performance

Since our synthetic training images contain both individual icicles and two differently sized clusters of icicles, we can't put too much trust in the model performance numbers. Higher F1 scores are better, but we will never achieve 100%. Still, we can upload increasing numbers of labeled images and observe how the performance numbers improve.

Performance metrics were captured after training with 2,000, 6,000, 14,000, and 26,000 images.

If we look at results from model testing in Edge Impulse Studio, at first glance the numbers are less than impressive.

However, if we investigate individual samples where the F1 score is less than 100%, we see that the model has indeed detected the icicles, but clustered them differently than how the image was originally labeled. What we should look out for are samples that contain visible icicles where none were detected.

Note that the final results include 5,000 images from the COCO 2017 dataset. Adding these reduces the F1 score a bit, but results in a model with significantly less overfitting that shows almost no false positives when classifying random background scenes.

In the end virtual and real-life testing tells us how well the model really performs.

Testing model in simulated environment with NVIDIA Isaac Sim and Edge Impulse extension

We can get useful information about model performance with minimal effort by testing it in a virtual environment. Install NVIDIA Isaac Sim and the Edge Impulse extension.

Install the Sun Study extension in Isaac Sim to be able to vary light conditions while testing.

Paste your API key, found under Edge Impulse Studio > Dashboard > Keys > Add new API key, into the Omniverse extension:

To be able to classify any virtual camera capture we first need to build a version of the model that can run in a JavaScript environment. In Edge Impulse Studio, go to Deployment, find "WebAssembly" in the search box and click Build. We don't need to keep the resulting .zip package; the extension will find and download it by itself in a moment.

Back in the Edge Impulse extension in Isaac Sim, when we expand the "Classification" group, a message will tell us everything is ready: "Your model is ready! You can now run inference on the current scene".

Before we test it, we will make some adjustments in the viewport.

Switch to "RTX - Interactive" to make sure the scene is rendered realistically.

Set the viewport resolution to a square 1:1 aspect ratio, either at the same resolution as our intended device inference (120x120 pixels) or at 512x512 pixels.

Display Isaac bounding boxes by selecting "BoundingBox2DLoose" under the icon that resembles a robotic sensor, then click "Show Window". Now we can compare the ground truth with model prediction.

Deployment to device and LoRaWAN

Testing model on device using OpenMV

To get visual verification that our model works as intended, we can go to Deployment in Edge Impulse Studio, select OpenMV Firmware as the target and build.

Follow the documentation on how to flash the device and how to modify the ei_object_detection.py code. Remember to change: sensor.set_pixformat(sensor.GRAYSCALE). The file edge_impulse_firmware_arduino_portenta.bin is our firmware for the Arduino Portenta H7 with Vision shield.
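
For orientation, the camera setup portion of a typical OpenMV script looks like the sketch below; the generated ei_object_detection.py may differ in details, the essential part is the grayscale pixel format:

# Sketch of a typical OpenMV camera setup (MicroPython). The generated
# ei_object_detection.py may differ; the key line is the GRAYSCALE format.
import sensor

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)  # the model was trained on single-channel images
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time=2000)           # give the camera time to settle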

Deploy model as Arduino compatible library and send inference results to The Things Network with LoRaWAN

Start by selecting Arduino library as a Deployment target.

Once built and downloaded, open the Arduino IDE, go to Sketch > Include Library > Add .ZIP Library... and locate the downloaded library. Next go to File > Examples > [name of project]_inferencing > portenta_h7 > portenta_h7_camera to open a generic sketch template using our model. To test the model continuously and print the results to the console, this sketch is ready to go. The code might appear daunting, but we really only need to focus on the loop() function.

Using The Things Stack sandbox (formerly known as The Things Network) we can create a low-power sensor network that allows transmitting device data with minimal energy consumption, long range, and no network fees. Your area may already be covered by a crowd-funded network, or you can create your own gateway. Getting started with LoRaWAN is really fun!

Following the Arduino guide on the topic, we create an application in The Things Stack sandbox and register our first device.

Next we will simplify things by merging an example Arduino sketch for transmitting a LoRaWAN message with the Edge Impulse generated object detection model code. Open the example sketch called LoraSendAndReceive included with the MKRWAN(v2) library mentioned in the Arduino guide. There is an example of this in the project code repository, where you can find an Arduino sketch with the merged code.

There are a few things to consider in the implementation: the device should enter deep sleep mode and disable or put to sleep all peripherals between object detection runs. Default operation of the Portenta H7 with the Vision shield consumes a lot of energy and will drain a battery quickly. To find out how much energy is consumed we can use a device such as the Otii Arc from Qoitech. Hook up the positive power supply to VIN and negative to GND. Since VIN bypasses the Portenta power regulator we should provide 5V; however, in my setup the Otii Arc is limited to 4.55V. Luckily this seems to be sufficient and we can take some measurements. By connecting the Otii Arc RX pin to the Portenta pin D14/PA9/UART1 TX, we can write debug messages to Serial1 in code. This is incredibly helpful in determining what power consumption is associated with what part of the code.

As the power profile shows, the highlighted section should be optimized for minimal power consumption. This is a complicated subject, especially on a complex board such as the Arduino Portenta H7, but there are some examples for general guidance, such as the Snow monitor and Mail box sensor projects.

The project code presented here runs inference on an image every 10 seconds. However, this is for demonstration purposes; in a deployment inference should be much less frequent, like once per hour during daylight. Have a look at this project for an example of how to remotely control the inference interval via a LoRaWAN downlink message. This could be further controlled automatically via an application that has access to an API for daylight data.

Transmit results to The Things Stack sandbox using LoRaWAN

In short, we perform inference every 10 seconds. If any icicles are detected we simply transmit a binary 1 to The Things Stack application. The binary payload is obviously redundant, as the presence of a message is enough, but this could be extended to transmit other data, for example the prediction confidence, number of clusters, battery level, temperature or light level.

if(bb_found) {
    int lora_err;
    modem.setPort(1);
    modem.beginPacket();
    modem.write((uint8_t)1); // This sends the binary value 0x01
    lora_err = modem.endPacket(true);
}

Next, in The Things Stack application we need to define a function that will be used to decode the byte into a JSON structure that is easier to interpret when we pass the message further up the chain of services. The function can be found in the project code repository.

function Decoder(bytes, port) {
    // Initialize the result object
    var result = {
        detected: false
    };

    // Check if the first byte is non-zero
    if(bytes[0] !== 0) {
        result.detected = true;
    }

    // Return the result
    return result;
}

Now we can observe messages being received and decoded under Live data in the TTS console.

An integral part of The Things Stack is an MQTT message broker. At this point we can use any MQTT client to subscribe to topics and create a suitable notification system for the end user. The following is an MQTT client written in Python to demonstrate the principle. Note that the paho-mqtt library has been used in a way that blocks program execution until two messages have been received, then prints their topics and payloads. In a real implementation, it would be better to register a callback and perform some action for each message received.

# pip install paho-mqtt
import paho.mqtt.subscribe as subscribe

m = subscribe.simple(topics=['#'], hostname="eu1.cloud.thethings.network", port=1883, auth={'username':"icicle-monitor",'password':"NNSXS.V7RI4O2LW3..."}, msg_count=2)
for a in m:
    print(a.topic)
    print(a.payload)
The two received payloads look like this:
v3/icicle-monitor@ttn/devices/portenta-h7-icicle-00/up
{"end_device_ids":{"device_id":"portenta-h7-icicle-00","application_ids":{"application_id":"icicle-monitor"},"dev_eui":"3036363266398F0D","join_eui":"0000000000000000","dev_addr":"260BED9C"},"correlation_ids":["gs:uplink:01HSKMT8KSZFJ7FB23RGSTJAEA"],"received_at":"2024-03-22T17:54:52.358270423Z","uplink_message":{"session_key_id":"AY5jAnqK0GdPG1yygjCmqQ==","f_port":1,"f_cnt":9,"frm_payload":"AQ==","decoded_payload":{"detected":true},"rx_metadata":[{"gateway_ids":{"gateway_id":"eui-ac1f09fffe09141b","eui":"AC1F09FFFE09141B"},"time":"2024-03-22T17:54:52.382076978Z","timestamp":254515139,"rssi":-51,"channel_rssi":-51,"snr":13.5,"location":{"latitude":67.2951736450195,"longitude":14.4321346282959,"altitude":50,"source":"SOURCE_REGISTRY"},"uplink_token":"CiIKIAoUZXVpLWFjMWYwOWZmZmUwOTE0MWISfCf/+CRQbEMOvrnkaCwjsi/evBhDurYRJILijo5K00mQ=","received_at":"2024-03-22T17:54:52.125610010Z"}],"settings":{"data_rate":{"lora":{"bandwidth":125000,"spreading_factor":7,"coding_rate":"4/5"}},"frequency":"867300000","timestamp":254515139,"time":"2024-03-22T17:54:52.382076978Z"},"received_at":"2024-03-22T17:54:52.154041574Z","confirmed":true,"consumed_airtime":"0.046336s","locations":{"user":{"latitude":67.2951772015745,"longitude":14.43232297897339,"altitude":13,"source":"SOURCE_REGISTRY"}},"version_ids":{"brand_id":"arduino","model_id":"lora-vision-shield","hardware_version":"1.0","firmware_version":"1.2.1","band_id":"EU_863_870"},"network_ids":{"net_id":"000013","ns_id":"EC656E0000000181","tenant_id":"ttn","cluster_id":"eu1","cluster_address":"eu1.cloud.thethings.network"}}}'

v3/icicle-monitor@ttn/devices/portenta-h7-icicle-00/up
{"end_device_ids":{"device_id":"portenta-h7-icicle-00","application_ids":{"application_id":"icicle-monitor"},"dev_eui":"3036363266398F0D","join_eui":"0000000000000000"},"correlation_ids":["as:up:01HSKMTN7F60CC3BQXE06B3Q4X","rpc:/ttn.lorawan.v3.AppAs/SimulateUplink:17b97b44-a5cd-45f0-9439-2de42e187300"],"received_at":"2024-03-22T17:55:05.070404295Z","uplink_message":{"f_port":1,"frm_payload":"AQ==","decoded_payload":{"detected":true},"rx_metadata":[{"gateway_ids":{"gateway_id":"test"},"rssi":42,"channel_rssi":42,"snr":4.2}],"settings":{"data_rate":{"lora":{"bandwidth":125000,"spreading_factor":7}},"frequency":"868000000"},"locations":{"user":{"latitude":67.2951772015745,"longitude":14.43232297897339,"altitude":13,"source":"SOURCE_REGISTRY"}}},"simulated":true}'

Observe the difference between the real uplink (first) and the simulated uplink (second). In both we find "decoded_payload":{"detected":true}.

TTS has a range of integration options for specific platforms, or you could set up a custom webhook using a standard HTTP/REST mechanism.
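
As noted above, a real notification system would register a callback and act on each message as it arrives, which could in turn feed one of those integrations. A minimal sketch using paho-mqtt's callback helper; the topic filter and credentials are placeholders:

# Sketch: react to every uplink as it arrives instead of blocking for two messages.
# Hostname, username and password are placeholders; reuse the values from the
# blocking example above.
import json
import paho.mqtt.subscribe as subscribe

def on_uplink(client, userdata, message):
    payload = json.loads(message.payload)
    detected = payload.get("uplink_message", {}).get("decoded_payload", {}).get("detected", False)
    if detected:
        print(f"Icicles detected! Topic: {message.topic}")
        # Trigger a notification (e-mail, SMS, webhook, ...) here.

subscribe.callback(
    on_uplink,
    topics=["v3/+/devices/+/up"],  # uplinks from every device in the application
    hostname="eu1.cloud.thethings.network",
    port=1883,
    auth={"username": "icicle-monitor", "password": "<API key>"})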

Limitations

Weatherproofing

For permanent outdoor installation the device requires a properly sealed enclosure. The camera is mounted on the shield PCB and will need some engineering to be able to see through the enclosure while remaining watertight. For inspiration on how to create weatherproof enclosures that allow sensors and antennas outside access, see this project's notes on friction fitting and the use of rubber washers. The referenced project also proves that battery-operated sensors can work with no noticeable degradation in winter conditions (to at least -15 degrees Celsius).

Obscured view

The project has no safeguard against false negatives. The device will not report if its view is blocked. This could be resolved by placing static markers on both sides of the area to monitor and including them in the synthetic training data. Absence of at least one marker could then trigger a notification that the view is obscured.

Object scale

Due to optimization techniques in FOMO (Faster Objects, More Objects), determining the relative sizes of the icicles is not feasible. As even icicles with small mass can be harmful at moderate elevation, this is not a crucial feature.

Exact number of icicles

The object detection model has not been trained to give an exact number of icicles in view. This has no practical implication other than the model verification results appearing worse than practical performance.

Non-vertical icicles and snow

Icicles can appear bent or angled either due to wind or more commonly due to ice and snow masses slowly dropping over roof edges. The dataset generated in this project does not cover this, but it would not take a lot of effort to extend the domain randomization to rotate or warp the icicles.

The training images could benefit from simulating snow with particle effects in Omniverse. The project could also be extended to detect build-up of snow on roofs. For inspiration, check out the demo of simulated snow dynamics made in 2014 by Walt Disney Animation Studios for the movie Frozen.

Grayscale

To be able to compile a representation of our neural network and have it run on the severely limited amount of RAM available on the Arduino Portenta H7, pixel representation has been limited to a single channel - grayscale. Colors are not needed to detect icicles so this does not affect the results.

Further reading

Insights into how icicles are formed.

