NVIDIA Omniverse - Synthetic Data Generation For Edge Impulse Projects

Create synthetic data to rapidly build object detection datasets with Nvidia Omniverse's Replicator API and Edge Impulse.


Created By: Adam Milton-Barker

Public Project Link: https://studio.edgeimpulse.com/public/246023/latest

GitHub Repo: https://github.com/AdamMiltonBarker/omniverse-replicator-edge-impulse

Introduction

In the realm of machine learning, the availability of diverse and representative data is crucial for training models that generalize well to real-world scenarios. However, obtaining such data can be a complex and expensive endeavor, especially when dealing with challenging environments or limited data availability. This is where synthetic data generation techniques, coupled with domain randomization, come into play, offering practical ways to overcome these obstacles.

Synthetic Data

Synthetic data refers to artificially generated data that emulates the statistical properties and patterns of real-world data. It is created through sophisticated algorithms and models that simulate the characteristics of the original data while maintaining control over its properties. Domain randomization, on the other hand, is a technique used in conjunction with synthetic data generation, where various parameters and attributes of the data are intentionally randomized within specified bounds. This randomized variation helps the model become more robust and adaptable to different environments.
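
To make the idea concrete, here is a minimal sketch of domain randomization: each call samples a fresh set of scene parameters from bounded distributions. It is purely illustrative; the parameter names and ranges are hypothetical, and the actual project uses Replicator's rep.distribution API shown later in this article.

# Illustrative sketch of domain randomization: every rendered frame gets a
# fresh sample of scene parameters drawn from bounded distributions.
# Parameter names and ranges are hypothetical, not taken from the project code.
import random

def sample_scene_parameters():
    return {
        "light_temperature_kelvin": random.gauss(5500, 500),  # vary colour temperature
        "light_intensity": random.uniform(500, 1500),         # vary brightness
        "object_position_cm": (
            random.uniform(-15, 20),                          # vary placement on the table (x)
            random.uniform(-15, 20),                          # vary placement on the table (z)
        ),
        "object_rotation_deg": random.uniform(-180, 180),     # vary orientation
        "object_scale": random.uniform(2.5, 3.5),             # vary apparent size
    }

print(sample_scene_parameters())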

Omniverse™

NVIDIA Omniverse™ is a platform designed to transform collaboration, design, and simulation workflows across industries. It combines real-time rendering, physics simulation, and advanced AI capabilities into a powerful and scalable solution.

Omniverse™ Replicator

NVIDIA Omniverse™ Replicator is a versatile collection of APIs designed to empower researchers and enterprise developers in generating synthetic data that closely resembles real-world scenarios. With its extensibility, Omniverse™ Replicator allows users to effortlessly construct custom synthetic data generation (SDG) tools, effectively expediting the training of computer vision networks.

Edge Impulse

The Edge Impulse platform, along with its integrated Edge Impulse Studio, is a comprehensive solution tailored for developing and deploying embedded machine learning models. It empowers developers to seamlessly gather, process, and analyze sensor data from various edge devices, such as microcontrollers and sensors. With Edge Impulse Studio, users can easily create and train machine learning models using a visual interface or code-based workflow.

Project

In this project we will use the Omniverse™ Replicator API inside of Omniverse™ Code to generate our synthetic dataset of fruits (apples, oranges, and limes). Once our dataset has been created we will import it into Edge Impulse Studio, create and train an object detection model, and then deploy it to an NVIDIA Jetson Nano.

Hardware

RTX-Enabled GPU

For this project an RTX-enabled GPU is required. I was lucky enough to be given access by NVIDIA to a Windows 10 VM equipped with an RTX A40 (a very big thank you to Liz, Sunny, and all involved). This project can be run on an RTX 3060 and up; if you do not have access to your own RTX-enabled GPU, there are some well-known cloud service providers that offer NVIDIA RTX GPUs in the cloud.

NVIDIA Jetson Nano

We will deploy our machine learning model to an NVIDIA Jetson Nano.

Software

  • NVIDIA Omniverse™

  • NVIDIA Omniverse™ Replicator

  • NVIDIA Omniverse™ Code

  • Edge Impulse For Linux

  • Visual Studio Code

Platform

  • Edge Impulse

Installation

To get started with NVIDIA Omniverse™, head over to the official Omniverse™ download site. Once you have signed in you will be able to download the Omniverse™ launcher for Windows or Linux. Once downloaded, run the launcher and go through the settings options.

Omniverse™ Code

We are going to use Omniverse™ Code to create our dataset. You can think of Code as an IDE for building advanced 3D design and simulation tools. Head over to the Extensions tab, search for Code, then click on Code and install it.

Script Editor

Within Omniverse™ Code there is a feature called Script Editor. This editor allows us to load Python code into the IDE and execute it. This makes it very easy for us to set up our scenes and manipulate our assets.

Assets

For simplicity, in this tutorial we will use assets that are readily available in Omniverse™ Code. Within the IDE you will find a tab called NVIDIA Assets; opening this tab will provide you with a selection of ready-to-use assets. The assets are of type USD, which stands for Universal Scene Description.

Project Code

For this tutorial, code has been provided that will work out of the box in the Script Editor; all you have to do is modify the basepath variable and switch between the different datasets.

Clone The Repository

The first step is to clone the repository to a location on your machine.

git clone https://github.com/AdamMiltonBarker/omniverse-replicator-edge-impulse.git

You will find the provided code in the project root in the omniverse.py file.

Let's take a quick look at some of the key features of the code.

Settings

At the top of the code you will find the settings for the program. You don't have to use the same assets that I have used, but if you would like to quickly get set up it is easier to do so.

basepath = "c:\\Users\\adam\\Desktop\\Omniverse\\Src"
dataset = "All"
output_dir = basepath+'\\data\\rendered\\'+datetime.datetime.now().strftime("%Y-%m-%d-%H-%M-%S")+'\\'+dataset

TABLE_ASSET = "http://omniverse-content-production.s3-us-west-2.amazonaws.com/Assets/.../EastRural_Table.usd"
FRUIT = {
    "Apple":  "http://omniverse-content-production.s3-us-west-2.amazonaws.com/Assets/.../Apple.usd",
    "Orange":"http://omniverse-content-production.s3-us-west-2.amazonaws.com/Assets/.../Orange_01.usd",
    "Lime": "http://omniverse-content-production.s3-us-west-2.amazonaws.com/Assets/.../Lime01.usd"
}

You should set the basepath variable to the path of the project root on your machine. If you are using Linux you will need to modify the paths in the code, as they use backslashes as directory separators (a cross-platform alternative is sketched after the list below). For the dataset variable you can use the following values to generate your dataset:

  • All: will generate a dataset that includes images of all the fruit types on the table.

  • Apple: will generate a dataset that includes images of apples on the table.

  • Orange: will generate a dataset that includes images of oranges on the table.

  • Lime: will generate a dataset that includes images of limes on the table.

Together, these images will make up our entire dataset.
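
If you would rather not hand-edit every backslash for Linux, a small helper built on os.path.join keeps the paths portable. This is a sketch rather than part of the provided omniverse.py; the directory layout is assumed to match the settings above.

# Sketch: build the output directory in a platform-neutral way.
# Assumes the same layout as the settings above (data/rendered/<timestamp>/<dataset>).
import datetime
import os

basepath = os.path.expanduser("~/Omniverse/Src")   # adjust to your project root
dataset = "All"

timestamp = datetime.datetime.now().strftime("%Y-%m-%d-%H-%M-%S")
output_dir = os.path.join(basepath, "data", "rendered", timestamp, dataset)

os.makedirs(output_dir, exist_ok=True)  # create the folder tree if it does not exist
print(output_dir)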

Table

The first function we come to in the code creates the table. Here we create the table from the USD file specified in the settings, ensure that items do not fall through it by using rep.physics.collider(), add mass to the object with rep.physics.mass(mass=100), and then modify the pose, which includes position and rotation. Finally, we register the randomizer.

def table():
    table = rep.create.from_usd(
        TABLE_ASSET, semantics=[('class', 'table')])

    with table:
        rep.physics.collider()
        rep.physics.mass(mass=100)
        rep.modify.pose(
            position=(0, 0, 0),
            rotation=(0, -90, -90),
        )
    return table
    
rep.randomizer.register(table)

For more information about using physics with Replicator, you can check out the NVIDIA documentation.

Lighting

Next, the code will take care of the lighting.

def rect_lights(num=1):
    lights = rep.create.light(
        light_type="rect",
        temperature=rep.distribution.normal(5500, 500),
        intensity=rep.distribution.normal(0, 50),
        position=(0, 250, 0),
        rotation=(-90, 0, 0),
        count=num
    )
    return lights.node

rep.randomizer.register(rect_lights)

def dome_lights(num=1):
    lights = rep.create.light(
        light_type="dome",
        temperature=rep.distribution.normal(5500, 500),
        intensity=rep.distribution.normal(0, 100),
        position=(0, 200, 18),
        rotation=(225, 0, 0),
        count=num
    )
    return lights.node

rep.randomizer.register(dome_lights)

For more information about using lights with Replicator, you can check out the NVIDIA documentation.

Fruits

The next function takes care of the fruits. Notice that we use a uniform distribution for the position, rotation, and scale, meaning each value within the specified ranges has an equal chance of being chosen. We also define a semantic class for the data here.

def randomize_asset(fpath, fclass, maxnum = 1):
    instances = rep.randomizer.instantiate(
        fpath, size=maxnum, mode='scene_instance')
    with instances:
        rep.physics.collider()
        rep.physics.mass(mass=100)
        rep.modify.semantics([('class', fclass)])
        rep.modify.pose(
            position=rep.distribution.uniform(
                (-15, 90, -15), (20, 90, 20)),
            rotation=rep.distribution.uniform(
                (-90, -180, -90), (90, 180, 90)),
            scale=rep.distribution.uniform((2.5),(3.5)),
        )
    return instances.node
    
rep.randomizer.register(randomize_asset)

For more information about using distributions with Replicator, you can check out the NVIDIA documentation.

Camera

Next we set up the camera, setting values for focus distance, focal length, position, rotation, and f-stop.

camera = rep.create.camera(
    focus_distance=90, focal_length=35,
    position=(0, 285, 0), rotation=(-90, 0, 0), f_stop=16)
render_product = rep.create.render_product(camera, (512, 512))

# FOR LIMES
#camera = rep.create.camera(
#    focus_distance=90, focal_length=35,
#   position=(0, 300, 0), rotation=(-90, 0, 0), f_stop=16)
#render_product = rep.create.render_product(camera, (512, 512))

camera2 = rep.create.camera(
    focus_distance=90, focal_length=30,
    position=(0, 275, 0), rotation=(-85, 0, 0), f_stop=16)
render_product2 = rep.create.render_product(camera2, (512, 512))

For more information about using cameras with Replicator, you can check out the NVIDIA documentation.

Basic Writer

The next code creates the writer, which writes our images to the specified location on our machine. Here we set the output_dir, rgb, and bounding box values.

writer = rep.WriterRegistry.get("BasicWriter")
writer.initialize(
    output_dir = basepath+'\\data\\rendered\\'+datetime.datetime.now().strftime("%Y-%m-%d-%H-%M-%S")+'\\'+dataset, 
    rgb=True, bounding_box_2d_tight=True)
writer.attach([render_product])

For more information about using writers with Replicator, you can check out the NVIDIA documentation.

Randomizing & Running

Finally, we set the randomizers to be triggered on every frame and then run them.

with rep.trigger.on_frame(num_frames=50):
    # Table
    rep.randomizer.table()
    # Lights
    rep.randomizer.rect_lights(1)
    rep.randomizer.dome_lights(1)
    # Fruit
    if dataset == "None":
        pass
    elif dataset == "All":
        for fclass, fpath in FRUIT.items():
            rep.randomizer.randomize_asset(fpath, fclass, 15)
    else:
        rep.randomizer.randomize_asset(FRUIT[dataset], dataset, 15)

rep.orchestrator.run()

Creating Our Dataset

Now that we have explored the code and updated our settings, it is time to run it and generate our dataset. With Omniverse™ Code open, copy the contents of omniverse.py and paste it into the Script Editor. Once you have done this, press the Run button or press Ctrl + Enter.

Remember to change the dataset variable to the relevant class and run the script for each of the 3 classes.

Head over to the data/rendered directory and you will find all of your generated data. Navigate through the various folders to view the created datasets.

Visualize Our Dataset

Next we will visualize our dataset, including the bounding boxes that were generated by the writer. In Visual Studio Code, open the project root and open the visualize.py file. Once it is open, open the terminal by clicking View -> Terminal.

Next, install the required software. In the terminal, enter the following commands:

pip3 install asyncio
pip3 install pillow
pip3 install numpy
pip3 install matplotlib

For each image you would like to visualize you will need to update the code with the path and number related to the image. At the bottom of visualize.py you will see the following code:

rgb_dir = "C:\\Users\\adam\\Desktop\\Omniverse\\Src\\data\\rendered\\V1\\2023-06-29-00-54-00\\Apple\\RenderProduct_Replicator\\rgb"
bbox_dir = "C:\\Users\\adam\\Desktop\\Omniverse\\Src\\data\\rendered\\V1\\2023-06-29-00-54-00\\Apple\\RenderProduct_Replicator\\bounding_box_2d_tight"
vis_out_dir = "C:\\Users\\adam\\Desktop\\Omniverse\\Src\\data\\visualize"

file_number = "0000"

The writer will save images with an incrementing number in the file name, such as rgb_0000.png, rgb_0001.png etc. To visualize your data simply increment the file_number variable.
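
To get a feel for what visualize.py does, the sketch below overlays the 2D bounding boxes onto a rendered frame. It assumes the BasicWriter's usual output layout (an rgb_<n>.png image, a bounding_box_2d_tight_<n>.npy structured array, and a matching bounding_box_2d_tight_labels_<n>.json class map); check the files in your own output folder, as names and fields can differ between Replicator versions.

# Sketch: overlay Replicator's 2D bounding boxes on a rendered frame.
# File and field names below are assumptions based on typical BasicWriter output;
# verify them against your own rendered dataset.
import json
import os

import numpy as np
from PIL import Image, ImageDraw

rgb_dir = "data/rendered/2023-06-29-00-54-00/Apple/RenderProduct_Replicator/rgb"
bbox_dir = "data/rendered/2023-06-29-00-54-00/Apple/RenderProduct_Replicator/bounding_box_2d_tight"
file_number = "0000"

image = Image.open(os.path.join(rgb_dir, f"rgb_{file_number}.png")).convert("RGB")
boxes = np.load(os.path.join(bbox_dir, f"bounding_box_2d_tight_{file_number}.npy"))
with open(os.path.join(bbox_dir, f"bounding_box_2d_tight_labels_{file_number}.json")) as f:
    labels = json.load(f)  # maps semantic id -> class info

draw = ImageDraw.Draw(image)
for box in boxes:
    # Assumed structured-array fields: semanticId, x_min, y_min, x_max, y_max
    sem_id = str(box["semanticId"])
    draw.rectangle(
        [box["x_min"], box["y_min"], box["x_max"], box["y_max"]],
        outline="red", width=2,
    )
    draw.text((box["x_min"], box["y_min"]), str(labels.get(sem_id, sem_id)), fill="red")

image.save("visualized_0000.png")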

You can now run the following command, ensuring you are in the project root directory.

python visualize.py

You should see an output image with bounding boxes drawn around each piece of fruit.

Creating Our Model

Now it is time to head over to Edge Impulse and create our machine learning pipeline.

Log in or create an account on Edge Impulse and then create a new project. Once created, scroll down on the project home page to the Project Info area and make sure to change Labeling method to Bounding Boxes (Object Detection) and Target Device to Jetson Nano. Now scroll down to the Performance Settings and ensure that Use GPU for training and Enterprise performance are selected, if you have those options.

Connect Your Device

You need to install the required dependencies that will allow you to connect your device to the Edge Impulse platform. This process is documented on the Edge Impulse website and includes:

  • Running the Edge Impulse NVIDIA Jetson Nano setup script

  • Connecting your device to the Edge Impulse platform

Once the software has been installed, enter the following command:

edge-impulse-linux

If you are already connected to an Edge Impulse project, use the following command:

edge-impulse-linux --clean

Follow the instructions to log in to your Edge Impulse account.

Once complete, head over to the Devices tab of your project and you should see the connected device.

Upload Data

Unfortunately Omniverse does not generate bounding boxes in the format that Edge Impulse requires, so for this project we will upload the data and then label it in Edge Impulse Studio.

We will start with the Apple class. Head over to the Data Acquisition page, select your 50 apple images, and click upload.
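
If you prefer to script the upload instead of using the Studio UI, something along these lines should work with the Edge Impulse ingestion API. Treat it as a sketch: the API key comes from your project's Dashboard, and the directory, label, and file pattern below are example values.

# Sketch: bulk-upload rendered PNGs to Edge Impulse via the ingestion API.
# EI_API_KEY, IMAGE_DIR, and LABEL are placeholders you would replace.
import glob
import os

import requests

EI_API_KEY = "ei_..."  # your project API key from the Studio Dashboard
IMAGE_DIR = "data/rendered/2023-06-29-00-54-00/Apple/RenderProduct_Replicator/rgb"
LABEL = "apple"

for path in sorted(glob.glob(os.path.join(IMAGE_DIR, "*.png"))):
    with open(path, "rb") as f:
        response = requests.post(
            "https://ingestion.edgeimpulse.com/api/training/files",
            headers={"x-api-key": EI_API_KEY, "x-label": LABEL},
            files={"data": (os.path.basename(path), f, "image/png")},
        )
    response.raise_for_status()  # stop if the upload was rejected
    print("Uploaded", os.path.basename(path))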

Labelling Data

Next, head over to the Labelling Queue page. Here you can draw boxes around each fruit in every image and add labels, then repeat these steps for each of the classes.

Note that the Edge Impulse platform will attempt to track objects across frames; in some cases it creates duplicates or adds incorrect bounding boxes. Make sure you delete or modify these incorrect bounding boxes to avoid problems further down the line.

Once you have completed the apples data, repeat the steps for the oranges and limes images.

Once you have finished labelling the data you should have 150 images, each with around 15 pieces of fruit labelled, and a train/test split of 80/20.

Create Impulse

Now it is time to create our Impulse. Head over to the Impulse Design tab and click on the Create Impulse tab. Here you should set the Image Width and Image Height to 512. Next add an Image block in the Processing Blocks section, then select YOLOv5 in the Learning Blocks section, and finally click Save Impulse.

Parameters & Features

Next click on the Images tab and click Save Parameters; you will be redirected to the Features page. Once there, click Generate Features. You should see that your features are nicely grouped, which is what we are looking for to achieve satisfactory results.

Training

Now it is time to train our model. Head over to the YOLOv5 tab, leave all the settings as they are aside from the training cycles, which I set to 750, then click Start Training. This will take a while, so grab a coffee.

Once training is finished, we see that we achieved an exceptional F1 score of 97.2%.

Testing

Now it is time to test our model. There are a few ways we can test through Edge Impulse Studio before carrying out the ultimate test: on-device testing.

Platform Testing

Platform testing went very well, and our model achieved 99.24% accuracy on the Test (unseen) data.

Platform Live Testing

To carry out live testing through the Edge Impulse Studio, connect to your Jetson Nano and enter the following command:

edge-impulse-linux

Once your device is connected, you can access the camera and do some real-time testing via the platform.

In my case, live testing through Edge Impulse Studio also went very well, classifying each fruit correctly.

On-Device Testing

The final test is on-device testing. For this we need to download the model and build it on our Jetson Nano. Luckily, Edge Impulse makes this a very easy task. If you are still connected to the platform, disconnect, and then enter the following command:

edge-impulse-linux-runner

This will download the model, build it, and then start classifying, ready for you to introduce some fruit.
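
If you would rather drive the downloaded model from your own Python code instead of the generic runner, the edge_impulse_linux SDK provides an ImageImpulseRunner. The following is a sketch under assumptions: the .eim path, camera index, and printed fields are placeholders you would adapt to your own setup.

# Sketch: run the exported .eim model from Python with the Edge Impulse Linux SDK.
# Requires: pip3 install edge_impulse_linux opencv-python
import cv2
from edge_impulse_linux.image import ImageImpulseRunner

MODEL_PATH = "modelfile.eim"  # placeholder path to your downloaded .eim model

with ImageImpulseRunner(MODEL_PATH) as runner:
    model_info = runner.init()
    print("Loaded", model_info["project"]["name"])

    camera = cv2.VideoCapture(0)  # default camera; adjust the index as needed
    ok, frame = camera.read()
    if ok:
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        features, cropped = runner.get_features_from_image(rgb)
        result = runner.classify(features)
        # Object detection models return bounding boxes with label, confidence, and position
        for bb in result["result"].get("bounding_boxes", []):
            print(f'{bb["label"]} ({bb["value"]:.2f}) at x={bb["x"]}, y={bb["y"]}')
    camera.release()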

In my case the model performed extremely well, easily classifying each fruit correctly.

Conclusion

In this project, we utilized NVIDIA's state-of-the-art technology to generate a fully synthetic fruit dataset. The dataset was imported into Edge Impulse Studio, where we developed a highly accurate object detection model. Finally, we deployed the model to our NVIDIA Jetson Nano.

The outcomes clearly demonstrate the effectiveness of NVIDIA's Replicator as a robust tool for domain randomization and the creation of synthetic datasets. This approach significantly accelerates the data collection process and facilitates the development of synthetic datasets that generalize well to real-world data.

By combining Replicator with Edge Impulse Studio, we have harnessed a cutting-edge solution that empowers us to rapidly and efficiently build reliable object detection solutions. This powerful combination holds immense potential for addressing various challenges across different domains.

Once again, a big thank you to NVIDIA for their support in this project. It has been an amazing experience learning how to use Omniverse in an Edge Impulse pipeline. Keep an eye out for future projects.
