Collecting image data with the OpenMV Cam H7 Plus

This page is part of the Adding sight to your sensors tutorial and describes how you can use the OpenMV Cam H7 Plus to build a dataset and import the data into Edge Impulse.

1. Setting up your environment

To set up your OpenMV camera and collect some data:

  1. Install the OpenMV IDE.

  2. Follow the OpenMV hardware setup guide to clean the sensor and focus the lens.

  3. Connect a micro-USB cable to the camera, and open the OpenMV IDE. The camera should automatically update to the latest firmware.

  4. Verify that the camera can capture live images by clicking the Connect button in the bottom left corner, then pressing Play to run the application.

A live feed from your camera will be displayed in the top right corner of the IDE.

2. Collecting images

Once your camera is up and running, it's time to start capturing images and building your dataset.

First, set up a new dataset via Tools > Dataset Editor > New Dataset.

This opens the 'Dataset editor' panel on the left side and the 'dataset capture script' in the main panel of the IDE. Here, create three classes by clicking the 'New class folder' icon: "plant", "lamp" and "unknown". It's important to add an unknown class containing random images that are neither lamps nor plants.

As we'll build a model that takes in square images, change the 'Dataset capture script' to read:

import sensor, image, time

sensor.reset()                      # Reset and initialize the sensor.
sensor.set_pixformat(sensor.RGB565) # Color images; modify as you like.
sensor.set_framesize(sensor.QVGA)   # 320x240 resolution; modify as you like.
sensor.set_windowing((240, 240))    # Crop to a square 240x240 window.
sensor.skip_frames(time = 2000)     # Give the sensor time to settle.

clock = time.clock()

while(True):
    clock.tick()
    img = sensor.snapshot()         # Capture a frame.
    print(clock.fps())              # Print the frame rate to the serial terminal.

Now you can capture data for the three classes.

  1. Click the Play icon to run the 'dataset capture script' on your OpenMV camera.

  2. Select one of the classes by clicking on the folder name in the 'Dataset editor'.

  3. Take a snapshot by clicking the Capture data (camera icon) button.

Do this until you have captured 30 images per class from a variety of angles. Also make sure to vary the things you capture for the unknown class.
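
If you prefer not to click Capture data once per image, you can also script the capture loop on the camera itself. The sketch below is one way to do it, assuming an SD card is mounted and a "plant" directory already exists on it (both assumptions, not part of this tutorial; adjust the class name per run):

import sensor, time

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.set_windowing((240, 240))         # Same square crop as the capture script above.
sensor.skip_frames(time = 2000)

for i in range(30):                      # 30 images per class, as suggested above.
    img = sensor.snapshot()
    img.save("plant/plant.%03d.jpg" % i) # Save the frame to the SD card.
    time.sleep_ms(1000)                  # One second to reposition the camera between shots.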

3. Sending the dataset to Edge Impulse

To import the dataset into Edge Impulse, go to Tools > Dataset Editor > Export > Upload to Edge Impulse project.

Then, choose the project name and the split between training and testing data (we recommend keeping this at 80/20).

A duplicate check runs when you upload new data, so you can upload your dataset multiple times (for example, when you've added new files) without adding the same data twice.
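
The IDE handles the upload for you, but the same images can also be pushed from a script through Edge Impulse's ingestion service. Here is a minimal sketch, assuming the exported class folders sit next to the script and that your project's API key is stored in an EI_API_KEY environment variable (both assumptions, not part of this tutorial):

import os
import requests

API_KEY = os.environ["EI_API_KEY"]  # Assumption: project API key set in the environment.
INGESTION_URL = "https://ingestion.edgeimpulse.com/api/training/files"

for label in ("plant", "lamp", "unknown"):
    for fname in sorted(os.listdir(label)):
        with open(os.path.join(label, fname), "rb") as f:
            res = requests.post(
                INGESTION_URL,
                headers={"x-api-key": API_KEY, "x-label": label},
                files={"data": (fname, f, "image/jpeg")},
            )
        res.raise_for_status()
        print("Uploaded", label + "/" + fname)

This example sends everything to the training category; the train/test split can be adjusted afterwards in the Studio.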

Training and testing data split

The split between training and testing data is based on a hash of the file, so the process is deterministic. As a consequence, you may not get a perfect 80/20 split, but samples are always placed in the same category.
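
To make that concrete, here is an illustrative sketch (not Edge Impulse's actual implementation) of how hashing file contents produces a deterministic split:

import hashlib

def assign_split(file_bytes, train_fraction=0.8):
    # The digest depends only on the file contents, so the same file
    # always lands in the same category, no matter how often it is uploaded.
    digest = hashlib.sha256(file_bytes).digest()
    return "training" if digest[0] / 255.0 < train_fraction else "testing"

Because each file is assigned independently, the realized split hovers around 80/20 rather than matching it exactly.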

Your dataset now appears under the Data acquisition section of your project.

You can now go back to the Image classification tutorial to build your machine learning model.
