
Metadata


You can add arbitrary metadata to data items. You can use this, for example, to track on which site data was collected, where data was imported from, or where the machine that generated the data was placed. Some key use cases for metadata are:

  1. Prevent leaking data between your train and validation set. See Using metadata to control your train/validation split below.

  2. Synchronisation actions in data pipelines, for example to remove data in a project if the source data was deleted in the cloud.

  3. Get a better understanding of real-world accuracy by seeing how well your model performs when grouped by a metadata key, e.g. whether data from site A performs better than data from site B.

Viewing and editing metadata in the Studio

Metadata is shown on Data acquisition when you click on a data item. From here you can add, edit and remove metadata keys.

Adding metadata when adding data

It's impractical to manually add metadata to each data item, so the easiest way is to add metadata when you upload data. You can do this either by:

  1. Setting the x-metadata header to a JSON string when calling the ingestion service:

curl -X POST \
     -H "x-api-key: ei_238fae..." \
     -H "x-label: car" \
     -H "x-metadata: '{\"site\":\"Paris\"}' \
     -H "Content-Type: multipart/form-data" \
     -F "data=@one.png" \
     https://ingestion.edgeimpulse.com/api/training/files

  2. Providing an info file when uploading data (this works both in the CLI and in the Studio).
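
For reference, a minimal sketch of what an info file carrying metadata could look like is shown below. The exact schema is described in the uploader documentation; the path, label, and site values here are placeholders for illustration.

{
    "version": 1,
    "files": [
        {
            "path": "one.png",
            "category": "training",
            "label": { "type": "label", "label": "car" },
            "metadata": { "site": "Paris" }
        }
    ]
}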

Reading and writing metadata through the API

You can read samples, including their metadata, via the List samples API call, and then use the Set sample metadata API call to update the metadata. For example, this is how you add a metadata field to the first data sample in your project using the Python API Bindings:

import edgeimpulse_api as ei

# update project ID / API Key
EI_PROJECT_ID = 1
EI_API_KEY = "ei_8b8..."

# instantiate the API client
configuration = ei.Configuration()
configuration.api_key["ApiKeyAuthentication"] = EI_API_KEY

api = ei.ApiClient(configuration)
raw_data = ei.RawDataApi(api)

# fetch the first page of data
samples = raw_data.list_samples(project_id=EI_PROJECT_ID, category='training', offset=0, limit=20)

# grab the current metadata
metadata = samples.samples[0].metadata
print('first sample metadata is', metadata)

# add an extra key
metadata = metadata if metadata else {}
metadata['hello'] = 'world'

# update metadata
raw_data.set_sample_metadata(project_id=EI_PROJECT_ID,
                             sample_id=samples.samples[0].id,
                             set_sample_metadata_request=ei.SetSampleMetadataRequest(metadata=metadata))
print('updated metadata!')

Using metadata to control your train/validation split

When training an ML model we split your data into a train set and a validation set. This is done so that during training you can evaluate whether your model works on data it has seen before (the train set) and on data it has never seen before (the validation set). Ideally your model performs similarly well on both sets: a sign that it will perform well in the field on completely novel data.

However, this can give a false sense of security if data that is very similar ends up in both your train and validation set ("data leakage"). For example:

  • You split a video into individual frames. These images don't differ much from frame to frame, and you don't want some frames in the train set and others in the validation set.

  • You're building a sleep staging algorithm and look at 30-second windows. From window to window the data for one person will look similar, so you don't want one window in the train set and another in the validation set for the same person on the same night.

By default we split your training data randomly into a train and validation set (80/20 split). This does not prevent data leakage, but if you tag your data items with metadata you can avoid it. To do so:

  1. Tag all your data items with metadata.

  2. Go to any ML block and under Advanced training settings set 'Split train/validation set on metadata key' to a metadata key (e.g. video_file).

Now every data item with the same metadata value for video_file will always be grouped together in either the train or the validation set, so there is no more data leakage.
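
As a sketch of how you might tag data in bulk, the snippet below reuses the Python API Bindings from the previous section to derive a video_file key from each sample's filename and write it back as metadata. It assumes each sample in the List samples response exposes filename, id, and metadata fields as in the earlier example; the filename-parsing rule (video1.00042 -> video1) is a hypothetical naming scheme, so adapt it to your own data.

import edgeimpulse_api as ei

EI_PROJECT_ID = 1
EI_API_KEY = "ei_8b8..."

configuration = ei.Configuration()
configuration.api_key["ApiKeyAuthentication"] = EI_API_KEY
api = ei.ApiClient(configuration)
raw_data = ei.RawDataApi(api)

# page through all training samples
offset, limit = 0, 100
while True:
    page = raw_data.list_samples(project_id=EI_PROJECT_ID, category='training',
                                 offset=offset, limit=limit)
    if not page.samples:
        break
    for sample in page.samples:
        # hypothetical naming scheme: "video1.00042" -> group key "video1"
        metadata = sample.metadata if sample.metadata else {}
        metadata['video_file'] = sample.filename.split('.')[0]
        raw_data.set_sample_metadata(
            project_id=EI_PROJECT_ID,
            sample_id=sample.id,
            set_sample_metadata_request=ei.SetSampleMetadataRequest(metadata=metadata))
    offset += limit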
