Data sources

The data sources page is about much more than adding data from external sources. It lets you create complete, automated data pipelines to support your active learning strategies.

From there, you can import datasets from existing cloud storage buckets, automate and schedule the imports, and trigger actions such as exploring and labeling your new data, retraining your model, automatically building a new deployment, and more.

Run transformation jobs directly from your projects

You can also trigger cloud jobs, known as transformation blocks. These are particularly useful if you want to generate synthetic datasets or automate tasks using the Edge Impulse API. We provide several pre-built transformation blocks for organizations' projects:

  • DALL-E 3 Image Generation Block

  • Whisper Voice Synthesis Block

  • Find best Visual AD model

This view, originally accessible from the main left menu, has been moved to the Data acquisition tab for better clarity. The screenshots have not yet been updated.

Add a data source

Click on + Add new data source and select where your data lives:

You can either use:

  • AWS S3 buckets

  • Google Cloud Storage

  • Any S3-compatible bucket

  • Don't import data (if you just need to create a pipeline)

Click on Next, provide credentials:

Click on Verify credentials:

Here, you have several options to automatically label your data:

Infer from folder name

In the example above, the structure of the folder is the following:

.
├── cars
│   ├── cars.01741.jpg
│   ├── cars.01743.jpg
│   ├── cars.01745.jpg
│   ├── ... (400 items)
├── unknown
│   ├── unknown.test_2547.jpg
│   ├── unknown.test_2548.jpg
│   ├── unknown.test_2549.jpg
│   ├── ... (400 items)
└── unlabeled
    ├── cars.02066.jpg
    ├── cars.02067.jpg
    ├── cars.02068.jpg
    └── ... (14 items)

3 directories, 814 files

The labels will be picked from the folder names, and the samples will be split between your training and testing sets using an 80/20 ratio.

The samples present in an unlabeled/ folder will be kept unlabeled in Edge Impulse Studio.

Alternatively, you can also organize your folder using the following structure to automatically split your dataset between training and testing sets:

.
├── testing
│   ├── cars
│   │   ├── cars.00012.jpg
│   │   ├── cars.00031.jpg
│   │   ├── cars.00035.jpg
│   │   └── ... (~150 items)
│   └── unknown
│       ├── unknown.test_1012.jpg
│       ├── unknown.test_1026.jpg
│       ├── unknown.test_1027.jpg
│       ├── ... (~150 items)
├── training
│   ├── cars
│   │   ├── cars.00006.jpg
│   │   ├── cars.00025.jpg
│   │   ├── cars.00065.jpg
│   │   └── ... (~600 items)
│   └── unknown
│       ├── unknown.test_1002.jpg
│       ├── unknown.test_1005.jpg
│       └── unknown.test_46.jpg
│       └── ... (~600 items)
└── unlabeled
    ├── cars.02066.jpg
    ├── cars.02067.jpg
    ├── cars.02068.jpg
    └── ... (14 items)

7 directories, 1512 files
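
Before syncing, it can be handy to sanity-check that a local mirror of this layout has the counts you expect per split and label. Here is a small, optional Python sketch; the dataset folder name is an assumption for this example:

from collections import Counter
from pathlib import Path

# Assumed local mirror of the bucket layout shown above.
dataset_root = Path("dataset")

counts = Counter()
for split in ("training", "testing", "unlabeled"):
    for path in (dataset_root / split).rglob("*.jpg"):
        # For training/testing the parent folder name is the label;
        # files directly under unlabeled/ simply count as "unlabeled".
        counts[(split, path.parent.name)] += 1

for (split, label), n in sorted(counts.items()):
    print(f"{split:10s} {label:10s} {n}")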

Infer from file name

When using this option, only the file name is taken into account. The part before the first . will be used to set the label. For example, cars.01741.jpg will set the label to cars.
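
As an illustration, here is a minimal Python sketch of that labeling rule (this is not Edge Impulse code, just the same logic applied to the example file names):

# Label = everything before the first "." in the file name.
def label_from_filename(filename: str) -> str:
    return filename.split(".", 1)[0]

print(label_from_filename("cars.01741.jpg"))         # -> cars
print(label_from_filename("unknown.test_2547.jpg"))  # -> unknown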

Keep the data unlabeled

All the data samples will be unlabeled; you will need to label them manually before using them.

Finally, click on Next, post-sync actions.

From this view, you can automate several actions:

  • Recreate data explorer

    The data explorer gives you a one-look view of your dataset, letting you quickly label unknown data. If you enable this you'll also get an email with a screenshot of the data explorer whenever there's new data.

  • Retrain model

    If needed, this will retrain your model with the same impulse. If you enable this you'll also get an email with the new validation and test set accuracy.

    Note: You will need to have trained your project at least once.

  • Create new version

    Stores all data, configuration, intermediate results and final models.

  • Create new deployment

    Builds a new library or binary with your updated model. Requires 'Retrain model' to also be enabled.

You can also define who can receive the email. The users have to be part of your project. See: Dashboard -> Collaboration.

Run the pipeline

Once your pipeline is set up, you can run it directly from the UI, trigger it from external sources, or schedule the task.

Run the pipeline from the UI

To run your pipeline from Edge Impulse studio, click on the ⋮ button and select Run pipeline now.

Run the pipeline from code

To get the code needed to run your pipeline programmatically, click on the ⋮ button and select Run pipeline from code. This will display an overlay with curl, Node.js and Python code samples.

You will need to create an API key to run the pipeline from code.
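
Triggering the pipeline is just an authenticated HTTP request against the Edge Impulse API. Below is a minimal Python sketch using the requests library; the URL is a placeholder, so copy the exact endpoint and IDs from the Run pipeline from code overlay rather than relying on the path shown here:

import requests

# Placeholder values: take the real URL and pipeline ID from the
# "Run pipeline from code" overlay in the Studio.
API_KEY = "ei_..."  # API key created for your project
PIPELINE_RUN_URL = "https://studio.edgeimpulse.com/v1/api/<id>/pipelines/<pipeline-id>/run"

response = requests.post(PIPELINE_RUN_URL, headers={"x-api-key": API_KEY})
response.raise_for_status()
print(response.json())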

Schedule your pipeline jobs

By default, your pipeline will run every day. To schedule your pipeline jobs, click on the ⋮ button and select Edit pipeline.

Free users can only run the pipeline every 4 hours. If you are an enterprise customer, you can run this pipeline up to every minute.

Once the pipeline has successfully finished, you will receive an email like the following:

Webhooks

Another useful feature is to create a webhook to call a URL when the pipeline has run. It will send a POST request containing the following information:

{
    "organizationId":XX,
    "pipelineId":XX,
    "pipelineName":"Import data from portal \"Data sources demo\"",
    "projectId":XXXXX,
    "success":true,
    "newItems":0,
    "newChecklistOK":0,
    "newChecklistFail":0
}
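
If you want to act on that payload, any small HTTP server will do. Here is a minimal receiver sketch using Flask; the route and port are arbitrary choices for this example, not something Edge Impulse requires:

from flask import Flask, request

app = Flask(__name__)

# Point the webhook URL configured in the Studio at this endpoint.
@app.route("/edge-impulse-pipeline", methods=["POST"])
def pipeline_finished():
    payload = request.get_json()
    if payload.get("success"):
        print(f"Pipeline '{payload['pipelineName']}' finished, "
              f"{payload['newItems']} new item(s)")
    else:
        print(f"Pipeline '{payload.get('pipelineName')}' failed")
    return "", 200

if __name__ == "__main__":
    app.run(port=8000)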

Edit your pipeline

As of today, if you want to update your pipeline, you need to edit the configuration JSON available in ⋮ -> Run pipeline from code.

Here is an example of what you can get if all the actions have been selected:

[
    {
        "name": "Fetch data from s3://data-pipeline/data-pipeline-example/infer-from-folder/",
        "builtinTransformationBlock": {
            "type": "s3-to-project",
            "endpoint": "https://s3.your-endpoint.com",
            "path": "s3://data-pipeline/data-pipeline-example/infer-from-folder/",
            "region": "fr-par",
            "accessKey": "XXXXX",
            "category": "split",
            "labelStrategy": "infer-from-folder-name",
            "secretKeyEncrypted": "xxxxxx"
        }
    },
    {
        "name": "Refresh data explorer",
        "builtinTransformationBlock": {
            "type": "project-action",
            "refreshDataExplorer": true
        }
    },
    {
        "name": "Retrain model",
        "builtinTransformationBlock": {
            "type": "project-action",
            "retrainModel": true
        }
    },
    {
        "name": "Create new version",
        "builtinTransformationBlock": {
            "type": "project-action",
            "createVersion": true
        }
    },
    {
        "name": "Create on-device deployment (C++ library)",
        "builtinTransformationBlock": {
            "type": "project-action",
            "buildBinary": "zip",
            "buildBinaryModelType": "int8"
        }
    }
]

Free projects only have access to the builtinTransformationBlock steps shown above.

If you are part of an organization, you can also use your custom transformation jobs in the pipeline. In your organization workspace, go to Custom blocks -> Transformation and select Run job on the job you want to add. Then select Copy as pipeline step and paste it into the configuration JSON.
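
If you prefer to prepare that JSON outside of the Studio before pasting it back, a small Python sketch like the one below may help; the pipeline.json file name is just a local convention for this example, and the appended step is one of the built-in actions shown above:

import json

# Load the steps copied from the "Run pipeline from code" overlay.
with open("pipeline.json") as f:
    steps = json.load(f)

# Append another built-in action, e.g. creating a new version after the import.
steps.append({
    "name": "Create new version",
    "builtinTransformationBlock": {
        "type": "project-action",
        "createVersion": True
    }
})

with open("pipeline.json", "w") as f:
    json.dump(steps, f, indent=4)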
