Trigger connected board data sampling


Last updated 1 year ago

1. Obtain an API key from your project

Your project API key enables programmatic access to Edge Impulse. You can create or retrieve a key from your project's Dashboard, under the Keys tab. API keys are long strings that start with ei_:

Project API Key
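Because keys always begin with ei_, a quick sanity check before making API calls can catch copy-paste mistakes early. A minimal sketch (the length threshold here is an arbitrary assumption, not a documented Edge Impulse rule):

```python
def looks_like_api_key(key: str) -> bool:
    """Rough shape check for an Edge Impulse API key.

    Real keys are long strings starting with "ei_"; the minimum
    length used here is an arbitrary guess, not a documented rule.
    """
    return key.startswith("ei_") and len(key) > 20


print(looks_like_api_key("ei_" + "a" * 60))  # plausible key shape -> True
print(looks_like_api_key("not-a-key"))       # -> False
```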

2. Connect your development kit to your project

Open a terminal and run the Edge Impulse daemon. The daemon is the service that connects your hardware with any Edge Impulse project:

edge-impulse-daemon --api-key <your project API key>

3. Obtain your project's ID

Copy your project's ID from the project's Dashboard under the Project Info section:

4. Set up the API connection

Run the following code; enter your project ID and your API key when prompted:

import requests
import getpass
import json

URL_STUDIO = "https://studio.edgeimpulse.com/v1/api/"
PROJECT_ID = int(input('Enter your Project ID: '))
AUTH_KEY = getpass.getpass('Enter your API key: ')


def check_response(response, debug=False):
    if not response.ok:
        raise RuntimeError("⛔️ Error\n%s" % response.text)
    else:
        if debug:
            print(response)
        return response


def do_get(url, auth, debug=False):
    if debug:
        print(url)
    response = requests.get(url,
                            headers={
                                "Accept": "application/json",
                                "x-api-key": auth
                            })
    return check_response(response, debug)


def parse_response(response, key=""):
    # Parse the JSON body once and reuse it
    parsed = json.loads(response.text)
    if not parsed["success"]:
        raise RuntimeError(parsed["error"])
    if key == "":
        return parsed
    return parsed[key]


def get_project(project_id, project_auth, debug=False):
    response = do_get(URL_STUDIO + str(project_id), project_auth)
    return parse_response(response, "project")


print("Project %s is accessible" % get_project(PROJECT_ID, AUTH_KEY)["name"])

5. Get the ID of the connected device

# https://studio.edgeimpulse.com/v1/api/{projectId}/devices

def get_devices(project_id, project_auth, debug=False):
    response = do_get(URL_STUDIO + str(project_id) + "/devices", project_auth)
    return parse_response(response, "devices")


device_id = ""
for device in get_devices(PROJECT_ID, AUTH_KEY):
    # if device["remote_mgmt_connected"] and device["supportsSnapshotStreaming"]:
    if device["remote_mgmt_connected"]:
        device_id = device["deviceId"]
        print("Found %s (type %s, id: %s)" %
              (device["name"], device["deviceType"], device_id))
        break
if device_id == "":
    print("Could not find a connected device!")
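The selection logic above can be factored into a small helper that is easy to exercise against mock device records shaped like the API response (field names as shown in the loop above):

```python
def pick_connected_device(devices, require_snapshot=False):
    """Return the deviceId of the first usable device, or None.

    Assumes each record carries the fields used in the loop above:
    "remote_mgmt_connected", "supportsSnapshotStreaming", "deviceId".
    """
    for device in devices:
        if not device["remote_mgmt_connected"]:
            continue
        if require_snapshot and not device.get("supportsSnapshotStreaming"):
            continue
        return device["deviceId"]
    return None


# Mock records shaped like the API response:
mock = [
    {"deviceId": "aa:bb", "remote_mgmt_connected": False,
     "supportsSnapshotStreaming": True},
    {"deviceId": "cc:dd", "remote_mgmt_connected": True,
     "supportsSnapshotStreaming": False},
]
print(pick_connected_device(mock))        # -> cc:dd
print(pick_connected_device(mock, True))  # -> None
```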

6. Trigger data sampling

# https://studio.edgeimpulse.com/v1/api/{projectId}/device/{deviceId}/start-sampling

SAMPLE_CATEGORY = "testing"
SAMPLE_LENGTH_MS = 20000
SAMPLE_LABEL = "squat"

def do_post(url, payload, auth, debug=False):
    if debug:
        print(url)
    response = requests.post(url,
                             headers={
                                 "Accept": "application/json",
                                 "x-api-key": auth
                             },
                             json=payload)
    return check_response(response, debug)


def collect_sample(project_id, device_id, project_auth, debug=False):
    payload = {
        "category": SAMPLE_CATEGORY,
        # "Microphone", "Inertial", "Environmental" or "Inertial + Environmental"
        "sensor": "Inertial",
        # Sampling interval in ms: the inverse of the sampling frequency in Hz
        "intervalMs": 10,
        "label": SAMPLE_LABEL,
        "lengthMs": SAMPLE_LENGTH_MS
    }
    response = do_post(
        URL_STUDIO + str(project_id) + "/device/" + str(device_id) +
        "/start-sampling", payload, project_auth, debug)
    return parse_response(response, "id")


print("Sample request returned", collect_sample(PROJECT_ID, device_id, AUTH_KEY))
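Because intervalMs is the inverse of the sampling frequency, you can sanity-check a payload before sending it, for example to confirm how many readings per axis a request should produce. A small sketch (pure arithmetic, no API call; the helper names are illustrative):

```python
def expected_readings(length_ms: int, interval_ms: int) -> int:
    """Number of readings per axis a sampling request should yield."""
    return length_ms // interval_ms


def interval_for_frequency(freq_hz: float) -> float:
    """intervalMs is the inverse of the sampling frequency in Hz."""
    return 1000.0 / freq_hz


# The payload above: 20 s at 10 ms intervals (100 Hz) -> 2000 readings.
print(expected_readings(20000, 10))
print(interval_for_frequency(100))
```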