Python API bindings example

The Python SDK is built on top of the Edge Impulse Python API bindings, which are distributed as the edgeimpulse_api package. These are Python wrappers for all of the web API calls that you can use to interact with Edge Impulse projects programmatically (i.e. without needing to use the Studio graphical interface).

The API reference guide for using the Python API bindings can be found in the Edge Impulse API documentation.

This example will walk you through the process of using the Edge Impulse API bindings to upload data, define an impulse, process features, train a model, and deploy the impulse as a C++ library.

You will need to obtain an API key from an Edge Impulse project. Log into edgeimpulse.com and create a new project. Open the project, navigate to Dashboard, and click on the Keys tab to view your API keys. Double-click on the API key to highlight it, right-click, and select Copy.

After creating your project and copying the API key, feel free to leave the project open in a browser window so you can watch the changes as we make API calls. You might need to refresh the browser after each call to see the changes take effect.

Important! This notebook will add data to your project and remove any existing features and models. We highly recommend creating a new project when running this notebook! Don't say we didn't warn you if you mess up an existing project.

# Install the Edge Impulse API bindings and the requests package
!python -m pip install edgeimpulse-api requests
import json
import re
import os
import pprint
import time

import requests
# Import the API objects we plan to use
from edgeimpulse_api import (
    ApiClient,
    BuildOnDeviceModelRequest,
    Configuration,
    DeploymentApi,
    DSPApi,
    DSPConfigRequest,
    GenerateFeaturesRequest,
    Impulse,
    ImpulseApi,
    JobsApi,
    ProjectsApi,
    SetKerasParameterRequest,
    StartClassifyJobRequest,
    UpdateProjectRequest,
)

Note that you do not actually need to use the project in the Edge Impulse Studio. We just need the API Key.

Paste that API key string into the API_KEY value in the following cell:

# Settings
API_KEY = "ei_dae2..." # Change this to your Edge Impulse API key
API_HOST = "https://studio.edgeimpulse.com/v1"
DATASET_PATH = "dataset/gestures"
OUTPUT_PATH = "."

Initialize API clients

The Python API bindings use a series of submodules, each encapsulating one of the API subsections (e.g. Projects, DSP, Learn, etc.). To use these submodules, you need to instantiate a generic API module and use that to instantiate the individual API objects. We'll use these objects to make the API calls later.

To configure a client, you generally create a configuration object (often from a dict) and then pass that object as an argument to the client.

# Create top-level API client
config = Configuration(
    host=API_HOST,
    api_key={"ApiKeyAuthentication": API_KEY}
)
client = ApiClient(config)

# Instantiate sub-clients
deployment_api = DeploymentApi(client)
dsp_api = DSPApi(client)
impulse_api = ImpulseApi(client)
jobs_api = JobsApi(client)
projects_api = ProjectsApi(client)

Initialize project

Before uploading data, we should make sure the project is in the regular impulse flow mode, rather than BYOM mode. We'll also need the project ID for most of the other API calls in the future.

Notice that the general pattern for calling API functions is to instantiate a configuration/request object and pass it to the API method that belongs to the relevant submodule. You can find which parameters a specific API call expects by looking at that call's documentation page.

API calls (links to associated documentation):

Projects / List (active) projects
Projects / Update project

# Get the project ID, which we'll need for future API calls
response = projects_api.list_projects()
if not hasattr(response, "success") or getattr(response, "success") is False:
    raise RuntimeError("Could not obtain the project ID.")
else:
    project_id = response.projects[0].id

# Print the project ID
print(f"Project ID: {project_id}")
# Create request object with the required parameters
update_project_request = UpdateProjectRequest.from_dict({
    "inPretrainedModelFlow": False,
})

# Update the project and check the response for errors
response = projects_api.update_project(
    project_id=project_id,
    update_project_request=update_project_request,
)
if not hasattr(response, "success") or getattr(response, "success") is False:
    raise RuntimeError("Could not update project.")
else:
    print("Project is now in impulse workflow.")

Upload dataset

We'll start by downloading the gesture dataset from the Edge Impulse CDN (see the wget command below). Note that the ingestion API is separate from the regular Edge Impulse API: the URL and interface are different. As a result, we must construct the request manually and cannot rely on the Python API bindings.

We rely on the ingestion service using the string before the first period in the filename to determine the label. For example, "idle.1.cbor" will be automatically assigned the label "idle." If you wish to set a label manually, you must specify the x-label parameter in the headers. Note that you can only define a label this way when uploading a group of data at a time. For example, setting "x-label": "idle" in the headers would give all data uploaded with that call the label "idle."
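
For example, a batch of files whose names do not encode labels could be uploaded with an explicit x-label header. This is a minimal sketch only (the file name and label are placeholders); the actual upload helper used in this notebook follows below.

# Hypothetical sketch: force the label "idle" for every file in this request.
# The file name is a placeholder; API_KEY and requests come from earlier cells.
headers = {
    "x-api-key": API_KEY,
    "x-label": "idle",
    "x-disallow-duplicates": "true",
}
with open("my-sample-01.cbor", "rb") as f:
    response = requests.post(
        url="https://ingestion.edgeimpulse.com/api/training/files",
        headers=headers,
        files=[("data", ("my-sample-01.cbor", f, "multipart/form-data"))],
    )
print(response.json())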

API calls (links to associated documentation):

Ingestion service

# Download and unzip gesture dataset
!mkdir -p dataset/
!wget -P dataset -q https://cdn.edgeimpulse.com/datasets/gestures.zip
!unzip -q dataset/gestures.zip -d {DATASET_PATH}
def upload_files(api_key, path, subset):
    """
    Upload files in the given path/subset (where subset is "training" or
    "testing")
    """

    # Construct request
    url = f"https://ingestion.edgeimpulse.com/api/{subset}/files"
    headers = {
        "x-api-key": api_key,
        "x-disallow-duplicates": "true",
    }

    # Get file handles and create dataset to upload
    files = []
    file_list = os.listdir(os.path.join(path, subset))
    for file_name in file_list:
        file_path = os.path.join(path, subset, file_name)
        if os.path.isfile(file_path):
            file_handle = open(file_path, "rb")
            files.append(("data", (file_name, file_handle, "multipart/form-data")))

    # Upload the files
    response = requests.post(
        url=url,
        headers=headers,
        files=files,
    )

    # Print any errors for files that did not upload
    upload_responses = response.json()["files"]
    for resp in upload_responses:
        if not resp["success"]:
            print(resp)

    # Close all the handles
    for handle in files:
        handle[1][1].close()
# Upload the dataset to the project
print("Uploading training dataset...")
upload_files(API_KEY, DATASET_PATH, "training")
print("Uploading testing dataset...")
upload_files(API_KEY, DATASET_PATH, "testing")

Create an impulse

Now that we've uploaded our data, it's time to create an impulse. An "impulse" is a combination of processing (feature extraction) and learning blocks. The general flow of data is:

data -> input block -> processing block(s) -> learning block(s)

Only the processing and learning blocks make up the "impulse." However, we must still specify the input block, as it allows us to perform preprocessing, like windowing (for time series data) or cropping/scaling (for image data).

Your project will have one input block, but it can contain multiple processing and learning blocks. Specific outputs from the processing block can be specified as inputs to the learning blocks. However, for simplicity, we'll just show one processing block and one learning block.
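
As a sketch of that wiring (illustrative only, not executed in this notebook), each entry in learnBlocks lists the IDs of the processing blocks that feed it in its dsp field, so two learning blocks can share one processing block:

# Illustrative only: two learning blocks both fed by processing block ID 2.
# The block IDs, names, and titles here are arbitrary placeholders.
example_learn_blocks = [
    {"id": 3, "type": "keras", "name": "Classifier", "title": "Classification", "dsp": [2]},
    {"id": 4, "type": "keras", "name": "Classifier 2", "title": "Classification 2", "dsp": [2]},
]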

Note: Historically, processing blocks were called "DSP blocks," as they focused on time series data. In Studio, the name has been changed to "Processing block," as the blocks work with different types of data, but you'll see it referred to as "DSP block" in the API.

It's important that you define the input block with the same parameters as your captured data, especially the sampling rate! Additionally, the processing block's axis names must match the axis names in your dataset.

API calls (links to associated documentation):

Impulse / Get impulse blocks
Impulse / Delete impulse
Impulse / Create impulse

# To start, let's fetch a list of all the available blocks
response = impulse_api.get_impulse_blocks(
    project_id=project_id
)
if not hasattr(response, "success") or getattr(response, "success") is False:
    raise RuntimeError("Could not get impulse blocks.")
# Print the available input blocks
print("Input blocks")
print(json.dumps(json.loads(response.to_json())["inputBlocks"], indent=2))
# Print the available processing blocks
print("Processing blocks")
print(json.dumps(json.loads(response.to_json())["dspBlocks"], indent=2))
# Print the available learning blocks
print("Learning blocks")
print(json.dumps(json.loads(response.to_json())["learnBlocks"], indent=2))
# Give our impulse blocks IDs, which we'll use later
processing_id = 2
learning_id = 3

# Impulses (and their blocks) are defined as a collection of key/value pairs
impulse = Impulse.from_dict({
    "inputBlocks": [
        {
            "id": 1,
            "type": "time-series",
            "name": "Time series",
            "title": "Time series data",
            "windowSizeMs": 1000,
            "windowIncreaseMs": 500,
            "frequencyHz": 62.5,
            "padZeros": True,
        }
    ],
    "dspBlocks": [
        {
            "id": processing_id,
            "type": "spectral-analysis",
            "name": "Spectral Analysis",
            "implementationVersion": 4,
            "title": "processing",
            "axes": ["accX", "accY", "accZ"],
            "input": 1,
        }
    ],
    "learnBlocks": [
        {
            "id": learning_id,
            "type": "keras",
            "name": "Classifier",
            "title": "Classification",
            "dsp": [processing_id],
        }
    ],
})
# Delete the current impulse in the project
response = impulse_api.delete_impulse(
    project_id=project_id
)
if not hasattr(response, "success") or getattr(response, "success") is False:
    raise RuntimeError("Could not delete current impulse.")

# Add blocks to impulse
response = impulse_api.create_impulse(
    project_id=project_id,
    impulse=impulse
)
if not hasattr(response, "success") or getattr(response, "success") is False:
    raise RuntimeError("Could not create impulse.")

Configure processing block

Before generating features, we need to configure the processing block. We'll start by printing all the available parameters for the spectral-analysis block, which we set when we created the impulse above.

API calls (links to associated documentation):

DSP / Get config
DSP / Set config

# Get processing block config
response = dsp_api.get_dsp_config(
    project_id=project_id,
    dsp_id=processing_id
)

# Construct user-readable parameters
settings = []
for group in response.config:
    for item in group.items:
        element = {}
        element["parameter"] = item.param
        element["description"] = item.help
        element["currentValue"] = item.value
        element["defaultValue"] = item.default_value
        element["type"] = item.type
        if hasattr(item, "select_options") and \
            getattr(item, "select_options") is not None:
            element["options"] = [i.value for i in item.select_options]
        settings.append(element)

# Print the settings
print(json.dumps(settings, indent=2))
# Define processing block configuration
config_request = DSPConfigRequest.from_dict({
    "config": {
        "scale-axes": 1.0,
        "input-decimation-ratio": 1,
        "filter-type": "none",
        "analysis-type": "FFT",
        "fft-length": 16,
        "do-log": True,
        "do-fft-overlap": True,
        "extra-low-freq": False,
    }
})

# Set processing block configuration
response = dsp_api.set_dsp_config(
    project_id=project_id,
    dsp_id=processing_id,
    dsp_config_request=config_request
)
if not hasattr(response, "success") or getattr(response, "success") is False:
    raise RuntimeError("Could not start feature generation job.")
else:
    print("Processing block has been configured.")

Run processing block to generate features

Now that we've defined the impulse, we want to use our processing block(s) to extract features from our data. We'll skip feature importance and the feature explorer to make this go faster.

Generating features kicks off a job in Studio. A "job" involves instantiating a Docker container and running a custom script in the container to perform some action. In our case, that involves reading in data, extracting features from that data, and saving those features as NumPy (.npy) files in our project.

Because jobs can take a while, the API call will return immediately. If the call was successful, the response will contain a job number. We can then monitor that job and wait for it to finish before continuing.

API calls (links to associated documentation):

Jobs / Generate features
Jobs / Get job status

def poll_job(jobs_api, project_id, job_id):
    """Wait for job to complete"""

    # Wait for job to complete
    while True:

        # Check on job status
        response = jobs_api.get_job_status(
            project_id=project_id,
            job_id=job_id
        )
        if not hasattr(response, "success") or getattr(response, "success") is False:
            print("ERROR: Could not get job status")
            return False
        else:
            if hasattr(response, "job") and hasattr(response.job, "finished"):
                if response.job.finished:
                    print(f"Job completed at {response.job.finished}")
                    return response.job.finished_successful
            else:
                print("ERROR: Response did not contain a 'job' field.")
                return False

        # Print that we're still running and wait
        print(f"Waiting for job {job_id} to finish...")
        time.sleep(2.0)
# Define generate features request
generate_features_request = GenerateFeaturesRequest.from_dict({
    "dspId": processing_id,
    "calculate_feature_importance": False,
    "skip_feature_explorer": True,
})

# Generate features
response = jobs_api.generate_features_job(
    project_id=project_id,
    generate_features_request=generate_features_request,
)
if not hasattr(response, "success") or getattr(response, "success") is False:
    raise RuntimeError("Could not start feature generation job.")

# Extract job ID
job_id = response.id

# Wait for job to complete
success = poll_job(jobs_api, project_id, job_id)
if success:
    print("Features have been generated.")
else:
    print(f"ERROR: Job failed. See https://studio.edgeimpulse.com/studio/{project_id}/jobs#show-job-{job_id} for more details.")
# Optional: download NumPy features (x: training data, y: training labels)
print("Go here to download the generated features in NumPy format:")
print(f"https://studio.edgeimpulse.com/v1/api/{project_id}/dsp-data/{processing_id}/x/training")
print(f"https://studio.edgeimpulse.com/v1/api/{project_id}/dsp-data/{processing_id}/y/training")

Use learning block to train model

Now that we have generated features, we can run the learning block to train the model on those features. Note that Edge Impulse has a number of learning blocks, each with different methods of configuration. We'll be using the "keras" block, which uses TensorFlow and Keras under the hood.

You can use the get_keras and set_keras functions to configure the granular settings. We'll use the defaults for that block and just set the number of training cycles (epochs) and the learning rate.

API calls (links to associated documentation):

Jobs / Train model (Keras)
Jobs / Get job status
Jobs / Get logs
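
The get_keras and set_keras calls mentioned above are not used in this notebook, but if you want to read back the learning block's current settings programmatically, here is a minimal sketch. It assumes the edgeimpulse_api package exposes a LearnApi sub-client with a get_keras method following the same pattern as the other sub-clients; check the API reference for the exact names.

# Hypothetical sketch: read back the learning block configuration.
# LearnApi / get_keras are assumed here and not used elsewhere in this notebook.
from edgeimpulse_api import LearnApi

learn_api = LearnApi(client)
keras_config = learn_api.get_keras(
    project_id=project_id,
    learn_id=learning_id,
)
print(keras_config.to_json())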

 # Define training request
keras_parameter_request = SetKerasParameterRequest.from_dict({
    "mode": "visual",
    "training_cycles": 10,
    "learning_rate": 0.001,
    "train_test_split": 0.8,
    "skip_embeddings_and_memory": True,
})

# Train model
response = jobs_api.train_keras_job(
    project_id=project_id,
    learn_id=learning_id,
    set_keras_parameter_request=keras_parameter_request,
)
if not hasattr(response, "success") or getattr(response, "success") is False:
    raise RuntimeError("Could not start training job.")

# Extract job ID
job_id = response.id

# Wait for job to complete
success = poll_job(jobs_api, project_id, job_id)
if success:
    print("Model has been trained.")
else:
    print(f"ERROR: Job failed. See https://studio.edgeimpulse.com/studio/{project_id}/jobs#show-job-{job_id} for more details.")

Now that the model has been trained, we can go back to the job logs to find the accuracy metrics for both the float32 and int8 quantization levels. Because the logs are printed with the most recent events first, we'll work backwards through them to parse out these metrics.

def get_metrics(response, quantization=None):
    """
    Parse the response to find the accuracy/training metrics for a given
    quantization level. If quantization is None, return the first set of metrics
    found.
    """
    metrics = None
    delimiter_str = "calculate_classification_metrics"

    # Skip finding quantization metrics if not given
    if quantization:
        quantization_found = False
    else:
        quantization_found = True

    # Parse logs
    for log in reversed(response.to_dict()["stdout"]):
        data_field = log["data"]
        if quantization_found:
            substrings = data_field.split("\n")
            for substring in substrings:
                substring = substring.strip()
                if substring.startswith(delimiter_str):
                    metrics = json.loads(substring[len(delimiter_str):])
                    break
        else:
            if data_field.startswith(f"Calculating {quantization} accuracy"):
                quantization_found = True

    return metrics
# Get the job logs for the previous job
response = jobs_api.get_jobs_logs(
    project_id=project_id,
    job_id=job_id
)
if not hasattr(response, "success") or getattr(response, "success") is False:
    raise RuntimeError("Could not get job log.")

# Print training metrics (quantization is "float32" or "int8")
quantization = "float32"
metrics = get_metrics(response, quantization)
if metrics:
    print(f"Training metrics for {quantization} quantization:")
    pprint.pprint(metrics)
else:
    print("ERROR: Could not get training metrics.")

Test the impulse

As with any good machine learning project, we should test the accuracy of the model using our holdout ("testing") set. We'll call the classify API function to make that happen and then parse the job logs to get the results.

In most cases, using int8 quantization will result in a faster, smaller model, but you will slightly lose some accuracy.

API calls (links to associated documentation):

Jobs / Classify
Jobs / Get job status
Jobs / Get logs

 # Set the model quantization level ("float32", "int8", or "akida")
quantization = "int8"
classify_request = StartClassifyJobRequest.from_dict({
    "model_variants": quantization
})

# Start model testing job
response = jobs_api.start_classify_job(
    project_id=project_id,
    start_classify_job_request=classify_request
)
if not hasattr(response, "success") or getattr(response, "success") is False:
    raise RuntimeError("Could not start classify job.")

# Extract job ID
job_id = response.id

# Wait for job to complete
success = poll_job(jobs_api, project_id, job_id)
if success:
    print("Inference performed on test set.")
else:
    print(f"ERROR: Job failed. See https://studio.edgeimpulse.com/studio/{project_id}/jobs#show-job-{job_id} for more details.")
# Get the job logs for the previous job
response = jobs_api.get_jobs_logs(
    project_id=project_id,
    job_id=job_id
)
if not hasattr(response, "success") or getattr(response, "success") is False:
    raise RuntimeError("Could not get job log.")

# Print
metrics = get_metrics(response)
if metrics:
    print(f"Test metrics for {quantization} quantization:")
    pprint.pprint(metrics)
else:
    print("ERROR: Could not get test metrics.")

Deploy the impulse

Now that you've trained the model, let's build it as a C++ library and download it. We'll start by printing out the available target devices. Note that this list changes depending on how you've configured your impulse. For example, if you use a Syntiant-specific learning block, then you'll see Syntiant boards listed. We'll use the "zip" target, which gives us a generic C++ library that we can use for nearly any hardware.

The engine must be one of:

tflite
tflite-eon
tflite-eon-ram-optimized
tensorrt
tensaiflow
drp-ai
tidl
akida
syntiant
memryx

We'll use tflite, as it is the most widely supported.

modelType is the quantization level. Your options are:

float32
int8

In most cases, using int8 quantization will result in a faster, smaller model, but you will slightly lose some accuracy.

API calls (links to associated documentation):

Deployment / Deployment targets (data sources)
Jobs / Build on-device model
Deployment / Download

# Get the available devices
response = deployment_api.list_deployment_targets_for_project_data_sources(
    project_id=project_id
)
if not hasattr(response, "success") or getattr(response, "success") is False:
    raise RuntimeError("Could not get device list.")

# Print the available devices
targets = [x.to_dict()["format"] for x in response.targets]
for target in targets:
    print(target)
# Choose the target hardware (from the list above), engine, and quantization level
target_hardware = "zip"
engine = "tflite"
quantization = "int8"

# Construct request
device_model_request = BuildOnDeviceModelRequest.from_dict({
    "engine": engine,
    "modelType": quantization
})

# Start build job
response = jobs_api.build_on_device_model_job(
    project_id=project_id,
    type=target_hardware,
    build_on_device_model_request=device_model_request,
)
if not hasattr(response, "success") or getattr(response, "success") is False:
    raise RuntimeError("Could not start feature generation job.")

# Extract job ID
job_id = response.id

# Wait for job to complete
success = poll_job(jobs_api, project_id, job_id)
if success:
    print("Impulse built.")
else:
    print(f"ERROR: Job failed. See https://studio.edgeimpulse.com/studio/{project_id}/jobs#show-job-{job_id} for more details.")
# Get the download link information
response = deployment_api.download_build(
    project_id=project_id,
    type=target_hardware,
    model_type=quantization,
    engine=engine,
    _preload_content=False,
)
if response.status != 200:
    raise RuntimeError("Could not get download information.")

# Find the file name in the headers
file_name = re.findall(r"filename\*?=(.+)", response.headers["Content-Disposition"])[0].replace("utf-8''", "")
file_path = os.path.join(OUTPUT_PATH, file_name)

# Write the contents to a file
with open(file_path, "wb") as f:
    f.write(response.data)

You should have a .zip file in the same directory as this notebook. Download or move it to somewhere else on your computer and unzip it. You can now follow the "As a generic C++ library" deployment guide to link and compile the library as part of an application.
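
For example, to unzip it from within this notebook (the output directory name is just a placeholder):

# Extract the downloaded C++ library archive.
import zipfile

with zipfile.ZipFile(file_path, "r") as zf:
    zf.extractall("my-impulse-library")  # placeholder output directory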
