The Edge Impulse API exposes programmatic access to most functionality in the studio, and it is particularly useful for automating tasks. You can use the API to edit the labels of many samples at once, train models, or create new impulses. In addition, you can subscribe to events, such as when a new file is processed by the ingestion service. Edge Impulse Python API bindings are also available as the edgeimpulse-api pip package.
See the following examples:
The Edge Impulse API gives programmatic access to all features in the studio, so many tasks that would normally be performed manually can be automated. In this tutorial you'll create a job that deploys your model (as a ZIP file), monitor the job status, and finally download the deployed model. The tutorial uses Python, but you can use any environment capable of scripting HTTP requests.
To run this script you'll need:
A recent version of Python 3.
The requests module (install it with pip install requests).
Your project ID (can be found on Dashboard in the Edge Impulse Studio).
An API Key (under Dashboard > Keys).
Then, create the following script build-model.py:
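A minimal sketch of such a script using the requests module is shown below; the endpoint paths, request fields, and response fields are assumptions here, so verify them against the API reference before relying on them:

```python
# build-model.py -- minimal sketch, not production code.
# The endpoint paths, request fields, and response fields below are
# assumptions to verify against the API reference.
import time
import requests

PROJECT_ID = 1        # your project ID (Dashboard > Project Info)
API_KEY = "ei_..."    # your API key (Dashboard > Keys)

API_URL = "https://studio.edgeimpulse.com/v1/api"
HEADERS = {"x-api-key": API_KEY}

# 1. Start a job that builds the deployment as a generic C++ library (ZIP)
res = requests.post(
    f"{API_URL}/{PROJECT_ID}/jobs/build-ondevice-deployment",
    headers=HEADERS,
    params={"type": "zip"},
    json={"engine": "tflite", "modelType": "int8"},  # body fields are assumptions
)
res.raise_for_status()
body = res.json()
if not body.get("success", False):
    raise RuntimeError(body.get("error", "Failed to start the deployment job"))
job_id = body["id"]
print("Started job", job_id)

# 2. Poll the job status until it has finished
while True:
    status = requests.get(
        f"{API_URL}/{PROJECT_ID}/jobs/{job_id}/status", headers=HEADERS
    ).json()["job"]
    if status.get("finished"):
        if not status.get("finishedSuccessful"):
            raise RuntimeError("Job failed; check the job logs in the Studio")
        break
    time.sleep(5)

# 3. Download the built ZIP file
deployment = requests.get(
    f"{API_URL}/{PROJECT_ID}/deployment/download",
    headers=HEADERS,
    params={"type": "zip"},
)
deployment.raise_for_status()
with open("deployment.zip", "wb") as f:
    f.write(deployment.content)
print("Saved deployment.zip")
```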
When you now run python3 build-model.py, you'll see something like:
The EON Tuner is Edge Impulse's AutoML (automated machine learning) tool to help you find and select the best embedded machine learning model for your application within the constraints of your target device.
This notebook will show you how to configure and run the EON Tuner programmatically using the Edge Impulse API!
This section will set up your environment and API credentials so that you can start making calls to the Edge Impulse API from this notebook. Run this block once per runtime session, that is, every time you:
Open the notebook in your browser or IDE to start working on it,
Restart the runtime, or
Change the project you are working on.
API documentation is available at https://docs.edgeimpulse.com/reference/edge-impulse-api
You will need to enter the correct PROJECT_ID for the project you want to work with in the code in section 1.3 below. The project ID can be obtained from your Edge Impulse project's Dashboard, under the Project Info section.
The block below will prompt you for your project's API Key. You can obtain this key from your Project's Dashboard, by selecting the Keys tab from the top navigation bar.
Run the block below and enter your API key when prompted. Then continue to the next section.
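A prompt along these lines would do (the variable name is illustrative; getpass keeps the key from being echoed):

```python
import getpass

# Prompt for the Edge Impulse API key without echoing it to the screen
API_KEY = getpass.getpass("Enter your Edge Impulse API key: ")
```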
You can use the code in section 2.2 below to programmatically update the configuration of the EON Tuner.
In basic mode (the default) you will be able to modify the datasetCategory, targetLatency, and targetDevice parameters. For additional control, ask your User Success or Solutions Engineer to enable the EON Tuner advanced mode for you.
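For illustration, the basic-mode settings correspond to values like the following; the values and the shape of targetDevice are assumptions, and the actual request is assembled in the code in section 2.2:

```python
# Basic-mode EON Tuner settings (values are examples; the shape of
# targetDevice is an assumption -- check the API reference).
eon_tuner_config = {
    "datasetCategory": "motion_continuous",        # type of data in your project
    "targetLatency": 100,                          # target inference latency in ms
    "targetDevice": {"name": "cortex-m4f-80mhz"},  # target hardware
}
```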
Run the cell below to start spinning up EON Tuner optimization jobs. If your project is already running an EON Tuner optimization, go instead to section 2.4 to track the job's progress.
Run the cell below to track the progress of your EON Tuner job. You can safely stop and restart the cell at any time since this will not affect the running EON Tuner jobs.
Use the cell below to retrieve the EON Tuner optimization results and save them to the trials variable.
The Python SDK is built on top of the Edge Impulse Python API bindings, which are provided by the edgeimpulse_api package. These are Python wrappers for all of the web API calls that you can use to interact with Edge Impulse projects programmatically (i.e. without needing to use the Studio graphical interface).
The API reference guide for using the Python API bindings can be found here.
This example will walk you through the process of using the Edge Impulse API bindings to upload data, define an impulse, process features, train a model, and deploy the impulse as a C++ library.
After creating your project and copying the API key, feel free to leave the project open in a browser window so you can watch the changes as we make API calls. You might need to refresh the browser after each call to see the changes take effect.
Important! This notebook will add data to your project and remove any existing features and models. We highly recommend creating a new project when running this notebook! Don't say we didn't warn you if you mess up an existing project.
You will need to obtain an API key from an Edge Impulse project. Log into edgeimpulse.com and create a new project. Open the project, navigate to Dashboard and click on the Keys tab to view your API keys. Double-click on the API key to highlight it, right-click, and select Copy.
Note that you do not actually need to use the project in the Edge Impulse Studio. We just need the API Key.
Paste that API key string into the EI_API_KEY value in the following cell:
The Python API bindings use a series of submodules, each encapsulating one of the API subsections (e.g. Projects, DSP, Learn, etc.). To use these submodules, you need to instantiate a generic API module and use that to instantiate the individual API objects. We'll use these objects to make the API calls later.
To configure a client, you generally create a configuration object (often from a dict) and then pass that object as an argument to the client.
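A sketch of that pattern is shown below; the submodule class names and the api_key dictionary key are assumptions based on the generated bindings, so check them against the Python API bindings reference:

```python
import edgeimpulse_api as ei_api

# Generic configuration + client; the api_key dictionary key is an
# assumption -- check the Python API bindings reference for the exact name.
config = ei_api.Configuration(
    host="https://studio.edgeimpulse.com/v1",
    api_key={"ApiKeyAuthentication": EI_API_KEY},
)
client = ei_api.ApiClient(config)

# Individual API objects for the subsections used below (class names assumed
# from the generated bindings; the pattern is the same for the other
# subsections such as DSP and Deployment).
projects_api = ei_api.ProjectsApi(client)
jobs_api = ei_api.JobsApi(client)
learn_api = ei_api.LearnApi(client)
```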
Before uploading data, we should make sure the project is in the regular impulse flow mode, rather than BYOM mode. We'll also need the project ID for most of the other API calls in the future.
Notice that the general pattern for calling API functions is to instantiate a configuration/request object and pass it to the API method that's part of the submodule. You can find which parameters a specific API call expects by looking at the call's documentation page.
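For example, a sketch using the projects_api object from above to look up the project ID; the method and response field names are assumptions based on the generated bindings:

```python
# List the projects visible to this API key and use the first one's ID for
# the rest of the notebook (method and response field names are assumptions).
response = projects_api.list_projects()
if not response.success:
    raise RuntimeError(f"Could not list projects: {response.error}")

project_id = response.projects[0].id
print("Using project ID:", project_id)
```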
API calls (links to associated documentation):
We'll start by downloading the gesture dataset from https://docs.edgeimpulse.com/docs/pre-built-datasets/continuous-gestures. Note that the ingestion API is separate from the regular Edge Impulse API: the URL and interface are different. As a result, we must construct the request manually and cannot rely on the Python API bindings.
The ingestion service uses the string before the first period in the filename to determine the label. For example, "idle.1.cbor" will automatically be assigned the label "idle." If you wish to set a label manually, you must specify the x-label parameter in the headers. Note that you can only define a label this way when uploading a group of data at a time: for example, setting "x-label": "idle" in the headers would give all data uploaded with that call the label "idle."
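A sketch of such an upload with the requests module is shown below; the files endpoint URL and the form field name are assumptions, while the x-api-key and x-label headers are as described above:

```python
import requests

# Ingestion service endpoint for uploading files to the training set
# (URL and form field name assumed from the ingestion service docs).
INGESTION_URL = "https://ingestion.edgeimpulse.com/api/training/files"

# The x-label header applies to every file uploaded in this request;
# omit it to let the filename (text before the first period) set the label.
with open("idle.1.cbor", "rb") as f1, open("idle.2.cbor", "rb") as f2:
    res = requests.post(
        INGESTION_URL,
        headers={
            "x-api-key": EI_API_KEY,
            "x-label": "idle",
        },
        files=[
            ("data", ("idle.1.cbor", f1, "application/cbor")),
            ("data", ("idle.2.cbor", f2, "application/cbor")),
        ],
    )
res.raise_for_status()
print(res.status_code, res.text)
```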
API calls used with associated documentation:
Now that we've uploaded our data, it's time to create an impulse. An "impulse" is a combination of processing (feature extraction) and learning blocks. The general flow of data is:
data -> input block -> processing block(s) -> learning block(s)
Only the processing and learning blocks make up the "impulse." However, we must still specify the input block, as it allows us to perform preprocessing, like windowing (for time series data) or cropping/scaling (for image data).
Your project will have one input block, but it can contain multiple processing and learning blocks. Specific outputs from the processing block can be specified as inputs to the learning blocks. However, for simplicity, we'll just show one processing block and one learning block.
Note: Historically, processing blocks were called "DSP blocks," as they focused on time series data. In Studio, the name has been changed to "Processing block," as the blocks work with different types of data, but you'll see it referred to as "DSP block" in the API.
It's important that you define the input block with the same parameters as your captured data, especially the sampling rate! Additionally, the processing block axes names must match up with their names in the dataset.
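For orientation, the impulse we'll create has roughly the following shape. The field names, block types, and values here are illustrative assumptions, so check the createImpulse documentation linked below for the exact schema:

```python
# Rough shape of the impulse definition. Field names, block types, and
# values here are illustrative assumptions -- check the createImpulse
# documentation for the exact schema.
impulse = {
    "inputBlocks": [{
        "id": 1,
        "type": "time-series",
        "name": "Time series",
        "title": "Time series data",
        "windowSizeMs": 2000,        # must match how the data was captured
        "windowIncreaseMs": 1000,
        "frequencyHz": 62.5,         # sampling rate of the gesture dataset
        "padZeros": True,
    }],
    "dspBlocks": [{
        "id": 2,
        "type": "spectral-analysis",
        "name": "Spectral analysis",
        "title": "Spectral features",
        "axes": ["accX", "accY", "accZ"],  # must match the axis names in the dataset
    }],
    "learnBlocks": [{
        "id": 3,
        "type": "keras",
        "name": "Neural network",
        "title": "Classifier",
        "dsp": [2],                  # take features from DSP block 2
    }],
}
```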
API calls (links to associated documentation):
Before generating features, we need to configure the processing block. We'll start by printing all the available parameters for the spectral-analysis block, which we set when we created the impulse above.
API calls (links to associated documentation):
After we've defined the impulse, we then want to use our processing block(s) to extract features from our data. We'll skip feature importance and feature explorer to make this go faster.
Generating features kicks off a job in Studio. A "job" involves instantiating a Docker container and running a custom script in the container to perform some action. In our case, that involves reading in data, extracting features from that data, and saving those features as NumPy (.npy) files in our project.
Because jobs can take a while, the API call will return immediately. If the call was successful, the response will contain a job number. We can then monitor that job and wait for it to finish before continuing.
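A small helper for that pattern might look like the sketch below; the get_job_status method and the response field names are assumptions based on the generated bindings:

```python
import time

def wait_for_job(project_id: int, job_id: int, poll_s: int = 5) -> None:
    """Poll a Studio job until it finishes (method and response field names
    are assumptions based on the generated bindings)."""
    while True:
        res = jobs_api.get_job_status(project_id=project_id, job_id=job_id)
        job = res.job
        if job.finished is not None:       # job has a finish timestamp
            if not job.finished_successful:
                raise RuntimeError(f"Job {job_id} failed; check the logs in Studio")
            return
        time.sleep(poll_s)
```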
API calls (links to associated documentation):
Now that we have generated features, we can run the learning block to train the model on those features. Note that Edge Impulse has a number of learning blocks, each with different methods of configuration. We'll be using the "keras" block, which uses TensorFlow and Keras under the hood.
You can use the get_keras and set_keras functions to configure the granular settings. We'll use the defaults for that block and just set the number of epochs and learning rate for training.
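A sketch of that flow, reusing the learn_api object from earlier; the set_keras request model and its snake_case field names are assumptions based on the generated bindings:

```python
# The learn block ID (3) comes from the impulse sketched earlier.
# Fetch the current settings, then overwrite just the training cycles
# (epochs) and learning rate. Model and field names are assumptions.
current = learn_api.get_keras(project_id=project_id, learn_id=3)
print(current)

res = learn_api.set_keras(
    project_id=project_id,
    learn_id=3,
    set_keras_parameter_request=ei_api.SetKerasParameterRequest(
        training_cycles=10,     # number of epochs
        learning_rate=0.0005,
    ),
)
```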
API calls (links to associated documentation):
Now that the model has been trained, we can go back to the job logs to find the accuracy metrics for both the float32 and int8 quantization levels. Because the logs are printed with the most recent events first, we'll work backwards through them to find these metrics.
As with any good machine learning project, we should test the accuracy of the model using our holdout ("testing") set. We'll call the classify API function to make that happen and then parse the job logs to get the results.
In most cases, using int8 quantization will result in a faster, smaller model, but you will lose a small amount of accuracy.
API calls (links to associated documentation):
Now that you've trained the model, let's build it as a C++ library and download it. We'll start by printing out the available target devices. Note that this list changes depending on how you've configured your impulse. For example, if you use a Syntiant-specific learning block, then you'll see Syntiant boards listed. We'll use the "zip" target, which gives us a generic C++ library that we can use for nearly any hardware.
The engine must be one of:
We'll use tflite, as it is the most widely supported.
modelType is the quantization level. Your options are:
In most cases, using int8 quantization will result in a faster, smaller model, but you will lose a small amount of accuracy.
API calls (links to associated documentation):
You should have a .zip file in the same directory as this notebook. Download or move it to somewhere else on your computer and unzip it. You can now follow this guide to link and compile the library as part of an application.
This guide provides a quick walk-through on how to upload and update time-series data with multiple labels using the Edge Impulse API.
API Key - Obtain from your project settings on Edge Impulse.
Example GitHub repository - clone the following repository; it contains the scripts and the example files:
Export your API key to use in the upload scripts:
Data file: JSON formatted time-series data (see the example file in the repository).
structured_labels.labels file: JSON with structured labels (see the example file in the repository).
Use the upload.sh script to send your data and labels to Edge Impulse:
To update a sample, run update-sample.sh with the required project and sample IDs:
Your project API key can be used to enable programmatic access to Edge Impulse. You can create and/or obtain a key from your project's Dashboard, under the Keys tab. API keys are long strings and start with ei_:
Open a terminal and run the Edge Impulse daemon. The daemon is the service that connects your hardware with any Edge Impulse project:
Copy your project's ID from the project's Dashboard under the Project Info section:
Replace the PROJECT_ID below with the ID of the project you selected and enter your API key when prompted:
We hope this tutorial has helped you to understand how to ingest multi-label data samples into your Edge Impulse project. If you have any questions, please reach out to us on the forum.