Important! This notebook will add data to your project and remove any existing features and models in it. We highly recommend creating a new project when running this notebook! Don't say we didn't warn you if you mess up an existing project.
Copy the API key from your Edge Impulse project and paste it into the `EI_API_KEY` value in the following cell:
You can assign a label to uploaded samples with the `x-label` parameter in the headers. Note that a label defined this way applies to the whole group of data uploaded in a single call. For example, setting `"x-label": "idle"` in the headers would give all data uploaded with that call the label "idle."
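As a sketch of how that header is used, the following builds (but does not send) a multipart upload request in which every sample receives the same label. The endpoint URL, header names, and payload shape here are assumptions based on the Edge Impulse ingestion service; check the ingestion docs before relying on them:

```python
import json
import urllib.request

# Assumed ingestion endpoint -- verify against the Edge Impulse ingestion docs.
INGESTION_URL = "https://ingestion.edgeimpulse.com/api/training/files"

def build_upload_request(api_key, label, payload, filename="sample.json"):
    """Build (but do not send) a multipart upload request.

    Every sample uploaded with this call is labeled via the x-label header.
    """
    boundary = "----ei-upload-boundary"
    body = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="data"; filename="{filename}"\r\n'
        "Content-Type: application/json\r\n\r\n"
        f"{json.dumps(payload)}\r\n"
        f"--{boundary}--\r\n"
    ).encode()
    return urllib.request.Request(
        INGESTION_URL,
        data=body,
        method="POST",
        headers={
            "x-api-key": api_key,          # your project's EI_API_KEY
            "x-label": label,              # applied to ALL data in this call
            "Content-Type": f"multipart/form-data; boundary={boundary}",
        },
    )

req = build_upload_request("ei_0123abcd", "idle", {"values": []})
# To actually upload: urllib.request.urlopen(req)
```

Because the label lives in the headers rather than the body, one call can only carry one label; upload each class of data in its own call.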
API calls used, with associated documentation:
data -> input block -> processing block(s) -> learning block(s)

Only the processing and learning blocks make up the "impulse." However, we must still specify the input block, as it allows us to perform preprocessing, like windowing (for time series data) or cropping/scaling (for image data). Your project will have one input block, but it can contain multiple processing and learning blocks. Specific outputs from the processing block can be specified as inputs to the learning blocks. However, for simplicity, we'll just show one processing block and one learning block.
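The structure above can be sketched as a JSON body with one entry per block type. The block IDs, names, and field values below are illustrative assumptions (the field names follow the pattern used by the Edge Impulse API, but verify them against the API reference before sending):

```python
import json

# Hypothetical impulse definition: one input block, one processing (DSP)
# block, one learning block. Values are placeholders, not real project IDs.
impulse = {
    "inputBlocks": [{
        "id": 1,
        "type": "time-series",
        "name": "Time series data",
        "windowSizeMs": 1000,        # must match how the data was captured
        "windowIncreaseMs": 500,
        "frequencyHz": 62.5,         # must match the capture sampling rate!
    }],
    "dspBlocks": [{
        "id": 2,
        "type": "spectral-analysis",
        "name": "Spectral analysis",
        "axes": ["accX", "accY", "accZ"],  # must match axis names in the dataset
        "input": 1,                  # fed by the input block
    }],
    "learnBlocks": [{
        "id": 3,
        "type": "keras",
        "name": "Classifier",
        "dsp": [2],                  # fed by the processing block above
    }],
}

body = json.dumps(impulse)
```

Note how the `dsp` field of the learning block points at the processing block's ID: this is how specific processing outputs are wired into specific learning blocks.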
Note: Historically, processing blocks were called "DSP blocks," as they focused on time series data. In Studio, the name has been changed to "Processing block," since these blocks now work with different types of data, but you'll still see them referred to as "DSP blocks" in the API.

It's important that you define the input block with the same parameters as your captured data, especially the sampling rate! Additionally, the processing block's axis names must match the axis names in the dataset.

API calls (links to associated documentation):
Here, the processing block is the `spectral-analysis` block, which we set when we created the impulse above.
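A feature-generation job for that block might be started with a request like the following. The endpoint path, project ID, block ID, and body field are assumptions patterned on the Edge Impulse Studio API, so confirm them against the API reference:

```python
import json
import urllib.request

# Placeholder project credentials -- substitute your own values.
PROJECT_ID = 12345
API_KEY = "ei_0123abcd"

# Assumed Studio API endpoint for starting a feature-generation job.
url = f"https://studio.edgeimpulse.com/v1/api/{PROJECT_ID}/jobs/generate-features"

job_req = urllib.request.Request(
    url,
    data=json.dumps({"dspId": 2}).encode(),  # ID of the spectral-analysis block
    method="POST",
    headers={
        "x-api-key": API_KEY,
        "Content-Type": "application/json",
    },
)
# To actually start the job: urllib.request.urlopen(job_req)
```

Jobs like this run asynchronously in Studio, so after starting one you would poll its status (or read its logs) rather than wait on the HTTP response.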
API calls (links to associated documentation):
We use the `classify` API function to make that happen and then parse the job logs to get the results.
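Pulling a number out of job logs usually comes down to a small regular expression. The log excerpt below is illustrative only; the real stdout from a classify job may be formatted differently, so adapt the pattern to what your job actually prints:

```python
import re

# Illustrative log excerpt (NOT real classify-job output).
log = """\
Classifying data...
Accuracy (validation set): 92.31%
Classification done"""

def parse_accuracy(stdout):
    """Return the first percentage following the word 'Accuracy', or None."""
    match = re.search(r"Accuracy.*?([\d.]+)%", stdout)
    return float(match.group(1)) if match else None

print(parse_accuracy(log))  # 92.31
```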
In most cases, using `int8` quantization will result in a faster, smaller model, but you will lose a small amount of accuracy.
API calls (links to associated documentation):
The `engine` parameter must be one of the supported inference engines. We'll use `tflite`, as that's the most ubiquitous.
The `modelType` parameter is the quantization level. In most cases, using `int8` quantization will result in a faster, smaller model, but you will lose a small amount of accuracy compared to `float32`.
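Both parameters end up as query-string values on the deployment download request. The endpoint path and the `type` value below are placeholders patterned on the Edge Impulse Studio API; check the API reference for the values your project needs:

```python
from urllib.parse import urlencode

# Placeholder project ID -- substitute your own.
PROJECT_ID = 12345

params = {
    "type": "zip",        # deployment format (placeholder)
    "engine": "tflite",   # inference engine
    "modelType": "int8",  # quantization level
}

# Assumed deployment download endpoint.
download_url = (
    f"https://studio.edgeimpulse.com/v1/api/{PROJECT_ID}/deployment/download?"
    + urlencode(params)
)
print(download_url)
```

Switching `modelType` to `float32` would fetch the unquantized build of the same model.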
API calls (links to associated documentation):