
Parameter search with Python

The Edge Impulse API gives programmatic access to all features in the studio, so many tasks that would otherwise be performed manually can be automated. One of these tasks is finding the best parameters for your signal processing and learning blocks. Manually testing every parameter is slow and error-prone, but through the API you can test any number of blocks, parameters and neural network architectures. By using the model testing tools you can also automatically score the full impulse, quickly verifying whether you've selected optimal parameters or need to narrow your search.
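To make the scoring step concrete: each classified reading is bucketed as correct, incorrect, or uncertain, depending on whether the expected label (or some other label) clears the minimum confidence rating. A minimal sketch of that rule, where the function name and inputs are illustrative and not part of the Edge Impulse SDK:

```python
def score_reading(scores, expected_label, min_confidence=0.6):
    """Bucket one reading as 'correct', 'incorrect' or 'uncertain'.

    scores is a dict mapping label -> confidence. A reading is correct if
    the expected label clears the minimum confidence rating, incorrect if
    a different label does, and uncertain if nothing clears the bar.
    """
    for label, confidence in scores.items():
        if confidence > min_confidence:
            return 'correct' if label == expected_label else 'incorrect'
    return 'uncertain'
```

The full script below applies this same rule to every reading in the test set, then reports overall accuracy as correct readings divided by total readings.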

This tutorial uses the Edge Impulse Python SDK to find the optimal cut-off frequency for the Spectral Analysis block. Going through the Continuous motion recognition tutorial first is a prerequisite. The same principles apply to selecting other parameters, such as the best neural network architecture for your training set.
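The search itself is just a loop over candidate values, and it generalizes naturally to several parameters at once. A minimal sketch, where `train_and_score` is a hypothetical stand-in for the retrain-and-classify calls shown later in this tutorial:

```python
import itertools

def grid_search(param_grid, train_and_score):
    """Try every combination of values in param_grid.

    param_grid maps a parameter name to a list of candidate values;
    train_and_score takes a dict of parameters and returns an accuracy.
    Returns (best_params, best_score).
    """
    best_params, best_score = None, 0
    keys = list(param_grid.keys())
    for values in itertools.product(*(param_grid[k] for k in keys)):
        params = dict(zip(keys, values))
        score = train_and_score(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score
```

For example, `grid_search({'filter-cutoff': [2.0, 3.0, 4.0]}, fn)` reproduces the single-parameter sweep below; adding more keys sweeps the full grid. Note that every combination triggers a retrain job, so large grids can take a long time and count against your usage limits.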


Usage limits

Be aware that jobs started through the API are subject to the same usage limits as other jobs.

Installing the Edge Impulse Python SDK

To install the SDK, run:

pip3 install edge-impulse-sdk --upgrade

Running the example

First, make sure you have an API key (get it from the studio dashboard), and note your project ID (in the studio URL). Then, create a file called automate-search.py and enter:

import time
import edge_impulse_sdk as ei_sdk

# !!! Set your own API key and Project ID here !!!
API_KEY = 'ei_d818852935107f04ffc5e937190f62b77f79c8ea686c643cf4c9715971a67f5e'
PROJECT_ID = 1 # your project ID, found in the studio URL
# !!! Set your own API key and Project ID here !!!

configuration = ei_sdk.Configuration()
configuration.api_key['x-api-key'] = API_KEY

# Instantiate API clients
raw_data_api = ei_sdk.RawDataApi(ei_sdk.ApiClient(configuration))
classify_api = ei_sdk.ClassifyApi(ei_sdk.ApiClient(configuration))
impulse_api = ei_sdk.ImpulseApi(ei_sdk.ApiClient(configuration))
jobs_api = ei_sdk.JobsApi(ei_sdk.ApiClient(configuration))
learn_api = ei_sdk.LearnApi(ei_sdk.ApiClient(configuration))
dsp_api = ei_sdk.DSPApi(ei_sdk.ApiClient(configuration))

# Classify all samples in the "testing" category against the current impulse
# to validate how well our model performs
def score_current_impulse():
    samples = raw_data_api.list_samples(PROJECT_ID, 'testing', limit=1000, offset=0, exclude_sensors=True)
    if (samples.error):
        print('Failed to list samples', samples.error)

    j = jobs_api.start_classify_job(PROJECT_ID)
    wait_for_job(j.id)

    result = classify_api.get_classify_job_result(PROJECT_ID)
    if (not result.success):
        print('Failed to get classification results', result.error)

    total_readings = 0
    total_correct = 0

    for r in result.result:
        sample = next((s for s in samples.samples if s.id == r.sample_id), None)
        if not sample:
            print('Could not find sample', r.sample_id)
            continue

        # if we're under 'minimum_confidence_rating' we mark the reading as uncertain
        # you can change this confidence rating in the studio UI
        uncertain = 0
        incorrect = 0
        correct = 0

        # print('Classification result', sample.filename, 'expected', sample.label)
        for c in r.classifications:
            # loop through all the classifications and report back what we thought the score should be
            min_confidence = c.minimum_confidence_rating

            for res in c.result:
                handled = False
                for k in res.keys():
                    if (res[k] > min_confidence):
                        if (k == sample.label):
                            correct = correct + 1
                        else:
                            incorrect = incorrect + 1
                        handled = True
                if (not handled):
                    uncertain = uncertain + 1

        # print('correct', correct, 'incorrect', incorrect, 'uncertain', uncertain)

        total_readings = total_readings + correct + incorrect + uncertain
        total_correct = total_correct + correct

    # total accuracy is the number of correct predictions / total readings
    print('Accuracy', str(round(total_correct / total_readings * 100, 2)) + '%')

    return round(total_correct / total_readings * 100, 2)

# Retrain a model
def retrain(filter_cutoff):
    print('Trying filter cutoff', filter_cutoff)

    curr_impulse = impulse_api.get_impulse(PROJECT_ID)
    if (curr_impulse.error):
        print('Failed to get current impulse', curr_impulse.error)

    for dsp in curr_impulse.impulse.dsp_blocks:
        # only interested in spectral-analysis blocks
        if (not dsp.type == 'spectral-analysis'): continue

        # update the filter-cutoff configuration option, other options are not affected
        req = ei_sdk.DSPConfigRequest(config={ 'filter-cutoff': filter_cutoff })
        cr = dsp_api.set_dsp_config(PROJECT_ID, int(dsp.id), req)
        if (not cr.success):
            print('Failed to set filter cutoff', cr.error)

    j = jobs_api.start_retrain_job(PROJECT_ID)
    wait_for_job(j.id)

    # if we have any Keras blocks, show the loss and accuracy
    for learn in curr_impulse.impulse.learn_blocks:
        if (learn.type == 'keras'):
            metadata = learn_api.get_keras_metadata(PROJECT_ID, int(learn.id))
            if (metadata.error):
                print('Failed to load metadata', metadata.error)

            print('Keras block', learn.name, 'returned', 'loss',
                metadata.metrics.loss, 'accuracy', metadata.metrics.acc)

# Wait for a job to finish
# You can use the websocket interface for updates on the state
# (or keep the retrain tab open in studio)
def wait_for_job(id):
    print('Created job', id, '- waiting to finish...')
    while (1):
        jobs = jobs_api.list_active_jobs(PROJECT_ID)
        if (jobs.error):
            print('Failed to list jobs', jobs.error)

        lst = list(filter(lambda j: j.id == id, jobs.jobs))
        if (len(lst) > 0):
            print('Not finished...')
            time.sleep(10)
        else:
            print('Job finished')
            break

cutoff_freqs = [ 2.0, 2.5, 3.0, 3.5, 4, 4.5, 5 ]
best_score = 0
best_cutoff = 0

for cutoff in cutoff_freqs:
    retrain(cutoff)
    score = score_current_impulse()
    if (score > best_score):
        best_score = score
        best_cutoff = cutoff

print('Best cutoff is', best_cutoff, 'with accuracy', best_score)

Run the example via:

python3 automate-search.py
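If you'd rather not hardcode credentials in the script, you could read them from the environment instead. A small sketch, assuming the variable names `EI_API_KEY` and `EI_PROJECT_ID` (these names are arbitrary examples, not names the SDK looks for):

```python
import os

# Read credentials from the environment; the variable names are arbitrary.
# Export them before running the script, e.g.:
#   export EI_API_KEY=ei_your_key_here
#   export EI_PROJECT_ID=12345
API_KEY = os.environ.get('EI_API_KEY', 'ei_...your key...')
PROJECT_ID = int(os.environ.get('EI_PROJECT_ID', '1'))
```

This keeps the API key out of version control; the fallback values are placeholders only.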

Example output

Trying filter cutoff 3.5
Created job job-379 - waiting to finish...
Not finished...
Not finished...
Not finished...
Not finished...
Job finished
Keras block NN Classifier returned loss 0.6003718898780104 accuracy 0.8297101431998654
Classification result sleep-stage-r38.s3j8f7o expected sleep-stage-r: correct 29 incorrect 0 uncertain 22
Classification result sleep-stage-r30.s3j8c88 expected sleep-stage-r: correct 32 incorrect 0 uncertain 19
Classification result sleep-stage-two9.s3j7ubv expected sleep-stage-two: correct 48 incorrect 0 uncertain 3
Accuracy 71.24%

Trying filter cutoff 4


Best cutoff is 4.5 with accuracy 86.16
