EON Tuner

The EON Tuner helps you find and select the best embedded machine learning model for your application within the constraints of your target device. The EON Tuner analyzes your input data, potential signal processing blocks, and neural network architectures, and gives you an overview of model architectures that fit within your chosen device's latency and memory requirements.
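
To make those latency and memory constraints concrete, the sketch below shows the kind of budget check the tuner applies to every candidate it evaluates. The names and numbers are hypothetical, not tuner output.

```python
# Hypothetical illustration of a latency/RAM/ROM budget check.
# The candidate figures below are made up; the EON Tuner measures
# these values per candidate for your chosen target device.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    latency_ms: float   # estimated time per inference on the target
    ram_kb: float       # peak RAM used by DSP + inference
    rom_kb: float       # flash needed for model weights and code

BUDGET = {"latency_ms": 100.0, "ram_kb": 256.0, "rom_kb": 1024.0}

candidates = [
    Candidate("mfe-conv1d-small", latency_ms=62.0, ram_kb=41.0, rom_kb=118.0),
    Candidate("mfcc-conv1d-large", latency_ms=143.0, ram_kb=97.0, rom_kb=402.0),
]

def fits(c: Candidate) -> bool:
    return (c.latency_ms <= BUDGET["latency_ms"]
            and c.ram_kb <= BUDGET["ram_kb"]
            and c.rom_kb <= BUDGET["rom_kb"])

for c in candidates:
    print(f"{c.name}: {'fits' if fits(c) else 'exceeds'} the target budget")
```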

EON Tuner Search Space

For many projects, you will need to constrain the EON Tuner to blocks and parameters dictated by your hardware, your customers, or your own domain knowledge.

For example, you may be constrained to a grayscale camera, your engineers may have already developed a dedicated digital signal processing method to pre-process your sensor data, or you may simply suspect that a particular neural network architecture is better suited to the project.

In those cases, you can use the EON Tuner Search Space to define the scope of your project.
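
For instance, a constrained search space for a grayscale-camera project might pin the input to grayscale, keep a fixed processing block, and only allow a couple of network architectures. The snippet below is a simplified, hypothetical representation of such constraints; the key names are illustrative and do not match the exact configuration format used in the Studio.

```python
# Hypothetical sketch of a constrained search space. The real EON Tuner
# search space is edited as a configuration in the Studio; the key names
# here are illustrative only.
search_space = {
    "input": {
        "resolution": [96, 160],       # only explore these image sizes
        "channels": ["grayscale"],     # hardware constraint: grayscale camera
    },
    "processing": [
        {"block": "image", "parameters": {}},   # keep the standard image block
    ],
    "learning": [
        {
            "block": "transfer-learning",
            "architectures": ["mobilenet-v2-0.1", "mobilenet-v2-0.35"],
        },
    ],
}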

Getting Started

First, make sure you have an audio, motion, image classification, or object detection project in your Edge Impulse account to run the EON Tuner with. No projects yet? Follow one of our tutorials to get started. Then, to run the EON Tuner:

  1. Log in to the Edge Impulse Studio and open a project.

    Edge Impulse Project
  2. Select the EON Tuner tab.

    Empty EON Tuner
  3. Click the Configure target button to select your model’s task category, target device, and time per inference (in ms).

    EON Tuner Settings
  4. Click on the Task category dropdown and select the use case unique to your motion, audio, object detection, or image classification project.

    Task categories for audio projects. Motion, object detection, and image projects have their own categories
  5. Click Save and then select Start EON Tuner. To start the run from the Edge Impulse API instead, see the sketch after this list.

    Start EON Tuner
  6. Wait for the EON Tuner to finish running, then click the Select button next to your preferred DSP/neural network model architecture to save as your project’s primary blocks:

    EON Tuner searching best parameters
  7. Click on the DSP and Neural Network tabs within your Edge Impulse project to see the parameters the EON Tuner has generated and selected for your dataset, use case, and target device hardware.

    Neural Network (Keras) Impulse design page
  8. Now you’re ready to deploy your automatically configured Edge Impulse model to your target edge device!
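
The steps above use the Studio UI. The same run can also be started from the Edge Impulse API. The sketch below assumes you have a project API key; the base URL and x-api-key header are standard for the Edge Impulse API, but the job endpoint and payload shown are assumptions and should be checked against the current API reference.

```python
# Sketch only: starting an EON Tuner run over the Edge Impulse REST API.
# The base URL and x-api-key header follow the Edge Impulse API convention;
# the endpoint path and payload below are assumptions - verify them against
# the current API reference before relying on this.
import requests

PROJECT_ID = 12345          # your Studio project ID (hypothetical)
API_KEY = "ei_0123..."      # a project API key (hypothetical)
BASE = f"https://studio.edgeimpulse.com/v1/api/{PROJECT_ID}"

resp = requests.post(
    f"{BASE}/jobs/start-eon-tuner",   # assumed endpoint name
    headers={"x-api-key": API_KEY},
    json={},                          # assumed: use the settings configured in the Studio
)
resp.raise_for_status()
print("Started job:", resp.json())
```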

Features

The EON Tuner performs end-to-end optimizations, from the digital signal processing (DSP) algorithm to the machine learning model, helping you find the ideal trade-off between these two blocks to achieve optimal performance on your target hardware. The unique features and options available in the EON Tuner are described below.

Targets

The EON Tuner can directly analyze performance on any device fully supported by Edge Impulse. If you are targeting a different device, select a similar class of processor or leave the target as the default. You'll have the opportunity to further refine the EON Tuner results to fit your specific target and application later.

Task Categories

The EON Tuner currently supports three different types of sensor data: motion, images, and audio. From these, the tuner can optimize for different types of common applications or task categories.

Object detection categories
Motion detection categories

Input

The EON Tuner evaluates different configurations for creating samples from your dataset. For time series data, the tuner tests different sample window sizes and increment amounts. For image data, the tuner compares different image resolutions.

Example input settings selected by the tuner, using a one second window size and one second increment
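
A minimal sketch of how such a window size and increment slice a time series before it reaches the processing block, assuming a 100 Hz, 3-axis signal:

```python
# Minimal sketch of windowing a time series the way the tuner's input block
# parameters describe it: a 1 s window, moved forward by a 1 s increment.
import numpy as np

SAMPLE_RATE_HZ = 100                    # assumed accelerometer sample rate
WINDOW_MS = 1000
INCREMENT_MS = 1000

window_len = SAMPLE_RATE_HZ * WINDOW_MS // 1000   # samples per window
step = SAMPLE_RATE_HZ * INCREMENT_MS // 1000      # samples between window starts

signal = np.random.randn(10 * SAMPLE_RATE_HZ, 3)  # 10 s of placeholder 3-axis data

windows = [
    signal[start:start + window_len]
    for start in range(0, len(signal) - window_len + 1, step)
]
print(f"{len(windows)} windows of shape {windows[0].shape}")
```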

Processing Blocks

Depending on the selected task category, the EON Tuner considers a variety of Processing blocks when evaluating model architectures. The EON Tuner will test different parameters and configurations of these processing blocks.

Example processing block selected by the tuner, using a 32ms frame length and stride with a 40 count filter bank
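
The parameters in the caption describe a mel-filterbank (MFE-style) feature extraction. The sketch below approximates them with librosa; the actual Edge Impulse DSP block differs in details such as windowing, noise floor, and normalization.

```python
# Rough librosa sketch of an MFE-style processing block with a 32 ms frame
# length and stride and a 40-filter mel filterbank, approximating the
# parameters in the caption above.
import numpy as np
import librosa

SAMPLE_RATE = 16000                       # assumed audio sample rate
frame_len = int(0.032 * SAMPLE_RATE)      # 32 ms frame length -> 512 samples
hop_len = int(0.032 * SAMPLE_RATE)        # 32 ms stride (no overlap)

audio = np.random.randn(SAMPLE_RATE)      # 1 s of placeholder audio

mel = librosa.feature.melspectrogram(
    y=audio, sr=SAMPLE_RATE,
    n_fft=frame_len, hop_length=hop_len, win_length=frame_len,
    n_mels=40,
)
features = librosa.power_to_db(mel)       # log-mel energies, one column per frame
print(features.shape)                     # (40 filters, ~32 frames)
```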

Learning Blocks

Different model architectures, hyper-parameters, and even data augmentation techniques are evaluated by the EON Tuner. The tuner combines these different neural networks with the processing and input options described above, and then compares the end-to-end performance of each combination.

Example learning block selected by the tuner, showing a convolutional network architecture with data augmentation
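
Since the Impulse design uses Keras, a candidate like the one in the caption can be pictured roughly as the sketch below: a small 1D convolutional network over extracted audio features, with simple noise-based augmentation during training. This is a generic illustration, not the exact architecture the tuner generates.

```python
# Generic sketch of the kind of candidate the tuner evaluates: a small 1D
# convolutional network over extracted audio features, with noise-based
# augmentation. Shapes and layer sizes are assumptions for illustration.
import tensorflow as tf

N_FRAMES, N_FILTERS, N_CLASSES = 32, 40, 3   # assumed feature/label shapes

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(N_FRAMES, N_FILTERS)),
    tf.keras.layers.GaussianNoise(0.1),      # stand-in for data augmentation
    tf.keras.layers.Conv1D(8, kernel_size=3, padding="same", activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Conv1D(16, kernel_size=3, padding="same", activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dropout(0.25),
    tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
```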

Tuner Operation and Results

During operation, the tuner first generates many variations of input, processing, and learning blocks. It then schedules training and testing of each variation. The top-level progress bar shows tests started (blue stripes) as well as completed tests (solid blue), relative to the total number of generated variations.

Detailed logs of the run are also available. To view them, click on the button next to Target shown below.

As results become available, they will appear in the tuner window. Each result shows the on-device performance and accuracy, as well as details on the input, processing, and learning blocks used. Clicking Select sets a result as your project's primary impulse, and from there you can view or modify the design in the Impulse Design tabs.

Accuracy and performance results

Filters

While the EON Tuner is running, you can filter results by job status, processing block, and learning block categories.

Filter options for an audio project

Views

View options control what information is shown in the tuner results. You can choose which dataset is used when displaying model accuracy, as well as whether to show the performance of the unoptimized float32 or the quantized int8 version of the neural network.

Sort

Sorting options are available to find the parameters best suited to a given application or hardware target. For constrained devices, sort by RAM to show options with the smallest memory footprint, or sort by latency to find models with the lowest number of operations per inference. It's also possible to sort by label, finding the best model for identifying a specific class.

The selected sorting criteria will be shown in the top left corner of each result.

Sorting options for an audio project
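
As an illustration of these options, the sketch below sorts a small set of hypothetical results by RAM footprint, by latency, and by accuracy on one specific label. The names and numbers are made up.

```python
# Hypothetical illustration of the sort options: the same result set ordered
# by RAM footprint, by latency, or by accuracy on one specific label.
results = [
    {"name": "mfcc-dense", "ram_kb": 18, "latency_ms": 41,
     "accuracy": {"noise": 0.97, "yes": 0.88}},
    {"name": "mfe-conv1d", "ram_kb": 34, "latency_ms": 55,
     "accuracy": {"noise": 0.95, "yes": 0.93}},
    {"name": "spectrogram-conv2d", "ram_kb": 61, "latency_ms": 92,
     "accuracy": {"noise": 0.98, "yes": 0.95}},
]

by_ram = sorted(results, key=lambda r: r["ram_kb"])            # smallest footprint first
by_latency = sorted(results, key=lambda r: r["latency_ms"])    # fastest inference first
by_yes = sorted(results, key=lambda r: r["accuracy"]["yes"], reverse=True)  # best at "yes"

print([r["name"] for r in by_ram])
print([r["name"] for r in by_yes])
```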
