For many projects, you will need to constrain the EON Tuner to use steps that are defined by your hardware, your customers, or your expertise.
For example:
Your project requires a grayscale camera because you have already purchased the hardware.
Your engineers have already spent hours working on a dedicated digital signal processing method that has been proven to work with your sensor data.
You suspect that a particular neural network architecture will be better suited to your project.
This is why we developed an extension of the EON Tuner: the EON Tuner Search Space.
Please first read the EON Tuner documentation to configure your Target, Task category, and desired Time per inference.
The EON Tuner Search Space allows you to define the structure and constraints of your machine learning projects through the use of templates.
The Search Space works with templates. A template can be considered a config file in which you define your constraints. Although templates may seem hard to use at first, once you understand the core concept, this tool is extremely powerful!
A blank template looks like the following:
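As a rough sketch, a blank template contains the three top-level block arrays documented below, each left empty:

```json
{
  "inputBlocks": [],
  "dspBlocks": [],
  "learnBlocks": []
}
```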
To understand the core concepts, we recommend having a look at the available templates. We provide templates for different task categories as well as one for your current impulse if it has already been trained.
Elements inside an array are treated as parameters to explore. This means you can stack several combinations of `inputBlocks`, `dspBlocks`, and `learnBlocks` in your templates, and each block can contain several elements.
You can easily add pre-defined blocks using the + Add block section.
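To illustrate stacking, the sketch below (all IDs, axis names, and parameter values are hypothetical, not defaults) tries two DSP candidates, `spectral-analysis` and `raw`, against a single `keras` learn block:

```json
{
  "inputBlocks": [
    {
      "id": 1,
      "type": "time-series",
      "windowSizeMs": [1000, 2000],
      "frequencyHz": [62.5],
      "padZeros": [true]
    }
  ],
  "dspBlocks": [
    { "id": 2, "type": "spectral-analysis", "axes": ["accX", "accY", "accZ"] },
    { "id": 3, "type": "raw", "axes": ["accX", "accY", "accZ"] }
  ],
  "learnBlocks": [
    { "id": 4, "type": "keras", "dsp": [[2], [3]] }
  ]
}
```

The `dsp` field lists `[[2], [3]]` so that the learn block is evaluated once against each DSP candidate.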
`inputBlocks`

- `id`: Unique identifier for the block. Type: `number`.
- `type`: The nature of the input data. Type: `string`. Valid options: `time-series`, `image`.
- `title`: Optional descriptive title for the block. Type: `string`.
- `dimension`: Dimensions of the images. Type: `array` of `array` of `number`. Example valid values: `[[32, 32], [64, 64], [96, 96], [128, 128], [160, 160], [224, 224], [320, 320]]`. Enterprise option: all dimensions are available with the full enterprise search space.
- `resizeMode`: How the image should be resized to fit the specified dimensions. Type: `array` of `string`. Valid options: `squash`, `fit-short`, `fit-long`.
- `resizeMethod`: Method used for resizing the image. Type: `array` of `string`. Valid options: `nearest`, `lanczos3`.
- `cropAnchor`: Position on the image where cropping is anchored. Type: `array` of `string`. Valid options: `top-left`, `top-center`, `top-right`, `middle-left`, `middle-center`, `middle-right`, `bottom-left`, `bottom-center`, `bottom-right`.
- `window`: Details about the windowing approach for time-series data. Type: `array` of `object` with fields:
  - `windowSizeMs`: The size of the window in milliseconds. Type: `number`.
  - `windowIncreaseMs`: The step size to increase the window in milliseconds. Type: `number`.
- `windowSizeMs`: Size of the window in milliseconds if not specified in the `window` field. Type: `array` of `number`.
- `windowIncreasePct`: Percentage to increase the window size each step. Type: `array` of `number`.
- `frequencyHz`: Sampling frequency in Hertz. Type: `array` of `number`.
- `padZeros`: Whether to pad the time-series data with zeros. Type: `array` of `boolean`.
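Combining the fields above, a hypothetical `inputBlocks` entry that restricts image inputs to two resolutions might look like this (all values are illustrative):

```json
{
  "inputBlocks": [
    {
      "id": 1,
      "type": "image",
      "dimension": [[96, 96], [128, 128]],
      "resizeMode": ["squash", "fit-short"],
      "resizeMethod": ["nearest"],
      "cropAnchor": ["middle-center"]
    }
  ]
}
```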
`dspBlocks`

- `id`: Unique identifier for the DSP block. Type: `number`.
- `type`: The type of digital signal processing to apply. Type: `string`. Valid options: `raw`, `spectral-analysis`, `mfe`, `mfcc`, `spectrogram`, `image`, `flatten`, `organization` (the last one is available only if the full enterprise search space is enabled).
- `axes`: Name of the data axes in the project. Type: `array` of `string`.
- `implementationVersion`: Version of the DSP method used. Type: `number`.
- `title`: Optional title for the DSP block. Type: `string`.
For the `image` type:

- `channels`: Color channels used in the image. Type: `array` of `string`. Valid options: `RGB`, `Grayscale`.
For the `spectral-analysis` type:

- `fft-length`: Length of the Fast Fourier Transform applied. Type: `array` of `number`. Enterprise-specific valid options: `[16, 64]`.
- `scale-axes`: Scale factor for the axes. Type: `array` of `number`. Enterprise-specific valid options: `[1]`.
- `filter-type`: Type of filter applied. Type: `array` of `string`. Valid options: `low`, `high`, `none`.
- `filter-cutoff`: Cutoff frequency for the filter. Type: `array` of `number`.
- `filter-order`: Order of the filter. Type: `array` of `number`.
- `do-log`: Whether to apply logarithmic scaling. Type: `array` of `boolean`.
- `do-fft-overlap`: Whether to overlap FFT windows. Type: `array` of `boolean`.
- `spectral-peaks-count`: Number of spectral peaks to identify. Type: `array` of `number`.
- `spectral-peaks-threshold`: Threshold for identifying spectral peaks. Type: `array` of `number`.
- `spectral-power-edges`: Defines the spectral edges for power calculation. Type: `array` of `string`.
- `autotune`: Whether to enable automatic tuning of parameters. Type: `array` of `boolean`.
- `analysis-type`: Type of spectral analysis. Type: `array` of `string`. Valid options: `FFT`, `Wavelet`.
- `wavelet-level`: Level of wavelet transformation. Type: `array` of `number`.
- `wavelet`: Type of wavelet used. Type: `array` of `string`.
- `extra-low-freq`: Whether to include extra low frequencies in the analysis. Type: `array` of `boolean`.
- `input-decimation-ratio`: Ratio for input decimation. Type: `array`.
For the `mfcc` and `mfe` types:

- `num_filters`: Number of filters used in MFCC or MFE. Type: `array` of `number`.
- `num_cepstral`: Number of cepstral coefficients in MFCC. Type: `array` of `number`.
- `win_size`: Window size for the analysis. Type: `array` of `number`.
- `low_frequency`: Lower bound of the frequency range. Type: `array` of `number`.
- `high_frequency`: Upper bound of the frequency range. Type: `array` of `number`.
- `pre_cof`: Pre-emphasis coefficient. Type: `array` of `number`.
- `pre_shift`: Shift applied before analysis. Type: `array` of `number`.
For the `raw` type:

- `scale-axes`: Scale factor for the axes. Type: `array` of `number`.
- `average`, `minimum`, `maximum`, `rms`, `stddev`, `skewness`, `kurtosis`: Statistical measures applied to raw data. Type: `array` of `boolean`.
For the `custom` or `organization` type:

- `organizationId`: Identifier for the organization. Type: `number`.
- `organizationDSPId`: Specific DSP ID within the organization. Type: `number`.
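As a sketch of the DSP options above, a hypothetical `spectral-analysis` block (IDs, axis names, and values are placeholders for your project's own) could be constrained like this:

```json
{
  "dspBlocks": [
    {
      "id": 2,
      "type": "spectral-analysis",
      "axes": ["accX", "accY", "accZ"],
      "analysis-type": ["FFT"],
      "fft-length": [16, 64],
      "filter-type": ["low", "none"],
      "do-log": [true]
    }
  ]
}
```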
`learnBlocks`

- `id`: Unique identifier for the learning block. Type: `number`.
- `type`: The type of machine learning model to use. Type: `string`. Valid options: `keras`, `keras-regression`, `keras-transfer-regression`, `keras-transfer-image`, `keras-transfer-kws`, `keras-object-detection`, `keras-transfer-other`, `keras-akida`, `keras-akida-transfer-image`, `keras-akida-object-detection`, `keras-visual-anomaly`.
- `dsp`: Links to DSP blocks by their IDs, indicating which DSP outputs are used as inputs for this learning model. Type: `array` of `array` of `number`.
- `title`: Optional title for the learning block. Type: `string`.
- `implementationVersion`: Version of the learning algorithm used. Type: `number`.
Dimension and Architecture

- `dimension`: Specifies the type of neural network architecture. Type: `array` of `string`. Valid options: `dense`, `conv1d`, `conv2d`.
- `dropout`: Specifies the dropout rate to prevent overfitting. Type: `array` of `number`.
- `denseBaseNeurons`, `denseNeurons`: Specifies the number of neurons in dense layers. Type: `array` of `number`.
- `denseLayers`: Specifies the number of dense layers. Type: `array` of `number`.
- `convBaseFilters`: Base number of filters in convolutional layers. Type: `array` of `number`.
- `convLayers`: Number of convolutional layers. Type: `array` of `number`.
Training Configuration

- `trainingCycles`: Number of training cycles. Type: `array` of `number`.
- `trainTestSplit`: The ratio of training to test data. Type: `array` of `number`.
- `autoClassWeights`: Whether to automatically adjust class weights. Type: `array` of `boolean`.
- `minimumConfidenceRating`: The minimum confidence threshold for class predictions. Type: `array` of `number`.
- `learningRate`: The learning rate for the optimizer. Type: `array` of `number`.
- `batchSize`: Number of samples per batch during training. Type: `array` of `number`.
Augmentation and Model Policies

- `augmentationPolicySpectrogram`: Defines the data augmentation strategies for spectrogram data. Type: `object` with fields:
  - `enabled`: Whether to apply augmentation. Type: `array` of `boolean`.
  - `gaussianNoise`: Level of Gaussian noise to add. Type: `array` of `string`. Valid options: `none`, `low`, `high`.
  - `timeMasking`: Extent of time masking to apply. Type: `array` of `string`. Valid options: `none`, `low`, `high`.
  - `freqMasking`: Extent of frequency masking to apply. Type: `array` of `string`. Valid options: `none`, `low`, `high`.
  - `warping`: Whether to apply time warping. Type: `array` of `boolean`.
- `augmentationPolicyImage`: Defines the data augmentation strategies for image data. Type: `array` of `string`. Valid options: `all` (apply all available image augmentations), `none` (do not apply any image augmentations).
Advanced Configurations

- `layers`: Specifies the configuration of each layer within the learning model. Type: `array` of `object`. Fields within each layer object:
  - `type`: The type of layer (e.g., `conv2d`, `dense`). Type: `string`.
  - `neurons`: Number of neurons for dense layers, or number of filters for convolutional layers. Type: `array` of `number`. Valid options: `[8, 16, 32, 64, 10, 20, 40]`; can vary depending on full EON Tuner search space availability.
  - `kernelSize`: Size of the kernel in convolutional layers. Type: `array` of `number`. Valid options: `[1, 3, 5]`, specific to the project's tuner space.
  - `dropoutRate`: Dropout rate for the layer to prevent overfitting. Type: `array` of `number`. Valid options: `[0.1, 0.25, 0.5]`, determined by the project settings.
  - `columns`: Optional field typically used in tabular data or custom setups. Type: `array` of `number`.
  - `stack`: Defines how many times the layer configuration should be repeated. Type: `array` of `number`.
  - `enabled`: Flag to enable or disable the layer. Type: `array` of `boolean`.
- `organizationModelId`: If using a custom model from an organization, this is the identifier. Type: `number`.
- `model`: Specifies the base model for transfer learning scenarios. Type: `array` of `string`. Valid options: `transfer_mobilenetv2_a35`, `transfer_mobilenetv2_a1`, `transfer_mobilenetv2_a05`, `transfer_mobilenetv1_a2_d100`, `transfer_mobilenetv1_a1_d100`, `transfer_mobilenetv1_a25_d100`, `transfer_mobilenetv2_160_a1`, `transfer_mobilenetv2_160_a75`, `transfer_mobilenetv2_160_a5`, `transfer_mobilenetv2_160_a35`, `fomo_mobilenet_v2_a01`, `fomo_mobilenet_v2_a35`, `object_ssd_mobilenet_v2_fpnlite_320x320`, `transfer_kws_mobilenetv1_a1_d100`, `transfer_kws_mobilenetv2_a35_d100`, `transfer_akidanet_imagenet_160_a50`, `transfer_akidanet_imagenet_224_a50`, `fomo_akidanet_a50`.
- `customValidationMetadataKey`: Key for custom metadata used in validation. Type: `array` of `string`.
- `profileInt8`: Specifies whether to use INT8 quantization. Type: `array` of `boolean`.
- `skipEmbeddingsAndMemory`: Whether to skip certain processing steps to optimize memory usage. Type: `array` of `boolean`.
- `useLearnedOptimizer`: Whether to use a learned optimizer during training. Type: `array` of `boolean`.
- `anomalyCapacity`: Specifies the model's capacity to handle anomalies. Type: `array` of `string`. Valid options: `low`, `medium`, `high`.
- `customParameters`: Allows for additional custom parameters if the full EON Tuner search space is enabled. Type: `array` of `object`.
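Putting a few of these fields together, a hypothetical `learnBlocks` entry sweeping a small convolutional search space (all IDs and values illustrative) might read:

```json
{
  "learnBlocks": [
    {
      "id": 3,
      "type": "keras",
      "dsp": [[2]],
      "dimension": ["conv1d"],
      "convBaseFilters": [16, 32],
      "convLayers": [2, 3],
      "dropout": [0.25, 0.5],
      "trainingCycles": [100],
      "learningRate": [0.005]
    }
  ]
}
```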
The actual availability of certain dimensions or options can depend on whether your project has full enterprise capabilities (`projectHasFullEonTunerSearchSpace`). This might unlock additional valid values or remove restrictions on certain fields.

Fields within `array` of `array` structures (like `dimension` or `window`) allow for multi-dimensional setups where each sub-array represents a different configuration that the EON Tuner can evaluate.
Example of a template where we constrained the search space to use 96x96 grayscale images to compare a neural network architecture with a transfer learning architecture using MobileNetv1 and v2:
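A hedged sketch of such a template's shape (IDs and exact model variants are illustrative; see the public project for the real template):

```json
{
  "inputBlocks": [
    { "id": 1, "type": "image", "dimension": [[96, 96]] }
  ],
  "dspBlocks": [
    { "id": 2, "type": "image", "axes": ["image"], "channels": ["Grayscale"] }
  ],
  "learnBlocks": [
    { "id": 3, "type": "keras", "dsp": [[2]], "dimension": ["conv2d"] },
    {
      "id": 4,
      "type": "keras-transfer-image",
      "dsp": [[2]],
      "model": ["transfer_mobilenetv1_a25_d100", "transfer_mobilenetv2_a35"]
    }
  ]
}
```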
Public project: Cars binary classifier - EON Tuner Search Space
Object detection models can use either bounding boxes (object location and size) or centroids (object location only). See the object detection documentation to learn more about the differences between these two task categories.
Example of a template where we search for object detection models using bounding boxes (e.g. MobileNet V2 SSD FPN-Lite):
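A hedged fragment of the learn block for such a template, using the SSD model identifier from the reference above (IDs illustrative):

```json
{
  "learnBlocks": [
    {
      "id": 3,
      "type": "keras-object-detection",
      "dsp": [[2]],
      "model": ["object_ssd_mobilenet_v2_fpnlite_320x320"]
    }
  ]
}
```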
Example of a template where we search for object detection models using centroids (e.g. FOMO):
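A hedged fragment of a centroid-based (FOMO) learn block, with illustrative IDs:

```json
{
  "learnBlocks": [
    {
      "id": 3,
      "type": "keras-object-detection",
      "dsp": [[2]],
      "model": ["fomo_mobilenet_v2_a01", "fomo_mobilenet_v2_a35"]
    }
  ]
}
```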
Should you wish to compare models using bounding boxes with models using centroids, you can customize the search space to include impulses for both model types.
Example of a template where we want to compare, on the one side, MFCC vs MFE pre-processing with a custom NN architecture and on the other side, keyword spotting transfer learning architecture:
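A hedged sketch of such a comparison (IDs, window values, and the custom NN fields are illustrative; see the public project for the real template):

```json
{
  "inputBlocks": [
    {
      "id": 1,
      "type": "time-series",
      "window": [{ "windowSizeMs": 1000, "windowIncreaseMs": 500 }]
    }
  ],
  "dspBlocks": [
    { "id": 2, "type": "mfe", "axes": ["audio"] },
    { "id": 3, "type": "mfcc", "axes": ["audio"] }
  ],
  "learnBlocks": [
    { "id": 4, "type": "keras", "dsp": [[2], [3]], "dimension": ["conv1d", "conv2d"] },
    {
      "id": 5,
      "type": "keras-transfer-kws",
      "dsp": [[2]],
      "model": ["transfer_kws_mobilenetv1_a1_d100"]
    }
  ]
}
```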
Public Project: Keywords Detection - EON Tuner Search Space
Example of a template using the Akida learning blocks for BrainChip's Akida architecture.
Public project: Akida Image Classification
Only available with Edge Impulse Professional and Enterprise Plans
Try our Professional Plan or FREE Enterprise Trial today.
The parameters set in the custom DSP block are automatically retrieved.
Example using a custom ToF (Time of Flight) pre-processing block:
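A hedged fragment showing how a custom organization DSP block is referenced (the `organizationId` and `organizationDSPId` values below are placeholders for your own organization's IDs):

```json
{
  "dspBlocks": [
    {
      "id": 2,
      "type": "organization",
      "organizationId": 1,
      "organizationDSPId": 10
    }
  ]
}
```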
Example using EfficientNet (available through a custom ML block) on a dataset containing images of 4 cats:
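A hedged fragment showing how a custom organization model could be referenced from a learn block (the `organizationModelId` value and the block `type` are placeholders; the appropriate type depends on your custom block):

```json
{
  "learnBlocks": [
    {
      "id": 3,
      "type": "keras-transfer-image",
      "dsp": [[2]],
      "organizationModelId": 12
    }
  ]
}
```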
The EON Tuner helps you find and select the best embedded machine learning model for your application within the constraints of your target device. The EON Tuner analyzes your input data, potential signal processing blocks, and neural network architectures - and gives you an overview of possible model architectures that will fit your chosen device's latency and memory requirements.
EON Tuner Search Space
For many projects, you will need to constrain the EON Tuner to use steps defined by your hardware, your customers, or your internal knowledge.
For example, you may be constrained to use a grayscale camera, your engineers may have already developed a dedicated digital signal processing method to pre-process your sensor data, or you may simply suspect that a particular neural network architecture will be better suited to a project.
In those cases, you can use the EON Tuner Search Space to define the scope of your project.
First, make sure you have an audio, motion, image classification, or object detection project in your Edge Impulse account to run the EON Tuner with. No projects yet? Follow one of our tutorials to get started:
Select the EON Tuner tab.
Click the Configure target button to select your model’s task category, target device, and time per inference (in ms).
Click on the Task category dropdown and select the use case unique to your motion, audio, object detection, or image classification project.
Click Save and then select Start EON Tuner.
Wait for the EON Tuner to finish running, then click the Select button next to your preferred DSP/neural network model architecture to save as your project’s primary blocks:
Click on the DSP and Neural Network tabs within your Edge Impulse project to see the parameters the EON Tuner has generated and selected for your dataset, use case, and target device hardware.
The EON Tuner performs end-to-end optimizations, from the digital signal processing (DSP) algorithm to the machine learning model, helping you find the ideal trade-off between these two blocks to achieve optimal performance on your target hardware. The unique features and options available in the EON Tuner are described below.
The EON Tuner currently supports three different types of sensor data: motion, images, and audio. From these, the tuner can optimize for different types of common applications or task categories.
The EON Tuner evaluates different configurations for creating samples from your dataset. For time series data, the tuner tests different sample window sizes and increment amounts. For image data, the tuner compares different image resolutions.
Different model architectures, hyper-parameters, and even data augmentation techniques are evaluated by the EON Tuner. The tuner combines these different neural networks with the processing and input options described above, and then compares the end-to-end performance of each variation.
During operation, the tuner first generates many variations of input, processing, and learning blocks. It then schedules training and testing of each variation. The top-level progress bar shows tests started (blue stripes) as well as completed tests (solid blue), relative to the total number of generated variations.
Detailed logs of the run are also available. To view them, click on the button next to Target shown below.
As results become available, they will appear in the tuner window. Each result shows the on-device performance and accuracy, as well as details on the input, processing, and learning blocks used. Clicking Select sets a result as your project's primary impulse, and from there you can view or modify the design in the Impulse Design tabs.
While the EON Tuner is running, you can filter results by job status, processing block, and learning block categories.
View options control what information is shown in the tuner results. You can choose which dataset is used when displaying model accuracy, as well as whether to show the performance of the unoptimized `float32` or the quantized `int8` version of the neural network.
Sorting options are available to find the parameters best suited to a given application or hardware target. For constrained devices, sort by RAM to show options with the smallest memory footprint, or sort by latency to find models with the lowest number of operations per inference. It's also possible to sort by label, finding the best model for identifying a specific class.
The selected sorting criteria will be shown in the top left corner of each result.
Log in to the Edge Impulse Studio and open a project.
Now you're ready to deploy your automatically configured Edge Impulse model to your target edge device!
The EON Tuner can directly analyze the performance on any device supported by Edge Impulse. If you are targeting a different device, select a similar class of processor or leave the target as the default. You'll have the opportunity to further refine the EON Tuner results to fit your specific target and application later.
Depending on the selected task category, the EON Tuner considers a variety of processing blocks when evaluating model architectures. The EON Tuner will test different parameters and configurations of these processing blocks.