Search space
For many projects, you will need to constrain the EON Tuner to use steps that are defined by your hardware, your customers, or your expertise.
For example:
Your project requires a grayscale camera because you have already purchased the hardware.
Your engineers have already spent hours working on a dedicated digital signal processing method that has been proven to work with your sensor data.
You suspect that a particular neural network architecture will be better suited to your project.
This is why we developed an extension of the EON Tuner: the EON Tuner Search Space.
Before you begin, please read the documentation on configuring your Target, Task category, and desired Time per inference.
The EON Tuner Search Space allows you to define the structure and constraints of your machine learning projects through the use of templates.
The Search Space works with templates. A template can be considered a config file where you define your constraints. Although templates may seem hard to use at first, once you understand the core concepts this tool is extremely powerful!
A blank template looks like the following:
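Based on the block types described below, a blank template has this general shape (a minimal sketch, not an exhaustive schema):

```json
{
  "inputBlocks": [],
  "dspBlocks": [],
  "learnBlocks": []
}
```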
To understand the core concepts, we recommend having a look at the available templates. We provide templates for different task categories as well as one for your current impulse if it has already been trained.
Elements inside an array are considered parameters. This means you can stack several combinations of `inputBlocks`, `dspBlocks`, and `learnBlocks` in your templates, and each block can contain several elements.
You can easily add pre-defined blocks using the + Add block section.
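For instance, to have the EON Tuner evaluate two alternative input configurations in a single search, you can stack two entries inside `inputBlocks` (a hypothetical sketch using the fields documented below):

```json
{
  "inputBlocks": [
    { "id": 1, "type": "image", "dimension": [[96, 96]] },
    { "id": 2, "type": "image", "dimension": [[160, 160], [224, 224]] }
  ],
  "dspBlocks": [],
  "learnBlocks": []
}
```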
Input Blocks (`inputBlocks`)

Common Fields for All Input Blocks

- `id`: Unique identifier for the block.
  - Type: number
- `type`: The nature of the input data.
  - Type: string
  - Valid Options: `time-series`, `image`
- `title`: Optional descriptive title for the block.
  - Type: string
Specific Fields for Image Type Input Blocks

- `dimension`: Dimensions of the images.
  - Type: array of array of number
  - Example Valid Values: `[[32, 32], [64, 64], [96, 96], [128, 128], [160, 160], [224, 224], [320, 320]]`
  - Enterprise Option: All dimensions are available with the full enterprise search space.
- `resizeMode`: How the image should be resized to fit the specified dimensions.
  - Type: array of string
  - Valid Options: `squash`, `fit-short`, `fit-long`
- `resizeMethod`: Method used for resizing the image.
  - Type: array of string
  - Valid Options: `nearest`, `lanczos3`
- `cropAnchor`: Position on the image where cropping is anchored.
  - Type: array of string
  - Valid Options: `top-left`, `top-center`, `top-right`, `middle-left`, `middle-center`, `middle-right`, `bottom-left`, `bottom-center`, `bottom-right`
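Putting the image fields together, an image input block entry could look like this (a sketch; the values shown are illustrative choices from the valid options above):

```json
{
  "id": 1,
  "type": "image",
  "title": "Camera input",
  "dimension": [[64, 64], [96, 96]],
  "resizeMode": ["squash", "fit-short"],
  "resizeMethod": ["nearest", "lanczos3"],
  "cropAnchor": ["middle-center"]
}
```

Each array here lists the candidate values the EON Tuner may combine and evaluate.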
Specific Fields for Time-Series Type Input Blocks

- `window`: Details about the windowing approach for time-series data.
  - Type: array of object with fields:
    - `windowSizeMs`: The size of the window in milliseconds.
      - Type: number
    - `windowIncreaseMs`: The step size by which to increase the window, in milliseconds.
      - Type: number
- `windowSizeMs`: Size of the window in milliseconds, if not specified in the `window` field.
  - Type: array of number
- `windowIncreasePct`: Percentage by which to increase the window size each step.
  - Type: array of number
- `frequencyHz`: Sampling frequency in Hertz.
  - Type: array of number
- `padZeros`: Whether to pad the time-series data with zeros.
  - Type: array of boolean
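Similarly, a time-series input block entry could be sketched as follows (illustrative values only; the title and numbers are assumptions):

```json
{
  "id": 1,
  "type": "time-series",
  "title": "Accelerometer input",
  "window": [
    { "windowSizeMs": 1000, "windowIncreaseMs": 500 },
    { "windowSizeMs": 2000, "windowIncreaseMs": 1000 }
  ],
  "frequencyHz": [62.5, 100],
  "padZeros": [true]
}
```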
The actual availability of certain dimensions or options can depend on whether your project has full enterprise capabilities (`projectHasFullEonTunerSearchSpace`). This may unlock additional valid values or remove restrictions on certain fields.

Fields within array of array structures (such as `dimension` or `window`) allow for multi-dimensional setups where each sub-array represents a different configuration for the EON Tuner to evaluate.
Image classification
Example of a template where we constrained the search space to use 96x96 grayscale images to compare a neural network architecture with a transfer learning architecture using MobileNetv1 and v2:
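The full template is not reproduced here, but the input-block portion of such a constraint can be sketched with the fields documented above (the DSP and learn blocks follow the same stacking pattern; their field schemas, including the grayscale and MobileNet settings, are documented separately):

```json
{
  "inputBlocks": [
    { "id": 1, "type": "image", "dimension": [[96, 96]] }
  ]
}
```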
The parameters set in the custom DSP block are automatically retrieved.
Example using a custom ToF (Time of Flight) pre-processing block:
Example using EfficientNet (available through a custom ML block) on a dataset containing images of 4 cats:
Object detection models can use either bounding boxes (object location and size) or centroids (object location only). See the object detection documentation to learn more about the differences between these two task categories.