Search space

For many projects, you will need to constrain the EON Tuner to use steps that are defined by your hardware, your customers, or your expertise.
For example:
  • Your project requires a grayscale camera because you have already purchased the hardware.
  • Your engineers have already spent hours developing a dedicated digital signal processing method that has been proven to work with your sensor data.
  • You suspect that a particular neural network architecture will be better suited to your project.
This is why we developed an extension of the EON Tuner: the EON Tuner Search Space.
Please read the EON Tuner documentation first to configure your Target, Dataset category, and desired Time per inference.

Templates

The Search Space works with templates. A template can be thought of as a config file in which you define your constraints. Although templates may seem hard to use at first, once you understand the core concept, this tool is extremely powerful!
Overview
A blank template looks like the following:
[
  {
    "inputBlocks": [],
    "dspBlocks": [],
    "learnBlocks": []
  }
]
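Before uploading a template, it can help to check that it is valid JSON and that every entry defines the three block arrays. Here is a minimal sketch of such a check; the `validate_template` helper is our own illustration, not part of the EON Tuner:

```python
import json

def validate_template(text):
    """Check that a Search Space template is valid JSON with the
    expected top-level shape: a list of objects, each defining the
    inputBlocks, dspBlocks, and learnBlocks arrays."""
    template = json.loads(text)
    assert isinstance(template, list), "top level must be a JSON array"
    for entry in template:
        for key in ("inputBlocks", "dspBlocks", "learnBlocks"):
            assert isinstance(entry.get(key), list), f"missing array: {key}"
    return template

blank = '[{"inputBlocks": [], "dspBlocks": [], "learnBlocks": []}]'
print(len(validate_template(blank)))  # 1 entry
```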

Load a template

To understand the core concepts, we recommend having a look at the available templates. We provide templates for different dataset categories as well as one for your current impulse if it has already been trained.
Load a template

Search parameters

Elements inside an array are treated as parameters. This means you can stack several combinations of inputBlocks, dspBlocks, and learnBlocks in your templates, and each block can contain several elements:
[
  {
    "inputBlocks": [],
    "dspBlocks": [],
    "learnBlocks": []
  },
  {
    "inputBlocks": [],
    "dspBlocks": [],
    "learnBlocks": []
  },
  ...
]
or
...
"inputBlocks": [
  {
    "type": "time-series",
    "window": [
      {"windowSizeMs": 300, "windowIncreaseMs": 67},
      {"windowSizeMs": 500, "windowIncreaseMs": 100}
    ]
  }
],
...
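Conceptually, each array acts as one axis of a grid search: the tuner evaluates combinations drawn from the listed values. As an illustration only (the `expand` helper below is ours, not the EON Tuner's internal logic), here is how the two-window block above expands into concrete variants:

```python
import itertools

def expand(block):
    """Expand every list-valued field of a block into the Cartesian
    product of its values, yielding one concrete variant per combination."""
    list_keys = [k for k, v in block.items() if isinstance(v, list)]
    fixed = {k: v for k, v in block.items() if k not in list_keys}
    for combo in itertools.product(*(block[k] for k in list_keys)):
        yield {**fixed, **dict(zip(list_keys, combo))}

block = {
    "type": "time-series",
    "window": [
        {"windowSizeMs": 300, "windowIncreaseMs": 67},
        {"windowSizeMs": 500, "windowIncreaseMs": 100},
    ],
}
variants = list(expand(block))
print(len(variants))  # 2: one per window setting
```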
You can easily add pre-defined blocks using the + Add block section.
Add vision block
Add time series block

Examples

Image

Here is an example of a template where we constrain the search space to 96x96 grayscale images and compare a custom neural network architecture with a transfer learning architecture using MobileNetV1 and MobileNetV2:
[
  {
    "inputBlocks": [
      {
        "type": "image",
        "dimension": [[96, 96]],
        "resizeMode": ["squash", "fit-short"]
      }
    ],
    "dspBlocks": [
      {
        "type": "image",
        "id": 1,
        "implementationVersion": 1,
        "channels": ["Grayscale"]
      }
    ],
    "learnBlocks": [
      {
        "type": "keras",
        "dsp": [[1]],
        "trainingCycles": [20],
        "learningRate": [0.0005],
        "minimumConfidenceRating": [0.6],
        "trainTestSplit": [0.2],
        "layers": [
          [
            {
              "type": "conv2d",
              "neurons": [4, 6, 8],
              "kernelSize": [3],
              "stack": [1]
            },
            {
              "type": "conv2d",
              "neurons": [3, 4, 5],
              "kernelSize": [3],
              "stack": [1]
            },
            {"type": "flatten"},
            {"type": "dropout", "dropoutRate": [0.25]},
            {"type": "dense", "neurons": [4, 6, 8, 16]}
          ]
        ]
      },
      {
        "type": "keras-transfer-image",
        "dsp": [[1]],
        "model": [
          "transfer_mobilenetv2_a35",
          "transfer_mobilenetv2_a1",
          "transfer_mobilenetv2_a05",
          "transfer_mobilenetv1_a2_d100",
          "transfer_mobilenetv1_a1_d100",
          "transfer_mobilenetv1_a25_d100"
        ],
        "denseNeurons": [16, 32, 64],
        "dropout": [0.1, 0.25, 0.5],
        "augmentationPolicyImage": ["all", "none"],
        "learningRate": [0.0005],
        "trainingCycles": [20]
      }
    ]
  }
]
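To get a feel for how quickly the search space grows, you can count the combinations each block contributes. For the transfer-learning block above, the multi-valued arrays alone multiply out as follows (a back-of-the-envelope count, assuming the tuner tries the full Cartesian product; single-valued arrays like learningRate contribute a factor of 1):

```python
# Array lengths from the keras-transfer-image block above
models = 6          # six MobileNet variants
dense_neurons = 3   # [16, 32, 64]
dropout = 3         # [0.1, 0.25, 0.5]
augmentation = 2    # ["all", "none"]

combinations = models * dense_neurons * dropout * augmentation
print(combinations)  # 108
```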

Audio

Here is an example of a template where we compare, on the one hand, MFCC vs. MFE pre-processing with a custom NN architecture and, on the other hand, a keyword-spotting transfer learning architecture:
[
  {
    "inputBlocks": [
      {
        "type": "time-series",
        "window": [
          {"windowSizeMs": 1000, "windowIncreaseMs": 250},
          {"windowSizeMs": 1000, "windowIncreaseMs": 500},
          {"windowSizeMs": 1000, "windowIncreaseMs": 1000}
        ],
        "frequencyHz": [16000],
        "padZeros": [true]
      }
    ],
    "dspBlocks": [
      {
        "id": 1,
        "type": "mfcc",
        "frame_length": [0.02, 0.032, 0.05],
        "frame_stride_pct": [0.5, 1],
        "num_filters": [32, 40],
        "num_cepstral": [13],
        "fft_length": [256],
        "win_size": [101],
        "low_frequency": [300],
        "high_frequency": [0],
        "pre_cof": [0.98]
      },
      {
        "id": 2,
        "type": "mfe",
        "frame_length": [0.02, 0.032, 0.05],
        "frame_stride_pct": [0.5, 1],
        "noise_floor_db": [-72, -52, -32],
        "num_filters": [32],
        "fft_length": [256],
        "low_frequency": [300],
        "high_frequency": [0]
      }
    ],
    "learnBlocks": [
      {
        "type": "keras",
        "dsp": [[1], [2]],
        "dimension": ["conv1d", "conv2d"],
        "convBaseFilters": [8, 16, 32],
        "convLayers": [2, 3, 4],
        "dropout": [0.25, 0.5],
        "augmentationPolicySpectrogram": {
          "enabled": [true, false],
          "gaussianNoise": ["low"],
          "timeMasking": ["low"],
          "warping": [false]
        },
        "learningRate": [0.005],
        "trainingCycles": [100]
      }
    ]
  },
  {
    "inputBlocks": [
      {
        "type": "time-series",
        "windowSizeMs": [1000],
        "windowIncreasePct": [0.5],
        "frequencyHz": [16000],
        "padZeros": [true]
      }
    ],
    "dspBlocks": [{"type": "mfe"}],
    "learnBlocks": [
      {
        "type": "keras-transfer-kws",
        "model": [
          "transfer_kws_mobilenetv1_a1_d100",
          "transfer_kws_mobilenetv2_a35_d100"
        ],
        "learningRate": [0.01],
        "trainingCycles": [30]
      }
    ]
  }
]

Custom DSP and ML Blocks

Only available for enterprise customers
The EON Tuner can use your organization's custom DSP and ML blocks: add them to your search space template just like the built-in blocks. Organizational features are only available for enterprise customers. View our pricing for more information.
Add custom block

Custom DSP block

The parameters set in the custom DSP block are automatically retrieved.
Example using a custom ToF (Time of Flight) pre-processing block:
[
  {
    "inputBlocks": [
      {
        "type": "time-series",
        "window": [
          {"windowSizeMs": 300, "windowIncreaseMs": 67}
        ]
      }
    ],
    "dspBlocks": [
      {
        "type": "organization",
        "organizationId": 1,
        "organizationDSPId": 613,
        "max_distance": [1800, 900, 450],
        "min_distance": [100, 200, 300],
        "std": [2]
      }
    ],
    "learnBlocks": [
      {
        "type": "keras",
        "trainingCycles": [50],
        "dimension": ["conv2d"],
        "convBaseFilters": [8, 16, 32],
        "convLayers": [2, 3, 4],
        "dropout": [0.25, 0.5]
      }
    ]
  }
]

Custom learning block

Example using EfficientNet (available through a custom ML block) on a dataset containing images of 4 cats:
[
  {
    "inputBlocks": [
      {
        "type": "image",
        "dimension": [
          [32, 32],
          [64, 64],
          [96, 96],
          [128, 128],
          [160, 160],
          [224, 224],
          [320, 320]
        ]
      }
    ],
    "dspBlocks": [{"type": "image", "channels": ["RGB"]}],
    "learnBlocks": [
      {
        "type": "keras-transfer-image",
        "organizationModelId": 69,
        "title": "EfficientNetB0",
        "learningRate": [0.001],
        "trainingCycles": [30],
        "trainTestSplit": [0.05]
      },
      {
        "type": "keras-transfer-image",
        "organizationModelId": 70,
        "title": "EfficientNetB1",
        "learningRate": [0.001],
        "trainingCycles": [30],
        "trainTestSplit": [0.05]
      },
      {
        "type": "keras-transfer-image",
        "organizationModelId": 71,
        "title": "EfficientNetB2",
        "learningRate": [0.001],
        "trainingCycles": [30],
        "trainTestSplit": [0.05]
      }
    ]
  }
]