Search space

For many projects, you will need to constrain the EON Tuner to steps that are defined by your hardware, your customers, or your expertise.

For example:

  • Your project requires a grayscale camera because you have already purchased the hardware.

  • Your engineers have already spent hours developing a dedicated digital signal processing method that is proven to work with your sensor data.

  • You suspect that a particular neural network architecture will be better suited to your project.

This is why we developed an extension of the EON Tuner: the EON Tuner Search Space.

Please read the EON Tuner documentation first to configure your Target, Task category and desired Time per inference.

Understanding Search Space Configuration

The EON Tuner Search Space allows you to define the structure and constraints of your machine learning projects through the use of templates.

Templates

The Search Space works with templates. A template can be thought of as a config file where you define your constraints. Although templates may seem hard to use at first, once you understand the core concept, this tool is extremely powerful!

Overview

A blank template looks like the following:

[
  {
    "inputBlocks": [],
    "dspBlocks": [],
    "learnBlocks": []
  }
]

Load a template

To understand the core concepts, we recommend having a look at the available templates. We provide templates for different task categories as well as one for your current impulse if it has already been trained.

Search parameters

Elements inside an array are treated as search parameters. This means you can stack several combinations of inputBlocks, dspBlocks, and learnBlocks in your templates, and each block can contain several elements:

[
  {
    "inputBlocks": [],
    "dspBlocks": [],
    "learnBlocks": []
  },
  {
    "inputBlocks": [],
    "dspBlocks": [],
    "learnBlocks": []
  },
  ...
]

or

...
"inputBlocks": [
  {
    "type": "time-series",
    "window": [
      {"windowSizeMs": 300, "windowIncreaseMs": 67},
      {"windowSizeMs": 500, "windowIncreaseMs": 100}
    ]
  }
],
...

You can easily add pre-defined blocks using the + Add block section.

Add vision block
Add time series block

Format

Input Blocks (inputBlocks)

Common Fields for All Input Blocks

  • id: Unique identifier for the block.

    • Type: number

  • type: The nature of the input data.

    • Type: string

    • Valid Options: time-series, image

  • title: Optional descriptive title for the block.

    • Type: string

Specific Fields for Image Type Input Blocks

  • dimension: Dimensions of the images.

    • Type: array of array of number

    • Example Valid Values: [[32, 32], [64, 64], [96, 96], [128, 128], [160, 160], [224, 224], [320, 320]]

    • Enterprise Option: All dimensions available with full enterprise search space.

  • resizeMode: How the image should be resized to fit the specified dimensions.

    • Type: array of string

    • Valid Options: squash, fit-short, fit-long

  • resizeMethod: Method used for resizing the image.

    • Type: array of string

    • Valid Options: nearest, lanczos3

  • cropAnchor: Position on the image where cropping is anchored.

    • Type: array of string

    • Valid Options: top-left, top-center, top-right, middle-left, middle-center, middle-right, bottom-left, bottom-center, bottom-right
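
Putting these image-specific fields together, an inputBlocks entry might look like the following sketch. The values are illustrative picks from the valid options listed above, not a recommendation:

...
"inputBlocks": [
  {
    "type": "image",
    "dimension": [[96, 96], [160, 160]],
    "resizeMode": ["squash"],
    "resizeMethod": ["nearest", "lanczos3"],
    "cropAnchor": ["middle-center"]
  }
],
...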

Specific Fields for Time-Series Type Input Blocks

  • window: Details about the windowing approach for time-series data.

    • Type: array of object with fields:

      • windowSizeMs: The size of the window in milliseconds.

        • Type: number

      • windowIncreaseMs: The step size to increase the window in milliseconds.

        • Type: number

  • windowSizeMs: Size of the window in milliseconds if not specified in the window field.

    • Type: array of number

  • windowIncreasePct: Percentage to increase the window size each step.

    • Type: array of number

  • frequencyHz: Sampling frequency in Hertz.

    • Type: array of number

  • padZeros: Whether to pad the time-series data with zeros.

    • Type: array of boolean
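
As an alternative to the window objects shown earlier, the flat fields above can be combined in a single time-series entry. The sketch below uses illustrative values; in particular, the windowIncreasePct values assume the field is expressed as a percentage:

...
"inputBlocks": [
  {
    "type": "time-series",
    "windowSizeMs": [300, 500],
    "windowIncreasePct": [20, 50],
    "frequencyHz": [62.5, 100],
    "padZeros": [true, false]
  }
],
...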

Additional Notes

  • The actual availability of certain dimensions or options can depend on whether your project has full enterprise capabilities (projectHasFullEonTunerSearchSpace). This might unlock additional valid values or remove restrictions on certain fields.

  • Fields within array of array structures (like dimension or window) allow for multi-dimensional setups where each sub-array represents a different configuration that the EON Tuner can evaluate.
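
For instance, a dimension field containing two sub-arrays asks the EON Tuner to evaluate two separate image-size configurations, 96x96 and 160x160:

...
"dimension": [
  [96, 96],
  [160, 160]
],
...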

Examples

Image classification

Example of a template where we constrain the search space to 96x96 grayscale images in order to compare a custom convolutional neural network with transfer learning architectures based on MobileNetV1 and MobileNetV2:

Public project: Cars binary classifier - EON Tuner Search Space

[
  {
    "inputBlocks": [
      {
        "type": "image",
        "dimension": [[96, 96]],
        "resizeMode": ["squash", "fit-short"]
      }
    ],
    "dspBlocks": [
      {
        "type": "image",
        "id": 1,
        "implementationVersion": 1,
        "channels": ["Grayscale"]
      }
    ],
    "learnBlocks": [
      {
        "type": "keras",
        "dsp": [[1]],
        "trainingCycles": [20],
        "learningRate": [0.0005],
        "minimumConfidenceRating": [0.6],
        "trainTestSplit": [0.2],
        "layers": [
          [
            {
              "type": "conv2d",
              "neurons": [4, 6, 8],
              "kernelSize": [3],
              "stack": [1]
            },
            {
              "type": "conv2d",
              "neurons": [3, 4, 5],
              "kernelSize": [3],
              "stack": [1]
            },
            {"type": "flatten"},
            {"type": "dropout", "dropoutRate": [0.25]},
            {"type": "dense", "neurons": [4, 6, 8, 16]}
          ]
        ]
      },
      {
        "type": "keras-transfer-image",
        "dsp": [[1]],
        "model": [
          "transfer_mobilenetv2_a35",
          "transfer_mobilenetv2_a1",
          "transfer_mobilenetv2_a05",
          "transfer_mobilenetv1_a2_d100",
          "transfer_mobilenetv1_a1_d100",
          "transfer_mobilenetv1_a25_d100"
        ],
        "denseNeurons": [16, 32, 64],
        "dropout": [0.1, 0.25, 0.5],
        "augmentationPolicyImage": ["all", "none"],
        "learningRate": [0.0005],
        "trainingCycles": [20]
      }
    ]
  }
]

Custom DSP and ML Blocks

Only available with Edge Impulse Professional and Enterprise Plans

Try our Professional Plan or FREE Enterprise Trial today.

Add custom block

Custom DSP block

The parameters set in the custom DSP block are automatically retrieved.

Example using a custom ToF (Time of Flight) pre-processing block:

[
  {
    "inputBlocks": [
      {
        "type": "time-series",
        "window": [
          {"windowSizeMs": 300, "windowIncreaseMs": 67}
        ]
      }
    ],
    "dspBlocks": [
      {
        "type": "organization",
        "organizationId": 1,
        "organizationDSPId": 613,
        "max_distance": [1800, 900, 450],
        "min_distance": [100, 200, 300],
        "std": [2]
      }
    ],
    "learnBlocks": [
      {
        "type": "keras",
        "trainingCycles": [50],
        "dimension": ["conv2d"],
        "convBaseFilters": [8, 16, 32],
        "convLayers": [2, 3, 4],
        "dropout": [0.25, 0.5]
      }
    ]
  }
]

Custom learning block

Example using EfficientNet (available through a custom ML block) on a dataset containing images of 4 cats:

[
  {
    "inputBlocks": [
      {
        "type": "image",
        "dimension": [
          [32, 32],
          [64, 64],
          [96, 96],
          [128, 128],
          [160, 160],
          [224, 224],
          [320, 320]
        ]
      }
    ],
    "dspBlocks": [{"type": "image", "channels": ["RGB"]}],
    "learnBlocks": [
      {
        "type": "keras-transfer-image",
        "organizationModelId": 69,
        "title": "EfficientNetB0",
        "learningRate": [0.001],
        "trainingCycles": [30],
        "trainTestSplit": [0.05]
      },
      {
        "type": "keras-transfer-image",
        "organizationModelId": 70,
        "title": "EfficientNetB1",
        "learningRate": [0.001],
        "trainingCycles": [30],
        "trainTestSplit": [0.05]
      },
      {
        "type": "keras-transfer-image",
        "organizationModelId": 71,
        "title": "EfficientNetB2",
        "learningRate": [0.001],
        "trainingCycles": [30],
        "trainTestSplit": [0.05]
      }
    ]
  }
]
