parameters.json


The parameters.json file is included at the root of the directory of a custom block. It is used to describe the block itself and identify the parameters available for its configuration. The parameters defined in this file are the input options rendered for the block in Studio and passed into the block as arguments when the block is run.

File structure

The file can be considered in two sections: a header section and a parameters section. The header section identifies the block type and its associated metadata; the metadata required varies by block type. This information is followed by an array of parameter items.

Custom parameters are not available for deployment blocks

type AIActionBlockParametersJson = {
    version: 1,
    type: 'ai-action',
    info: {
        name: string,
        description: string,
        requiredEnvVariables: string[] | undefined;
        operatesOn: ['images_object_detection' | 'images_single_label' | 'audio' | 'other'] | undefined;
    },
    parameters: DSPParameterItem[];
};
type DeployBlockParametersJson = {
    version: 1,
    type: 'deploy',
    info: {
        name: string,
        description: string,
        category?: 'library' | 'firmware';
        integrateUrl?: string,
        cliArguments: string,
        supportsEonCompiler: boolean,
        mountLearnBlock: boolean,
        showOptimizations: boolean,
        privileged?: boolean,
    },
};
type MachineLearningBlockParametersJson = {
    version: 1,
    type: 'machine-learning',
    info: {
        name: string,
        description: string,
        operatesOn?: 'object_detection' | 'audio' | 'image' | 'regression' | 'other';
        objectDetectionLastLayer?: 'mobilenet-ssd' | 'fomo' | 'yolov2-akida' | 'yolov5' | 'yolov5v5-drpai' | 'yolox' | 'yolov7' | 'tao-retinanet' | 'tao-ssd' | 'tao-yolov3' | 'tao-yolov4';
        imageInputScaling?: '0..1' | '-1..1' | '-128..127' | '0..255' | 'torch' | 'bgr-subtract-imagenet-mean';
        indRequiresGpu?: boolean,
        repositoryUrl?: string,
        customModelVariants?: {
            'key': string,
            'name': string,
            'inferencingEntrypoint': string,
            'profilingEntrypoint'?: string,
            'modelFiles'?: {
                'id': string,
                'name': string,
                'type': 'binary' | 'json' | 'text';
                'description': string,
            }[],
        }[],
        displayCategory?: 'classical' | 'tao';
    },
    parameters: DSPParameterItem[];
};
type DSPBlockParametersJson = {
    version: 1,
    type: 'dsp',
    info: {
        type: string,
        title: string,
        author: string,
        description: string,
        name: string,
        preferConvolution: boolean,
        convolutionColumns?: 'axes' | string;
        convolutionKernelSize?: number,
        cppType: string,
        visualization: 'dimensionalityReduction' | undefined;
        experimental: boolean,
        hasTfliteImplementation: boolean,
        latestImplementationVersion: number,
        hasImplementationVersion: boolean,
        hasFeatureImportance: boolean,
        hasAutoTune?: boolean,
        minimumVersionForAutotune?: number,
        usesState?: boolean,
        axes: {
            name: string,
            description: string,
            optional?: boolean,
        }[] | undefined;
        port?: number,
    }
    parameters: {
        group: string,
        items: DSPParameterItem[];
    }[],
};
type SyntheticDataBlockParametersJson = {
    version: 1,
    type: 'synthetic-data',
    info: {
        name: string,
        description: string,
        requiredEnvVariables: string[] | undefined;
    },
    parameters: DSPParameterItem[];
};
type TransformBlockParametersJson = {
    version: 1,
    type: 'transform',
    info: {
        name: string,
        description: string,
        operatesOn: 'file' | 'directory' | 'standalone' | undefined;
        transformMountpoints: {
            bucketId: number,
            mountPoint: string,
        }[] | undefined;
        indMetadata: boolean | undefined;
        cliArguments: string | undefined;
        allowExtraCliArguments: boolean | undefined;
        showInDataSources: boolean | undefined;
        showInCreateTransformationJob: boolean | undefined;
        requiredEnvVariables: string[] | undefined;
    },
    parameters: DSPParameterItem[];
};
type DSPParameterItem = {
    // Rendered as the label
    name: string,
    // Default value
    value: string | number | boolean;
    // Type of UI element to render
    type: 'string' | 'int' | 'float' | 'select' | 'boolean' | 'bucket' | 'dataset' | 'flag' | 'secret';
    // Optional help text (rendered as a help icon, text is shown on hover)
    help?: string,
    // Parameter that maps back to your block (no spaces allowed)
    param: string,
    // When type is "select" lists all options for the dropdown menu
    // you can either pass in an array of strings, or a list of objects
    // (if you want to customize the label)
    valid?: (string | { label: string, value: string })[];
    // If this is set, the field is rendered as readonly with the text "Click to set"
    // when clicked the UI changes to a normal text box.
    optional?: boolean,
    // Whether the field should be rendered as readonly.
    // These fields are shown, but cannot be changed.
    readonly?: boolean,
    // If set, this item is only shown if the implementation version of the block matches
    // (only for processing blocks)
    showForImplementationVersion: number[] | undefined;
    // Show/hide the item depending on another parameter
    showIf: ({
        parameter: string,
        operator: 'eq' | 'neq',
        value: string,
    }) | undefined;
    // Processing blocks only. If set, a macro is created like:
    // #define EI_DSP_PARAMS_BLOCKCPPTYPE_PARAM     VALUE
    createMacro?: boolean,
    // When type is "select" the value passed into your block will be a string,
    // you can use configType to override the type (used during deployment only)
    configType?: string,
    // (Optional) UX section to show parameter in.
    section?: 'advanced' | 'modelProfiling';
    // Only valid for type "string". If set to true, renders a multi-line text area.
    multiline?: boolean,
    // If set, shows a hint about the input format below the input. Use this
    // sparingly, as it clutters the UI.
    hint?: string,
    // Sets the placeholder text on the input element (for types "string", "int", "float" and "secret")
    placeholder?: string,
};

File examples

Below you will find full examples of parameter files for the various types of blocks.

AI labeling block

{
    "version": 1,
    "type": "ai-action",
    "info": {
        "name": "Bounding box labeling with OWL-ViT",
        "description": "Zero-shot object detector to automatically label objects using bounding boxes with OWL-ViT. To detect more complex objects you can combine this block with 'Bounding box re-labeling with GPT-4o'. First, roughly find objects using this block, then re-label (or remove) bounding boxes using the GPT4o block.",
        "requiredEnvVariables": [
            "BEAM_ENDPOINT",
            "BEAM_ACCESS_KEY"
        ],
        "operatesOn": [
            "images_object_detection"
        ]
    },
    "parameters": [
        {
            "name": "Prompt",
            "value": "A person (person, 0.2)",
            "type": "string",
            "help": "A prompt specifying the images to label. Separate multiple objects with a newline. You can specify the label and the min. confidence rating in the parenthesis.",
            "param": "prompt",
            "multiline": true,
            "placeholder": "A prompt specifying the images to label. Separate multiple objects with a newline. You can specify the label and the min. confidence rating in the parenthesis.",
            "hint": "Separate multiple objects with a newline. You can specify the label and the min. confidence rating in the parenthesis (e.g. 'A person (person, 0.2)')."
        },
        {
            "name": "Delete existing bounding boxes",
            "value": "no",
            "type": "select",
            "valid": [
                { "label": "No", "value": "no" },
                { "label": "Only if they match any labels in the prompt", "value": "matching-prompt" },
                { "label": "Yes", "value": "yes" }
            ],
            "param": "delete_existing_bounding_boxes"
        },
        {
            "name": "Ignore objects smaller than (%)",
            "optional": true,
            "value": 0,
            "type": "float",
            "param": "ignore-objects-smaller-than",
            "help": "Any objects where the area is smaller than X% of the whole image will be ignored"
        },
        {
            "name": "Ignore objects larger than (%)",
            "optional": true,
            "value": 100,
            "type": "float",
            "param": "ignore-objects-larger-than",
            "help": "Any objects where the area is larger than X% of the whole image will be ignored"
        },
        {
            "name": "Non-max suppression",
            "help": "Deduplicate boxes via non-max suppression (NMS)",
            "value": true,
            "type": "flag",
            "param": "nms"
        },
        {
            "name": "NMS IoU threshold",
            "help": "Threshold for non-max suppression",
            "value": 0.2,
            "type": "float",
            "param": "nms-iou-threshold",
            "showIf": {
                "parameter": "nms",
                "operator": "eq",
                "value": "true"
            }
        }
    ]
}
Deployment block

{
    "version": 1,
    "type": "deploy",
    "info": {
        "name": "Build Linux app",
        "description": "An example custom deployment block to build a standalone Linux application",
        "category": "firmware",
        "mountLearnBlock": false,
        "supportsEonCompiler": true,
        "showOptimizations": true,
        "cliArguments": "",
        "privileged": false,
        "integrateUrl": "https://docs.edgeimpulse.com/docs"
    }
}
Machine learning block

{
    "version": 1,
    "type": "machine-learning",
    "info": {
        "name": "Keras multi-layer perceptron",
        "description": "Demonstration of a simple Keras custom learn block with CUDA drivers that can run on both CPU and GPU.",
        "indRequiresGpu": false,
        "operatesOn": "other",
        "repositoryUrl": "https://github.com/edgeimpulse/example-custom-ml-block-keras"
    },
    "parameters": [
        {
            "name": "Number of training cycles",
            "value": 30,
            "type": "int",
            "help": "Number of epochs to train the neural network on.",
            "param": "epochs"
        },
        {
            "name": "Learning rate",
            "value": 0.001,
            "type": "float",
            "help": "How fast the neural network learns, if the network overfits quickly, then lower the learning rate.",
            "param": "learning-rate"
        }
    ]
}
Processing block

{
    "version": 1,
    "info": {
        "title": "Custom processing block example",
        "author": "Test User",
        "description": "An example of a custom processing block.",
        "name": "Custom block",
        "cppType": "custom_block",
        "preferConvolution": false,
        "visualization": "dimensionalityReduction",
        "experimental": false,
        "latestImplementationVersion": 1
    },
    "parameters": [
        {
            "group": "Scaling",
            "items": [
                {
                    "name": "Scale axes",
                    "value": 1,
                    "type": "float",
                    "help": "Multiplies axes by this number",
                    "param": "scale-axes"
                }
            ]
        }
    ]
}
Synthetic data block

{
    "version": 1,
    "type": "synthetic-data",
    "info": {
        "name": "Whisper voice synthesis",
        "description": "An example synthetic data block that uses Whisper to generate audio keyword data."
    },
    "parameters": [
        {
            "name": "OpenAI API Key",
            "value": "",
            "type": "secret",
            "help": "An API Key that gives access to OpenAI",
            "param": "OPENAI_API_KEY"
        },
        {
            "name": "Phrase",
            "value": "Edge Impulse",
            "type": "string",
            "help": "Phrase for which to generate voice samples",
            "param": "phrase"
        },
        {
            "name": "Label",
            "value": "edge_impulse",
            "type": "string",
            "help": "Samples will be added to Edge Impulse with this label",
            "param": "label"
        },
        {
            "name": "Number of samples",
            "value": 3,
            "type": "int",
            "help": "Number of unique samples to generate",
            "param": "samples"
        },
        {
            "name": "Voice",
            "value": "random",
            "type": "select",
            "valid": [ "random", "alloy", "echo", "fable", "onyx", "nova", "shimmer" ],
            "help": "Voice to use for speech generation",
            "param": "voice",
            "optional": true
        },
        {
            "name": "Model",
            "value": "tts-1",
            "type": "select",
            "valid": [ "tts-1", "tts-1-hd" ],
            "help": "Model to use for speech generation",
            "param": "model",
            "optional": true
        },
        {
            "name": "Speed",
            "value": "0.6, 0.7, 0.8, 0.9, 1, 1.1, 1.2",
            "type": "string",
            "help": "A list of possible speed of the generated audio. Select values between '0.25' and '4.0'. A random one will be picked for each sample.",
            "param": "speed"
        },
        {
            "name": "Minimum length (seconds)",
            "value": 1,
            "type": "float",
            "help": "Minimum length of generated audio samples. Audio samples will be padded with silence to minimum length",
            "param": "min-length"
        },
        {
            "name": "Upload to category",
            "value": "split",
            "type": "select",
            "valid": [
                { "label": "Split 80/20 between training and testing", "value": "split" },
                { "label": "Training", "value": "training" },
                { "label": "Testing", "value": "testing" }
            ],
            "help": "Data will be uploaded to this category in your project",
            "param": "upload-category"
        }
    ]
}
Transformation block

{
    "version": 1,
    "type": "transform",
    "info": {
        "name": "Mix background noise",
        "description": "An example transformation block that mixes background noise into audio samples.",
        "operatesOn": "file",
        "transformMountpoints": [
            {
                "bucketId": 5532,
                "mountPoint": "/mnt/s3fs/edge-impulse-demo-bucket"
            }
        ]
    },
    "parameters": [
        {
            "name": "Number of files to create",
            "type": "int",
            "value": 10,
            "help": "How many new files to create per input file. Noise is randomly mixed in per file.",
            "param": "out-count"
        },
        {
            "name": "Frequency",
            "value": 16000,
            "type": "int",
            "param": "frequency",
            "help": "Output frequency of the WAV files"
        }
    ]
}

Parameter types

Parameter items are defined as JSON objects that contain a type property. For example:

{
    "name": "Scale axes",
    "value": 1.0,
    "type": "float",
    "help": "Multiplies axes by this number.",
    "param": "scale-axes"
}

The parameter type options available are shown in the table below, along with how the parameter is rendered in Studio and how it will be passed to your custom block. In general, parameter items are passed as command line arguments to your custom block script.

Type     Renders     Passes
Boolean  Checkbox    --<param-name> 1 (true) | --<param-name> 0 (false)
Bucket   Dropdown    --<param-name> "<bucket-name>"
Dataset  Dropdown    --<param-name> "<dataset-name>"
Flag     Checkbox    --<param-name> (true) | omitted (false)
Float    Text box    --<param-name> <value>
Int      Text box    --<param-name> <value>
Secret   Text box    <param-name> (environment variable)
Select   Dropdown    --<param-name> <value>
String   Text box    --<param-name> "<value>"

Processing blocks do not receive command line arguments

Instead of command line arguments, processing blocks receive an HTTP request with the parameters in the request body. These parameters are then passed as arguments to the feature generation function in your processing block, with dashes in parameter names replaced by underscores:

A processing block parameter named custom-processing-param is passed to your feature generation function as custom_processing_param.
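
For example, a minimal sketch of a feature generation function (modeled on Edge Impulse's public Python processing block examples; the exact signature here is an assumption):

def generate_features(implementation_version, draw_graphs, raw_data, axes,
                      sampling_freq, custom_processing_param):
    # "custom-processing-param" from parameters.json arrives here as the
    # keyword argument custom_processing_param (dashes become underscores)
    features = [value * custom_processing_param for value in raw_data]
    return {'features': features, 'graphs': []}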

Secrets are passed as environment variables instead of command line arguments

Boolean

{
    "name": "Boolean example",
    "value": true,
    "type": "boolean",
    "help": "An example boolean parameter type to show how it is rendered.",
    "param": "do-boolean-action"
}
--do-boolean-action 1

Bucket

Only available for AI labeling, synthetic data, and transformation blocks

{
    "name": "Bucket example",
    "value": "",
    "type": "bucket",
    "help": "An example bucket parameter type to show how it is rendered.",
    "param": "bucket-example-param"
}
--bucket-example-param "edge-impulse-customers-demo-team"

Dataset

Only available for AI labeling, synthetic data, and transformation blocks

{
    "name": "Dataset example",
    "value": "",
    "type": "dataset",
    "help": "An example flag parameter type to show how it is rendered.",
    "param": "dataset-example-param"
}
--dataset-example-param "Gestures"

Flag

{
    "name": "Flag example",
    "value": true,
    "type": "flag",
    "help": "An example flag parameter type to show how it is rendered.",
    "param": "do-flag-action"
}
--do-flag-action

Float

{
    "name": "Float example",
    "value": 0.1,
    "type": "float",
    "help": "An example float parameter type to show how it is rendered.",
    "param": "float-example-param"
}
--float-example-param 0.1

Int

{
    "name": "Int example",
    "value": 1,
    "type": "int",
    "help": "An example int parameter type to show how it is rendered.",
    "param": "int-example-param"
}
--int-example-param 1

Secret

Only available for AI labeling, synthetic data, and transformation blocks

{
    "name": "Secret example",
    "value": "",
    "type": "secret",
    "help": "An example secret parameter type to show how it is rendered.",
    "param": "SECRET_EXAMPLE_PARAM"
}
SECRET_EXAMPLE_PARAM
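
A minimal sketch of reading this secret inside the block; the variable name comes from the param value above:

import os

# Secrets are injected as environment variables named after "param",
# not passed as command line arguments
secret = os.environ.get('SECRET_EXAMPLE_PARAM')
if not secret:
    raise RuntimeError('SECRET_EXAMPLE_PARAM is not set')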

Select

{
    "name": "Select example 1",
    "value": "1",
    "type": "select",
    "help": "An example select parameter type to show how it is rendered.",
    "param": "select-example-param-1",
    "valid": [ "1", "3", "10", "30", "100","1000" ]
}
--select-example-param-1 "1"

{
    "name": "Select example 2",
    "value": "1",
    "type": "select",
    "help": "An example select parameter type to show how it is rendered.",
    "param": "select-example-param-2",
    "valid": [
        { "label": "One", "value": "1" },
        { "label": "Three", "value": "3" },
        { "label": "Ten", "value": "10" },
        { "label": "Thirty", "value": "30" },
        { "label": "One hundred", "value": "100" },
        { "label": "One thousand", "value": "1000"}
    ]
}
--select-example-param-2 "1"

String

{
    "name": "String example",
    "value": "An example string",
    "type": "string",
    "help": "An example string parameter type to show how it is rendered.",
    "param": "string-example-param"
}
--string-example-param "An example string"

Parameter groups

Only available for processing blocks

Processing block parameters can contain multiple groups to better organize the options when rendered in Studio. Each string entered as the value for the group property is rendered as a header element.

"parameters": [
    {
        "group": "Example parameter group 1",
        "items": [
            {
                "name": "Boolean example",
                "value": false,
                "type": "boolean",
                "help": "An example boolean parameter type to show how it is rendered.",
                "param": "do-boolean-action"
            },
            {
                "name": "Flag example",
                "value": false,
                "type": "flag",
                "help": "An example flag parameter type to show how it is rendered.",
                "param": "do-flag-action"
            }
        ]
    },
    {
        "group": "Example parameter group 2",
        "items": [
            {
                "name": "Float example",
                "value": 1.0,
                "type": "float",
                "help": "An example float parameter type to show how it is rendered.",
                "param": "float-example-param"
            }
        ]
    }
]

Parameter logic

showIf

Parameters can be conditionally shown based on the value of another parameter using the showIf property. The comparison value is always given as a string, even when the referenced parameter is a boolean or flag, as in the example below.

{
    "name": "Boolean example",
    "value": false,
    "type": "boolean",
    "help": "An example boolean parameter type to show how it is rendered.",
    "param": "do-boolean-action"
},
{
    "name": "Int example",
    "value": 1,
    "type": "int",
    "help": "An example int parameter type to show how it is rendered.",
    "param": "int-example-param",
    "showIf": {
            "parameter": "do-boolean-action",
            "operator": "eq",
            "value": "true"
        }
},
{
    "name": "Float example",
    "value": 1.0,
    "type": "float",
    "help": "An example float parameter type to show how it is rendered.",
    "param": "float-example-param"
}

showForImplementationVersion

Only available for processing blocks

Processing blocks can have different versions, which allows you to add new functionality to existing blocks without breaking earlier implementations. You can show or hide parameters based on the block's implementation version; the latest available version is set in the latestImplementationVersion property of the processing block.

A processing block set to version 4:

"info": {
    "title": "Spectral Analysis",
    ...
    "latestImplementationVersion": 4
}

A parameter shown only for implementation versions 3 and 4:

{
    "name": "Type",
    "value": "FFT",
    "help": "Type of spectral analysis to apply",
    "type": "select",
    "valid": [ "FFT", "Wavelet" ],
    "param": "analysis-type",
    "showForImplementationVersion": [ 3, 4 ]
}
