deployment-metadata.json

The deployment-metadata.json file is passed to custom deployment blocks. It provides details about the impulse being deployed.
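
For example, a deployment block can parse this file at startup to locate its inputs and outputs. Below is a minimal sketch in TypeScript (Node.js); the file location and the logging are illustrative assumptions, while the field names follow the DeploymentMetadataV1 interface described on this page:

import * as fs from 'fs';
import * as path from 'path';

// Parse the metadata handed to the deployment block (the path is an
// assumption; use wherever your block receives the file).
const metadata = JSON.parse(
    fs.readFileSync('deployment-metadata.json', 'utf-8'));

console.log(`Deploying "${metadata.project.name}" (deploy #${metadata.deployCounter})`);

// The input folder contains 'edge-impulse-sdk', 'model-parameters' and 'tflite-model'
console.log('Reading inputs from', metadata.folders.input);

// Each converted model, e.g. to bundle into a firmware image
for (const model of metadata.tfliteModels) {
    console.log(`  ${model.modelPath} (${model.details.modelType}, arena: ${model.arenaSize} bytes)`);
}

// Deployment blocks write their artifacts into the output folder
fs.writeFileSync(path.join(metadata.folders.output, 'build-info.txt'),
    `Classes: ${metadata.classes.join(', ')}\n`);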

File structure

DeploymentMetadataV1

interface DeploymentMetadataV1 {
    version: 1;
    // Global deployment counter
    deployCounter: number;
    // The output classes (for classification)
    classes: string[];
    // The number of samples to be taken per inference (e.g. 100Hz data, 3 axis, 2 seconds => 200)
    samplesPerInference: number;
    // Number of axes (e.g. 100Hz data, 3 axis, 2 seconds => 3)
    axesCount: number;
    // Frequency of the data
    frequency: number;
    // TFLite models (already converted and quantized)
    tfliteModels: {
        // Information about the model type, e.g. quantization parameters
        details: KerasModelIODetails;
        // Name of the input tensor
        inputTensor: string | undefined;
        // Name of the output tensor
        outputTensor: string | undefined;
        // Path of the model on disk
        modelPath: string;
        // Path of the model on disk (ONNX), not always available
        onnxModelPath: string | undefined;
        // Path of a secondary/auxiliary model on disk (ONNX), not always available
        onnxAuxModelPath: string | undefined;
        // Path to .prototxt (in case of YOLOX), not always available
        prototxtPath: string | undefined;
        // Path to .fbz (BrainChip Akida model file), not always available
        akidaModelPath: string | undefined;
        // Path to .fbz (BrainChip Akida model prepared for Edge Learning), not always available
        akidaEdgeLearningModelPath: string | undefined;
        // Calculated arena size when running TFLite in interpreter mode
        arenaSize: number;
        // Number of values to be passed into the model
        inputFrameSize: number;
    }[];
    // Project information
    project: {
        // Project name
        name: string;
        // Project ID
        id: number;
        // Project owner (user or organization name)
        owner: string;
        // API key, only set for deploy blocks with privileged flag and development keys set
        apiKey: string | undefined;
        // Studio host
        studioHost: string;
    };
    // Impulse information
    impulse: DeploymentMetadataImpulse;
    // Sensor guess based on the input
    sensor: 'camera' | 'microphone' | 'accelerometer' | 'positional' | 'environmental' | 'fusion' | undefined;
    // Folder locations
    folders: {
        // Input files are here, the input folder contains 'edge-impulse-sdk', 'model-parameters', 'tflite-model'
        input: string;
        // Write your output file here
        output: string;
    };
}
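
Because the format carries an explicit version field, a block may want to validate it before trusting the rest of the structure. A minimal sketch, assuming the DeploymentMetadataV1 interface above is in scope (the guard function name is hypothetical):

// Narrows an unknown parsed JSON value to DeploymentMetadataV1.
function isDeploymentMetadataV1(obj: unknown): obj is DeploymentMetadataV1 {
    if (obj === null || typeof obj !== 'object') return false;
    const m = obj as Partial<DeploymentMetadataV1>;
    return m.version === 1 &&
        Array.isArray(m.classes) &&
        Array.isArray(m.tfliteModels) &&
        typeof m.folders === 'object';
}

Running this guard on the parsed JSON before use fails fast if a future version of the file changes the structure.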

CreateImpulseStateBase

/**
 * Fields common to all CreateImpulseStateX
 */
interface CreateImpulseStateBase extends CreateImpulseStateMetadata {
    id: number;
    name: string;
    title: string;
    type: string;
}

CreateImpulseStateDsp

interface CreateImpulseStateDsp extends CreateImpulseStateBase {
    type: string | 'custom';
    implementationVersion: number;
    axes: string[];
    customUrl?: string;
    input: number;
    tunerBaseBlockId?: number;
    autotune?: boolean;
    organization?: {
        id: number;
        dspId: number;
    };
    namedAxes?: CreateImpulseStateDspNamedAxis[];
}

CreateImpulseStateDspNamedAxis

type CreateImpulseStateDspNamedAxis = {
    name: string,
    description?: string,
    required?: boolean,
    selectedAxis?: string,
};

CreateImpulseStateInput

type CreateImpulseStateInput = CreateImpulseStateInputTimeSeries |
    CreateImpulseStateInputImage |
    CreateImpulseStateInputFeatures;

CreateImpulseStateInputFeatures

interface CreateImpulseStateInputFeatures extends CreateImpulseStateBase {
    type: 'features';
    datasetSubset?: {
        subsetModulo: number;
        subsetSeed: number;
    };
}

CreateImpulseStateInputImage

interface CreateImpulseStateInputImage extends CreateImpulseStateBase {
    type: 'image';
    imageWidth: number;
    imageHeight: number;
    resizeMode: 'squash' | 'fit-short' | 'fit-long' | 'crop';
    cropAnchor: 'top-left' | 'top-center' | 'top-right' | 'middle-left' | 'middle-center' | 'middle-right' | 'bottom-left' | 'bottom-center' | 'bottom-right';
    resizeMethod: 'nearest' | 'lanczos3';
    labelingMethod?: 'object_detection' | 'single_label';
    datasetSubset?: {
        subsetModulo: number;
        subsetSeed: number;
    };
}

CreateImpulseStateInputTimeSeries

interface CreateImpulseStateInputTimeSeries extends CreateImpulseStateBase {
    type: 'time-series';
    windowSizeMs: number;
    windowIncreaseMs: number;
    frequencyHz: number;
    classificationWindowIncreaseMs?: number;
    padZeros: boolean;
    datasetSubset?: {
        subsetModulo: number;
        subsetSeed: number;
    };
}
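
These parameters determine how much raw data enters the impulse per window, which is where samplesPerInference and axesCount in DeploymentMetadataV1 come from. A short sketch of the arithmetic, using the numbers from the file example at the bottom of this page:

// From the time-series input block in the file example below
const windowSizeMs = 2000;
const frequencyHz = 62.5;
const axes = 3; // accX, accY, accZ

// Samples per axis per window -> samplesPerInference (125)
const samplesPerInference = (windowSizeMs / 1000) * frequencyHz;

// Total raw values per window -> the DSP block's inputShape ([375])
const rawValuesPerWindow = samplesPerInference * axes;

console.log(samplesPerInference, rawValuesPerWindow); // 125 375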

CreateImpulseStateLearning

interface CreateImpulseStateLearning extends CreateImpulseStateBase {
    dsp: number[];
    type: typeof ALL_CREATE_IMPULSE_STATE_LEARNING_TYPES[number];
}

const ALL_CREATE_IMPULSE_STATE_LEARNING_TYPES = [
    'keras',
    'keras-transfer-image',
    'keras-transfer-kws',
    'keras-object-detection',
    'keras-regression',
    'anomaly',
    'keras-akida',
    'keras-akida-transfer-image',
    'keras-akida-object-detection',
    'anomaly-gmm',
    'keras-visual-anomaly',
] as const;

CreateImpulseStateMetadata

/**
 * Provides metadata shared between all block types
 */
interface CreateImpulseStateMetadata {
    /**
     * Metadata for block versioning
     */
    // The user-editable description of this block version
    description?: string;
    // Which part of the system created this version (createImpulse | clone | tuner)
    createdBy?: string;
    // The date and time this version was created
    createdAt?: Date;
    // Tuner template block id. This is _always_ -1 if the block is a Tuner block.
    // The only place this is used is in the DB, to query for Tuner-managed blocks
    // in the block config table.
    tunerTemplateId?: number;
    // If this is true, this block is also a tuner block.
    db?: boolean;
}

DeploymentMetadataImpulse

interface DeploymentMetadataImpulse {
    inputBlocks: CreateImpulseStateInput[];
    dspBlocks: (CreateImpulseStateDsp & { metadata: DSPFeatureMetadata | undefined })[];
    learnBlocks: CreateImpulseStateLearning[];
}

DSPConfig

interface DSPConfig {
    options: {[s: string]: string | number | boolean | null;};
    performance: { latency: number, ram: number } | undefined;
    calculateFeatureImportance: boolean;
    // Currently only used by EON tuner to identify blocks with the feature explorer
    // skipped.
    skipFeatureExplorer?: boolean;
}

DSPFeatureMetadata

interface DSPFeatureMetadata {
    created: Date;
    dspConfig: DSPConfig;
    labels: string[];   // the training labels
    featureLabels: string[] | undefined;
    featureCount: number;
    valuesPerAxis: number;
    windowCount: number;
    windowSizeMs: number;
    windowIncreaseMs: number;
    padZeros: boolean;
    frequency: number;
    outputConfig: DSPFeatureMetadataOutput;
    performance: { latency: number, ram: number } | undefined;
    fftUsed: number[] | undefined;
    includeEmptyLabels: boolean;
    inputShape: number[] | undefined;
    includedSamplesAreInOrder: boolean;
    resamplingAlgorithmVersion: number | undefined;
    resizingAlgorithmVersion: number | undefined;
}

DSPFeatureMetadataOutput

type DSPFeatureMetadataOutput = {
    type: 'image',
    shape: { width: number, height: number, channels: number, frames?: number },
    axes?: number
} | {
    type: 'spectrogram',
    shape: { width: number, height: number },
    axes?: number
} | {
    type: 'flat',
    shape: { width: number },
    axes?: number
};
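
The total number of values a DSP block produces per window follows from the output shape. A minimal sketch over the three variants, assuming the DSPFeatureMetadataOutput type above is in scope (flattenedSize is a hypothetical helper):

function flattenedSize(out: DSPFeatureMetadataOutput): number {
    switch (out.type) {
        case 'image':
            // frames is optional; treat a missing value as a single frame
            return out.shape.width * out.shape.height *
                out.shape.channels * (out.shape.frames ?? 1);
        case 'spectrogram':
            return out.shape.width * out.shape.height;
        case 'flat':
            return out.shape.width;
    }
}

For the 'flat' output in the file example ({ width: 33 }) this yields 33, matching both featureCount and the model's inputFrameSize.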

KerasModelIODetails

/**
 * Information required to process a model's input and output data
 */
interface KerasModelIODetails {
    modelType: 'int8' | 'float32' | 'akida' | 'requiresRetrain';
    inputs: KerasModelTensorDetails[];
    outputs: KerasModelTensorDetails[];
}

KerasModelTensorDetails

/**
 * Information necessary to quantize or dequantize the contents of a tensor
 */
type KerasModelTensorDetails = {
    dataType: 'float32';
    // These are added since TF2.7 - older models don't have them
    name?: string;
    shape?: number[];
} | {
    dataType: 'int8' | 'uint8';
    // These are added since TF2.7 - older models don't have them
    name?: string;
    shape?: number[];
    // Scale and zero point are used only for quantized tensors
    quantizationScale?: number;
    quantizationZeroPoint?: number;
};
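
The scale and zero point follow the standard TensorFlow Lite affine quantization scheme: real = scale * (quantized - zeroPoint). A hedged sketch of both directions, assuming the KerasModelTensorDetails type above is in scope:

// Maps a quantized tensor value back to a real number.
function dequantize(value: number, details: KerasModelTensorDetails): number {
    if (details.dataType === 'float32') return value; // nothing to do
    return (value - (details.quantizationZeroPoint ?? 0)) *
        (details.quantizationScale ?? 1);
}

// Maps a real number to its quantized representation.
function quantize(value: number, details: KerasModelTensorDetails): number {
    if (details.dataType === 'float32') return value;
    return Math.round(value / (details.quantizationScale ?? 1)) +
        (details.quantizationZeroPoint ?? 0);
}

With the output tensor from the file example (scale 0.00390625, zero point -128), an int8 value of 127 dequantizes to (127 + 128) * 0.00390625 = 0.99609375.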

File example

{
    "version": 1,
    "samplesPerInference": 125,
    "axesCount": 3,
    "classes": [
        "idle",
        "snake",
        "updown",
        "wave"
    ],
    "deployCounter": 83,
    "folders": {
        "input": "/home/input",
        "output": "/home/output"
    },
    "frequency": 62.5,
    "impulse": {
        "inputBlocks": [
            {
                "id": 2,
                "type": "time-series",
                "name": "Time series",
                "title": "Time series data",
                "windowSizeMs": 2000,
                "windowIncreaseMs": 240,
                "frequencyHz": 62.5,
                "padZeros": false,
                "primaryVersion": true,
                "db": false
            }
        ],
        "dspBlocks": [
            {
                "id": 24,
                "type": "spectral-analysis",
                "name": "Spectral features",
                "axes": [
                    "accX",
                    "accY",
                    "accZ"
                ],
                "title": "Spectral Analysis",
                "input": 2,
                "primaryVersion": true,
                "createdBy": "createImpulse",
                "createdAt": "2022-08-07T07:39:37.055Z",
                "implementationVersion": 2,
                "db": false,
                "metadata": {
                    "created": "2023-08-29T01:32:50.434Z",
                    "dspConfig": {
                        "options": {
                            "scale-axes": 1,
                            "filter-cutoff": 8,
                            "filter-order": 6,
                            "fft-length": 64,
                            "spectral-peaks-count": 3,
                            "spectral-peaks-threshold": 0.1,
                            "spectral-power-edges": "0.1, 0.5, 1.0, 2.0, 5.0",
                            "do-log": true,
                            "do-fft-overlap": true,
                            "wavelet-level": 1,
                            "extra-low-freq": false,
                            "input-decimation-ratio": "1",
                            "filter-type": "low",
                            "analysis-type": "FFT",
                            "wavelet": "db4"
                        },
                        "performance": {
                            "latency": 4,
                            "ram": 2144
                        },
                        "calculateFeatureImportance": false
                    },
                    "labels": [
                        "idle",
                        "snake",
                        "updown",
                        "wave"
                    ],
                    "featureLabels": [
                        "accX RMS",
                        "accX Skewness",
                        "accX Kurtosis",
                        "accX Spectral Power 0.49 - 1.46 Hz",
                        "accX Spectral Power 1.46 - 2.44 Hz",
                        "accX Spectral Power 2.44 - 3.42 Hz",
                        "accX Spectral Power 3.42 - 4.39 Hz",
                        "accX Spectral Power 4.39 - 5.37 Hz",
                        "accX Spectral Power 5.37 - 6.35 Hz",
                        "accX Spectral Power 6.35 - 7.32 Hz",
                        "accX Spectral Power 7.32 - 8.3 Hz",
                        "accY RMS",
                        "accY Skewness",
                        "accY Kurtosis",
                        "accY Spectral Power 0.49 - 1.46 Hz",
                        "accY Spectral Power 1.46 - 2.44 Hz",
                        "accY Spectral Power 2.44 - 3.42 Hz",
                        "accY Spectral Power 3.42 - 4.39 Hz",
                        "accY Spectral Power 4.39 - 5.37 Hz",
                        "accY Spectral Power 5.37 - 6.35 Hz",
                        "accY Spectral Power 6.35 - 7.32 Hz",
                        "accY Spectral Power 7.32 - 8.3 Hz",
                        "accZ RMS",
                        "accZ Skewness",
                        "accZ Kurtosis",
                        "accZ Spectral Power 0.49 - 1.46 Hz",
                        "accZ Spectral Power 1.46 - 2.44 Hz",
                        "accZ Spectral Power 2.44 - 3.42 Hz",
                        "accZ Spectral Power 3.42 - 4.39 Hz",
                        "accZ Spectral Power 4.39 - 5.37 Hz",
                        "accZ Spectral Power 5.37 - 6.35 Hz",
                        "accZ Spectral Power 6.35 - 7.32 Hz",
                        "accZ Spectral Power 7.32 - 8.3 Hz"
                    ],
                    "valuesPerAxis": 11,
                    "windowCount": 2554,
                    "featureCount": 33,
                    "windowSizeMs": 2000,
                    "windowIncreaseMs": 240,
                    "frequency": 62.5,
                    "padZeros": false,
                    "outputConfig": {
                        "type": "flat",
                        "shape": {
                            "width": 33
                        }
                    },
                    "performance": {
                        "latency": 4,
                        "ram": 2144
                    },
                    "fftUsed": [
                        64
                    ],
                    "includeEmptyLabels": false,
                    "inputShape": [
                        375
                    ],
                    "includedSamplesAreInOrder": true
                }
            }
        ],
        "learnBlocks": [
            {
                "id": 7,
                "type": "keras",
                "name": "NN Classifier",
                "dsp": [
                    24
                ],
                "title": "Neural Network (Keras)",
                "primaryVersion": true,
                "db": false
            },
            {
                "id": 30,
                "type": "anomaly",
                "name": "Anomaly detection",
                "dsp": [
                    24
                ],
                "title": "Anomaly Detection (K-means)",
                "primaryVersion": true,
                "createdBy": "createImpulse",
                "createdAt": "2023-08-29T01:40:50.747Z",
                "db": false
            }
        ]
    },
    "project": {
        "name": "Tutorial: Continuous motion recognition",
        "id": 276194,
        "owner": "Edge Impulse Docs",
        "studioHost": "studio.edgeimpulse.com"
    },
    "sensor": "accelerometer",
    "tfliteModels": [
        {
            "arenaSize": 2982,
            "inputFrameSize": 33,
            "inputTensor": "dense_input",
            "outputTensor": "y_pred/Softmax:0",
            "details": {
                "modelType": "int8",
                "inputs": [
                    {
                        "dataType": "int8",
                        "name": "serving_default_x:0",
                        "shape": [
                            1,
                            33
                        ],
                        "quantizationScale": 0.10049157589673996,
                        "quantizationZeroPoint": -70
                    }
                ],
                "outputs": [
                    {
                        "dataType": "int8",
                        "name": "StatefulPartitionedCall:0",
                        "shape": [
                            1,
                            4
                        ],
                        "quantizationScale": 0.00390625,
                        "quantizationZeroPoint": -128
                    }
                ]
            },
            "modelPath": "/home/input/trained.tflite"
        }
    ]
}