Deployment blocks

One of the most powerful features in Edge Impulse is the set of built-in deployment targets (under Deployment in the Studio), which let you create ready-to-go binaries for development boards, or custom libraries for a wide variety of targets, that incorporate your trained impulse. You can also create custom deployment blocks for your organization. This lets developers quickly iterate on products without involving your embedded engineers, lets your customers build personalized firmware using their own data, and lets you create custom libraries.

In this tutorial you'll learn how to use custom deployment blocks to create a new deployment target, and how to make this target available in the Studio for all users in the organization.

Only available with Edge Impulse Enterprise Plan

Try our FREE Enterprise Trial today.

Prerequisites

You'll need:

  • The Edge Impulse CLI.

    • If you receive any warnings, that's fine. Run edge-impulse-blocks afterwards to verify that the CLI was installed correctly.

Deployment blocks use Docker containers, a virtualization technique that lets developers package up an application with all of its dependencies in a single package. If you want to test your blocks locally you'll also need Docker installed (this is optional, not a requirement).
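A deployment block is simply a Docker image that Studio invokes with the deployment package. As a rough sketch (this is illustrative only; the base image and script name here are assumptions, not the exact contents of the example repository), a block's Dockerfile might look like:

```
# Illustrative sketch - the example repository ships its own Dockerfile.
FROM ubuntu:20.04

WORKDIR /app

# Copy in the build script that consumes the deployment package
COPY build.sh ./

# Studio invokes the container with arguments such as:
#   --metadata /home/deployment-metadata.json
ENTRYPOINT [ "/app/build.sh" ]
```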

1. Download example repository

Go to https://github.com/edgeimpulse/example-custom-deployment-block and clone (or download) the repository. Then, open a command prompt or terminal window and run the following to initialize the block:

$ edge-impulse-blocks init

2. Input to your custom deployment block

When a user deploys with a custom deployment block two things happen:

  1. A package is created that contains information about the deployment (like the sensors used, frequency of the data, etc.), any trained neural network in .tflite and SavedModel formats, the Edge Impulse SDK, and all DSP and ML blocks as C++ code.

  2. This package is then consumed by the custom deployment block, which can incorporate it with a base firmware, or repackage it into a new library.

To test this locally, you can download this package from the Studio. In your Edge Impulse project go to Deployment, and search for Custom block.

Once you click Build you'll receive a ZIP file containing the following items:

  • deployment-metadata.json - this contains all information about the deployment, like the names of all classes, the frequency of the data, full impulse configuration, and quantization parameters. A specification can be found here: Deployment metadata spec.

  • trained.tflite - if you have a neural network in the project, this contains the neural network in .tflite format. This network is already fully quantized if you chose the int8 optimization; otherwise this is the float32 model.

  • trained.savedmodel.zip - if you have a neural network in the project this contains the full TensorFlow SavedModel. Note that we might update the TensorFlow version used to train these networks at any time, so rely on the compiled model or the TFLite file where possible.

  • edge-impulse-sdk - a copy of the latest Inferencing SDK.

  • model-parameters - impulse and block configuration in C++ format. Can be used by the SDK to quickly run your impulse.

  • tflite-model - neural network as source code in a way that can be used by the SDK to quickly run your impulse.

Store all these files under example-custom-deployment-block/input.
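Before running the block it can be handy to sanity-check that the input folder holds everything the package should contain. This is a minimal sketch, using the file and folder names listed above (trained.tflite and trained.savedmodel.zip are treated as optional, since they are only present when the project has a neural network):

```python
from pathlib import Path

# Names taken from the deployment package contents described above
REQUIRED = ["deployment-metadata.json", "edge-impulse-sdk",
            "model-parameters", "tflite-model"]
OPTIONAL = ["trained.tflite", "trained.savedmodel.zip"]

def check_input_dir(names):
    """Return the list of required package items missing from `names`."""
    present = set(names)
    return [item for item in REQUIRED if item not in present]

# Usage (from the example-custom-deployment-block folder):
#   missing = check_input_dir(p.name for p in Path("input").iterdir())
#   if missing: print("input/ is missing:", ", ".join(missing))
```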

2.1 Testing the build script with Docker

To test your deployment block you first build the container, then invoke it with the files from the input directory. Open a command prompt or terminal, navigate to the example-custom-deployment-block folder and:

  1. Build the container:

    $ docker build -t cdb-demo .
  2. Invoke the build script - this mounts the current directory in the container under /home, and then passes the downloaded metadata file to the container:

    $ docker run --rm -it -v $PWD:/home cdb-demo --metadata /home/input/deployment-metadata.json
  3. Voila! You now have an output folder that contains a ZIP file. Unzip output/deploy.zip and you have a standalone application that runs your impulse. If you're on Linux you can invoke this application directly (grab some raw features from 'Live classification', see Running your impulse locally):

    $ ./output/edge-impulse-standalone "RAW FEATURES HERE"

Or if you run Windows or macOS, you can use Docker to run this application:

```
$ docker run --rm -v $PWD/output:/home ubuntu:20.04 /home/edge-impulse-standalone "RAW FEATURES HERE"
```

3. Uploading the deployment block to Edge Impulse

With the deployment block ready you can make it available in Edge Impulse. Open a command prompt or terminal window, navigate to the folder you created earlier, and run:

$ edge-impulse-blocks push

This packages up your folder, sends it to Edge Impulse where it'll be built, and finally adds it to your organization. The deployment block is now available in Edge Impulse under Custom blocks > Deployment blocks. You can go here to set the logo, update the description, and set extra command line parameters.

Privileged mode

Deployment blocks do not have access to the internet by default. If you need this, or if you need to pull additional information from the project (e.g. access to DSP blocks) you can set the 'privileged' flag on a deployment block. This will enable outside internet access, and will pass in the project.apiKey parameter in the metadata (if a development API key is set) that you can use to authenticate with the Edge Impulse API.
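Inside a privileged block you can pick the API key up from the metadata and use it to authenticate API calls. A minimal sketch (the x-api-key header and studio URL reflect the public Edge Impulse API, but verify both against the current API documentation before relying on them):

```python
# Assumed base URL for the Edge Impulse API - check the API docs
API_BASE = "https://studio.edgeimpulse.com/v1"

def api_headers_from_metadata(metadata):
    """Build request headers from the deployment metadata dict.

    Raises if no development API key was passed in (i.e. the block is
    not privileged, or the project has no development API key set).
    """
    api_key = metadata.get("project", {}).get("apiKey")
    if not api_key:
        raise ValueError("No project.apiKey in metadata - is the block "
                         "privileged, and does the project have a "
                         "development API key?")
    return {"x-api-key": api_key, "Content-Type": "application/json"}
```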

4. Using the deployment block

The deployment block is automatically available for all organizational projects. Go to the Deployment page on a project, and search for your block:

Just click Build and now you'll have a freshly built binary from your own deployment block!

5. Conclusion

Custom deployment blocks are a powerful tool for your organization. They let you build binaries for unreleased products, package up impulses as custom libraries, or let your customers deploy to private targets (if you add an external collaborator to a project they'll have access to the blocks as well). Because deployment blocks are integrated with your project and hosted by Edge Impulse, everyone from FAEs to R&D developers can iterate on on-device models without involving your embedded engineers.

You can also combine custom deployment blocks with the other organizational features to set up powerful pipelines that automate data ingestion from your cloud services, transform raw data into ML-suitable data, train new impulses, and then deploy back to your devices - either through the UI, or via the API. If you're interested in deployment blocks or any of the other enterprise features, let us know!

🚀

Parameters.json format

This is the specification for the parameters.json type:

```
type DeployBlockParametersJson = {
    version: 1,
    type: 'deploy',
    info: {
        name: string,
        description: string,
        category?: 'library' | 'firmware',
        integrateUrl?: string,
        cliArguments: string,
        supportsEonCompiler: boolean,
        mountLearnBlock: boolean,
        showOptimizations: boolean,
        privileged?: boolean,
    },
};
```
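For example, a concrete parameters.json matching this type might look like the following (all values here are illustrative, not taken from the example repository):

```
{
    "version": 1,
    "type": "deploy",
    "info": {
        "name": "Standalone Linux binary",
        "description": "Builds a standalone application that runs your impulse",
        "category": "firmware",
        "cliArguments": "",
        "supportsEonCompiler": true,
        "mountLearnBlock": false,
        "showOptimizations": true,
        "privileged": false
    }
}
```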
