One of the most powerful features in Edge Impulse is the set of built-in deployment targets (under Deployment in the Studio), which let you create ready-to-go binaries for development boards, or custom libraries for a wide variety of targets, that incorporate your trained impulse. You can also create custom deployment blocks for your organization. These let developers quickly iterate on products without getting your embedded engineers involved, let your customers build personalized firmware using their own data, and let you create custom libraries.
In this tutorial you'll learn how to use custom deployment blocks to create a new deployment target, and how to make this target available in the Studio for all users in the organization.
Only available for enterprise customers
Organizational features are only available for enterprise customers. Contact us for more information.
Deployment blocks use Docker containers, a virtualization technique which lets developers package up an application with all dependencies in a single package. You'll thus need:
- Docker installed on your computer.
After installing, log in to Docker hub by opening a command prompt or terminal window and run:
$ docker login
Then, create a new folder on your computer named custom-deploy-block.
When a user deploys with a custom deployment block two things happen:
- A package is created that contains information about the deployment (like the sensors used, frequency of the data, etc.), any trained neural network in .tflite format, the Edge Impulse SDK, and all DSP and ML blocks as C++ code.
- This package is then consumed by the custom deployment block, which can incorporate it with a base firmware, or repackage it into a new library.
To obtain this package go to your project's Dashboard, look for Administrative zone, enable Custom deploys, and click Save.
If you now go to the Deployment page, a new option appears under 'Create library':
Once you click Build you'll receive a ZIP file containing six items:
- deployment-metadata.json - this contains all information about the deployment, like the names of all classes, the frequency of the data, the full impulse configuration, and quantization parameters. A specification can be found here: Deployment metadata spec.
- trained.tflite - if you have a neural network in the project, this contains the neural network in .tflite format. This network is fully quantized if you chose the int8 optimization; otherwise this is the float32 model.
- trained.savedmodel.zip - if you have a neural network in the project, this contains the full TensorFlow SavedModel. Note that we might update the TensorFlow version used to train these networks at any time, so rely on the compiled model or the TFLite file where possible.
- edge-impulse-sdk - a copy of the latest Inferencing SDK.
- model-parameters - impulse and block configuration in C++ format. Can be used by the SDK to quickly run your impulse.
- tflite-model - neural network as source code in a way that can be used by the SDK to quickly run your impulse.
Store the unzipped files under custom-deploy-block/input.
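To get a quick feel for what's in the package you can inspect the metadata from Python. This is only a minimal sketch: it assumes the file location above and prints the top-level structure rather than relying on specific field names (see the Deployment metadata spec for the exact schema):

import json

# Load the deployment metadata from the unzipped package
# (path assumes you stored the files under custom-deploy-block/input)
with open('custom-deploy-block/input/deployment-metadata.json') as f:
    metadata = json.load(f)

# Print the top-level structure - the exact fields (class names, frequency,
# impulse configuration, quantization parameters) are documented in the
# Deployment metadata spec
print('Top-level keys:', list(metadata.keys()))
print(json.dumps(metadata.get('folders', {}), indent=2))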
With the basic information in place we can create a new deployment block. Here we'll build a standalone application that runs our impulse on Linux, very useful when running your impulse on a gateway or desktop computer. The base application can be found at edgeimpulse/example-standalone-inferencing.
- Download the base application.
- Unzip it under custom-deploy-block/app.
To build this application we need to combine the application with the tflite-model folder, and invoke the (already included) Makefile.
To build the application we use Docker. In this container we'll place the build tools required for this application, and scripts to combine the trained impulse with the base application.
First, let's create a small build script. As a parameter you'll receive --metadata, which points to the deployment information. In here you'll also get information on the input and output folders where you need to read from and write to.
Create a new file called custom-deploy-block/build.py and add:
import argparse, json, os, shutil, zipfile, threading

# parse arguments (--metadata FILE is passed in)
parser = argparse.ArgumentParser(description='Custom deploy block demo')
parser.add_argument('--metadata', type=str)
args = parser.parse_args()

# load the metadata.json file
with open(args.metadata) as f:
    metadata = json.load(f)

# now we have two folders 'metadata.folders.input' - this is where all the SDKs etc are,
# and 'metadata.folders.output' - this is where we need to write our output
input_dir = metadata['folders']['input']
app_dir = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'app')
output_dir = metadata['folders']['output']

print('Copying files to build directory...')

is_copying = True
def print_copy_progress():
    if (is_copying):
        threading.Timer(2.0, print_copy_progress).start()
        print("Still copying...")
print_copy_progress()

# create a build directory, the input / output folders are on network storage so might be very slow
build_dir = '/tmp/build'
if os.path.exists(build_dir):
    shutil.rmtree(build_dir)
os.makedirs(build_dir)

# copy in the data from both 'input' and 'app' folders
os.system('cp -r ' + input_dir + '/* ' + build_dir)
os.system('cp -r ' + app_dir + '/* ' + build_dir)

is_copying = False
print('Copying files to build directory OK')
print('')

print('Compiling application...')

is_compiling = True
def print_compile_progress():
    if (is_compiling):
        threading.Timer(2.0, print_compile_progress).start()
        print("Still compiling...")
print_compile_progress()

# then invoke Make
os.chdir(build_dir)
os.system('make -f Makefile.tflite')

is_compiling = False
print('Compiling application OK')

# ZIP the build folder up, and copy to output dir
if not os.path.exists(output_dir):
    os.makedirs(output_dir)
shutil.make_archive(os.path.join(output_dir, 'deploy'), 'zip', os.path.join(build_dir, 'build'))
Next, we need to create a Dockerfile, which contains all dependencies for the build. These include GNU Make, a compiler, and both the build script and the base application.
Create a new file called custom-deploy-block/Dockerfile and add:
FROM ubuntu:18.04

WORKDIR /ei

# Install base dependencies
RUN apt update && apt install -y build-essential software-properties-common wget

# Install LLVM 9
RUN wget https://apt.llvm.org/llvm.sh && chmod +x llvm.sh && ./llvm.sh 9
RUN rm /usr/bin/gcc && rm /usr/bin/g++ && ln -s $(which clang-9) /usr/bin/gcc && ln -s $(which clang++-9) /usr/bin/g++

# Install Python 3.7
RUN add-apt-repository ppa:deadsnakes/ppa && apt install -y python3.7

# Copy the base application in
COPY app ./app

# Copy any scripts in that we have
COPY *.py ./

# This is the script our application should run (-u to disable buffering)
ENTRYPOINT [ "python3", "-u", "build.py" ]
To test the build script we first build the container, then invoke it with the files from the input directory. Open a command prompt or terminal, navigate to the custom-deploy-block folder and:
- Build the container:
$ docker build -t cdb-demo .
- Invoke the build script - this mounts the current directory in the container under /home, and then passes the downloaded metadata file to the container:
$ docker run --rm -it -v $PWD:/home cdb-demo --metadata /home/input/deployment-metadata.json
- Voila. You now have an output folder which contains a ZIP file. Unzip output/deploy.zip and you now have a standalone application which runs your impulse. If you run Linux you can invoke this application directly (grab some data from 'Live classification' for the features, see Running your impulse locally):
$ ./output/edge-impulse-standalone "RAW FEATURES HERE"
Or if you run Windows or macOS, you can use Docker to run this application:
$ docker run --rm -v $PWD/output:/home ubuntu:18.04 /home/edge-impulse-standalone "RAW FEATURES HERE"
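If you want to script these invocations, for example to classify a batch of test samples, you can call the binary from a small Python wrapper. A minimal sketch, assuming a features.txt file with one comma-separated raw feature vector per line (the file name and format are illustrative, not part of the deployment block):

import subprocess

# features.txt holds one raw feature vector per line, copied from
# 'Live classification' (comma-separated values) - the file name is illustrative
with open('features.txt') as f:
    for line in f:
        features = line.strip()
        if not features:
            continue
        # Run the standalone binary built by the deployment block and print its output
        result = subprocess.run(['./output/edge-impulse-standalone', features],
                                capture_output=True, text=True)
        print(result.stdout)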
With the deployment block ready you can make it available in Edge Impulse. This requires pushing the block to Docker Hub, so the Edge Impulse servers can retrieve the block. To push the block:
- Go to Docker Hub.
- Click Create repository, and enter custom-deploy-block-demo as the name.
- Tag and push the deployment block:
$ docker tag cdb-demo YOUR_DOCKER_HUB_USERNAME/custom-deploy-block-demo:v1
$ docker push YOUR_DOCKER_HUB_USERNAME/custom-deploy-block-demo:v1
(Replace YOUR_DOCKER_HUB_USERNAME with your Docker Hub username.)
- You should now have a new tag listed in Docker Hub.
If you've created a private repository you'll need to add edgeimpulseserver as a collaborator in Docker Hub.
To add the deployment block to Edge Impulse go to your organization, go to Deployment blocks and select Add new deployment block. Give the block a nice logo, set a name and a description, and set the location of the Docker container (YOUR_DOCKER_HUB_USERNAME/custom-deploy-block-demo:v1).
The deployment block is automatically available for all organizational projects. Go to the Deployment page on a project, and you'll find a new section 'Custom targets'. Select your new deployment target and click Build.
And now you'll have a freshly built binary from your own deployment block!
Custom deployment blocks are a powerful tool for your organization. They let you build binaries for unreleased products, package up impulses as custom libraries, or let your customers deploy to private targets (if you add an external collaborator to a project they'll have access to the blocks as well). Because deployment blocks are integrated with your project and hosted by Edge Impulse, everyone - from FAE to R&D developer - can iterate on on-device models without getting your embedded engineers involved.
You can also use custom deployment blocks together with the other organizational features, and use this to set up powerful pipelines that automate data ingestion from your cloud services, transform raw data into ML-suitable data, train new impulses and then deploy back to your device - either through the UI, or via the API. If you're interested in deployment blocks or any of the other enterprise features, let us know!
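For example, a pipeline step could kick off a deployment build via the API from Python. This is only a sketch: the endpoint path and request body below are assumptions for illustration, so check the Edge Impulse API documentation for the exact parameters:

import requests

API_KEY = 'ei_...'   # project API key (found under Dashboard > Keys)
PROJECT_ID = 1       # your project ID

# The endpoint and body below are illustrative - consult the API documentation
# for the exact path and parameters of the deployment build job
response = requests.post(
    f'https://studio.edgeimpulse.com/v1/api/{PROJECT_ID}/jobs/build-ondevice-model',
    headers={'x-api-key': API_KEY},
    json={'engine': 'tflite'},
)
print(response.status_code, response.text)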