Deployment

After training and validating your model, you can deploy it to a device. Running the model on-device means it works without an internet connection, minimizes latency, and consumes minimal power.

The Deployment page offers a variety of deploy options to choose from depending on your target device. Whether or not you are using a fully supported development board, Edge Impulse provides a C++ library deploy option that you can use to run your model on any target, as long as the target has enough compute to handle the task.

The following are the 5 main categories of deploy options currently supported by Edge Impulse:

  1. Deploy as a customizable library

  2. Deploy as a pre-built firmware - for fully supported development boards

  3. Run directly on your phone or computer

  4. Use Edge Impulse for Linux for Linux targets

  5. Create a custom deployment block (Enterprise feature)

Deploying as a customizable library

This deploy option lets you turn your impulse into fully optimized source code that can be further customized and integrated with your application. The following libraries are supported:

Arduino Library

You can run your impulse locally as an Arduino library. This packages all of your signal processing blocks, configuration and learning blocks up into a single package.

To deploy as an Arduino library, select Arduino library on the Deployment page and click Build to create the library. Download the .zip file and add it to the Arduino IDE as a library (Sketch > Include Library > Add .ZIP Library), then build and run your application.

For a full tutorial on how to run your impulse locally as an Arduino library, have a look at Running your impulse locally - Arduino.

C++ Library

You can run your Impulse as a C++ library. This packages all of your signal processing blocks, configuration and learning blocks up into a single package that can be easily ported to your custom applications.

Visit Running your impulse locally for a deep dive on how to deploy your impulse as a C++ library.

Cube.MX CMSIS-PACK library

If you want to deploy your impulse to an STM32 MCU, you can use the Cube.MX CMSIS-PACK. This packages all your signal processing blocks, configuration and learning blocks up into a single package. You can include this package in any STM32 project with a single function call.

Have a look at Running your impulse locally - using CubeAI for a deep dive on how to deploy your impulse on STM32 based targets using the Cube.MX CMSIS-PACK.

WebAssembly Library

When you want to deploy your impulse to a web app, you can use the WebAssembly library. This packages all your signal processing blocks, configuration and learning blocks up into a single package that can run without any compilation.

Have a look at Running your impulse locally - through WebAssembly (Browser) for a deep dive on how you can run your impulse to classify sensor data in your Node.js application.

Deploy as a pre-built firmware

For this option, you can use a ready-to-go binary for your development board that bundles signal processing blocks, configuration and learning blocks up into a single package. This option is currently only available for fully supported development boards as shown in the image below:

To deploy your model using a ready-to-go binary, select your target device and click Build. Flash the downloaded firmware to your device, then run the following command:

edge-impulse-run-impulse

The impulse runner shows the results of your impulse running on your development board. This only applies to ready-to-go binaries built from the studio.

Edge Impulse for Linux

If you are developing for Linux-based devices, you can use Edge Impulse for Linux for deployment. It contains tools that let you collect data from any microphone or camera, SDKs for Node.js, Python, Go and C++ to collect new data from any sensor, and the ability to run impulses with full hardware acceleration, with easy integration points for writing your own applications.

For a deep dive on how to deploy your impulse to Linux targets using Edge Impulse for Linux, visit the Edge Impulse for Linux tutorial.
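As a rough sketch of what this looks like with the Python SDK: after downloading a compiled model (an .eim file) from the Deployment page, you can load it and classify a raw feature window as below. The model path, feature window length and printed fields are placeholders for illustration; check the Edge Impulse for Linux documentation for the exact API and response format.

```python
# Sketch of classifying a raw feature window with the Edge Impulse Linux Python SDK.
# "model.eim" is a placeholder for a model downloaded from the Deployment page.
from edge_impulse_linux.runner import ImpulseRunner

runner = ImpulseRunner("model.eim")
try:
    model_info = runner.init()  # loads the model and returns project/model metadata
    print("Loaded:", model_info["project"]["name"])

    # Placeholder feature window; the length must match your impulse's input frame size.
    features = [0.0] * 125
    result = runner.classify(features)
    print(result["result"]["classification"])  # per-class scores
finally:
    runner.stop()
```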

Deploy to your mobile phone/computer

You can run your impulse directly on your computer or mobile phone without installing an additional app. To run on your computer, select "Computer" and click "Switch to classification mode". To run on your mobile phone, select "Mobile phone", scan the QR code, and click "Switch to classification mode".

Optimizations

Enabling EON Compiler

When building your impulse for deployment, Edge Impulse gives you the option of adding another layer of optimization to your impulse using the EON compiler. The EON Compiler lets you run neural networks in 25-55% less RAM, and up to 35% less flash, while retaining the same accuracy, compared to TensorFlow Lite for Microcontrollers.
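To make those percentages concrete, here is a small arithmetic sketch using hypothetical baseline numbers (the 100 kB RAM and 200 kB flash figures are made up for illustration):

```python
# Hypothetical baseline: a model that needs 100 kB of RAM and 200 kB of flash
# under TensorFlow Lite for Microcontrollers (numbers made up for illustration).
baseline_ram_kb = 100.0
baseline_flash_kb = 200.0

# EON Compiler: 25-55% less RAM, up to 35% less flash.
eon_ram_low = round(baseline_ram_kb * (1 - 0.55), 1)      # 45.0 kB at best
eon_ram_high = round(baseline_ram_kb * (1 - 0.25), 1)     # 75.0 kB at worst
eon_flash_best = round(baseline_flash_kb * (1 - 0.35), 1) # 130.0 kB at best

print(f"RAM: {eon_ram_low}-{eon_ram_high} kB, flash: down to {eon_flash_best} kB")
```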

To activate the EON Compiler, select your preferred deployment option, enable the EON™ Compiler option, and click Build to build your impulse for deployment.

To give you a preview of how your impulse will use the compute resources of your target device, Edge Impulse also estimates the latency, flash and RAM your impulse will consume on the target, even before you deploy it. This can save you significant engineering time otherwise spent on repeated iterations and experiments.

You can also select whether to run the unquantized float32 or the quantized int8 models as shown in the image below.
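The int8 model stores weights and activations as 8-bit integers instead of 32-bit floats, trading a small amount of precision for roughly a 4x reduction in size. Below is a minimal Python sketch of the affine quantization scheme used by quantized TensorFlow Lite models; the scale and weight values are made up for illustration.

```python
# Affine int8 quantization: q = round(v / scale) + zero_point, clamped to [-128, 127].
def quantize_int8(values, scale, zero_point=0):
    return [max(-128, min(127, round(v / scale) + zero_point)) for v in values]

def dequantize(quantized, scale, zero_point=0):
    return [(q - zero_point) * scale for q in quantized]

weights = [0.5, -1.2, 0.03, 2.0]  # hypothetical float32 weights
scale = 2.0 / 127                 # maps the range [-2.0, 2.0] onto int8

q = quantize_int8(weights, scale)            # [32, -76, 2, 127], one byte each
recovered = dequantize(q, scale)             # close to the originals
max_error = max(abs(w - r) for w, r in zip(weights, recovered))
assert max_error <= scale / 2                # error is bounded by half the scale step
```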

The above confusion matrix is based only on the test data, to show how your model performs on unseen, real-world data. It can also help you determine whether your model has overfit your training data, which is a common occurrence.

Last updated

Revision created on 11/14/2022