with Docker on BalenaOS

Introduction

This page is part of the Lifecycle management with Edge Impulse tutorial series. If you haven't read the introduction yet, we recommend you do so here. Balena can serve as the infrastructure backbone for deploying OTA updates, including new models trained with Edge Impulse.

BalenaOS - Docker Deploy

Overview

Balena offers a comprehensive platform for building, deploying, and managing IoT devices. It simplifies fleet management, enhances security, and streamlines the deployment of updates. This tutorial will guide you through using Balena and Docker to deploy Edge Impulse model updates across your device fleet efficiently. This is particularly useful for managing multiple devices in the field, ensuring they are always running the latest model. Devices like the Nvidia Jetson Nano, Raspberry Pi, and other single-board computers are supported by Balena and can be used to deploy Edge Impulse models. To see how to use the GPU on the Jetson Nano, check out the Edge Impulse Jetson Nano Docker Deploy using GPU tutorial.

For this example, we will deploy an Edge Impulse model as a Docker container on a Raspberry Pi running balenaOS. The model will run an HTTP inference server, allowing you to send data to the device for processing and receive predictions in real time.

Prerequisites

  • An active Edge Impulse account with a trained model.

  • Follow the Edge Impulse Docker documentation.

Introduction to Balena

Balena is a platform that provides tools for building, deploying, and managing fleets of IoT devices. It simplifies the process of keeping devices in the field up to date, offering a robust framework for deploying updates, monitoring device health, and ensuring security.

Step 1: Exporting Your Model as a Docker Container

Go to your Edge Impulse project, navigate to the Deployment section, and select Docker container as the deployment option. Follow the instructions to generate the Docker container command. It will look something like this:

docker run --rm -it \
   -p 1337:1337 \
   public.ecr.aws/g7a8t7v6/inference-container:<tag> \
   --api-key <your_api_key> \
   --run-http-server 1337

Copy the generated command from your deployment. We will change the port from 1337 to 80, since port 80 is what Balena exposes by default.
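After the change, the command looks like this (the container tag and API key are placeholders; use the values from your own project):

docker run --rm -it \
   -p 80:80 \
   public.ecr.aws/g7a8t7v6/inference-container:<tag> \
   --api-key <your_api_key> \
   --run-http-server 80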

Step 2: Preparing Your Balena Application

Log in to your balenaCloud dashboard and create a new fleet, selecting the appropriate device type that matches your hardware. Follow the instructions to add a device to your application and download the balenaOS image for it. Flash the downloaded OS image to your device's SD card using balenaEtcher or a similar tool. Power on your device with the SD card inserted; it should connect to your Balena application automatically.
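The same provisioning can also be sketched from the balena CLI. The fleet name and device type below are examples, and flags vary between CLI versions, so check balena help for your install:

# Log in to balenaCloud
balena login

# Create a fleet for your device type
balena fleet create myFleet

# Download a balenaOS image, configure it for the fleet, then flash it with balenaEtcher
balena os download raspberrypi4-64 -o balena.img
balena os configure balena.img --fleet myFleet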

Step 3: Deploying Your Docker Container to Balena

Clone Balena's base repository for your device type from GitHub, or start with a Dockerfile.template in a new directory on your local machine. Then modify the Dockerfile.template to include the Docker run command from earlier.

For example:

git clone https://github.com/balena-os/balena-raspberrypi.git
cd balena-raspberrypi
vi Dockerfile.template

Since Balena uses Docker containers, you will integrate the Edge Impulse Docker command within the CMD instruction of your Dockerfile.template. Note again that the copied command uses 1337 as the port, while we use 80, as this is what Balena is configured to expose by default.

Add the following to the Dockerfile.template:

# Use the specified base image
FROM public.ecr.aws/g7a8t7v6/inference-container:c0fd........97d

# Set the API key as an environment variable (optional; for security reasons you may want to inject this at runtime instead)
ENV API_KEY=ei_952ba2......66f3cc

# The Edge Impulse container exposes port 1337 by default; here we use port 80, which Balena exposes by default
EXPOSE 80

# Start the inference server when the container launches
CMD ["--api-key", "ei_952ba............66f3cc", "--run-http-server", "80"]
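Before involving Balena, you can optionally sanity-check the container on your workstation. A quick sketch; the ei-inference tag is just an example, and building an ARM base image on an x86 machine may require emulation (balena build handles the cross-build for you):

# Build the image locally from the template and run it, mapping port 80 on the host
docker build -f Dockerfile.template -t ei-inference .
docker run --rm -p 80:80 ei-inference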

Step 4: Build your application

Use the balena CLI to build your application, scan for your local device, and push it to balenaCloud:

sudo balena build

BalenaOS - Build your Balena Application

sudo balena scan

From the scan results, take the local hostname (e.g. 12004cf.local) to push your application directly to a local Pi, or push to your Balena fleet with:

balena push <YourFleetName>
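For example, pushing straight to the local device found by the scan (the device must be in local development mode, and your hostname will differ):

balena push 12004cf.local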

Wait for the application to build and deploy. You can monitor the progress in the balenaCloud dashboard.

BalenaOS - Dashboard

Step 5: Accessing Your Inference Server

Once deployed, your device will start the Docker container and run the HTTP inference server. You can access it using the device's IP address on your local network or through the public URL feature provided by balenaCloud if enabled for your device.
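As a quick test, you can query the server with curl. This sketch assumes the standard Edge Impulse HTTP inference API; replace <device-ip> with your device's IP address or public URL, and size the feature array to match your model's input:

# Health check / model info
curl http://<device-ip>/

# Run inference on raw features
curl -X POST http://<device-ip>/api/features \
   -H "Content-Type: application/json" \
   -d '{"features": [0, 0, 0]}'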

BalenaOS - Hosted Inference Server

Step 6: Monitoring and Managing Your Fleet

With your Edge Impulse inference server running on Balena, you can now monitor and manage your device fleet using balenaCloud's dashboard and tools. This includes monitoring device health, deploying updates, and rolling back changes if needed.
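These tasks can also be scripted with the balena CLI; for example (the device UUID and fleet name are placeholders):

# List devices in the fleet and stream logs from one of them
balena devices
balena logs <device-uuid>

# Set a fleet-wide environment variable (useful if your container reads API_KEY from the environment rather than hardcoding it in CMD)
balena env add API_KEY <new-key> --fleet <YourFleetName>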

BalenaOS - Docker Deploy
BalenaOS - Variables available to store results

Conclusion

By following these steps, you should have a functional Edge Impulse inference server running on your Balena device, ready to process data and make predictions. This setup can be integrated into a robust OTA model update process, enabling lifecycle management and continuous improvement of your edge AI-enabled devices.

Learn More

Balena Documentation: Explore the official Balena documentation for detailed guides and examples on deploying and managing IoT devices.
