OTA Model Updates with Docker on Balena

Introduction

This page is part of the Lifecycle management with Edge Impulse tutorial series. If you haven't read the introduction yet, we recommend you do so here. Balena can serve as the platform for deploying OTA updates to edge devices, including new models trained with Edge Impulse.

Overview

Balena offers a comprehensive platform for building, deploying, and managing large fleets of IoT devices. It simplifies fleet management, enhances security, and streamlines the deployment of software and host OS updates. This tutorial will guide you through using balena to deploy Edge Impulse model updates across your device fleet efficiently, which is particularly useful for managing multiple devices in the field and ensuring they are always running the latest model. Devices like the NVIDIA Jetson Nano, NVIDIA Jetson Orin, Raspberry Pi, and other single-board computers are supported by balena and can be used to deploy Edge Impulse models.

For this example, we will deploy an Edge Impulse model as a Docker container on a Raspberry Pi running balenaOS. The model will run an HTTP inference server, allowing you to send data to the device for processing and receive predictions in real time.

Prerequisites

  • An active Edge Impulse account with a trained model.

  • Follow the Edge Impulse Docker documentation.

Introduction to balena

Balena provides tools for building, deploying, and managing large fleets of IoT devices, offering a robust framework for deploying software and host OS updates, monitoring device health, and ensuring security. This makes it a natural fit as the fleet and device management platform for delivering OTA updates, including new models trained with Edge Impulse.

Step 1: Exporting Your Model as a Docker Container

Go to your Edge Impulse project, navigate to the Deployment section and select Docker container as the deployment option. Follow the instructions to generate the Docker container command. It will look something like this:

Docker - Export
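
As a rough sketch, the generated command has the following shape; the container address and API key below reuse the elided placeholders shown elsewhere in this tutorial, and yours will be specific to your project:

docker run --rm -it \
    public.ecr.aws/z9b3d4t5/inference-container:c0fd........97d \
    --api-key ei_952ba2......66f3cc \
    --run-http-server 1337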

Note the container address and the API key, as we will need them in a subsequent step below.

Step 2: Preparing Your Balena Fleet

Log in to your balenaCloud dashboard and create a new fleet.

Balena - Add new device

BalenaOS - Add a new device

Select the appropriate device type that matches your hardware. Follow the instructions to add a device to your application and download the balenaOS image for it. Flash the downloaded OS image to your device's SD card using balenaEtcher or a similar tool. Power on your device with the SD card or USB key inserted; it should connect to your balena application automatically.
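
Alternatively, the balena CLI can download and configure the OS image for you. A sketch, assuming a Raspberry Pi 4 and a recent CLI version (the device type, image name, and flags may vary with your hardware and CLI release):

balena os download raspberrypi4-64 --output balena.img
balena os configure balena.img --fleet <your-fleet-name>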

Step 3: Deploying Your Docker Container to Your balena Fleet

Clone the balena boilerplate Edge Impulse project from GitHub or start with a Dockerfile.template in a new directory on your local machine. Modify the Dockerfile.template to use your container address and API key noted earlier. Since balena uses Docker containers, you can simply use the container generated by the Deployment screen of your Edge Impulse project.

# Use the inference container generated by the Edge Impulse deployment screen as the base image
FROM public.ecr.aws/z9b3d4t5/inference-container:c0fd........97d
# Set the API key as an environment variable (optional; for security reasons you may want to handle this differently)
ENV API_KEY=ei_952ba2......66f3cc
# The Edge Impulse container listens on port 1337 by default; we use port 80 instead
EXPOSE 80
# Start the inference server when the container launches (these arguments are passed to the base image's entrypoint)
CMD ["--api-key", "ei_952ba............66f3cc", "--run-http-server", "80"]
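
If you plan to run the inference server alongside other services, balena also accepts a docker-compose.yml at the root of the project. A minimal sketch, assuming the service is named edge-impulse to match the service referenced in Step 6 and that the server listens on port 80 as configured above:

version: '2.1'
services:
  edge-impulse:
    # Build the Dockerfile.template in this directory
    build: .
    # Map the server's port 80 to port 80 on the host
    ports:
      - "80:80"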

Step 4: Build Your Application

Use the balena CLI to log in and push your application to balenaCloud, which builds it and deploys it to your fleet:

balena login
balena push <your-fleet-name>

Wait for the application to build and deploy. You can monitor the progress in the balenaCloud dashboard.
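
You can also follow the deployment from the command line. A quick sketch, assuming the balena CLI is logged in: list your devices to find the UUID, then stream that device's logs:

balena devices
balena logs <device-uuid>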

Balena - Dashboard Summary

Step 5: Accessing Your Inference Server

Once deployed, your device will start the Docker container and run the HTTP inference server.

Deployed - Edge Impulse Inference Server

You can access it using the device's IP address on your local network, or through the Public Device URL feature provided by balenaCloud if enabled for your device. Remember that we configured the server to listen on port 80, but feel free to use another port.
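
As a quick smoke test, you can send a classification request to the server with curl. This is a minimal sketch assuming the /api/features endpoint described in the Edge Impulse Docker documentation; replace <device-ip> and the placeholder feature values with your device's address and data matching your model's input:

curl -X POST \
    -H "Content-Type: application/json" \
    -d '{"features": [0, 0, 0, 0, 0, 0]}' \
    http://<device-ip>/api/features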

Step 6: Monitoring and Managing Your Fleet

With your Edge Impulse inference server running on balena, you can now monitor and manage it alongside the other services running on your device fleet using balenaCloud's dashboard and tools. This includes monitoring device health, deploying updates, and rolling back changes if needed.

Balena - Monitoring your fleet

Furthermore, to deploy the latest ML model from Edge Impulse after retraining, you will need to restart the edge-impulse service running on the connected devices. You can do this from the balenaCloud fleet's Devices page by selecting them.

Balena - Restarting your service
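
Restarting can also be scripted with the balena CLI, which is handy for automating model rollouts across a fleet. A sketch (the device UUID is a placeholder; the command restarts the containers running on that device):

balena device restart <device-uuid>

Because the inference container fetches the latest model build from Edge Impulse at startup, a restart is enough to pick up a freshly retrained model.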

If you have any questions or run into any issues, feel free to contact the balena team via the balena forums.

Conclusion

By following these steps, you should have a functional Edge Impulse inference server running on your balena devices, ready to process data and make predictions. This setup can be integrated into a robust OTA model update process, enabling lifecycle management and continuous improvement of your Edge AI-enabled devices.

Additional Resources

  • Webinar - Edge Impulse and Balena webinar exploring this topic in more detail.
