Introduction
With increasing fleet sizes, managing edge devices and applications becomes increasingly difficult. This creates the need for device management platforms such as ZEDEDA, which make it easy to orchestrate large numbers of edge devices and applications. Applications also need to be packaged in a structured way, so that they can be deployed and updated in an automated manner. This project shows how to package and deploy Edge Impulse based Machine Learning (ML) applications on devices managed by the ZEDEDA Cloud platform.
Overview
Edge Impulse
In this project we will focus on deploying image / video based Edge Impulse projects on ZEDEDA-managed devices. We can use an existing Edge Impulse project or create a new one. For this demo, I created a simple object detection project. First, I collected a couple of images with a mug, a glass and a Raspberry Pi 4:
Data Acquisition

Create Impulse

API Key

From the Edge Impulse project we will also need an API key, which has the ei_... format.
ZEDEDA
The ZEDEDA Cloud is a SaaS platform offering, among others, orchestration and management services for edge devices and applications directly from the cloud. ZEDEDA works with fully managed Edge Devices, to which one or more Edge Applications can be seamlessly deployed. In this project we will show how an Edge Impulse ML model can be deployed on a ZEDEDA-managed edge device.
Hardware Requirements
- a Raspberry Pi 4 Model B, with at least 2GB of RAM
- a microSD card with at least 8GB capacity
- wired LAN connection with Internet access
- an IP camera or a USB webcam
- (optional) an HDMI display and micro-HDMI to HDMI cable - these are only needed to view the debug output of EVE-OS
Installing EVE-OS on a Raspberry Pi 4
Edge Nodes managed by the ZEDEDA Cloud platform must run EVE-OS, a lightweight, open-source Linux distribution designed to run containerized or VM-based workloads. In this section we will show how to install EVE-OS on a Raspberry Pi 4.
To install EVE-OS we need to generate and flash an SD card image. This can be done using the lfedge/eve tool, which is packaged as a Docker container. The default settings create an EVE-OS image intended for production use. If we are using a demo / trial account with ZEDEDA Cloud, we need a small customization to point the EVE-OS installation to the ZEDEDA demo server.
Running the image generation produces a live.img file containing our EVE-OS image. Along with this, a uniquely generated soft serial number is printed as the last line of the output. Make sure to note this down, as it will be needed later in the provisioning step.
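Under these assumptions, the image generation might look like the sketch below. The demo server hostname, the config mount point and the image tag are placeholders, not values from the original project; check the lfedge/eve documentation and your ZEDEDA trial account for the exact ones:

```shell
# Point the EVE-OS installation at the ZEDEDA demo server instead of the
# default production server. The hostname below is a placeholder -- use
# the one provided with your ZEDEDA trial account.
mkdir -p eve-config
echo "zedcloud.demo.example.com" > eve-config/server

# Generate the live image for the Raspberry Pi 4 (arm64 build). The config
# directory is mounted into the container so the override is picked up.
# Exact image tags and mount points may differ between EVE releases.
if command -v docker >/dev/null 2>&1; then
  docker run --rm -v "$(pwd)/eve-config:/in" \
    lfedge/eve:latest-arm64 live > live.img \
    || echo "image generation failed -- check network access and image tag"
fi
# The soft serial number is printed as the last line of the output --
# note it down for the onboarding step.
```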
The resulting live.img should be a regular disk image file, and can be flashed to a microSD card using Balena Etcher or similar tools.
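As an alternative to Balena Etcher, the image can also be flashed from the command line with dd. The sketch below copies to a regular file for illustration; on a real host the target would be the SD card device, and dd will overwrite it without asking, so double-check the device name first:

```shell
# For illustration only: create a small dummy live.img. On a real host this
# file comes from the EVE-OS image generation step.
dd if=/dev/zero of=live.img bs=1M count=4 2>/dev/null

# Flash the image. On a real host, replace target.img with the SD card
# device (e.g. /dev/sdX on Linux) -- verify the device name first!
dd if=live.img of=target.img bs=4M 2>/dev/null
sync
```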
After the SD card is flashed, we can insert it into the Raspberry Pi 4. EVE-OS should boot automatically. If we have an HDMI display connected, we will see messages from EVE-OS trying to connect to ZEDEDA Cloud.
Creating a ZEDEDA Cloud Project
With the Raspberry Pi 4 running EVE-OS, we can start setting up things in the ZEDEDA Cloud platform. The first thing we need in ZEDEDA Cloud is a Project. To create it we go to Administration -> Projects and click on Add Project:
Projects page

Add project

Add project / Deployments

Add project / Policies

Add project / Review

View Project
Configuring a Network
Before being able to onboard the Raspberry Pi 4 we will need to configure a network for the Edge Nodes to use. For this go to Library -> Networks and click Add Network.
Networks

Add Networks

Networks
Onboarding the Raspberry Pi 4 to ZEDEDA Cloud
At this point we should be ready to onboard our Raspberry Pi 4 into ZEDEDA Cloud. If this is our first Edge Node, we first need to import a supported hardware model from the ZEDEDA Marketplace. For this go to Marketplace -> Models, and in the Global Models section find and import the RPi4-4G model:

Add Edge Node

When adding the new Edge Node, in the Onboarding Key field enter 5d0767ee-0547-4569-b530-387e526f8cb9, which is the default key for all projects. In the Serial Number field enter the unique serial number we got earlier at the EVE-OS image generation step. For the Brand and Model select RaspberryPi and RPi-4G.
In the Port Mapping section set eth0 as a Management interface, with our previously created Network attached to it. The wlan0 interface can be left unused, while the USB ports can be set as App Direct (we will not use them).

Add Edge Node (cont)

Edge Nodes

Edge Node Metrics
Deploying the Edge Impulse Project to ZEDEDA
In this section we will show how to deploy Edge Impulse models as an Edge App on the ZEDEDA platform. EVE-OS and the ZEDEDA platform support running applications based on either Containers or Virtual Machines (VMs). In this project we will build and deploy our Edge Impulse model as a Container-based Edge App.
Preparing a Container Image
Edge Impulse already packages the EI Runner as a Docker container. We can use this as the base of our container image, on top of which we can apply customizations. Customizations can range from passing parameters to run the EI Runner in different modes (e.g. API server vs. live inference), to adding startup scripts or implementing custom applications. For this demo project, I added the following customizations to the base Docker image:
- A set of GStreamer plugins was added, to be able to use an RTSP camera as our video source. (Note: this was needed as ZEDEDA / EVE-OS does not seem to support USB cameras on the Raspberry Pi.)
- An entry point script was added, which can start the EI Impulse Runner with custom parameters.
Both customizations are implemented in the image's Dockerfile.
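A minimal Dockerfile implementing the two customizations above might look like this sketch (the base image reference and package list are placeholders, not the project's exact values):

```dockerfile
# Base image: placeholder -- use the Edge Impulse runner image exported
# for your project
FROM edgeimpulse/runner-base:latest

# GStreamer plugins needed for RTSP camera input
RUN apt-get update && apt-get install -y --no-install-recommends \
        gstreamer1.0-plugins-good \
        gstreamer1.0-plugins-bad \
        gstreamer1.0-libav && \
    rm -rf /var/lib/apt/lists/*

# Custom entry point script that selects the runner mode
COPY app.sh /app.sh
RUN chmod +x /app.sh
ENTRYPOINT ["/app.sh"]
```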
The app.sh script is used as the container's entry point. It can start the Edge Impulse Runner in two possible modes:
- HTTP Server mode: starts an inference server on port 1337, which exposes the EI model as an API to be used by other applications
- RTSP Camera mode, with Model Monitoring: starts the EI Runner with an RTSP camera as the video source, and the experimental Model Monitoring features enabled
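A sketch of such an entry point is shown below. It only prints the command it would launch (a dry run), so the mode dispatch is easy to follow; the runner flags are taken from the edge-impulse-linux-runner CLI but should be treated as assumptions and checked against your runner version:

```shell
#!/bin/sh
# Hypothetical app.sh sketch: select the EI Runner mode via the EI_MODE
# environment variable ("http" or "rtsp"). Flag names are assumptions.

MODE="${EI_MODE:-http}"

build_cmd() {
  case "$MODE" in
    http)
      # Inference API server on port 1337
      echo "edge-impulse-linux-runner --run-http-server 1337"
      ;;
    rtsp)
      # RTSP camera as video source, with model monitoring enabled;
      # RTSP_URL would be provided via the app configuration
      echo "edge-impulse-linux-runner --monitor --gst-launch-args 'rtspsrc location=$RTSP_URL ! decodebin ! videoconvert ! appsink'"
      ;;
    *)
      echo "unknown mode: $MODE" >&2
      return 1
      ;;
  esac
}

# Dry run: print the command; a real entry point would exec it instead.
build_cmd
```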
Configuring the Container Registry and Adding the Container Image
As we will use a slightly modified container image, we will need a container registry attached to ZEDEDA Cloud. To attach a container registry, go to Library -> Data Stores, and hit + to create a new data store.

Configure Container Registry
Use docker://docker.io as the FQDN. Select Container as the type, and enter a Docker Hub user name and API key.
After this we should be able to import our container image into ZEDEDA. For this go to Library -> Edge App Images, and click + to add a new image:

Add Container Image
The image path should be given in the /<username>/<image>:<tag> format.
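Before importing, the customized image needs to be pushed to the registry. Assuming Docker Hub, this might look like the sketch below (the user, image name and tag are placeholders):

```shell
# Placeholder image reference -- replace with your own Docker Hub user,
# image name and tag
IMAGE="myuser/ei-runner-zededa:1.0"

# Build and push the customized image (requires docker and a Dockerfile
# in the current directory; 'docker login docker.io' must be done first)
if command -v docker >/dev/null 2>&1 && [ -f Dockerfile ]; then
  docker build -t "$IMAGE" . && docker push "$IMAGE"
fi
```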
Creating an Edge App
With this we are ready to package our EI model as a ZEDEDA Edge App. For this go to Marketplace -> Edge Apps, and create a new edge app. Select Container as the application type.
Add Edge App

Edge App Image

Edge App | Networking
In the Networking section, expose the 1337 port to the outside world.
In the Configurations, enable the custom edge app configuration as follows:

Edge App | Configurations

Edge App | Developer Info
Deploying the Edge App to the Raspberry Pi 4
With the Edge App created, we should be able to deploy it to our Raspberry Pi 4 Edge Node. To do this go to the Edge App Instances section, and use the + button to create a new deployment:
In the first page select the Raspberry Pi 4 Edge Node to deploy to:

Edge App Deployment

Edge App Deployment | Identity

Edge App Deployment | Networking

Edge App Deployment | Configuration

Edge App Deployment

Edge App Deployment
Edge Impulse Model Monitoring
Managing large fleets of Edge Devices can get complex. ZEDEDA Cloud solves this by offering a centralized platform that makes managing Edge Devices and Apps easy. The ZEDEDA platform, however, has no insight into what our Edge Apps are actually doing. With Edge ML applications it is particularly important to get insights into our model's performance in the real world. Until recently, implementing monitoring of production Edge ML apps with Edge Impulse was left to the users. Now, Edge Impulse is working on a new set of Model Monitoring features, meant to enable deployment and monitoring of Edge ML apps. With Model Monitoring enabled on our ZEDEDA Edge App, we benefit from the following features:
- New devices running the Edge App are automatically populated in the Devices tab in EI Studio.
- Using Live Inference we can monitor / debug the AI models running on the Edge Device in real-time.
- We can push a new model version to the Edge Devices, without the need to restart or redeploy the Edge App.

Edge App Deployment