Software Integration Demos

Deploying Edge Impulse Models on ZEDEDA Cloud Devices

Use containers to deploy edge AI applications to a fleet of devices managed by ZEDEDA.

Last updated 3 months ago

Created By: Attila Tokes

Introduction

With increasing fleet sizes, managing edge devices and applications becomes harder and harder. This creates the need for device management platforms such as ZEDEDA, which allow orchestrating large numbers of edge devices and applications with ease. Applications also need to be packaged in a more structured way, so that they can be deployed and updated in an automated manner.

This project shows how we can package and deploy Edge Impulse based Machine Learning (ML) applications on devices managed by the ZEDEDA Cloud platform.

A Raspberry Pi 4 single-board computer will be used as our example Edge Device. On the Raspberry Pi 4 we will install EVE-OS, and then provision it into the ZEDEDA Cloud platform.

Our Edge Impulse model will be packaged as a containerized application based on the EI Impulse Runner. The containerized application will then be imported into the ZEDEDA platform as an Edge App, from where it will be deployed to the Edge Device.

Finally, we will give a quick preview of how the experimental Model Monitoring features can be used to monitor Edge Impulse models running on production devices.

Edge Impulse

In this project we will focus on deploying image- and video-based Edge Impulse projects on ZEDEDA Cloud devices. We can use an existing Edge Impulse project, or create a new one.

For this demo, I created a simple object detection project. First, I collected a couple of images with a mug, a glass and a Raspberry Pi 4:

Then the target objects in the collected images were manually labeled in the Data acquisition section in Edge Impulse Studio. The dataset was then split into train and test sets.

Then, I set up an Impulse implementing Object Detection for our target objects:

The Impulse I used is a simple one, using the standard Object Detection blocks with their default parameters. Different Impulse architectures can also be used, as this makes little difference in how we package our EI application later in this project.

After we train our Impulse, we can test its basic functionality with Live classification on a supported device. If everything looks good, the Impulse is ready to be used in an edge application.

In order to access the trained Impulse from ZEDEDA Cloud / EVE-OS devices, we will need an API Key from Edge Impulse Studio. We can get this from the Dashboard -> Keys section:

From here, copy the API Key's value, which has the ei_... format.
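Before wiring the key into deployment configurations later on, a quick local sanity check of its format can save a debugging round-trip. A minimal sketch (the helper name is ours; only the ei_ prefix shown above is assumed, not a specific key length):

```shell
#!/bin/bash
# Hypothetical helper: verify that a string looks like an Edge Impulse API key.
# Keys copied from Edge Impulse Studio start with the "ei_" prefix.
check_ei_key() {
  case "$1" in
    ei_*) echo "looks like an EI API key" ;;
    *)    echo "not an EI API key" >&2; return 1 ;;
  esac
}

check_ei_key "ei_0123456789abcdef"   # → looks like an EI API key
```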

ZEDEDA

ZEDEDA Cloud is a SaaS platform offering, among other things, orchestration and management of edge devices and applications directly from the cloud. ZEDEDA works with fully managed Edge Devices, to which one or more Edge Applications can be seamlessly deployed.

In this project we will show how an Edge Impulse ML model can be deployed on a ZEDEDA-managed edge device.

For the purpose of the demo we will use a Raspberry Pi 4 as our ZEDEDA Edge Node. The Edge Impulse ML model, also known as an Impulse, will be deployed to the platform as an Edge App. Additionally, the new experimental Model Monitoring features will be used to inspect the live running AI model directly from Edge Impulse Studio.

Hardware used:

  • a Raspberry Pi 4 Model B, with at least 2GB of RAM

  • a microSD card with at least 8GB capacity

  • wired LAN connection with Internet access

  • an IP camera or a USB webcam

  • (optional) an HDMI display and micro-HDMI to HDMI cable - these are only needed to view the debug output of EVE-OS

Installing EVE-OS on a Raspberry Pi 4

Edge Nodes managed by the ZEDEDA Cloud platform must run EVE-OS, a lightweight, open-source Linux distribution designed to run containerized or VM-based workloads. In this section we will show how to install EVE-OS on a Raspberry Pi 4.

To install EVE-OS we need to generate and flash an SD card image. This can be done using the lfedge/eve tool, which is packaged as a Docker container.

The default settings create an EVE-OS image intended for production use. If we are using a demo / trial account with ZEDEDA Cloud, we need a small customization to point the EVE-OS installation to the ZEDEDA demo server. This can be done as below:

$ mkdir "$HOME/eve-overrides-demo"
$ echo zedcloud.gmwtus.zededa.net > "$HOME/eve-overrides-demo/server"

With this we are ready to generate an EVE-OS image by running the following command:

$ docker run -v "$HOME/eve-overrides-demo:/in" --rm lfedge/eve:latest-arm64 live > ./live.img

...
b5171159-734b-4254-9930-2c35239d3858     # <-- a uniquely generated soft serial number

The command produces a live.img file containing our EVE-OS image. Along with this, a uniquely generated soft serial number is printed as the last line of the output. Make sure to note this down, as it will be needed later in the provisioning step.
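Since the soft serial is the last line of the generator's output, it can also be captured programmatically for later use. A sketch, using a simulated build log (eve-build.log is a hypothetical file holding the output of the docker command above):

```shell
#!/bin/bash
# Sketch: extract the soft serial number (last line of the generator's output)
# from a saved build log, so it can be pasted into the ZEDEDA onboarding form.
# The log content below simulates the real command output.
printf '...\nb5171159-734b-4254-9930-2c35239d3858\n' > eve-build.log

SOFT_SERIAL="$(tail -n 1 eve-build.log)"
echo "Soft serial: ${SOFT_SERIAL}"
# → Soft serial: b5171159-734b-4254-9930-2c35239d3858
```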

The resulting live.img is a regular disk image file, and can be flashed to a microSD card using Balena Etcher or similar tools.

After the SD card is flashed, we can insert it into the Raspberry Pi 4. EVE-OS should boot automatically. If we have an HDMI display connected, we will see messages showing EVE-OS trying to connect to ZEDEDA Cloud.

Creating a ZEDEDA Cloud Project

With the Raspberry Pi 4 running EVE-OS, we can start setting up things in the ZEDEDA Cloud platform.

The first thing we need in ZEDEDA Cloud is a Project. To create it, go to Administration -> Projects and click on Add Project:

Next, give a name to the project and select the "Deployment" type:

On the Deployments and Policies pages we can use the same name:

...while keeping the rest of the options as default:

Lastly, we can review our inputs and hit Next to create our project:

After the project is created, our Projects list should look something like this:

Configuring a Network

Before being able to onboard the Raspberry Pi 4, we will need to configure a network for the Edge Nodes to use. For this, go to Library -> Networks and click Add Network.

Here, add a new IPv4 network with an arbitrary name, DHCP client mode, and an MTU of 1500:

The newly added network should appear in the networks list:

Onboarding the Raspberry Pi 4 to ZEDEDA Cloud

At this point we should be ready to onboard our Raspberry Pi 4 into ZEDEDA Cloud.

If this is our first Edge Node, we first need to import a supported hardware model from the ZEDEDA Marketplace. For this, go to Marketplace -> Models, and in the Global Models section find and import the RPi4-4G model:

Next, go to the Edge Nodes page, and click Add Edge Node.

Here we should give a name to the new node, and select our previously created Project and Deployment Tag:

In the Details section, select Onboarding Key as the Identity Type. Set the Onboarding Key to 5d0767ee-0547-4569-b530-387e526f8cb9, which is the default key for all projects. In the Serial Number field, enter the unique soft serial number we got earlier when generating the EVE-OS image. For the Brand and Model, select RaspberryPi and RPi-4G.

In the Port Mapping section set eth0 as a Management interface, with our previously created Network attached to it. The wlan0 network can be left unused, while the USB port can be set as App Direct (we will not use them).

In the Additional Configuration section we can check both activation options.

After we click Next, the onboarding of the Edge Node will start. During the onboarding, if we have an HDMI screen connected to the Raspberry Pi, we should see some console activity showing the device is trying to onboard to the ZEDEDA cloud platform.

The onboarding process can take a couple of minutes, after which we should find that our Edge Node comes online:

In the Edge Node's page we can find various details and metrics:

Deploying the Edge Impulse Project to ZEDEDA

In this section we will show how we can deploy Edge Impulse models as an Edge App into the ZEDEDA platform.

EVE-OS and the ZEDEDA platform support running applications based either on Containers or Virtual Machines (VMs). In this project we will build and deploy our Edge Impulse model as a Container-based Edge App.

Preparing a Container Image

Edge Impulse already packages the EI Impulse Runner as a Docker container. We can use this as the base of our container image, on top of which we can apply customizations.

Customizations can range from passing parameters that run the EI Runner in different modes (e.g. API server vs. live inference), to adding startup scripts or implementing custom applications.

For this demo project, I added the following customizations to the base Docker image:

  1. A set of GStreamer plugins was added, to be able to use an RTSP camera as our video source. (Note: this was needed because ZEDEDA / EVE-OS does not seem to support USB cameras with the Raspberry Pi.)

  2. An entry point script was added, which can start the EI Impulse Runner with custom parameters.

The final Dockerfile looks like this:

FROM aureleq/ei-inference-container

ARG DEBIAN_FRONTEND=noninteractive

# Set the container's timezone
RUN ln -snf /usr/share/zoneinfo/Europe/Bucharest /etc/localtime && \
    echo Europe/Bucharest > /etc/timezone

# Install the GStreamer plugins needed to use an RTSP camera as the video source
RUN apt update -y && \
    apt install -y gstreamer1.0-tools gstreamer1.0-plugins-good \
        gstreamer1.0-plugins-base gstreamer1.0-plugins-base-apps \
        gstreamer1.0-libav && \
    apt dist-upgrade -y && apt autoremove -y && apt autoclean -y

# Custom entry point script
ADD app.sh /app/app.sh

The app.sh is a script used as the container's entry point. It can start the Edge Impulse runner in two possible modes:

  1. HTTP Server mode - starts an inference server on port 1337 - this exposes the EI model as an API to be used by other applications

  2. RTSP Camera mode, with Model Monitoring - starts the EI Runner with an RTSP camera as the video source, and the experimental Model Monitoring features enabled

The script also accepts an EI API Key, and a custom Device Name:

#!/bin/bash
# Entry point script: starts the EI Impulse Runner in one of two modes.

MODE="$1"          # "http-server" or "gst-model-monitoring"
EI_API_KEY="$2"    # Edge Impulse API key (ei_...)
DEVICE_NAME="$3"   # device name shown in EI Studio (monitoring mode only)

echo "Mode: $MODE"
echo "EI API Key: $EI_API_KEY"

if [[ "$MODE" == "http-server" ]]; then
    # Expose the EI model as an inference API on port 1337
    echo "Running EI runner in HTTP server mode..."
    node /app/linux/node/build/cli/linux/runner.js --api-key "${EI_API_KEY}" --run-http-server 1337 --impulse-id 1

elif [[ "$MODE" == "gst-model-monitoring" ]]; then
    # Use an RTSP camera as the video source, with Model Monitoring enabled.
    # The runner is restarted automatically if it stops for any reason.
    echo "Running EI runner with GStreamer sources + Model monitoring..."
    while true; do
        echo "${DEVICE_NAME}" | node /app/linux/node/build/cli/linux/runner.js --clean --silent --monitor --api-key "${EI_API_KEY}" --verbose --enable-camera --gst-launch-args "rtspsrc location=rtsp://<RTSP-CAM-IP>:8554/stream ! rtph264depay ! avdec_h264 ! videoconvert ! jpegenc" || true
        echo "Runner stopped! Restarting it..."
    done
else
    echo "Unknown mode!"
    exit 1
fi
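The mode dispatch in app.sh can be smoke-tested without Edge Impulse installed. The sketch below reimplements just the branching, with the runner invocations stubbed out (the dispatch function name and messages are ours, not part of the original script):

```shell
#!/bin/bash
# Stubbed re-implementation of app.sh's mode dispatch, for exercising the
# branching logic in isolation (no Edge Impulse runner required).
dispatch() {
  case "$1" in
    http-server)          echo "would start the inference server on port 1337" ;;
    gst-model-monitoring) echo "would start the RTSP + monitoring loop" ;;
    *)                    echo "Unknown mode!" >&2; return 1 ;;
  esac
}

dispatch http-server   # → would start the inference server on port 1337
dispatch bogus || echo "rejected unknown mode"
```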

To be able to use this container image in ZEDEDA Cloud, we need to make it available in a container registry. I used a private Docker Hub repository for this purpose. The image was built and published as follows:

$ docker buildx build . --platform linux/arm64 --tag attitokes/zededa-test:edge-impulse-in-docker-0.1.0 --load
$ docker push attitokes/zededa-test:edge-impulse-in-docker-0.1.0

Configuring the Container Registry and Adding the Container Image

As we will use a slightly modified container image, we will need a container registry that we can attach to ZEDEDA Cloud. To attach one, go to Library -> Data Stores, and hit + to create a new data store.

Give it a name, and select Container Registry as the Category. I used Docker Hub, for which we should set docker://docker.io as the FQDN. Select Container as the type, and enter a Docker Hub user name and API key.

After this we should be able to import our container image into ZEDEDA. For this, go to Library -> Edge App Images, and click + to add a new image:

Here, select the newly added Data Store, and specify the image URL using the /<username>/<image>:<tag> format.
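A slash-or-colon typo in the image reference is easy to make; a rough format check for the /<username>/<image>:<tag> shape used here might look like the sketch below (the pattern is a loose approximation we wrote for illustration, not the full OCI reference grammar):

```shell
#!/bin/bash
# Loose sanity check for image URLs in the "/<username>/<image>:<tag>" format.
# Not a full OCI reference parser -- it just catches obvious typos.
check_image_ref() {
  if [[ "$1" =~ ^/[^/:]+/[^/:]+:[^/:]+$ ]]; then
    echo "ok: $1"
  else
    echo "bad image reference: $1" >&2
    return 1
  fi
}

check_image_ref "/attitokes/zededa-test:edge-impulse-in-docker-0.1.0"
# → ok: /attitokes/zededa-test:edge-impulse-in-docker-0.1.0
```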

Creating an Edge App

With this we are ready to package our EI model as a ZEDEDA Edge App. For this, go to Marketplace -> Edge Apps, and create a new edge app. Select Container as the application type.

In the Add Edge App page, give the application a name, and select Standalone as the Deployment Type. For Resources, Tiny or Small should be enough.

In the Drives section, select the Edge App Image we created previously.

Then, in the Networking section we need to configure an Outbound rule that allows any traffic:

Additionally, if we want to use the HTTP server, we also need to expose port 1337 to the outside world.

In the Configurations section, enable the custom edge app configuration as follows:

This will allow us to inject settings like the Device name and Edge Impulse API Key later when we deploy the Edge App to the Raspberry Pi 4.

On the Developer Info section, fill in the necessary details, and click Add to create the Edge App.

Deploying the Edge App to the Raspberry Pi 4

With the Edge App created, we should be able to deploy it to our Raspberry Pi 4 Edge Node. To do this, go to the Edge App Instances section, and use the + button to create a new deployment:

In the first page, select the Raspberry Pi 4 Edge Node to deploy to:

Then, in the next page, give the app instance a name:

In the next page, the Networking settings should already be pre-populated with the correct adapter, so we can go to the next page:

On the next page we need to configure the settings for our Edge App instance. Here we can specify a Device Name and our Edge Impulse API Key as follows:

For this, use the following configuration:

EVE_ECO_CMD="/app/app.sh gst-model-monitoring <EI_API_KEY> <DEVICE_NAME>"
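The two placeholders map directly to the positional arguments of app.sh ($2 and $3). Filling them in from shell variables, with illustrative placeholder values rather than real credentials, could look like:

```shell
#!/bin/bash
# Illustrative values only -- substitute your own API key and device name.
EI_API_KEY="ei_0123456789abcdef"
DEVICE_NAME="rpi4-zededa-demo"

# Assemble the command passed via the Edge App instance configuration.
EVE_ECO_CMD="/app/app.sh gst-model-monitoring ${EI_API_KEY} ${DEVICE_NAME}"
echo "$EVE_ECO_CMD"
# → /app/app.sh gst-model-monitoring ei_0123456789abcdef rpi4-zededa-demo
```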

Finally, we can review and deploy the app:

It takes a couple of minutes until the container image is fully downloaded, a volume is created and the app is booted. During this time the Edge App Instance will go through various states, and in the end it should come online:

Edge Impulse Model Monitoring

Managing large fleets of Edge Devices can get complex. ZEDEDA Cloud addresses this by offering a centralized platform that makes managing Edge Devices and Apps easy.

The ZEDEDA platform, however, has no insight into what our Edge Apps are actually doing. With edge ML applications it is particularly important to get insights into our model's performance in the real world.

Up until recently, implementing monitoring for production edge ML apps in Edge Impulse was left to the users. Now, Edge Impulse is working on a new set of Model Monitoring features, meant to enable the deployment and monitoring of edge ML apps.

With Model Monitoring enabled on our ZEDEDA Edge App we can benefit from the following features:

  1. New devices running the Edge App are automatically populated in the Devices tab in EI Studio.

  2. Using Live Inference we can monitor / debug the AI models running on the Edge Device in real-time.

  3. We can push a new model version to the Edge Devices, without the need to restart or redeploy the Edge App.

The Model Monitoring features are still experimental, but here is a quick demo on how Live Inference currently looks in the Edge Impulse Studio:

Conclusions

The ZEDEDA platform allows managing and orchestrating large numbers of edge devices and applications from a central place. It provides visibility and control over the edge devices deployed in the field, directly from the cloud. Its zero-trust security model ensures device integrity, and allows secure communication of edge apps with the cloud.

Packaging Edge Impulse models into container-based edge apps allows deploying them to multiple devices with ease. Using the EI Impulse Runner in various modes allows launching models and integrating them with external applications and data sources in flexible ways. Additionally, the new set of model monitoring features allows monitoring edge models deployed in the real world and collecting data from them in real time.

These features make the combination of the ZEDEDA and Edge Impulse platforms a great solution for deploying edge ML applications to large fleets of edge devices.

