Edge AI lifecycle

The edge AI lifecycle includes the steps involved in planning, implementing, and maintaining an edge AI project. It follows the same general flow as most engineering and programming undertakings with the added complexity of managing data and models.

Previously, we examined techniques for choosing hardware for edge AI projects. In this lesson, we will look at the machine learning (ML) pipeline and how to approach an edge AI project.

Identify need and scope

Before starting a machine learning project, it is imperative that you examine the actual need for such a project: what problem are you trying to solve? For example, you could improve user experience, such as building a more accurate fall-detection system or a voice-activated smart speaker. You might want to monitor machinery to identify anomalies before problems become unmanageable, which could save you time and money in the long run. Alternatively, you could count people in a retail store to identify peak times and shopping trends.

Once you have identified your requirements, you can begin scoping your project:

  • Can the project be solved through traditional, rules-based methods, or is AI needed to solve the problem?

  • Is cloud AI or edge AI the better approach?

  • What kind of hardware is the best fit for the problem?

Note that the hardware selection might not be apparent until you have constructed a prototype ML model, as that will determine the amount of processing power required. As a result, it can be helpful to quickly build a proof-of-concept and iterate on the design, including hardware selection, to arrive at a complete solution.

Machine learning pipeline

Most ML projects follow a similar flow: collect data, examine that data, train an ML model, and deploy that model. This complete process is known as a machine learning pipeline.

Data collection

To start the process, you need to collect raw data. For most deep learning models, you need a lot of data (think thousands or tens of thousands of samples).

In many cases, data collection involves deploying sensors to the field or your target environment and letting them collect raw data. You might collect audio data with a smartphone or vibration data using an IoT sensor. You can create custom software that automatically transmits the data to a data lake or stores it directly in an Edge Impulse project. Alternatively, you can store data locally on the device, such as on an SD card, and upload it to your data storage later.
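
If you store samples directly in an Edge Impulse project, a minimal upload sketch using the ingestion service might look like the following; the file name, label, and API key are placeholders you would replace with your own.

```python
import requests

# Placeholder values: use your own project API key, file, and label
API_KEY = "ei_..."          # found in your project's dashboard
FILE_PATH = "sample01.wav"  # hypothetical audio sample
LABEL = "machine-on"        # hypothetical label for this sample

with open(FILE_PATH, "rb") as f:
    res = requests.post(
        url="https://ingestion.edgeimpulse.com/api/training/files",
        headers={"x-api-key": API_KEY, "x-label": LABEL},
        files={"data": (FILE_PATH, f, "audio/wav")},
    )
print(res.status_code, res.text)
```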

Examples of data can include raw time-series data in a CSV file, audio saved as a WAV file, or images in JPEG format.

Note that sensors can vary. As a result, it's usually a good idea to collect data using the same device and/or sensors that you plan to ultimately deploy to. For example, if you plan to deploy your ML model to a smartphone, you likely want to collect data using smartphones.

Data cleaning

Raw data often contains errors in the form of omissions (missing fields), corrupted samples, or duplicate entries. If you do not fix these errors, the machine learning training process will either fail or produce a flawed model.

A common practice is to employ the medallion architecture for scrubbing data, which involves copying data, cleaning out any errors or filling in missing fields, and storing the results in a different bucket. The buckets have different labels: bronze, silver, and gold. As the data is successively cleaned and aggregated, it moves up from bronze to silver, then from silver to gold. The gold bucket is ready for analysis or to be fed to a machine learning pipeline.

The process of downloading, manipulating, and re-uploading the data back into a separate storage is known as extract, transform, load (ETL). A number of tools, such as Edge Impulse transformation blocks and AWS Glue, can be used to build automated ETL pipelines once you have an understanding of how the data is structured and what cleaning processes are required.
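
As a rough sketch of such a transform step, the snippet below cleans a hypothetical accelerometer CSV pulled from a bronze bucket (a local folder here for simplicity) and writes the result to a silver bucket; the file paths and column names are assumptions for illustration.

```python
import pandas as pd

# Extract: read raw samples from the "bronze" stage
df = pd.read_csv("bronze/accelerometer_raw.csv")

# Transform: remove duplicate rows and interpolate missing sensor readings
df = df.drop_duplicates()
df[["accX", "accY", "accZ"]] = df[["accX", "accY", "accZ"]].interpolate()

# Drop any rows that are still incomplete (e.g. missing timestamps)
df = df.dropna()

# Load: write the cleaned data to the "silver" stage
df.to_csv("silver/accelerometer_clean.csv", index=False)
```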

Data analysis

Once the data is cleaned, it can be analyzed by domain experts and data scientists to identify patterns and extract meaning. This is often a manual process that utilizes various algorithms (e.g. unsupervised ML) and tools (e.g. Python, R). Such patterns can be used to construct ML models that automatically generalize meaning from the raw input data.
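
As one example of this kind of exploratory analysis, the sketch below clusters the (hypothetical) cleaned accelerometer data from the previous step to see whether natural groupings, such as distinct machine states, emerge:

```python
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("silver/accelerometer_clean.csv")
X = StandardScaler().fit_transform(df[["accX", "accY", "accZ"]])

# Look for 3 groupings (a guess to be refined by inspecting the results)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
df["cluster"] = kmeans.fit_predict(X)
print(df["cluster"].value_counts())
```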

Additionally, data can contain any number of biases that can lead to a biased machine learning model. Analyzing your data for biases early can lead to a much more robust and fair model down the road.

Feature extraction

Sometimes, the raw data alone is not sufficient or might cause the ML model to be overly complex. As a result, hand-crafted features can be extracted from the raw data and fed into the ML model. While feature engineering is a manual step, it can potentially save training time and inference compute resources by removing the need to train a larger model. In other words, feature extraction can simplify the data going into a model to help make the model smaller and faster.

For example, a time-series sample might have hundreds or thousands of data points. As the number of such points increases, the model complexity also often increases. To help keep the model small, we can extract some features from each sample. In this case, performing the Fast Fourier Transform (FFT) breaks the signal apart into its frequency components, which helps the model identify repeating patterns. Now, we have a few dozen data points going into a model rather than a few hundred.
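
A minimal sketch of this idea with NumPy, using random numbers as a stand-in for a real sensor window:

```python
import numpy as np

FS = 100                       # hypothetical sample rate (Hz)
window = np.random.randn(200)  # stand-in for one window of sensor data

# Break the signal into its frequency components
spectrum = np.abs(np.fft.rfft(window))
freqs = np.fft.rfftfreq(len(window), d=1.0 / FS)

# Keep a handful of summary features instead of all 200 raw points
features = [
    window.mean(),                       # average level (DC component)
    window.std(),                        # overall signal energy
    freqs[np.argmax(spectrum[1:]) + 1],  # dominant non-DC frequency
    spectrum[1:].max(),                  # strength of that dominant peak
]
print(features)
```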

In general, smaller models and fewer inputs mean faster execution times.

Train machine learning model

With the data cleaned and features extracted, you can select or construct an ML model architecture and train that model. In the training process, you attempt to generalize meaning in the input data such that the model's output matches expected values (even when presented with new data).

Deep neural networks are the current popular approach to solving a variety of supervised and unsupervised ML tasks. ML scientists and engineers use a variety of tools, such as TensorFlow and PyTorch, to build, train, and test deep neural networks.

In addition to using these lower-level tools to design your own model architecture, you can also rely on pre-built models or tools, like Edge Impulse, that contain the building blocks needed to tackle a wide variety of edge AI tasks.

Pretrained models can be retrained using custom data in a process known as transfer learning. Transfer learning is often faster and requires less data than training from scratch.
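
As a sketch of what transfer learning looks like in one of these tools, the snippet below freezes an ImageNet-pretrained MobileNetV2 feature extractor in Keras and adds a small classification head for three hypothetical classes:

```python
import tensorflow as tf

# Start from MobileNetV2 weights pre-trained on ImageNet, minus its classifier
base = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze the pre-trained feature extractor

# Attach a small head for our own classes (3 here, purely illustrative)
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Assuming train_ds is a tf.data.Dataset of (image, label) pairs:
# model.fit(train_ds, epochs=10)
```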

The combination of automated feature extraction and ML model is known as an impulse. This combination of steps can be deployed to cloud servers and edge devices. The impulse takes in raw data, performs any necessary feature extraction, and runs inference during prediction serving.

Model testing

In almost all cases, you want to test your model's performance. Good ML practices dictate keeping a part of your data separate from the training data (known as a test set, or holdout set). Once you have trained the model, you will use this test set to verify the model's functionality. If your model performs well on the training set but poorly on the test set, it might be overfit, which often requires you to rethink your dataset, feature extraction, and model architecture.
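
A minimal illustration of a holdout split with scikit-learn, using synthetic data as a stand-in for a real feature set:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for real extracted features and labels
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Hold out 20% of the data before any training happens
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)

model = LogisticRegression().fit(X_train, y_train)

# A large gap between these two numbers is the classic sign of overfitting
print(f"train accuracy: {model.score(X_train, y_train):.2f}")
print(f"test accuracy:  {model.score(X_test, y_test):.2f}")
```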

The process of data cleaning, feature extraction, model training, and model testing is almost always iterative. You will often find yourself revisiting each stage in the pipeline to create an impulse that performs well for your particular task and within your hardware constraints.

Additionally, you might need to collect new data if your current dataset does not produce an acceptable model. For example, vibration data from an accelerometer alone might prove insufficient for creating a robust model, so you might have to collect supplemental data, such as audio from a microphone. The combination of vibration and audio data is usually better at identifying mechanical anomalies than either sensor type alone.

Model deployment

For cloud-based AI, you can use tools like SageMaker to deploy your model to a server as part of a prediction serving application. Edge AI can be somewhat trickier, as you often need to optimize your model for a particular hardware target and develop an application around that model.

Optimization can involve a number of processes that reduce the size and complexity of the ML model, such as pruning unimportant nodes from the neural network, quantizing operations to run more efficiently on low-end hardware, and compiling models to run on specialized hardware (e.g. GPUs and NPUs).
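
As one concrete example of these optimizations, TensorFlow Lite offers post-training quantization. A minimal sketch, using a trivial stand-in for a trained Keras model:

```python
import tensorflow as tf

# Trivial stand-in; in practice you would convert your trained model
model = tf.keras.Sequential([
    tf.keras.Input(shape=(10,)),
    tf.keras.layers.Dense(3),
])

# Dynamic-range quantization: weights are stored as 8-bit integers
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("model_quantized.tflite", "wb") as f:
    f.write(tflite_model)
print(f"Quantized model size: {len(tflite_model)} bytes")
```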

The ML model is simply a collection of mathematical operations. On its own, it cannot do much. Due to this limitation, an application needs to be built around the model to collect data, feed that data to the impulse for feature extraction and inference, and take some action based on the inference results.

In cloud-based AI, this application is often a prediction serving program that waits for web requests containing raw data. The application can then respond with inference results. On the other hand, edge AI usually requires a tighter integration between performing inference and doing something with the results, such as notifying a user, stopping a machine, or making a decision on how to steer a car.
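
The sketch below shows the general shape of such an edge application loop; the sensor reader and impulse here are stand-in functions rather than a real deployment:

```python
import random
import time

THRESHOLD = 0.8  # hypothetical confidence threshold for taking action

def read_sensor_window():
    """Stand-in for reading one window of samples from a real sensor."""
    return [random.gauss(0, 1) for _ in range(100)]

def run_impulse(window):
    """Stand-in for the deployed impulse: feature extraction + inference."""
    energy = sum(x * x for x in window) / len(window)
    return {"anomaly": min(energy / 3.0, 1.0)}

while True:
    window = read_sensor_window()  # collect data
    scores = run_impulse(window)   # feature extraction + inference
    if scores["anomaly"] > THRESHOLD:
        print("Anomaly detected - stopping machine")  # act on the result
    time.sleep(0.1)
```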

Programmers and software engineers are often needed to build the application. In many cases, these developers are experts with the target deployment hardware, such as a particular microcontroller, embedded Linux, or smartphone app creation. They work with the ML engineering team to ensure that the model can run on the target hardware.

Operations and maintenance (O&M)

As with any software deployment, operations and maintenance is important to provide continuing support to the edge AI solution. As the data or operating environment changes over time, model performance can begin to degrade. As a result, such deployments often require monitoring model performance, collecting new data, and updating the model.
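
One simple (and admittedly crude) monitoring heuristic is to track the model's average confidence over a rolling window and flag a large drop from the baseline measured at deployment time; the thresholds below are hypothetical:

```python
from collections import deque

BASELINE = 0.90    # average confidence measured at deployment (hypothetical)
ALERT_DROP = 0.10  # alert if we fall this far below the baseline
recent = deque(maxlen=1000)

def record_inference(confidence):
    """Call with the top-class confidence of every inference."""
    recent.append(confidence)
    if len(recent) == recent.maxlen:
        avg = sum(recent) / len(recent)
        if avg < BASELINE - ALERT_DROP:
            print(f"Possible model drift: average confidence {avg:.2f}")
            # e.g. flag recent samples for labeling and retraining
```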

In the next section on edge MLOps, we will examine the different types of model drift and how parts of the ML pipeline can be automated to create a repeatable system for O&M.

Quiz

Test your knowledge on the edge AI lifecycle with the following quiz:

  • Machine learning pipeline and workflow
  • Machine learning data cleaning
  • Feature extraction and engineering