Edge AI lifecycle
The edge AI lifecycle includes the steps involved in planning, implementing, and maintaining an edge AI project. It follows the same general flow as most engineering and programming undertakings with the added complexity of managing data and models.
Previously, we examined techniques for choosing hardware for edge AI projects. In this lesson, we will look at the machine learning (ML) pipeline and how to approach an edge AI project.
Before starting a machine learning project, it is imperative that you examine the actual need for such a project: what problem are you trying to solve? For example, you could improve user experience, such as building a more accurate fall detection system or a more responsive voice-activated smart speaker. You might want to monitor machinery to identify anomalies before problems become unmanageable, which could save you time and money in the long run. Alternatively, you could count people in a retail store to identify peak times and shopping trends.
Once you have identified your requirements, you can begin scoping your project:
Can the project be solved through traditional, rules-based methods, or is AI needed to solve the problem?
Is cloud AI or edge AI the better approach?
What kind of hardware is the best fit for the problem?
Note that the hardware selection might not be apparent until you have constructed a prototype ML model, as that will determine the amount of processing power required. As a result, it can be helpful to quickly build a proof-of-concept and iterate on the design, including hardware selection, to arrive at a complete solution.
Most ML projects follow a similar flow when it comes to collecting data, examining that data, training an ML model, and deploying that model.
This complete process is known as a machine learning pipeline.
To start the process, you need to collect raw data. For most deep learning models, you need a lot of data (think thousands or tens of thousands of samples).
In many cases, data collection involves deploying sensors to the field or your target environment and letting them collect raw data. You might collect audio data with a smartphone or vibration data using an IoT sensor. You can create custom software that automatically transmits the data to a data lake or stores it directly in an Edge Impulse project. Alternatively, you can store data directly on the device, such as on an SD card, and upload it to your data storage later.
Examples of data can include raw time-series data in a CSV file, audio saved as a WAV file, or images in JPEG format.
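As a simple illustration, the following Python sketch logs time-series readings to a CSV file for later upload. The read_accelerometer() function is a hypothetical stand-in for whatever sensor driver or SDK you actually use.

```python
import csv
import time

def read_accelerometer():
    """Hypothetical sensor driver: returns (x, y, z) acceleration in g."""
    # Replace this stub with calls to your actual sensor library or SDK.
    return 0.0, 0.0, 1.0

# Log roughly 10 seconds of samples at about 100 Hz to a CSV file.
with open("accel_samples.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp_ms", "accX", "accY", "accZ"])
    start = time.time()
    while time.time() - start < 10:
        x, y, z = read_accelerometer()
        writer.writerow([int((time.time() - start) * 1000), x, y, z])
        time.sleep(0.01)
```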
Note that sensors can vary from device to device in characteristics such as sampling rate, sensitivity, and noise. As a result, it's usually a good idea to collect data using the same device and/or sensors that you plan to ultimately deploy to. For example, if you plan to deploy your ML model to a smartphone, you likely want to collect data using smartphones.
Raw data often contains errors in the form of omissions (some fields missing), corrupted samples, or duplicate entries. If you do not fix these errors, the machine learning training process will either fail or produce a flawed model.
A common practice is to employ the medallion architecture for scrubbing data, which involves copying data, cleaning out any errors or filling in missing fields, and storing the results in a different bucket. The buckets are labeled bronze, silver, and gold. As the data is successively cleaned and aggregated, it moves up from bronze to silver, then from silver to gold. The data in the gold bucket is ready for analysis or to be fed to a machine learning pipeline.
The process of downloading, manipulating, and re-uploading the data back into a separate storage is known as extract, transform, load (ETL). A number of tools, such as Edge Impulse transformation blocks and AWS Glue, can be used to build automated ETL pipelines once you have an understanding of how the data is structured and what cleaning processes are required.
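As an example, a single bronze-to-silver cleaning step might look like the following pandas sketch. The bucket paths and column names are placeholders for whatever your own data storage uses.

```python
import pandas as pd

# Bronze: raw data as collected, possibly with duplicates and missing fields.
bronze = pd.read_csv("bronze/accel_samples.csv")

# Transform: drop exact duplicates, fill short gaps in numeric columns, and
# discard any rows that are still missing required fields afterward.
silver = (
    bronze.drop_duplicates()
          .interpolate(limit=2)
          .dropna(subset=["accX", "accY", "accZ"])
)

# Load: write the cleaned result to the silver bucket for further aggregation.
silver.to_csv("silver/accel_samples.csv", index=False)
```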
Once the data is cleaned, it can be analyzed by domain experts and data scientists to identify patterns and extract meaning. This is often a manual process that utilizes various algorithms (e.g. unsupervised ML) and tools (e.g. Python, R). Such patterns can be used to construct ML models that automatically generalize meaning from the raw input data.
Additionally, data can contain any number of biases that can lead to a biased machine learning model. Analyzing your data for biases can create a much more robust and fair model down the road.
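As a small example of this kind of analysis, a quick class-balance check can expose one common source of bias before any training happens. The file path, label column, and 10% threshold below are hypothetical.

```python
import pandas as pd

# Hypothetical labeled dataset with a 'label' column.
labels = pd.read_csv("silver/labeled_samples.csv")["label"]

# Show each class as a fraction of the dataset; heavily skewed counts are a
# common source of bias in the resulting model.
counts = labels.value_counts(normalize=True)
print(counts)

if counts.min() < 0.10:
    print("Warning: at least one class makes up less than 10% of the data.")
```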
Sometimes, the raw data is not sufficient or might cause the ML model to be overly complex. As a result, features can be manually extracted from the raw data and fed into the ML model. While feature engineering is a manual step, it can potentially save time and inference compute resources by not having to train a larger model. In other words, feature extraction can simplify the data going to a model to help make the model smaller and faster.
For example, a time-series sample might have hundreds or thousands of data points. As the number of such points increases, the model complexity also often increases. To help keep the model small, we can extract some features from each sample. In this case, performing the Fast Fourier Transform (FFT) breaks the signal apart into its frequency components, which helps the model identify repeating patterns. Now, we have a few dozen data points going into the model rather than hundreds or thousands.
In general, smaller models and fewer inputs mean faster execution times.
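A minimal sketch of this kind of feature extraction, using NumPy's FFT and an arbitrary choice of 32 frequency bins, might look like the following.

```python
import numpy as np

def extract_fft_features(window, num_bins=32):
    """Reduce a 1-D time-series window to a small set of frequency features."""
    spectrum = np.abs(np.fft.rfft(window))     # magnitude of each frequency component
    bins = np.array_split(spectrum, num_bins)  # group into coarse frequency bins
    return np.array([b.mean() for b in bins])  # one averaged value per bin

# Example: a 1,000-point vibration window becomes just 32 input features.
window = np.random.randn(1000)
features = extract_fft_features(window)
print(features.shape)  # (32,)
```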
With the data cleaned and features extracted, you can select or construct an ML model architecture and train that model. In the training process, you attempt to generalize meaning in the input data such that the model's output matches expected values (even when presented with new data).
Deep neural networks are the current popular approach to solving a variety of supervised and unsupervised ML tasks. ML scientists and engineers use a variety of tools, such as TensorFlow and PyTorch, to build, train, and test deep neural networks.
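For illustration, here is a minimal Keras sketch of a small, fully connected classifier. The feature count, number of classes, and training data are placeholders; a real project would feed in the features extracted earlier in the pipeline.

```python
import numpy as np
import tensorflow as tf

# Placeholder training data: 500 samples of 32 extracted features, 3 classes.
X_train = np.random.randn(500, 32).astype("float32")
y_train = np.random.randint(0, 3, size=500)

# A small, fully connected classifier suited to resource-constrained targets.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32,)),
    tf.keras.layers.Dense(20, activation="relu"),
    tf.keras.layers.Dense(10, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X_train, y_train, epochs=30, batch_size=32, validation_split=0.2)
```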
In addition to using these lower-level tools to design your own model architecture, you can also rely on pre-built models or tools, like Edge Impulse, that contain the building blocks needed to tackle a wide variety of edge AI tasks.
Pretrained models, such as those available from NVIDIA TAO, can be retrained using custom data in a process known as transfer learning. Transfer learning is often faster and requires less data than training from scratch.
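As an illustrative sketch of transfer learning (using Keras's pretrained MobileNetV2 rather than any particular TAO model), you can freeze the pretrained feature extractor and train only a new classification head on your own data. The input size and two-class head here are assumptions.

```python
import tensorflow as tf

# Load a model pretrained on ImageNet, without its original classification head.
base = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze the pretrained feature extractor

# Attach a new head for a hypothetical two-class problem.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=10)  # retrain on your custom data
```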
The combination of automated feature extraction and ML model is known as an impulse. This combination of steps can be deployed to cloud servers and edge devices. The impulse takes in raw data, performs any necessary feature extraction, and runs inference during prediction serving.
In almost all cases, you want to test your model's performance. Good ML practices dictate keeping a part of your data separate from the training data (known as a test set, or holdout set). Once you have trained the model, you will use this test set to verify the model's functionality. If your model performs well on the training set but poorly on the test set, it might be overfit, which often requires you to rethink your dataset, feature extraction, and model architecture.
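A minimal sketch of this practice, using scikit-learn to hold out a test set and placeholder data in place of a real dataset:

```python
import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split

# Placeholder dataset: 600 samples of 32 features, 3 classes.
X = np.random.randn(600, 32).astype("float32")
y = np.random.randint(0, 3, size=600)

# Hold out 20% of the data; the model never sees this split during training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32,)),
    tf.keras.layers.Dense(20, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X_train, y_train, epochs=30, verbose=0)

_, train_acc = model.evaluate(X_train, y_train, verbose=0)
_, test_acc = model.evaluate(X_test, y_test, verbose=0)

# A large gap between training and test accuracy is a sign of overfitting.
print(f"train accuracy: {train_acc:.2f}, test accuracy: {test_acc:.2f}")
```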
The process of data cleaning, feature extraction, model training, and model testing is almost always iterative. You will often find yourself revisiting each stage in the pipeline to create an impulse that performs well for your particular task and within your hardware constraints.
Additionally, you might need to collect new data if your current dataset does not produce an acceptable model. For example, vibration data from an accelerometer alone might prove insufficient for creating a robust model, so you might need to collect supplemental data, such as audio data from a microphone. The combination of vibration and audio data is usually better at identifying mechanical anomalies than one sensor type alone.
For cloud-based AI, you can use tools like SageMaker to deploy your model to a server as part of a prediction serving application. Edge AI can be somewhat trickier, as you often need to optimize your model for a particular hardware target and develop an application around that model.
Optimization can involve a number of processes that reduce the size and complexity of the ML model, such as pruning unimportant nodes from the neural network, quantizing operations to run more efficiently on low-end hardware, and compiling models to run on specialized hardware (e.g. GPUs and NPUs).
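For example, post-training quantization with the TensorFlow Lite converter might look like the following sketch; the model here is just a stand-in for your trained network.

```python
import tensorflow as tf

# Stand-in for a trained Keras model (see the training sketch earlier).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32,)),
    tf.keras.layers.Dense(3, activation="softmax"),
])

# Post-training quantization: convert to TensorFlow Lite with reduced-precision weights.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```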
The ML model is simply a collection of mathematical operations. On its own, it cannot do much. Due to this limitation, an application needs to be built around the model to collect data, feed data to the impulse for feature extraction and inference, and take some action based on the inference results.
In cloud-based AI, this application is often a prediction serving program that waits for web requests containing raw data. The application can then respond with inference results. On the other hand, edge AI usually requires a tighter integration between performing inference and doing something with the results, such as notifying a user, stopping a machine, or making a decision on how to steer a car.
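A minimal sketch of such an edge inference loop, using the TensorFlow Lite interpreter and a hypothetical read_features() function in place of real sensor sampling and feature extraction:

```python
import numpy as np
import tensorflow as tf

# Load the optimized model and prepare it for on-device inference.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

def read_features():
    """Hypothetical stand-in for sensor sampling plus feature extraction."""
    return np.random.randn(1, 32).astype("float32")

# A simple loop: sample, run inference, then act on the result.
for _ in range(10):
    interpreter.set_tensor(input_details[0]["index"], read_features())
    interpreter.invoke()
    scores = interpreter.get_tensor(output_details[0]["index"])[0]
    if scores.argmax() == 1 and scores.max() > 0.8:
        print("Anomaly class detected: notify a user or stop the machine here.")
```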
Programmers and software engineers are often needed to build the application. In many cases, these developers are experts with the target deployment hardware, such as a particular microcontroller, embedded Linux, or smartphone app creation. They work with the ML engineering team to ensure that the model can run on the target hardware.
As with any software deployment, operations and maintenance (O&M) are important for providing continuing support to the edge AI solution. As the data or operating environment changes over time, model performance can begin to degrade. As a result, such deployments often require monitoring model performance, collecting new data, and updating the model.
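One simple, hypothetical approach is to track inference confidence over time and flag when it drops, as in the sketch below; real deployments typically combine this with collecting and labeling fresh field data.

```python
import numpy as np

recent_confidences = []  # rolling window of recent top confidence scores

def log_inference(scores, window=500, threshold=0.6):
    """Flag possible model drift when average confidence drops over time."""
    recent_confidences.append(float(np.max(scores)))
    if len(recent_confidences) > window:
        recent_confidences.pop(0)
        if np.mean(recent_confidences) < threshold:
            print("Average confidence is low: consider collecting new data "
                  "and retraining the model.")

# Example usage with a dummy score vector from one inference:
log_inference(np.array([0.2, 0.5, 0.3]))
```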
In the next section on edge MLOps, we will examine the different types of model drift and how parts of the ML pipeline can be automated to create a repeatable system for O&M.