Machine learning training and deployment
In machine learning (ML), data is fed into the training process. For supervised learning, the ground-truth labels are also provided along with each sample. The training algorithm automatically updates the parameters (also known as “weights”) in the ML model.
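The training loop described above can be sketched with a toy example. The code below fits a one-parameter linear model by gradient descent; the data, learning rate, and parameter count are illustrative assumptions, not tied to any particular framework:

```python
# Toy supervised training loop: fit y = w * x by gradient descent.
# Ground-truth labels are provided with each sample; the training
# algorithm updates the model's parameter (weight) automatically.

samples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, label) pairs; true w = 2

w = 0.0              # initial weight
learning_rate = 0.05

for epoch in range(200):
    for x, y_true in samples:
        y_pred = w * x                  # forward pass (a prediction)
        error = y_pred - y_true         # compare prediction to ground-truth label
        w -= learning_rate * error * x  # gradient step on the squared error

print(round(w, 3))  # converges close to the true weight, 2.0
```

Real models have millions of weights rather than one, but the structure is the same: predict, compare against the label, and nudge the weights to reduce the error.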
Machine learning model training

Machine learning model inference
Benefits of edge AI
Assuming that you can run your ML model on an edge device, such as a laptop, smartphone, single-board computer, or embedded Internet of Things (IoT) device, edge AI has the following advantages over a cloud-based approach:
- Reduced bandwidth - Rather than transmitting raw data over the network, you can perform inference on the edge device directly. From there, you would only need to transmit the results, which is often much less data than the raw input.
- Reduced latency - Transmitting data across networks (including the internet) can take time, as that data has to travel through multiple switches, routers, and servers. The round-trip latency when waiting for a response from a cloud server is often measured in hundreds of milliseconds. On the other hand, there is little or no network latency with edge AI, as inference is performed on or relatively close to where the data was collected.
- Better energy efficiency - Most cloud servers require large overhead with containerized operating systems and various abstraction layers. By running inference on edge devices, you can often do away with these layers and overhead.
- Increased reliability - If you are operating in an environment with little or no internet connection, your edge devices can still continue to operate. This is important in remote environments or applications like self-driving cars.
- Improved data privacy - While IoT devices require care when implementing security plans, you can rest assured that your raw data does not leave your device or edge network. Users can trust that raw data, such as images of their faces, is not leaving the network to be intercepted by malicious actors.
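The bandwidth savings above can be made concrete with some simple arithmetic. The sketch below compares the payload of sending a raw camera frame versus sending only an inference result; the frame dimensions and the result format are made-up assumptions for illustration:

```python
import json

# Hypothetical raw input: a 640x480 RGB camera frame, 1 byte per channel.
raw_frame_bytes = 640 * 480 * 3  # 921,600 bytes per frame

# With edge AI, inference runs on the device, so only the result is sent.
# The label and confidence value here are assumed example outputs.
result = {"label": "person", "confidence": 0.93}
result_bytes = len(json.dumps(result).encode("utf-8"))

savings = raw_frame_bytes / result_bytes
print(f"raw: {raw_frame_bytes} B, result: {result_bytes} B, ~{savings:,.0f}x less data")
```

Even allowing for protocol overhead, transmitting a short result instead of the raw input reduces the data sent per sample by several orders of magnitude.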
Limitations of edge AI
Edge AI has a number of limitations that you should take into consideration, and you should weigh your options carefully against cloud deployment.
- Resource constraints - In general, edge devices offer fewer computational resources than their cloud-based counterparts. Cloud servers can offer powerful processors and large amounts of memory. If your ML model cannot be optimized or constrained to run on an edge device, you should consider a cloud-based solution.
- Limited remote access - Prediction serving from the cloud offers easy access from any device that has internet access. Remotely accessing edge devices often requires special network configuration, such as running a VPN service.
- Scaling - Scaling a cloud-based prediction service usually requires simply cloning your server and paying the service provider for additional computing power. With edge computing, you need to purchase and configure additional hardware.
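The resource-constraint question can often be answered with back-of-the-envelope arithmetic before committing to an edge deployment. The sketch below checks whether a model's weights fit in a device's memory; the parameter count and RAM figure are illustrative assumptions, not real device specifications:

```python
# Back-of-the-envelope check: do the model's weights fit on the target device?
# All numbers below are illustrative assumptions, not real hardware specs.

num_parameters = 250_000       # weights in the hypothetical model
bytes_per_param_fp32 = 4       # 32-bit floating point
bytes_per_param_int8 = 1       # after 8-bit quantization

device_ram_bytes = 512 * 1024  # hypothetical edge device with 512 KB of RAM

fp32_size = num_parameters * bytes_per_param_fp32  # 1,000,000 bytes
int8_size = num_parameters * bytes_per_param_int8  # 250,000 bytes

print("fp32 fits:", fp32_size <= device_ram_bytes)  # False: model must be optimized
print("int8 fits:", int8_size <= device_ram_bytes)  # True: quantized model fits
```

A model that fails this kind of check must be shrunk (for example, by quantization or pruning) or moved to a cloud-based solution, as the section above notes.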
Examples of edge AI
Edge AI is already being used in our everyday lives, and it offers cost savings as an extension of industrial IoT applications. One of the most prominent home automation examples of edge AI is the smart speaker.
Smart speaker

Smart watch

Lexmark Optra

Self-driving bus