Neural Networks

Neural networks are a set of algorithms, modeled loosely after the human brain, designed to recognize patterns. They interpret sensory data through a kind of machine perception, labeling, or clustering of raw input. The patterns they recognize are numerical, contained in vectors, into which all real-world data, be it images, sound, text, or time series, must be translated.

What about anomaly detection with K-Means or GMM?

Please note that K-Means and Gaussian Mixture Models (GMM) are not neural networks. They are algorithms used in unsupervised machine learning, specifically for clustering tasks.

In Edge Impulse, neural networks can be used for supervised learning tasks such as image or audio classification, regression, and object detection — either with transfer learning, with preset neural network architectures, or by designing your own.

How do they work?

Neural networks consist of layers of interconnected nodes, also known as neurons.

Neurons (or nodes)

Each node receives input from its predecessors, processes it, and passes its output to succeeding nodes. The processing involves weighted inputs, a bias (threshold), and an activation function that determines whether and to what extent the signal should progress further through the network.
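The computation above can be sketched in a few lines. This is a minimal illustration, not code from any particular framework; the weights, bias, and sigmoid activation are arbitrary choices for the example:

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs plus the bias term...
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    # ...passed through an activation function (here, a sigmoid),
    # which decides how strongly the signal propagates onward.
    return 1.0 / (1.0 + math.exp(-z))

# With zero net input, the sigmoid outputs exactly 0.5.
activation = neuron([0.5, -1.0], [0.3, 0.2], 0.1)
```

Other activation functions (ReLU, tanh, softmax) follow the same pattern: a weighted sum plus bias, squashed by a nonlinearity.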


Neurons are organized into layers: input, hidden, and output layers. The complexity of the network depends on the number and size of these layers.

  • Input Layer: Receives raw input data.

  • Hidden Layers: Perform computations using weighted inputs.

  • Output Layer: Produces the final output.
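Chaining these layers gives a complete forward pass. The sketch below wires a 2-input hidden layer of three ReLU neurons into a single linear output neuron; all weight and bias values are made up for illustration:

```python
def dense_layer(inputs, weights, biases, activation):
    # A fully connected layer: every neuron sees every input.
    return [activation(sum(x * w for x, w in zip(inputs, ws)) + b)
            for ws, b in zip(weights, biases)]

def relu(z):
    return max(0.0, z)

def identity(z):
    return z

# Input layer: the raw data, here two features.
features = [0.5, -1.0]

# Hidden layer: 2 inputs -> 3 neurons (one weight row per neuron).
hidden = dense_layer(features,
                     [[0.1, 0.4], [-0.2, 0.3], [0.5, 0.5]],
                     [0.0, 0.1, -0.1],
                     relu)

# Output layer: 3 hidden activations -> 1 output value.
output = dense_layer(hidden, [[0.3, -0.6, 0.2]], [0.05], identity)
```

Deeper or wider networks simply add more layers or more neurons per layer; the per-layer computation stays the same.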

Neural Network Architectures

Neural networks can vary widely in architecture, adapting to different types of problems and data.

Learning process

The power of neural networks lies in their ability to learn. Learning occurs through a process called training, where the network adjusts its weights based on the difference between its output and the desired output. This process is facilitated by an optimizer, which guides the network in adjusting its weights to minimize error (the loss).

  • Training: Neural networks learn by adjusting weights based on the error in predictions. This process is repeated over many training cycles, or epochs, using training data.

  • Backpropagation: A key mechanism where the network adjusts its weights starting from the output layer and moving backward through the hidden layers, minimizing error with each pass.
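The weight-update step can be shown on the smallest possible case: a one-weight model y_hat = w * x trained with squared-error loss and plain gradient descent. This is a toy sketch of the update rule, not a full multi-layer backpropagation implementation; the data and learning rate are invented for the example:

```python
def train(data, epochs=100, lr=0.1):
    w = 0.0  # single trainable weight; model: y_hat = w * x
    for _ in range(epochs):           # one epoch = one pass over the data
        for x, y in data:
            y_hat = w * x             # forward pass
            grad = 2 * (y_hat - y) * x  # d(loss)/dw for squared error
            w -= lr * grad            # step against the gradient
    return w

# The data follows y = 2x, so training should drive w toward 2.0.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = train(data)
```

In a real network, backpropagation applies this same chain-rule gradient computation layer by layer, from the output backward through the hidden layers.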

Neural Networks in Edge AI

In Edge AI, neural networks operate under tight constraints on computational power and energy. They need to be optimized for speed and size without compromising too much on accuracy. This often involves techniques like feature extraction, efficient neural network architectures, transfer learning, quantization, and model pruning.

  • Feature Extraction: Extracting meaningful features from the raw data that can be effectively processed by the neural network on resource-constrained devices.

  • Neural Network Architectures: Selecting a model architecture that is designed to run efficiently on the type of processor you are targeting, and fits within memory constraints.

  • Transfer Learning: Using a pre-trained model and retraining it with a specific smaller dataset relevant to the edge application.

  • Quantization: Reducing the precision of the numbers used in the model to decrease the computational and storage burden.

  • Model Pruning: Reducing the size of the model by eliminating unnecessary nodes and layers.
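To make quantization concrete, here is a minimal sketch of symmetric 8-bit quantization: floats are mapped to signed 8-bit integers via a single scale factor, then mapped back with a small rounding error. Production tools (e.g. TensorFlow Lite) use more sophisticated per-tensor or per-channel schemes; the weight values below are invented for the example:

```python
def quantize_int8(values):
    # One scale factor maps the largest magnitude onto the int8 range.
    # Assumes at least one nonzero value (otherwise scale would be 0).
    scale = max(abs(v) for v in values) / 127.0
    q = [round(v / scale) for v in values]  # ints in [-127, 127]
    return q, scale

def dequantize(q, scale):
    # Recover approximate floats; error is at most half the scale.
    return [qi * scale for qi in q]

weights = [0.82, -1.27, 0.003, 0.5]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
```

Each weight now needs 1 byte instead of 4, and integer arithmetic is typically much cheaper on microcontrollers than floating point.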

Neural networks, in the context of Edge AI, must be designed and optimized to function efficiently in resource-constrained environments, balancing the trade-off between accuracy and performance.

To learn more about neural networks, see the “Introduction to Neural Networks” video in our “Introduction to Embedded Machine Learning” course.
