Glossary

In this glossary, you will find a comprehensive list of terms related to Edge Impulse and various related fields. The terms are organized alphabetically for easy reference. If you come across a term that you are unfamiliar with, you can look it up here to find a clear and concise definition. This glossary is designed to help you navigate the world of Edge Impulse and related technologies with confidence.

A

  • ADC (Analog-to-Digital Converter): A circuit that converts an analog signal into a digital signal.

  • Algorithm: A procedure used for solving a problem or performing a computation.

  • ARM Processor: A family of CPUs based on the RISC architecture, commonly used in SBCs and embedded devices.

  • Artificial Intelligence (AI): The simulation of human intelligence in machines.

  • Attention Mechanism: A technique used in neural networks to focus on specific parts of the input sequence when making predictions. It is crucial in the architecture of transformers, which are used in LLMs.

B

  • Bidirectional Encoder Representations from Transformers (BERT): A pre-trained language model that uses transformers to process text in both directions, improving understanding and context.

  • Bioinformatics: Computational technology in molecular biology.

  • Biometric Monitoring: Tracking physiological data for health purposes.

C

  • Classification: The task of determining what category (or class) an input belongs to.

  • Condition Monitoring: Tracking machine or component health.

  • Connectivity: Methods and technologies for connecting IoT devices.

  • Cross-Compilation: Compiling code for an embedded system with a different architecture.

D

  • Data pipeline: A sequence of steps for importing, transforming, and processing data. In Edge Impulse, a data pipeline can be part of a project or be stand-alone.

  • Data Preprocessing: Cleaning and organizing raw data before model training.

  • Deep Learning: ML subset using neural networks with many layers.

  • Digital Twin: A virtual model of a physical process or product.

E

  • Edge Computing: Processing data near its generation point.

  • Edge Impulse Studio: Development platform for AI on edge devices.

  • Embedded Linux: Linux OS/kernel used in embedded systems.

  • Embedded Programming: Writing software for embedded systems.

  • Embedded System: Computer hardware and software for specific functions.

  • Embedding: A representation of text where words or phrases are mapped to vectors of real numbers. Embeddings capture semantic meaning and are used as input to language models.

  • Edge Optimized Neural (EON) Compiler: A compiler that optimizes neural network models for edge devices, reducing memory and computational requirements. Learn more.

  • Ethernet Port: Networking port on some SBCs.

F

  • Firmware: Software programmed into the read-only memory of electronic devices.

  • Fine-tuning: The process of further training a pre-trained model on a specific task with labeled data. Fine-tuning helps the model adapt to particular tasks or domains.

  • Float32: A type of numerical precision where each number is stored with decimal values and takes 32 bits of computer memory.

  • FOMO (Faster Objects, More Objects): A machine learning algorithm designed to bring object detection to highly constrained devices. It enables real-time object detection, tracking, and counting on microcontrollers, using up to 30x less processing power and memory than MobileNet SSD or YOLOv5. FOMO can run in less than 200K of RAM, is fully compatible with any MobileNetV2 model, and outputs centroids rather than bounding boxes. Learn more.

  • FOMO-AD (Faster Objects, More Objects for Anomaly Detection): A learning block provided by Edge Impulse for defect identification in computer vision applications, designed for fast deployment on resource-constrained devices such as microcontrollers. It uses a combination of neural networks and Gaussian Mixture Models (GMMs) to identify unusual patterns or anomalies in image data that do not conform to expected behavior, and outputs a segmentation with anomaly scores. Learn more.

G

  • GPIO (General-Purpose Input/Output): Pins on an MCU controlled by the user.

  • GPIO Header: Group of pins on an SBC for interfacing with other circuits.

H

  • Heat Sink: Component for dissipating heat in SBCs.

I

  • Impulse: An optimized, on-device processing pipeline that combines preprocessing (DSP) blocks and machine learning models.

  • Industrial Automation: Control systems for industrial process management.

  • Inference: Making predictions using a trained ML model.

  • Inference Performance Metrics: Metrics to evaluate the performance of a machine learning model during inference. Learn more.

  • Ingestion Service: The Edge Impulse service for collecting and transferring data to a project.

  • Int8: A type of numerical precision where each number is stored as a whole number and takes 8 bits of computer memory.

  • Interrupt: Signal indicating a need for immediate attention.

  • IoT Device: A device connected to the Internet with computing capabilities.

K

  • Keras: A tool within TensorFlow that makes it easy to create and train deep learning models.

L

  • Label: A special type of metadata that is used during training to instruct the model on some property of the data.

  • Linux OS: Operating system for many SBCs, known for its open-source nature.

M

  • Machine Learning (ML): AI field enabling systems to learn and improve from experience.

  • MCU (Microcontroller Unit): A compact integrated circuit for specific operations.

  • Medical Imaging: Visual representations of the interior of a body.

  • Metadata: Additional information that is associated with a given data sample in the dataset.

  • Microcontroller (MCU): A small computer on a single integrated circuit, often used in IoT devices.

  • Model: A combination of algorithm and state that is trained to perform a particular task.

  • Model Compression: Reducing machine learning model size and complexity.

  • Multi-Impulse: Running multiple machine learning models simultaneously on the same device. Learn more.

N

  • Neural network: A model whose structure is inspired by networks of biological neurons.

  • Neural Processing Unit (NPU): Specialized hardware for efficient neural network computations.

O

  • Object detection: The task of identifying and localizing specific objects within an image.

  • On-board Storage: Built-in storage capacity in an SBC.

P

  • Personal Health Records (PHRs): Health records maintained by the patient.

  • PLC (Programmable Logic Controller): A controller used to automate industrial processes.

  • Prescriptive Maintenance: A maintenance strategy that uses data analysis and diagnostics to recommend specific corrective actions.

  • Pre-training: The process of training a model on a large dataset in an unsupervised manner before fine-tuning it on a specific task. Pre-training helps the model learn general language representations.

  • Private Project: An Edge Impulse project that is only viewable, modifiable, and clonable by the user, collaborators, and organization members.

  • Project: An Edge Impulse project is a complete machine learning pipeline, from data collection and impulse design through deployment.

  • Prompt Engineering: The process of designing input prompts to effectively utilize the capabilities of large language models to perform specific tasks.

  • Public Project: An Edge Impulse project shared publicly with the community under the Apache 2.0 license on the Edge Impulse community portal.

  • PWM (Pulse Width Modulation): Technique for analog results with digital means.

Q

  • Quantization: A technique that reduces the numerical precision of model weights and biases (for example, from float32 to int8) to save memory and speed up computation; see the sketch below.
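
As a rough illustration only, here is a minimal Python sketch of the idea behind int8 quantization: float32 values are mapped onto the -128..127 integer range with a single linear scale. Real toolchains such as TensorFlow Lite compute scales per tensor or per channel; the function names here are hypothetical.

    def quantize_int8(values):
        # Map float32 values onto signed 8-bit integers using one linear scale factor.
        scale = max(abs(v) for v in values) / 127
        return [round(v / scale) for v in values], scale

    def dequantize(quantized, scale):
        # Recover approximate float values; a small rounding error remains.
        return [q * scale for q in quantized]

    q, scale = quantize_int8([0.02, -0.5, 0.31, 0.75])
    print(q)                     # e.g. [3, -85, 52, 127]
    print(dequantize(q, scale))  # close to the original values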

R

  • Raspberry Pi: A series of small SBCs, originally developed for teaching computer science and now widely used in embedded and IoT projects.

  • Real-Time Operating System (RTOS): An OS designed for real-time applications.

  • Real-Time Processing: Immediate processing of input for timely output.

  • Remote Patient Monitoring (RPM): Recording and analyzing health data in real-time.

S

  • SBC (Single Board Computer): A complete computer on one circuit board.

  • SCADA (Supervisory Control and Data Acquisition): A system for remote monitoring and control of industrial processes.

  • Self-Attention: A mechanism that allows the model to weigh the importance of different parts of the input sequence, enabling it to focus on relevant information when making predictions.

  • Sensor Data: Data from physical sensors like temperature, motion, etc.

  • Sensor Fusion: Combining data from multiple sensors to improve accuracy and reliability of machine learning models. Learn more.

  • Sequence-to-Sequence (Seq2Seq): A type of model architecture used for tasks where an input sequence is transformed into an output sequence, such as translation or summarization.

  • Smart Health: Advanced technologies in healthcare for monitoring and treatment.

  • SoC (System on Chip): An integrated circuit integrating all components of a computer.

T

  • Telehealth: Health-related services via electronic technologies.

  • TensorFlow: A set of software tools focused on deep learning, published by Google.

  • TensorFlow Lite: A tool within TensorFlow that helps run inference on mobile and embedded Linux devices.

  • TensorFlow Lite for Microcontrollers: A tool within TensorFlow that helps run inference on bare metal devices such as microcontrollers.

  • Tiny Machine Learning (TinyML): Machine learning techniques designed to run on low-power, resource-constrained devices.

  • Tokenization: The process of converting a text into a sequence of tokens (words, subwords, or characters) that can be used as input for a language model.

  • Training: Teaching a machine learning model using data.

  • Transfer learning: A training technique that starts from a model pre-trained on a large dataset and adapts it to a new task, reducing the amount of data and training time required.

  • Transformer Model: A type of neural network architecture that uses self-attention mechanisms to process input data. Transformers are the foundation of many state-of-the-art LLMs, including BERT and GPT.

U

  • UART (Universal Asynchronous Receiver/Transmitter): Serial communication protocol.

  • User: A designated license holder on the Edge Impulse platform who can create and run projects.

W

  • Wearable Technology: Devices collecting health and exercise data.

X

  • X86 Architecture: A common CPU architecture used in PCs.

Y

  • Yocto Project: Open-source project for creating custom Linux distributions.

  • YOLO (You Only Look Once): Object detection architecture that outputs bounding boxes. Learn more.

  • YOLOv4 (You Only Look Once version 4): Enhanced version of YOLO with improved accuracy and performance for object detection tasks. Learn more.

Z

  • Zero-shot Learning: A machine learning paradigm where a model can make predictions on classes it has never seen during training.

Formulas

Accuracy

Accuracy is the fraction of predictions our model got right. It is defined as:

accuracy = \frac{TP + TN}{TP + TN + FP + FN}
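
For example, accuracy can be computed directly from the counts of true/false positives and negatives. A minimal Python sketch (the variable names are illustrative, not part of any Edge Impulse API):

    def accuracy(tp, tn, fp, fn):
        # Fraction of all predictions that were correct.
        return (tp + tn) / (tp + tn + fp + fn)

    print(accuracy(tp=90, tn=80, fp=10, fn=20))  # 0.85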

Area Under ROC Curve (AUC-ROC)

The Area Under the Receiver Operating Characteristic Curve (AUC-ROC) is a performance measurement for classification problems. The ROC curve is a plot of the true positive rate (recall) against the false positive rate (1 - specificity). The AUC represents the degree of separability: it indicates how well the model can distinguish between classes. The higher the AUC, the better the model. It is defined as:

AUC = \int_{0}^{1} TPR(f) \, df

where:

  • (TPR) is the true positive rate (recall),

  • (f) is the false positive rate.
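
In practice the integral is usually approximated numerically from sampled points on the ROC curve. A minimal Python sketch using the trapezoidal rule (the example ROC points are made up for illustration):

    def auc_trapezoid(fpr, tpr):
        # fpr and tpr are ROC points sorted by increasing false positive rate.
        area = 0.0
        for i in range(1, len(fpr)):
            # Area of the trapezoid between consecutive ROC points.
            area += (fpr[i] - fpr[i - 1]) * (tpr[i] + tpr[i - 1]) / 2
        return area

    print(auc_trapezoid([0.0, 0.1, 0.4, 1.0], [0.0, 0.6, 0.9, 1.0]))  # ≈ 0.83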

Cross-Entropy Loss

Cross-Entropy Loss is a measure used to quantify the difference between two probability distributions for a given random variable or set of events. It is defined as:

H(y, \hat{y}) = -\sum_{i} y_i \log(\hat{y}_i)
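
For a single sample with a one-hot true label, only the term for the correct class contributes. A minimal Python sketch (plain list inputs assumed for illustration):

    import math

    def cross_entropy(y, y_hat):
        # y: one-hot true distribution, y_hat: predicted probabilities of the same length.
        return -sum(yi * math.log(pi) for yi, pi in zip(y, y_hat) if yi > 0)

    # The true class is index 1 and the model assigns it probability 0.7.
    print(cross_entropy([0, 1, 0], [0.2, 0.7, 0.1]))  # ≈ 0.357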

Explained Variance Score

The Explained Variance Score measures the proportion to which a mathematical model accounts for the variation (dispersion) of a given data set. It is defined as:

\text{Explained Variance} = 1 - \frac{\text{Var}(y - \hat{y})}{\text{Var}(y)}

where:

  • (\text{Var}(y - \hat{y})) is the variance of the errors,

  • (\text{Var}(y)) is the variance of the actual values.

An Explained Variance Score close to 1 indicates that the model explains a large portion of the variance in the data.
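
A small Python sketch of the same calculation, assuming plain lists of actual values y and predictions y_hat:

    def variance(values):
        mean = sum(values) / len(values)
        return sum((v - mean) ** 2 for v in values) / len(values)

    def explained_variance(y, y_hat):
        errors = [yi - pi for yi, pi in zip(y, y_hat)]
        return 1 - variance(errors) / variance(y)

    print(explained_variance([3.0, 5.0, 2.5, 7.0], [2.8, 5.3, 2.9, 6.8]))  # ≈ 0.98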

F1 Score

The F1 score is a harmonic mean of precision and recall, providing a balance between them. It is calculated as:

F1 = 2 \cdot \frac{precision \cdot recall}{precision + recall}

where precision and recall are as defined in their respective sections below.
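
A minimal Python sketch that computes precision, recall, and F1 from hypothetical confusion-matrix counts:

    def f1_score(tp, fp, fn):
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        # Harmonic mean of precision and recall.
        return 2 * precision * recall / (precision + recall)

    print(f1_score(tp=90, fp=10, fn=20))  # precision 0.90, recall ≈ 0.82, F1 ≈ 0.86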

IoU (Intersection over Union) for Object Detection

IoU is a measure of the overlap between two bounding boxes. It is defined as:

IoU = \frac{area\_of\_overlap}{area\_of\_union}
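
For axis-aligned bounding boxes given as (x_min, y_min, x_max, y_max), IoU can be computed in a few lines of Python (a sketch with made-up coordinates):

    def iou(box_a, box_b):
        ax1, ay1, ax2, ay2 = box_a
        bx1, by1, bx2, by2 = box_b
        # Intersection rectangle; zero if the boxes do not overlap.
        inter_w = max(0, min(ax2, bx2) - max(ax1, bx1))
        inter_h = max(0, min(ay2, by2) - max(ay1, by1))
        intersection = inter_w * inter_h
        union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - intersection
        return intersection / union

    print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.14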

mAP (Mean Average Precision)

Mean Average Precision (mAP) is a common metric used to evaluate object detection models. It summarizes the precision-recall curve for different classes. It is calculated as:

mAP = \frac{1}{N} \sum_{i=1}^{N} AP_i

where:

  • (N) is the number of classes,

  • (AP_i) is the Average Precision for class (i).

Average Precision (AP) is computed as the area under the precision-recall curve for a specific class. It integrates the precision over all recall values from 0 to 1. For object detection, AP can be calculated at different IoU thresholds to provide a comprehensive evaluation.

In addition to the standard mAP, specific metrics include:

  • mAP@[IoU=50]: mAP at 50% IoU threshold.

  • mAP@[IoU=75]: mAP at 75% IoU threshold.

  • mAP@[area=small]: mAP for small objects.

  • mAP@[area=medium]: mAP for medium objects.

  • mAP@[area=large]: mAP for large objects.
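
Once the Average Precision of each class is known, mAP is simply their mean. A simplified Python sketch (the per-class AP values below are hypothetical):

    def mean_average_precision(ap_per_class):
        # ap_per_class maps each class to its Average Precision (area under its
        # precision-recall curve, typically at a fixed IoU threshold).
        return sum(ap_per_class.values()) / len(ap_per_class)

    print(mean_average_precision({"person": 0.82, "car": 0.74, "bicycle": 0.63}))  # 0.73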

Mean Absolute Error (MAE)

Mean Absolute Error (MAE) measures the average magnitude of the errors in a set of predictions, without considering their direction. It is calculated as:

MAE = \frac{1}{n} \sum_{i=1}^{n} |y_i - \hat{y}_i|

where:

  • (n) is the number of data points,

  • (y_i) is the actual value,

  • (\hat{y}_i) is the predicted value.

Mean Squared Error (MSE)

Mean Squared Error (MSE) measures the average of the squared errors, that is, the average squared difference between the estimated values and the actual values. It is calculated as:

MSE = \frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2

where:

  • (n) is the number of data points,

  • (y_i) is the actual value,

  • (\hat{y}_i) is the predicted value.
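
Both regression error metrics can be sketched in a few lines of Python, assuming plain lists of actual values y and predictions y_hat:

    def mae(y, y_hat):
        # Mean of the absolute errors.
        return sum(abs(yi - pi) for yi, pi in zip(y, y_hat)) / len(y)

    def mse(y, y_hat):
        # Mean of the squared errors; penalizes large errors more heavily than MAE.
        return sum((yi - pi) ** 2 for yi, pi in zip(y, y_hat)) / len(y)

    y, y_hat = [3.0, 5.0, 2.5], [2.5, 5.5, 3.0]
    print(mae(y, y_hat), mse(y, y_hat))  # 0.5 0.25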

Precision

Precision indicates the accuracy of positive predictions. It is defined as:

precision = \frac{TP}{TP + FP}

where:

  • (TP) is the number of true positives,

  • (FP) is the number of false positives,

  • (FN) is the number of false negatives.

Recall

Recall measures the ability of a model to find all relevant cases within a dataset. It is defined as:

recall = \frac{TP}{TP + FN}

where:

  • (TP) is the number of true positives,

  • (FP) is the number of false positives,

  • (FN) is the number of false negatives.

Recall for object detection can also be specified with additional parameters:

  • Recall@[max_detections=1]: Recall when considering only the top 1 detection per image.

  • Recall@[max_detections=10]: Recall when considering the top 10 detections per image.

  • Recall@[max_detections=100]: Recall when considering the top 100 detections per image.

  • Recall@[area=small]: Recall for small objects.

  • Recall@[area=medium]: Recall for medium objects.

  • Recall@[area=large]: Recall for large objects.

Sigmoid Function

The Sigmoid function is used for binary classification in logistic regression models. It is defined as:

\sigma(x) = \frac{1}{1 + e^{-x}}

Softmax Function

The Softmax function is used for multi-class classification. It converts logits to probabilities that sum to 1. It is defined for class (j) as:

\sigma(z)_j = \frac{e^{z_j}}{\sum_{k=1}^{K} e^{z_k}} \; \text{for} \; j = 1, \ldots, K
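
Both activation functions follow directly from their definitions. A minimal Python sketch (subtracting the maximum logit in softmax is a common numerical-stability trick and does not change the result):

    import math

    def sigmoid(x):
        return 1 / (1 + math.exp(-x))

    def softmax(z):
        m = max(z)
        exps = [math.exp(v - m) for v in z]
        total = sum(exps)
        return [e / total for e in exps]

    print(sigmoid(0.0))              # 0.5
    print(softmax([2.0, 1.0, 0.1]))  # three probabilities summing to 1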

Weighted Average F1 Score

Weighted Average F1 Score takes into account the F1 score of each class and the number of instances for each class. It is defined as:

Weighted\ Average\ F1\ Score = \sum_{i=1}^{n} \left( \frac{TP_i + FN_i}{TP + FN} \cdot F1_i \right)

where:

  • (n) is the number of classes,

  • (TP_i) is the true positives for class (i),

  • (FN_i) is the false negatives for class (i),

  • (TP) is the total number of true positives,

  • (FN) is the total number of false negatives,

  • (F1_i) is the F1 score for class (i).

Weighted Average Precision

Weighted Average Precision takes into account the precision of each class and the number of instances for each class. It is defined as:

Weighted\ Average\ Precision = \sum_{i=1}^{n} \left( \frac{TP_i + FN_i}{TP + FN} \cdot Precision_i \right)

where:

  • (n) is the number of classes,

  • (TP_i) is the true positives for class (i),

  • (FN_i) is the false negatives for class (i),

  • (TP) is the total number of true positives,

  • (FN) is the total number of false negatives,

  • (Precision_i) is the precision for class (i).

Weighted Average Recall

Weighted Average Recall takes into account the recall of each class and the number of instances for each class. It is defined as:

Weighted\ Average\ Recall = \sum_{i=1}^{n} \left( \frac{TP_i + FN_i}{TP + FN} \cdot Recall_i \right)

where:

  • (n) is the number of classes,

  • (TP_i) is the true positives for class (i),

  • (FN_i) is the false negatives for class (i),

  • (TP) is the total number of true positives,

  • (FN) is the total number of false negatives,

  • (Recall_i) is the recall for class (i).
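
The three weighted averages share the same structure: each per-class score is weighted by that class's share of the true instances (its support, TP_i + FN_i). A minimal Python sketch, assuming the per-class scores and supports are already known (the values below are hypothetical):

    def weighted_average(scores, supports):
        # scores: per-class metric values (F1, precision, or recall).
        # supports: number of true instances per class (TP_i + FN_i).
        total = sum(supports)
        return sum(score * n / total for score, n in zip(scores, supports))

    # Per-class F1 scores with class supports of 50, 30, and 20 samples.
    print(weighted_average([0.90, 0.75, 0.60], [50, 30, 20]))  # 0.795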

Closing note on our glossary

The terms in this glossary are defined based on their usage in Edge Impulse documentation and tutorials. Some terms may have different meanings in other contexts. For example, the term "project" is used in Edge Impulse to refer to a machine learning pipeline, but it may have other meanings in other contexts. If you are unsure about the meaning of a term, please refer to the context in which it is used in Edge Impulse documentation.

Let us know if you have any questions or suggestions for this glossary. We always aim to keep up with the latest terminology in our documentation and resources, so please feel free to note any terms you feel we missed, or discuss them on our forum.
