In this glossary, you will find a comprehensive list of terms related to Edge Impulse, edge AI, and tangential fields. The terms are organized alphabetically for easy reference. This glossary is designed to help you navigate the world of Edge Impulse and related technologies with confidence. Let us know if you have any questions or suggestions for this glossary. We are always aiming to keep up with the latest terminology in our documentation and resources. Please feel free to note any terms you would like to see included or discuss on our forum.
The terms in this glossary are defined based on their usage in Edge Impulse documentation and tutorials. Some terms may have different meanings in other contexts. If you are unsure about the meaning of a term, please refer to the context in which it is used in Edge Impulse documentation.
ADC (Analog-to-Digital Converter): Converts an analog signal, such as a sensor voltage, into a digital value.
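For example, a 12-bit ADC with a 3.3 V reference maps a raw reading back to a voltage as in this generic sketch (the reference voltage and bit depth are illustrative assumptions, not tied to any particular board):

```python
def adc_to_voltage(raw: int, v_ref: float = 3.3, bits: int = 12) -> float:
    """Convert a raw ADC reading back to the voltage it represents."""
    return raw * v_ref / ((1 << bits) - 1)

print(adc_to_voltage(2048))  # ~1.65 V, roughly half of the 3.3 V reference
```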
Algorithm: A procedure used for solving a problem or performing a computation.
ARM Processor: A family of CPUs based on the RISC architecture, widely used in embedded devices and single-board computers (SBCs).
Artificial Intelligence (AI): The simulation of human intelligence in machines.
Attention Mechanism: A technique used in neural networks to focus on specific parts of the input sequence when making predictions. It is crucial in the architecture of transformers, which are used in LLMs.
Bioinformatics: Computational technology in molecular biology.
Biometric Monitoring: Tracking physiological data for health purposes.
Bidirectional Encoder Representations from Transformers (BERT): A pre-trained language model that uses transformers to process text in both directions, improving understanding and context.
Edge Computing: Processing data near its generation point.
Edge Impulse Studio: Development platform for AI on edge devices.
Embedded Linux: Linux OS/kernel used in embedded systems.
Embedded Programming: Writing software for embedded systems.
Embedded System: Computer hardware and software for specific functions.
Embedding: A representation of text where words or phrases are mapped to vectors of real numbers. Embeddings capture semantic meaning and are used as input to language models.
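A minimal sketch of the idea, assuming a toy vocabulary and randomly initialized vectors (real models learn these vectors during training):

```python
import numpy as np

# Toy vocabulary and embedding table (hypothetical values; real models
# learn these vectors during training).
vocab = {"edge": 0, "impulse": 1, "sensor": 2}
embedding_dim = 4
embedding_table = np.random.default_rng(0).normal(size=(len(vocab), embedding_dim))

def embed(word: str) -> np.ndarray:
    """Look up the dense vector that represents a word."""
    return embedding_table[vocab[word]]

print(embed("sensor"))  # a 4-dimensional vector of real numbers
```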
EON (Edge Optimized Neural) Compiler: A compiler that optimizes neural network models for edge devices, reducing memory and computational requirements.
Firmware: Software programmed into the read-only memory of electronic devices.
Fine-tuning: The process of further training a pre-trained model on a specific task with labeled data. Fine-tuning helps the model adapt to particular tasks or domains.
Float32: A type of numerical precision where each number is stored with decimal values and takes 32 bits of computer memory.
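For example, NumPy shows that a float32 array uses 4 bytes (32 bits) per value:

```python
import numpy as np

values = np.array([0.1, 0.2, 0.3], dtype=np.float32)
print(values.itemsize)                   # 4 bytes = 32 bits per number
print(values.nbytes)                     # 12 bytes for three float32 values
print(values.astype(np.float64).nbytes)  # 24 bytes for the same values in float64
```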
FOMO (Faster Objects, More Objects): A novel machine learning algorithm designed to bring object detection to highly constrained devices. It allows for real-time object detection, tracking, and counting on microcontrollers, using up to 30x less processing power and memory than MobileNet SSD or YOLOv5. FOMO can run in less than 200 KB of RAM, is fully compatible with any MobileNetV2 model, and outputs object centroids.
FOMO-AD (Faster Objects, More Objects for Anomaly Detection): A learning block provided by Edge Impulse for defect identification in computer vision applications, designed for fast object detection deployment on resource-constrained devices such as microcontrollers. The FOMO-AD learning block uses a combination of neural networks and Gaussian Mixture Models (GMMs) to identify unusual patterns or anomalies in image data that do not conform to expected behavior, and outputs a segmentation map with anomaly scores.
Personal Health Records (PHRs): Health records maintained by the patient.
Programmable Logic Controllers (PLCs): Industrial-grade digital computers designed to perform control functions, especially in manufacturing and industrial processes. Initially developed to replace complex relay-based control systems, PLCs offer a flexible and efficient solution for automating tasks like machinery control, assembly line management, and other industrial operations.
Prescriptive Maintenance: A maintenance strategy that uses data analysis and diagnostics to recommend specific actions before equipment fails.
Pre-training: The process of training a model on a large dataset in an unsupervised manner before fine-tuning it on a specific task. Pre-training helps the model learn general language representations.
Private Project: A private project is an Edge Impulse project that is viewable, modifiable, and clonable only by the user, collaborators, and organization members.
Project: An Edge Impulse project contains the complete ML pipeline for an application, from data acquisition and impulse design through model training and deployment.
Prompt Engineering: The process of designing input prompts to effectively utilize the capabilities of large language models to perform specific tasks.
Public Project: A public project is an Edge Impulse project shared with the community under the Apache 2.0 license on the Edge Impulse community portal.
PWM (Pulse Width Modulation): A technique for producing analog-like results with digital means by rapidly switching a signal on and off and varying its duty cycle.
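A minimal sketch of the underlying idea: the average level seen by a slow load is set by the duty cycle, the fraction of each period the digital output is high (the 3.3 V supply is an assumption for illustration):

```python
def pwm_average_voltage(duty_cycle: float, v_high: float = 3.3) -> float:
    """Average voltage seen by a slow load for a duty cycle between 0.0 and 1.0."""
    return duty_cycle * v_high

print(pwm_average_voltage(0.25))  # ~0.825 V from a 3.3 V digital output
```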
Self-Attention: A mechanism that allows the model to weigh the importance of different parts of the input sequence, enabling it to focus on relevant information when making predictions.
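A minimal NumPy sketch of scaled dot-product self-attention, the core operation (real transformer layers add learned query/key/value projections and multiple heads):

```python
import numpy as np

def self_attention(x: np.ndarray) -> np.ndarray:
    """Scaled dot-product self-attention over a sequence x of shape
    (sequence_length, model_dim). Queries, keys, and values are taken
    as x itself here; real layers use learned projections."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                    # pairwise similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over each row
    return weights @ x                               # weighted mix of the values

sequence = np.random.default_rng(1).normal(size=(5, 8))
print(self_attention(sequence).shape)  # (5, 8)
```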
Sensor Data: Data from physical sensors like temperature, motion, etc.
Sensor Fusion: Combining data from multiple sensors to improve the accuracy and reliability of machine learning models.
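A minimal sketch of the simplest form of sensor fusion, channel concatenation, using hypothetical accelerometer and gyroscope windows:

```python
import numpy as np

# Hypothetical windows of raw samples from two sensors, sampled together.
accelerometer = np.random.default_rng(2).normal(size=(100, 3))  # ax, ay, az
gyroscope = np.random.default_rng(3).normal(size=(100, 3))      # gx, gy, gz

# Concatenate the channels so each timestep carries information from both sensors.
fused = np.concatenate([accelerometer, gyroscope], axis=1)
print(fused.shape)  # (100, 6) -> one window with six channels
```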
Sequence-to-Sequence (Seq2Seq): A type of model architecture used for tasks where an input sequence is transformed into an output sequence, such as translation or summarization.
Smart Health: Advanced technologies in healthcare for monitoring and treatment.
SoC (System on Chip): An integrated circuit that combines all the components of a computer onto a single chip.
SBC (Single Board Computer): A complete computer on one circuit board.
Telehealth: Health-related services via electronic technologies.
TensorFlow: A set of software tools focused on deep learning, published by Google.
LiteRT (previously TensorFlow Lite): A tool within TensorFlow that helps run inference on mobile and embedded Linux devices.
LiteRT (previously TensorFlow Lite) for Microcontrollers: A tool within TensorFlow that helps run inference on bare-metal devices such as microcontrollers.
Tokenization: The process of converting a text into a sequence of tokens (words, subwords, or characters) that can be used as input for a language model.
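A minimal sketch of word-level tokenization with a toy vocabulary (production LLMs typically use subword tokenizers such as BPE or WordPiece):

```python
def tokenize(text: str) -> list[str]:
    """Split text into lowercase word tokens (a toy word-level tokenizer)."""
    return text.lower().split()

def encode(tokens: list[str], vocab: dict[str, int]) -> list[int]:
    """Map tokens to integer IDs, with a reserved ID for unknown words."""
    return [vocab.get(token, vocab["<unk>"]) for token in tokens]

vocab = {"<unk>": 0, "edge": 1, "ai": 2, "runs": 3, "on": 4, "microcontrollers": 5}
tokens = tokenize("Edge AI runs on microcontrollers")
print(tokens)                 # ['edge', 'ai', 'runs', 'on', 'microcontrollers']
print(encode(tokens, vocab))  # [1, 2, 3, 4, 5]
```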
Tiny Machine Learning (TinyML): Machine learning on low-power, resource-constrained devices such as microcontrollers.
Training: Teaching a machine learning model using data.
Transfer learning: A training technique that starts from a model pre-trained on a large dataset and adapts it to a new task, reducing the amount of data and training time required.
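A minimal Keras sketch of a common transfer learning pattern: freeze a pre-trained MobileNetV2 backbone and train only a small classification head (the 96x96 input size and three target classes are assumptions for illustration):

```python
import tensorflow as tf

# Pre-trained MobileNetV2 backbone without its original classification head.
base = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), include_top=False, weights="imagenet")
base.trainable = False  # keep the pre-trained features fixed

# Small head trained on the new task's (much smaller) dataset.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(3, activation="softmax"),  # e.g. 3 target classes
])
model.compile(optimizer="adam", loss="categorical_crossentropy")
```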
Transformer Model: A type of neural network architecture that uses self-attention mechanisms to process input data. Transformers are the foundation of many state-of-the-art LLMs, including BERT and GPT.