Welcome to Edge Impulse! When we started Edge Impulse, we focused on building a suite of engineering tools to help embedded engineers harness the power of machine learning on edge devices. As we grew, we also developed advanced tools for ML practitioners to ease collaboration between teams in organizations.
In this getting started guide, we'll walk you through the essential steps to dive into Edge Impulse and leverage it for your embedded projects.
Embedded systems are becoming increasingly intelligent, and Edge Impulse is here to streamline the integration of machine learning into your hardware projects. Here's why embedded engineers are turning to Edge Impulse:
Extend hardware capabilities: Edge Impulse can extend hardware capabilities by enabling the integration of machine learning models, allowing edge devices to process complex tasks, recognize patterns, and make intelligent decisions that are difficult to achieve with rule-based algorithms.
Open-source export formats: Exported models and libraries contain both the digital signal processing code and the machine learning models, giving you full visibility into the code that runs on your device.
Powerful integrations: Edge Impulse provides complete and documented integrations with various hardware platforms, allowing you to focus on the application logic rather than the intricacies of machine learning.
Support for diverse sensors: Whether you're working with accelerometers, microphones, cameras, or custom sensors, Edge Impulse accommodates a wide range of data sources for your projects.
Predict on-device performance: Models trained in Edge Impulse run directly on your edge devices, ensuring real-time decision-making with minimal latency. We provide tools to ensure that the DSP pipelines and models developed with Edge Impulse fit your device constraints.
Device-aware optimization: You have full control over model optimization, enabling you to tailor your machine-learning models to the specific requirements and constraints of your embedded systems. Our EON Tuner can help you select the best model by training many different model variants from your existing dataset and your device constraints!
Ready to embark on your journey with Edge Impulse? Follow these essential steps to get started:
Start by creating your Edge Impulse account. Registration is straightforward, granting you immediate access to the comprehensive suite of tools and resources.
Upon logging in, initiate your first project. Select a name that resonates with your project's objectives. If you already know which hardware target or system architecture you will be using, you can set it directly in the dashboard's project info section. This helps ensure your model fits your device constraints.
We offer various methods to collect data from your sensors or to import existing datasets (see Data acquisition for all methods). For the officially supported hardware targets, we provide binaries or simple steps to attach your device to Edge Impulse Studio and collect data directly from the Studio. However, as an embedded engineer, you might want to collect data from sensors that are not available on these devices. To do so, you can use the Data forwarder and print your sensor values over serial (up to 8 kHz), or use our C Ingestion SDK, a portable header-only library designed to reliably store sampled data from high-frequency sensors in very little memory.
Edge Impulse offers an intuitive model training process through processing blocks and learning blocks. You don't need to write Python code to train your model; the platform guides you through feature extraction, model creation, and training. Customize and fine-tune your blocks for optimal performance on your hardware. Each block will provide on-device performance information showing you the estimated RAM, flash, and latency.
This is where the fun starts: you can easily export your model as ready-to-flash binaries for all the officially supported hardware targets. This method lets you test your model on real hardware very quickly.
In addition, we provide a wide variety of export methods to easily integrate your model with your application logic. See C++ library to run your model on any device that supports C++, or our guides for Arduino library, Cube.MX CMSIS-PACK, DRP-AI library, OpenMV library, Ethos-U library, Meta TF model, Simplicity Studio Component, Tensai Flow library, TensorRT library, TIDL-RT library, and more.
The C++ inferencing library is a portable library for digital signal processing and machine learning inferencing, and it contains native implementations for both processing and learning blocks in Edge Impulse. It is written in C++11 with all dependencies bundled and can be built on both desktop systems and microcontrollers. See Inferencing SDK documentation.
Building Edge AI solutions is an iterative process. Feel free to try our organization hub to automate your machine-learning pipelines, collaborate with your colleagues, and create custom blocks.
If you want to get familiar with the full end-to-end flow, please have a look at our end-to-end tutorials on continuous motion recognition, responding to your voice, recognizing sounds from audio, adding sight to your sensors, or object detection.
In the advanced inferencing tutorials section, you will discover useful techniques to leverage our inferencing libraries and to use the inference results in your application logic:
Edge Impulse offers a thriving community of embedded engineers, developers, and experts. Connect with like-minded professionals, share your knowledge, and collaborate to enhance your embedded machine-learning projects.
Now that you have a roadmap, it's time to explore Edge Impulse and discover the exciting possibilities of embedded machine learning. Let's get started!