For ML practitioners

Welcome to Edge Impulse! Whether you are a machine learning engineer, MLOps engineer, data scientist, or researcher, we have developed professional tools to help you build and optimize models to run efficiently on any edge device.

In this guide, we'll explore how Edge Impulse empowers you to bring your expertise and your own models to the world of edge AI using either the Edge Impulse Studio, our visual interface, or the Edge Impulse Python SDK, available as a pip package.

Why Edge Impulse, for ML practitioners?

Flexibility: You can choose to work with the tools you are already familiar with and import your models, architectures, and feature processing algorithms into the platform. This means you can leverage your existing knowledge and workflows seamlessly. Or, if you prefer an all-in-one solution, Edge Impulse provides enterprise-grade tools for your entire machine learning pipeline.

Optimized for edge devices: Edge Impulse is designed specifically for deploying machine learning models on edge devices, which are typically resource-constrained, from low-power MCUs up to powerful edge GPUs. We provide tools to optimize your models for edge deployment, ensuring efficient resource usage and peak performance. Focus on developing the best models, and we will provide feedback on whether they can run on your hardware target!

Data pipelines: We have developed strong expertise in complex data pipelines (including clinical data) while working with our customers. We support data coming from multiple sources, in any format, and provide tools to perform data alignment and validation checks, all within customizable multi-stage pipelines. This means you can build gold-standard labeled datasets that can then be imported into your project to train your models.

Getting started in a few steps

In this getting started guide, we'll walk you through two different approaches to bringing your expertise to edge devices: starting from your dataset or from an existing model.

Start by creating your Edge Impulse account.

Start with existing data

You can import data using the Studio Uploader, the CLI Uploader, or our Ingestion API. These tools let you easily upload and manage your existing data samples and datasets in Edge Impulse Studio.

We currently accept various file types, including .cbor, .json, .csv, .wav, .jpg, .png, .mp4, and .avi.

If you are working with image datasets, the Studio uploader and the CLI uploader currently support the following dataset annotation formats: Edge Impulse object detection, COCO JSON, Open Images CSV, Pascal VOC XML, Plain CSV, and YOLO TXT.
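If you prefer a programmatic route, the Ingestion API accepts files over plain HTTP. Here is a minimal sketch using Python and the requests library; the API key, label, and file name are placeholders, and the exact endpoint and headers should be checked against the Ingestion API reference:

import requests

API_KEY = "ei_..."   # placeholder: your project API key
LABEL = "idle"       # placeholder: the label to attach to this sample

# Upload a single WAV file to the training set through the Ingestion API
with open("idle.01.wav", "rb") as f:
    response = requests.post(
        "https://ingestion.edgeimpulse.com/api/training/files",
        headers={"x-api-key": API_KEY, "x-label": LABEL},
        files={"data": ("idle.01.wav", f, "audio/wav")},
    )

print(response.status_code, response.text)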

Organization data

Since the creation of Edge Impulse, we have been helping our customers deal with complex data pipelines, data transformation methods, and clinical validation studies.

Organization data gives you the tools to centralize, validate, and transform datasets so they can be easily imported into your projects.

See the Organization data documentation.

To visualize how your labeled data items are clustered, use the Data explorer feature available for most dataset types, where we apply dimensionality reduction techniques (t-SNE or PCA) on your embeddings.
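As a rough illustration of what happens under the hood (this is not the data explorer's actual implementation), here is a minimal scikit-learn sketch that reduces hypothetical high-dimensional embeddings to two dimensions for plotting:

import numpy as np
from sklearn.manifold import TSNE

# Hypothetical embeddings: one 64-dimensional vector per labeled sample
embeddings = np.random.rand(500, 64)

# Project to 2D so each sample becomes a point on a scatter plot,
# which is conceptually what the data explorer shows you
points_2d = TSNE(n_components=2, perplexity=30).fit_transform(embeddings)
print(points_2d.shape)  # (500, 2)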

To extract features from your data items, either choose an available processing block (MFE, MFCC, spectral analysis using FFT or wavelets, etc.) or create your own based on your expertise. Custom processing blocks can be written in any language.
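As an illustration only (this is not the custom processing block interface itself, just the kind of feature code you might wrap inside one), here is a minimal NumPy sketch that extracts a few simple spectral features from a raw signal window:

import numpy as np

def spectral_features(signal, sampling_rate):
    # Magnitude spectrum of the raw window
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sampling_rate)

    # A few simple summary features; a real block would also expose
    # its parameters so they can be tuned from the Studio
    return {
        "rms": float(np.sqrt(np.mean(signal ** 2))),
        "peak_frequency": float(freqs[np.argmax(spectrum)]),
        "spectral_energy": float(np.sum(spectrum ** 2)),
    }

print(spectral_features(np.random.randn(1000), sampling_rate=100))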

Similarly, to train your machine learning model, you can choose from different learning blocks (Classification, Anomaly Detection, Regression, Image or Audio Transfer Learning, Object Detection). In most of these blocks, we expose the Keras API in an expert mode. You can also bring your own architecture/training pipeline as a custom learning block.
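For example, the expert mode of a classification block lets you edit the Keras training code directly. A minimal sketch of the kind of model you might define there (the input size, layer widths, and number of classes below are placeholders) looks like this:

import tensorflow as tf
from tensorflow.keras import layers, models

# Placeholders: 33 input features (e.g. from spectral analysis) and 3 classes
model = models.Sequential([
    layers.Dense(20, activation="relu", input_shape=(33,)),
    layers.Dense(10, activation="relu"),
    layers.Dense(3, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.0005),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
# In expert mode the training and validation datasets are provided by the block;
# this sketch only shows the model definition and compilation step.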

Each block will provide on-device performance information showing you the estimated RAM, flash, and latency.

Run inference on a device

You can easily export your model to the .eim format, a Linux executable that contains your signal processing and ML code, compiled with optimizations for your processor or GPU. This executable can then be called with our Linux inferencing libraries. We have inferencing libraries and examples for Python, Node.js, C++, and Go.
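For example, with the Python inferencing library (the edge_impulse_linux pip package), running a downloaded .eim file looks roughly like the sketch below; the model path and the feature array are placeholders, and the features must match your impulse's input size:

from edge_impulse_linux.runner import ImpulseRunner

runner = ImpulseRunner("modelfile.eim")  # placeholder path to your .eim file
try:
    model_info = runner.init()  # loads the model and returns its metadata
    print("Loaded model for project:", model_info["project"]["name"])

    # Placeholder: a flat list of raw features sized to your impulse's input
    features = [0.0] * 99
    result = runner.classify(features)
    print(result["result"])
finally:
    runner.stop()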

If you target MCU-based devices, you can generate ready-to-flash binaries for all the officially supported hardware targets. This method will let you test your model on real hardware very quickly.

In both cases, we provide profiling information about your models so you can make sure they fit within your edge device's constraints.

Tutorials and resources, for ML practitioners

End-to-end tutorials

If you want to get familiar with the full end-to-end flow using Edge Impulse Studio, please have a look at our end-to-end tutorials on continuous motion recognition, responding to your voice, recognizing sounds from audio, adding sight to your sensors, or object detection.

To understand the full potential of Edge Impulse, see our health reference design that describes an end-to-end ML workflow for building a wearable health product using Edge Impulse. It handles data coming from multiple sources, data alignment, and a multi-stage pipeline before the data is imported into an Edge Impulse project.

Edge Impulse Python SDK tutorials

While the Edge Impulse Studio is a great interface for guiding you through the process of collecting data and training a model, the edgeimpulse Python SDK allows you to programmatically Bring Your Own Model (BYOM), developed and trained on any platform.
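As a minimal sketch (assuming you have a trained TensorFlow Lite file on disk; the API key, file names, and target names are placeholders, and the exact function signatures are best checked against the Python SDK documentation), profiling and deploying your own model could look like this:

import edgeimpulse as ei

ei.API_KEY = "ei_..."  # placeholder: your project API key

# Estimate RAM, flash, and latency for a given hardware target
profile = ei.model.profile(model="my_model.tflite",
                           device="cortex-m4f-80mhz")
print(profile.summary())

# Convert and package the model for deployment (here as a C++ library zip)
ei.model.deploy(model="my_model.tflite",
                model_output_type=ei.model.output_type.Classification(),
                deploy_target="zip",
                output_directory="deploy")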

Other useful resources

Integrations
