Overview

We support any Edge AI Hardware that can run C++, and more!

On this page you will find a list of edge AI hardware targets maintained either by Edge Impulse or by our partners. During the integration, and whenever possible, we leverage the hardware's capabilities: optimized floating-point units (FPU), DSP and neural network acceleration, GPUs, or other AI accelerators.

For MCU-based hardware, depending on the integration, we provide some or all of the following options:

  • A default Edge Impulse firmware, ready to be flashed on the hardware. The firmware capabilities depend on the integration (see also Edge Impulse firmwares):

    • Data collection: Connect the hardware to Edge Impulse Studio to simplify your getting-started journey and ease data collection from some or all of the available sensors.

    • Inferencing example: Samples data, extracts features using the signal processing blocks, and runs inference using the learning blocks.

    • The open-source code for the firmware, with documentation on how to build and compile it.

  • Examples of how to integrate your Impulse with your custom firmware, either using the C++ inferencing SDK or using libraries and components tailored to your hardware development environment. In our GitHub repositories, search for example-standalone-inferencing-%target% (see the sketch after this list).

  • Integrated deployment options to directly export a ready-to-flash Edge Impulse firmware packaged with your Impulse (including both the signal processing and the machine learning model).

  • Profiling (estimation of memory, flash and latency) available in Edge Impulse Studio and in the Edge Impulse Python SDK.

  • Extensive hardware testing, to make sure any improvements and changes in Edge Impulse will not break the current integration.
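
For the custom-firmware integration mentioned above, here is a minimal sketch of what calling the C++ inferencing SDK looks like. It follows the pattern of the example-standalone-inferencing repositories; the features buffer is a placeholder you would fill with real sensor data, and the EI_CLASSIFIER_* constants come from the library exported with your Impulse.

```cpp
#include <cstdio>
#include <cstring>

#include "edge-impulse-sdk/classifier/ei_run_classifier.h"

// Placeholder input buffer: in a real firmware this would be filled with
// sensor readings (the size constant is generated with your Impulse).
static float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE];

// Callback used by the SDK to read slices of the input signal
static int get_feature_data(size_t offset, size_t length, float *out_ptr) {
    memcpy(out_ptr, features + offset, length * sizeof(float));
    return 0;
}

int main() {
    signal_t signal;
    signal.total_length = EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE;
    signal.get_data = &get_feature_data;

    ei_impulse_result_t result = { 0 };

    // Runs the signal processing (DSP) blocks and the learning blocks
    EI_IMPULSE_ERROR err = run_classifier(&signal, &result, false /* debug */);
    if (err != EI_IMPULSE_OK) {
        printf("run_classifier failed (%d)\n", err);
        return 1;
    }

    for (size_t i = 0; i < EI_CLASSIFIER_LABEL_COUNT; i++) {
        printf("%s: %.5f\n", result.classification[i].label,
               result.classification[i].value);
    }
    return 0;
}
```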

Not on the list?

Are you using a different hardware target or a custom PCB? No problem!

You can upload data to Edge Impulse in a variety of ways, such as using the Data forwarder, the Edge Impulse for Linux SDK, or by uploading files directly (e.g. CSV, JPG, WAV).
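
As an illustration of the Data forwarder route, the following is a minimal Arduino-style C++ sketch of what a device-side loop could look like. The forwarder expects one line of comma- or tab-separated sensor values per sample, sent over serial at a fixed frequency; readAccel() here is a hypothetical driver call standing in for your sensor's API.

```cpp
// Minimal Arduino-style sketch for streaming sensor data to the Data forwarder.
// readAccel() is a hypothetical driver call; replace it with your sensor's API.

#define SAMPLE_INTERVAL_MS 10  // 100 Hz sampling rate

void readAccel(float *ax, float *ay, float *az);  // provided by your sensor driver

void setup() {
    Serial.begin(115200);  // default baud rate expected by the Data forwarder
}

void loop() {
    float ax, ay, az;
    readAccel(&ax, &ay, &az);

    // One sample per line, values separated by tabs (commas also work)
    Serial.print(ax); Serial.print('\t');
    Serial.print(ay); Serial.print('\t');
    Serial.println(az);

    delay(SAMPLE_INTERVAL_MS);
}
```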

From there, your trained model can be deployed as a C++ library. This can require some effort, but most build systems (for computers, smartphones, and microcontrollers) will work with our C++ library. This, of course, requires that your build system has a C++ compiler and that your device has enough flash and RAM to run the library and the model. And although we leverage hardware acceleration when possible on the hardware listed in this section, keep in mind that our EON Compiler will optimize your preprocessing and your AI models for any target, using less RAM and flash than traditional compilation options.

Also, if you would like to port the official Edge Impulse firmware to your own board, use this porting guide.

The hardware targets listed in this section are the perfect way to start building machine learning solutions on real embedded hardware. Edge Impulse's Solution Engineers and Embedded Engineers have strong expertise with these hardware targets and can help with your integration. Feel free to contact us.

If you just want to experience Edge Impulse, you can also use your mobile phone!

Edge AI Hardware

Production-ready

  • MCU
  • MCU + AI Accelerators
  • CPU
  • CPU + AI Accelerators
  • CPU + GPU
