
Edge Impulse Documentation

Welcome to the Edge Impulse documentation. You'll find comprehensive guides and documentation to help you start working with Edge Impulse as quickly as possible, as well as support if you get stuck. Let's jump right in!

Frequently asked questions

How can I share my Edge Impulse project?

The enterprise version of Edge Impulse offers team collaboration on projects: go to Dashboard, find the Collaborators section, and click the '+' icon. If you have an interesting research or community project, we can enable collaboration on the free version of Edge Impulse as well; email [email protected]

Managing collaborators on a project

You can also create a public version of your Edge Impulse project. This makes your project available to the whole world - including your data, your impulse design, your models, and all intermediate information - and can easily be cloned by anyone in the community. To do so, go to Dashboard, and click Make this project public.

Public project versioning on the Edge Impulse dashboard

What are the minimum hardware requirements to run the Edge Impulse inferencing library on my embedded device?

The minimum hardware requirements for the embedded device depend on the use case: anything from a Cortex-M0+ for vibration analysis, to a Cortex-M4F for audio, a Cortex-M7 for image classification, or a Cortex-A for object detection in video. View our inference performance metrics for more details.
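The guidance above can be summarized as a simple lookup. This is a purely illustrative sketch; the dictionary and function names are invented for this example, and the mapping is only a rule of thumb:

```python
# Rough minimum MCU class per use case, taken from the guidance above.
# Names and structure here are illustrative, not part of any Edge Impulse API.
MIN_MCU_BY_USE_CASE = {
    "vibration analysis": "Cortex-M0+",
    "audio": "Cortex-M4F",
    "image classification": "Cortex-M7",
    "object detection (video)": "Cortex-A",
}

def minimum_mcu(use_case: str) -> str:
    """Return the suggested minimum MCU class for a use case."""
    return MIN_MCU_BY_USE_CASE[use_case]
```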

What frameworks does Edge Impulse use to train the machine learning models?

We use a wide variety of tools, depending on the machine learning model. For neural networks we typically use TensorFlow and Keras, for object detection models we use TensorFlow with Google's Object Detection API, and for 'classic' non-neural network machine learning algorithms we mainly use sklearn. For neural networks you can see (and modify) the Keras code by opening the block's menu and selecting Switch to expert mode.

Another big part of Edge Impulse are the processing blocks, as they clean up the data, and already extract important features from your data before passing it to a machine learning model. The source code for these processing blocks can be found on GitHub: edgeimpulse/processing-blocks (and you can build your own processing blocks as well).
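To illustrate what a processing block does, here is a minimal, hypothetical feature-extraction sketch in plain Python. It is not taken from the edgeimpulse/processing-blocks repository; it only shows the general idea of turning a raw signal window into a small feature vector before a model sees it:

```python
import math

def extract_features(window):
    """Toy processing block: reduce a raw window of samples to a
    small feature vector (mean, RMS, peak-to-peak).

    Real processing blocks (e.g. spectral analysis or MFCC) are more
    sophisticated, but follow the same input -> features shape.
    """
    n = len(window)
    mean = sum(window) / n
    rms = math.sqrt(sum(x * x for x in window) / n)
    ptp = max(window) - min(window)
    return [mean, rms, ptp]

# Example: features for a small vibration-like window
features = extract_features([0.0, 1.0, 0.0, -1.0])
```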

Is there a downside to enabling the EON Compiler?

The EON Compiler compiles your neural networks to C++ source code, which then gets compiled into your application. This is great if you need the lowest RAM and ROM possible (EON typically uses 30-50% less memory than TensorFlow Lite) but you also lose some flexibility to update your neural networks in the field - as it is now part of your firmware.

By disabling EON we place the full neural network (architecture and weights) into ROM, and load it on demand. This increases memory usage, but you could just update this section of the ROM (or place the neural network in external flash, or on an SD card) to make it easier to update.

Can I use a model that has been trained elsewhere in Edge Impulse?

You cannot import a pretrained model, but you can import your model architecture and then retrain. Add a neural network block to your impulse, go to the block, open its menu, and select Switch to expert mode. You then have access to the full Keras API.

How does the feature explorer visualize data that has more than 3 dimensions?

Edge Impulse uses UMAP (a dimensionality reduction algorithm) to project high-dimensional input data into a three-dimensional space. This even works for extremely high-dimensional data such as images.

Does Edge Impulse integrate with other cloud services?

Yes. The enterprise version of Edge Impulse can integrate directly with your cloud service to access and transform data.

What is the typical power consumption of the Edge Impulse machine learning processes on my device?

Simple answer: To get an indication of time per inference we show performance metrics in every DSP and ML block in the Studio. Multiply this by active power consumption of your MCU to get an indication of power cost per inference.

More complicated answer: It depends. Normal techniques to conserve power still apply to ML, so try to do as little as possible (do you need to classify every second, or can you do it once a minute?), be smart about when to run inference (can there be an external trigger like a motion sensor before you run inference on a camera?), and collect data in a lower power mode (don't run at full speed when sampling low resolution data, and see if your sensor can use an interrupt to wake your MCU - rather than polling).
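The back-of-the-envelope estimate from the "simple answer" can be written out as arithmetic. The numbers below are made-up placeholders for illustration, not measurements of any particular MCU:

```python
def energy_per_inference_uj(inference_ms: float, active_power_mw: float) -> float:
    """Energy cost of one inference in microjoules.

    Time per inference (from the Studio's performance metrics, in ms)
    multiplied by the MCU's active power (from its datasheet, in mW).
    1 mW * 1 ms = 1 uJ.
    """
    return inference_ms * active_power_mw

# Hypothetical example: a 100 ms inference on an MCU drawing 30 mW
# costs 3000 uJ (3 mJ) per inference.
cost = energy_per_inference_uj(100, 30)
```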

What is the .eim model format for Edge Impulse for Linux?

See .eim models? on the Edge Impulse for Linux pages.

How is the labeling of the data performed?

Using the Edge Impulse Studio data acquisition tools (like the serial daemon or data forwarder), you can collect data samples manually with a pre-defined label. If you have a dataset that was collected outside of Edge Impulse, you can upload your dataset using the Edge Impulse CLI, data ingestion API, web uploader, enterprise data storage bucket tools or enterprise upload portals. You can then utilize the Edge Impulse Studio to split up your data into labeled chunks, crop your data samples, and more to create high quality machine learning datasets.
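As a rough sketch of what an uploaded, pre-labeled sample can look like, here is a minimal payload in the style of the Edge Impulse data acquisition JSON format. All field values below are invented placeholders; check the data ingestion API documentation for the exact schema and signature requirements:

```python
import json

# Illustrative sample in the style of the data acquisition format.
# Every value below is a placeholder for this example.
sample = {
    "protected": {"ver": "v1", "alg": "none"},
    "signature": "0" * 64,  # placeholder; real uploads may sign the payload
    "payload": {
        "device_type": "EXAMPLE_DEVICE",   # hypothetical device name
        "interval_ms": 10,                 # 100 Hz sampling
        "sensors": [{"name": "accX", "units": "m/s2"}],
        "values": [[-0.1], [0.2], [9.8]],  # one column per sensor
    },
}

encoded = json.dumps(sample)
```

The label itself is typically supplied at upload time (for example via the uploader's label option or the sample's filename), after which the Studio's tools let you split, crop, and relabel the data.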
