
Data

The enterprise version of Edge Impulse can integrate directly with your cloud storage provider to access and transform data.
Using the Edge Impulse Studio data acquisition tools (like the serial daemon or data forwarder), you can collect data samples manually with a pre-defined label. If you have a dataset that was collected outside of Edge Impulse, you can upload it using the Edge Impulse CLI, the Ingestion API, the web uploader, enterprise data storage bucket tools, or enterprise upload portals. You can then use the Edge Impulse Studio to split your data into labeled chunks, crop your data samples, and more to create high-quality machine learning datasets.
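For example, here is a minimal sketch of uploading one pre-collected file through the Ingestion API with Python's requests library; the API key, file name, and label are placeholders for your own project values:

import requests

API_KEY = "ei_..."  # placeholder: your project API key from the Studio dashboard

# POST one sample file to the ingestion service; the x-label header sets
# the label that Studio attaches to the uploaded sample.
with open("idle.01.cbor", "rb") as f:
    response = requests.post(
        "https://ingestion.edgeimpulse.com/api/training/files",
        headers={"x-api-key": API_KEY, "x-label": "idle"},
        files={"data": ("idle.01.cbor", f, "application/cbor")},
    )
response.raise_for_status()
print(response.text)

The Edge Impulse CLI and web uploader are friendlier front-ends for this same service.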

Processing

A big part of Edge Impulse is its processing blocks: they clean up the data and extract important features before passing it to a machine learning model. The source code for these processing blocks can be found on GitHub: edgeimpulse/processing-blocks (and you can build your own processing blocks as well).
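As an illustrative sketch (not the actual block source), this is the kind of transformation a spectral-analysis processing block applies to a window of raw sensor data; the function and the exact features chosen here are hypothetical:

import numpy as np

def extract_features(window: np.ndarray, sampling_freq: float) -> np.ndarray:
    """Reduce one window of raw samples to a small feature vector."""
    window = window - np.mean(window)            # remove DC offset
    rms = np.sqrt(np.mean(window ** 2))          # overall signal energy
    spectrum = np.abs(np.fft.rfft(window))       # magnitude spectrum
    freqs = np.fft.rfftfreq(window.size, d=1.0 / sampling_freq)
    peak_freq = freqs[np.argmax(spectrum)]       # dominant frequency in Hz
    return np.array([rms, peak_freq, np.max(spectrum)])

features = extract_features(np.random.randn(128), sampling_freq=100.0)
print(features)  # [rms, dominant frequency, peak magnitude]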
Edge Impulse uses UMAP (a dimensionality reduction algorithm) to project high-dimensional input data into a two- or three-dimensional space. This works even for extremely high-dimensional data such as images.
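A minimal sketch of the same idea with the open-source umap-learn package, assuming a feature matrix with one row per sample:

import numpy as np
import umap  # pip install umap-learn

X = np.random.rand(500, 64)  # 500 samples, 64 features each
embedding = umap.UMAP(n_components=2).fit_transform(X)
print(embedding.shape)  # (500, 2): one 2-D point per sample, ready to plot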

Learning

We use a wide variety of tools, depending on the machine learning model. For neural networks we typically use TensorFlow and Keras, for object detection models we use TensorFlow with Google’s Object Detection API, and for ‘classic’ non-neural-network machine learning algorithms we mainly use sklearn. For neural networks you can see (and modify) the Keras code by opening the menu on the learning block and selecting Switch to expert mode.
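Expert mode exposes roughly this kind of Keras code for a simple classifier; the layer sizes and hyperparameters below are placeholders, not the exact code Studio generates for your project:

import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense

model = Sequential([
    tf.keras.Input(shape=(33,)),     # placeholder: 33 DSP features per window
    Dense(20, activation="relu"),
    Dense(10, activation="relu"),
    Dense(3, activation="softmax"),  # placeholder: one output per class
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.0005),
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)
# model.fit(train_features, train_labels, epochs=30, validation_split=0.2)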
You can also bring your own model: check out our documentation on Bring your own model (BYOM) to see how to import your model into your Edge Impulse project and how to use the Edge Impulse Python SDK!
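For instance, a minimal sketch with the edgeimpulse Python SDK, profiling an existing model against a target device (the API key, model file, and device name are placeholders):

import edgeimpulse as ei

ei.API_KEY = "ei_..."  # placeholder: your project API key

# Estimate RAM, ROM, and inference time for this model on a Cortex-M4F target
profile = ei.model.profile(model="model.tflite", device="cortex-m4f-80mhz")
print(profile.summary())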

Deployment

The minimum hardware requirements for the embedded device depend on the use case. Anything from a Cortex-M0+ for vibration analysis, to a Cortex-M4F for audio, a Cortex-M7 for image classification, or a Cortex-A for object detection in video should work. View our inference performance metrics for more details.
Simple answer: to get an indication of time per inference, we show performance metrics in every DSP and ML block in Studio. Multiply this by the active power consumption of your MCU to get an indication of power cost per inference.

More complicated answer: it depends. Normal techniques to conserve power still apply to ML, so try to do as little as possible (do you need to classify every second, or can you do it once a minute?), be smart about when to run inference (can there be an external trigger, like a motion sensor, before you run inference on a camera?), and collect data in a lower-power mode (don’t run at full speed when sampling low-resolution data, and see if your sensor can use an interrupt to wake your MCU rather than polling). Also see Analyse Power Consumption in Embedded ML Solutions.
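A back-of-the-envelope sketch of that calculation (all numbers are illustrative, not measurements of any particular MCU):

# Energy cost per inference = active power x time per inference
inference_time_s = 0.015   # per-inference latency from the Studio metrics
active_power_mw = 30.0     # MCU active power draw, from its datasheet
inferences_per_hour = 60   # classify once a minute instead of once a second

energy_per_inference_mj = active_power_mw * inference_time_s  # millijoules
active_mwh_per_hour = energy_per_inference_mj * inferences_per_hour / 3600.0

print(f"{energy_per_inference_mj:.2f} mJ per inference")
print(f"{active_mwh_per_hour:.4f} mWh per hour spent on inference")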
It depends on the hardware. For general-purpose MCUs we typically use the EON Compiler with TFLite Micro and additional kernels (including hardware optimizations, e.g. via CMSIS-NN or ESP-NN). On Linux, if you run the impulse on the CPU, we use LiteRT (previously TensorFlow Lite). For accelerators we use a wide variety of other runtimes: e.g. a network hardcoded in silicon for Syntiant, a custom SNN-based inference engine for BrainChip Akida, DRP-AI for the Renesas RZ/V2L, and so on.
The EON Compiler compiles your neural networks to C++ source code, which then gets compiled into your application. This is great if you need the lowest RAM and ROM possible (EON typically uses 30-50% less memory than LiteRT, previously TensorFlow Lite), but you also lose some flexibility to update your neural networks in the field, as the network is now part of your firmware. By disabling EON we place the full neural network (architecture and weights) into ROM and load it on demand. This increases memory usage, but you can then update just that section of ROM (or place the neural network in external flash, or on an SD card) to make updates easier.
See “.eim models?” on the Edge Impulse for Linux pages.
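As a minimal sketch, a downloaded .eim file can be run with the Edge Impulse Linux Python SDK; the model path and feature vector below are placeholders for your own impulse:

from edge_impulse_linux.runner import ImpulseRunner

runner = ImpulseRunner("modelfile.eim")  # placeholder path to your .eim
try:
    model_info = runner.init()  # spawns the .eim process and reads its metadata
    print("Loaded", model_info["project"]["name"])

    features = [0.0] * 33  # placeholder: one window of raw features
    result = runner.classify(features)
    print(result["result"]["classification"])
finally:
    runner.stop()  # always shut the runner process down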
Edge Impulse also works with boards that are not officially supported. A “supported board” simply means that there is an official or community-supported firmware, developed specifically for that board, that helps you collect data and run impulses. Edge Impulse is designed to be extensible to computers, smartphones, and a nearly endless array of microcontroller build systems. You can collect data from your custom board and upload it to Edge Impulse in a variety of ways, for example with the data forwarder, the Edge Impulse CLI, or the Ingestion API. Your trained model can then be deployed as part of a C++ library. It requires some effort, but most build systems will work with our C++ library, as long as that build system has a C++ compiler and there is enough flash/RAM on your device to run the library (which includes the DSP block and model).

Other

To add collaborators to your project, go to your project dashboard, find the Collaborators section, and click the ’+’ icon.

Managing collaborators on a project

You can also create a public version of your Edge Impulse project. This makes your project available to the whole world, including your data, your impulse design, your models, and all intermediate information, and it can easily be cloned by anyone in the community. To do so, go to Dashboard, and click Make this project public.

Public project versioning on the Edge Impulse dashboard

If you use Edge Impulse in a scientific publication, we would appreciate citations to the following paper:

Edge Impulse: An MLOps Platform for Tiny Machine Learning
@misc{hymel2023edgeimpulsemlopsplatform,
      title={Edge Impulse: An MLOps Platform for Tiny Machine Learning},
      author={Shawn Hymel and Colby Banbury and Daniel Situnayake and Alex Elium and Carl Ward and Mat Kelcey and Mathijs Baaijens and Mateusz Majchrzycki and Jenny Plunkett and David Tischler and Alessandro Grande and Louis Moreau and Dmitry Maslov and Artie Beavis and Jan Jongboom and Vijay Janapa Reddi},
      year={2023},
      eprint={2212.03332},
      archivePrefix={arXiv},
      primaryClass={cs.DC},
      url={https://arxiv.org/abs/2212.03332},
}