Edge Impulse for Linux is the easiest way to build Machine Learning solutions on real embedded hardware. It contains tools that let you collect data from any microphone or camera, works with the Node.js, Python, Go, and C++ SDKs to collect new data from any sensor, and runs impulses with full hardware acceleration, with easy integration points to write your own applications.
This is a list of development boards that are fully supported by Edge Impulse for Linux. Follow the instructions to get started:
Different development board? Probably no problem! You can use the Raspberry Pi getting started guide to set up the Edge Impulse for Linux CLI tool, and you can run your impulse on any ARMv7 or AArch64 Linux target. For support, please head to the forums.
To build your own applications, or collect data from new sensors, you can use the high-level language SDKs. These use full hardware acceleration, and let you integrate your Edge Impulse models in a few lines of code:
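As a concrete illustration, here is a sketch of what that integration can look like with the Python SDK. It assumes you have installed the `edge_impulse_linux` package and downloaded a model file; treat the exact import path and method names as an approximation that may vary between SDK versions:

```python
# Sketch of running a downloaded .eim model via the Python SDK.
# Assumes `pip install edge_impulse_linux` and a model file on disk;
# the exact API surface may differ between SDK versions.
import os


def top_label(result):
    """Pick the highest-scoring label from a classification result dict."""
    scores = result["result"]["classification"]
    return max(scores, key=scores.get)


if __name__ == "__main__":
    # Hypothetical usage, based on the SDK's runner module.
    from edge_impulse_linux.runner import ImpulseRunner

    runner = ImpulseRunner(os.path.expanduser("~/model.eim"))
    try:
        model_info = runner.init()          # spawns the .eim binary, reads model metadata
        print("Loaded:", model_info["project"]["name"])
        features = [0.0] * 33               # replace with real sensor/DSP input
        result = runner.classify(features)  # runs signal processing + inference
        print("Prediction:", top_label(result))
    finally:
        runner.stop()                       # shuts down the spawned model process
```

The SDK handles spawning the model binary and the IPC behind the scenes, so your application code stays this short.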
Edge Impulse for Linux models are delivered in .eim format. This is an executable that contains your signal processing and ML code, compiled with optimizations for your processor or GPU (e.g. NEON instructions on ARM cores), plus a very simple IPC layer (over a Unix socket). We do this because it makes your model file completely self-contained: it depends on nothing but glibc, so you don't need a specific TensorFlow version, you avoid Python dependency hell, and you never have to wonder why you're not running at full native speed.
The Node.js / Python / Go SDKs talk to the model through this IPC layer to run inference, so the SDKs themselves are very thin and just need the ability to spawn a binary. They're open source if you want to take a look; for example, here is the Node.js IPC client.
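To make the "thin SDK" idea concrete, here is an illustrative sketch of what such a client could look like. It assumes the .eim binary accepts a Unix socket path as its argument and exchanges newline-terminated JSON messages; the message field names here are placeholders, and the real wire protocol is defined by the open-source SDKs:

```python
# Illustrative sketch of a minimal .eim IPC client. Assumes the model
# binary takes a Unix socket path as its argument and speaks
# newline-terminated JSON; field names are placeholders, not the
# documented protocol (see the open-source SDKs for the real one).
import json
import os
import socket
import subprocess
import tempfile


def make_classify_message(features, msg_id=1):
    """Build a JSON request asking the model process to classify raw features."""
    return (json.dumps({"classify": features, "id": msg_id}) + "\n").encode()


def run_inference(eim_path, features):
    socket_path = os.path.join(tempfile.mkdtemp(), "model.sock")
    # The .eim file is a normal executable: spawn it with a socket path.
    proc = subprocess.Popen([eim_path, socket_path])
    try:
        client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        client.connect(socket_path)  # a real client would retry until the socket exists
        client.sendall(make_classify_message(features))
        response = client.makefile().readline()  # one JSON reply per request
        return json.loads(response)
    finally:
        proc.terminate()
```

Because all the heavy lifting lives inside the .eim executable, a client in any language only needs process spawning, a Unix socket, and a JSON library.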