This guide provides a step-by-step process for validating that your Linux device is ready to run on-device inference with Edge Impulse models. By the end of this section, you will understand how to run models using the CPU (AI accelerator usage is discussed separately).

Linux Inferencing Process Overview

Linux Inference Methods

Several methods can be used to run an Edge Impulse model on a Linux device, each suited to different needs. Each link below contains a guide to installing the necessary dependencies and testing inference on your device with the selected method. Before proceeding, ensure you have a trained model in your Edge Impulse account; it will be used for deployment in all the methods. For testing purposes, we recommend cloning the public Cars In Parking Garage project, but you can use another model you already have.

Note

If the project is cloned correctly, it will appear in the list of projects in your profile. Whichever project you use, it must be fully completed: all the dots in the project menu should be green rather than grey, which indicates the model is ready for deployment.
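
As an illustration of one of these methods, the sketch below uses the Edge Impulse Linux Python SDK to classify a single image with a model exported as an .eim file (typically downloaded with the edge-impulse-linux-runner --download option). This is a minimal sketch, not the only way to do it: the model path, image path, and result handling are assumptions based on the SDK's published examples, so adjust them to your own project.

    # Minimal sketch: classify one image with the Edge Impulse Linux Python SDK.
    # Assumes `pip3 install edge_impulse_linux opencv-python` and a downloaded .eim model.
    import cv2
    from edge_impulse_linux.image import ImageImpulseRunner

    MODEL_PATH = "/home/user/model.eim"    # hypothetical path to your exported model
    IMAGE_PATH = "/home/user/parking.jpg"  # hypothetical test image

    with ImageImpulseRunner(MODEL_PATH) as runner:
        model_info = runner.init()  # loads the model and returns project metadata
        print("Loaded:", model_info["project"]["owner"], "/", model_info["project"]["name"])

        img = cv2.imread(IMAGE_PATH)
        img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # the SDK expects RGB input

        # Resize/crop the image into the model's input features, then classify.
        features, cropped = runner.get_features_from_image(img)
        res = runner.classify(features)

        # Object detection models (such as the parking garage example) return bounding
        # boxes; classification models return per-label scores.
        if "bounding_boxes" in res["result"]:
            for bb in res["result"]["bounding_boxes"]:
                print(f'{bb["label"]} ({bb["value"]:.2f}) at x={bb["x"]}, y={bb["y"]}')
        else:
            for label, score in res["result"]["classification"].items():
                print(f"{label}: {score:.2f}")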

Completion and Next Steps

Completion of this process demonstrates that your device is “Edge Impulse Ready” for inferencing. Now that you have successfully run an inference on your device, you can explore further integration with Edge Impulse. Consider the following steps:
  1. Data Collection: Implement data collection from your device to Edge Impulse for model training and improvement (a minimal upload sketch follows this list).
  2. Custom Sensors: Integrate custom sensors by modifying the data acquisition code to suit your hardware.
  3. Optimize Performance: Explore SDK hardware acceleration options available on your device to optimize inference performance.
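
For step 1, the simplest routes are the Edge Impulse CLI tools (edge-impulse-linux or the uploader), but you can also post samples to the ingestion service directly from your own code. The sketch below is a hedged example based on the data acquisition JSON format described in the Edge Impulse documentation; the API key, HMAC key, device name, and sensor values are placeholders, and you should confirm the exact fields against the ingestion service documentation.

    # Minimal sketch: upload one sample to the Edge Impulse ingestion service.
    # Assumes `pip3 install requests`; all keys and sensor readings below are placeholders.
    import hashlib
    import hmac
    import json
    import time

    import requests

    API_KEY = "ei_..."   # your project API key (placeholder)
    HMAC_KEY = "..."     # your project HMAC key (placeholder)

    sample = {
        "protected": {"ver": "v1", "alg": "HS256", "iat": int(time.time())},
        "signature": "0" * 64,  # placeholder, replaced after signing
        "payload": {
            "device_name": "my-linux-device",     # hypothetical device identifier
            "device_type": "LINUX_TEST_DEVICE",
            "interval_ms": 10,
            "sensors": [
                {"name": "accX", "units": "m/s2"},
                {"name": "accY", "units": "m/s2"},
                {"name": "accZ", "units": "m/s2"},
            ],
            # Dummy readings standing in for a custom sensor.
            "values": [[-9.81, 0.03, 1.21], [-9.83, 0.04, 1.28]],
        },
    }

    # Sign the serialized message with the project HMAC key, then swap in the signature.
    encoded = json.dumps(sample)
    sample["signature"] = hmac.new(
        HMAC_KEY.encode("utf-8"), encoded.encode("utf-8"), hashlib.sha256
    ).hexdigest()

    resp = requests.post(
        "https://ingestion.edgeimpulse.com/api/training/data",
        headers={
            "Content-Type": "application/json",
            "x-api-key": API_KEY,
            "x-file-name": "custom-sensor-sample",
            "x-label": "idle",
        },
        data=json.dumps(sample),
    )
    print(resp.status_code, resp.text)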

Many Linux Targets Have Support Already!

Please see the list of officially supported Linux devices. If your device is close to one of them, you may be able to start from those documents to run data collection and inferencing; if your device is a derivative of a supported board, the Edge Impulse on-device features are likely to work on it. In general, the process to test is as follows:
  1. Review the device support documentation and install the prerequisites.
  2. Install the Edge Impulse Linux Command Line Interface on your device.
  3. Complete an Edge Impulse Project and run edge-impulse-linux-runner to perform an inference (a Python SDK equivalent is sketched after this list).
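
If you would rather verify inference from code than from the CLI runner, the following sketch uses the Linux Python SDK to classify frames from a connected camera, which is roughly what edge-impulse-linux-runner does for camera-based models. The camera index and model path are assumptions; adapt them to your hardware.

    # Minimal sketch: continuous classification from a camera with the Linux Python SDK.
    # Assumes `pip3 install edge_impulse_linux` and a camera at index 0 (e.g. /dev/video0).
    from edge_impulse_linux.image import ImageImpulseRunner

    MODEL_PATH = "/home/user/model.eim"  # hypothetical path to your exported model
    CAMERA_ID = 0                        # first video device (placeholder)

    with ImageImpulseRunner(MODEL_PATH) as runner:
        runner.init()
        # classifier() grabs frames from the camera and yields (result, frame) pairs.
        for res, frame in runner.classifier(CAMERA_ID):
            timing = res["timing"]
            print(f'Inference took {timing["dsp"] + timing["classification"]} ms')
            boxes = res["result"].get("bounding_boxes", [])
            print(f"Detected {len(boxes)} objects in this frame")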