Linux C++ SDK

This library lets you run machine learning models and collect sensor data on Linux machines using C++. The SDK is open source and hosted on GitHub: edgeimpulse/example-standalone-inferencing-linux.

Installation guide

  1. Install GNU Make and a recent C++ compiler (tested with GCC 8 on the Raspberry Pi, and Clang on other targets).

  2. Clone this repository and initialize the submodules:

    git clone https://github.com/edgeimpulse/example-standalone-inferencing-linux
    cd example-standalone-inferencing-linux && git submodule update --init --recursive
  3. If you want to use the audio or camera examples, you'll need to install libasound2 and OpenCV. You can do so via:

    Linux

    sudo apt install libasound2
    sh build-opencv-linux.sh          # only needed if you want to run the camera example

    macOS

    sh build-opencv-mac.sh            # only needed if you want to run the camera example

    Note that you cannot run any of the audio examples on macOS, as these depend on libasound2, which is not available there.

Collecting data

Before you can classify data you'll first need to collect it. If you want to collect data from the camera or microphone on your system you can use the Edge Impulse CLI, and if you want to collect data from different sensors (like accelerometers or proprietary control systems) you can do so in a few lines of code.

Collecting data from the camera or microphone

To collect data from the camera or microphone, follow the getting started guide for your development board.

Collecting data from other sensors

To collect data from other sensors you'll need to write some code that reads from the sensor, wraps the samples in the Edge Impulse Data Acquisition format, and uploads the data to the ingestion service. Here's an end-to-end example that you can build via:

APP_COLLECT=1 make -j
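
For reference, here is a minimal sketch of that flow, using only the C++ standard library. A hypothetical read_accelerometer() stands in for your own sensor driver, and the JSON layout follows the Data Acquisition format (three-axis accelerometer sampled at 100 Hz); double-check the field names and the ingestion endpoint against the current documentation, and note that signing with your project's HMAC key is skipped here.

// collect_accel.cpp - sketch: wrap accelerometer samples in the Edge Impulse
// Data Acquisition JSON format and write them to a file for upload.
// read_accelerometer() is a hypothetical stand-in for your own sensor driver.
#include <array>
#include <cstdio>
#include <ctime>
#include <string>
#include <vector>

// Hypothetical sensor read: one accelerometer sample in m/s2.
static std::array<float, 3> read_accelerometer() {
    return { 0.0f, 0.0f, 9.81f };
}

int main() {
    const int interval_ms = 10;   // 100 Hz sample rate
    const int sample_count = 200; // 2 seconds of data

    std::vector<std::array<float, 3>> values;
    for (int i = 0; i < sample_count; i++) {
        values.push_back(read_accelerometer());
        // in a real application, sleep interval_ms between reads
    }

    // 64-character zero placeholder: upload the file unsigned, or replace this
    // with an HMAC-SHA256 signature computed with your project's HMAC key.
    const std::string empty_signature(64, '0');

    FILE *f = fopen("idle.json", "w");
    if (!f) return 1;
    fprintf(f, "{\n");
    fprintf(f, "  \"protected\": { \"ver\": \"v1\", \"alg\": \"HS256\", \"iat\": %ld },\n",
            (long)time(nullptr));
    fprintf(f, "  \"signature\": \"%s\",\n", empty_signature.c_str());
    fprintf(f, "  \"payload\": {\n");
    fprintf(f, "    \"device_name\": \"01:23:45:67:89:ab\",\n");
    fprintf(f, "    \"device_type\": \"CUSTOM_SENSOR\",\n");
    fprintf(f, "    \"interval_ms\": %d,\n", interval_ms);
    fprintf(f, "    \"sensors\": [\n");
    fprintf(f, "      { \"name\": \"accX\", \"units\": \"m/s2\" },\n");
    fprintf(f, "      { \"name\": \"accY\", \"units\": \"m/s2\" },\n");
    fprintf(f, "      { \"name\": \"accZ\", \"units\": \"m/s2\" }\n");
    fprintf(f, "    ],\n");
    fprintf(f, "    \"values\": [\n");
    for (size_t i = 0; i < values.size(); i++) {
        fprintf(f, "      [%f, %f, %f]%s\n", values[i][0], values[i][1], values[i][2],
                i + 1 < values.size() ? "," : "");
    }
    fprintf(f, "    ]\n  }\n}\n");
    fclose(f);

    // Then upload the file to the ingestion service, e.g.:
    //   curl -X POST -H "x-api-key: ei_..." -H "x-file-name: idle.json" \
    //        -H "Content-Type: application/json" -d @idle.json \
    //        https://ingestion.edgeimpulse.com/api/training/data
    return 0;
}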

Classifying data

This repository comes with four classification examples:

  • custom - classify custom sensor data (APP_CUSTOM=1).

  • audio - realtime audio classification (APP_AUDIO=1).

  • camera - realtime image classification (APP_CAMERA=1).

  • .eim model - builds an .eim file to be used from Node.js, Go or Python (APP_EIM=1).

To build an application:

  1. Export your trained impulse as a C++ Library from the Edge Impulse Studio (see the Deployment page) and copy the folders into this repository.

  2. Build the application via:

    APP_CUSTOM=1 make -j

    Replace APP_CUSTOM=1 with the application you want to build. See 'Hardware acceleration' below for the hardware-specific flags; you probably want these. A sketch of the classification code itself follows these steps.

  3. The application is in the build directory:

    ./build/custom
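
Whichever application you build, the core pattern is the same as in the standalone examples: fill a buffer with raw features, wrap it in a signal_t, and call run_classifier(). Below is a minimal sketch of that pattern, assuming you have copied an exported C++ library into this repository; the all-zero feature buffer is a stand-in for real sensor data (you can paste raw features from 'Live classification' in the Studio instead).

// minimal_classify.cpp - sketch of running one inference with the C++ SDK.
#include <cstdio>
#include <vector>
#include "edge-impulse-sdk/classifier/ei_run_classifier.h"

int main() {
    // Raw features for one window. All zeros here as a stand-in for real data;
    // the buffer length must match the impulse's expected input frame size.
    std::vector<float> features(EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE, 0.0f);

    // Wrap the buffer in a signal_t that the classifier can page through.
    signal_t signal;
    int err = ei::numpy::signal_from_buffer(features.data(), features.size(), &signal);
    if (err != 0) {
        printf("Failed to create signal from buffer (%d)\n", err);
        return 1;
    }

    ei_impulse_result_t result = { 0 };
    EI_IMPULSE_ERROR res = run_classifier(&signal, &result, false /* debug */);
    if (res != EI_IMPULSE_OK) {
        printf("run_classifier failed (%d)\n", res);
        return 1;
    }

    printf("Timing: DSP %d ms, inference %d ms\n",
           result.timing.dsp, result.timing.classification);
    for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
        printf("  %s: %.5f\n", result.classification[ix].label,
               result.classification[ix].value);
    }
    return 0;
}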

Hardware acceleration

For many targets there is hardware acceleration available. To enable this:

Raspberry Pi 4 (and other Armv7l Linux targets)

Build with the following flags:

APP_CUSTOM=1 TARGET_LINUX_ARMV7=1 USE_FULL_TFLITE=1 make -j

Jetson Nano (and other AARCH64 targets)

See the TensorRT section below for information on enabling GPUs. To build with hardware extensions for running on the CPU:

  1. Install Clang:

    sudo apt install -y clang
  2. Build with the following flags:

    APP_CUSTOM=1 TARGET_LINUX_AARCH64=1 USE_FULL_TFLITE=1 CC=clang CXX=clang++ make -j

Linux x86 targets

Build with the following flags:

APP_CUSTOM=1 TARGET_LINUX_X86=1 USE_FULL_TFLITE=1 make -j

Intel-based Macs

Build with the following flags:

APP_CUSTOM=1 TARGET_MAC_X86_64=1 USE_FULL_TFLITE=1 make -j

M1-based Macs

Build with the following flags:

APP_CUSTOM=1 TARGET_MAC_X86_64=1 USE_FULL_TFLITE=1 arch -x86_64 make -j

Note that this does build an x86 binary, but it runs very fast through Rosetta.

TensorRT

On the Jetson Nano you can also build with support for TensorRT, which fully leverages the GPU on the Jetson Nano. This is not available for SSD object detection models, but it is available for FOMO object detection and regular classification/regression models.

To build with TensorRT:

  1. Go to the Deployment page in the Edge Impulse Studio.

  2. Select the 'TensorRT library', and the 'float32' optimizations.

  3. Build the library and copy the folders into this repository.

  4. Build your application with:

    APP_CUSTOM=1 TARGET_JETSON_NANO=1 make -j

Note that TensorRT requires significant ramp-up time. The first time you run a new model it needs to be optimized, which might take up to 30 seconds; then on every startup the model needs to be loaded, which might take up to 5 seconds. After that the GPU still needs to warm up, so expect full performance about two minutes in. To do a fair performance comparison you probably want to use the custom application (no camera/microphone overhead) and run the classification in a loop, as in the sketch below.
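
As a rough illustration, a loop along these lines (dropped into a custom application and reusing a signal_t set up as in the earlier sketch) makes the warm-up visible: the first iterations are slow and the latency then settles.

// benchmark_loop.cpp (snippet) - time repeated classifications to observe the
// TensorRT warm-up. Assumes 'signal' has been prepared as in the earlier sketch.
#include <chrono>
#include <cstdio>
#include "edge-impulse-sdk/classifier/ei_run_classifier.h"

static void benchmark(signal_t *signal, int iterations = 300) {
    ei_impulse_result_t result = { 0 };
    for (int i = 0; i < iterations; i++) {
        auto start = std::chrono::high_resolution_clock::now();
        EI_IMPULSE_ERROR res = run_classifier(signal, &result, false);
        auto end = std::chrono::high_resolution_clock::now();
        if (res != EI_IMPULSE_OK) {
            printf("run_classifier failed (%d)\n", res);
            return;
        }
        auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count();
        printf("iteration %3d: %lld ms (DSP %d ms, inference %d ms)\n",
               i, (long long)ms, result.timing.dsp, result.timing.classification);
    }
}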

You can also build .eim files for high-level languages using TensorRT via:

APP_EIM=1 TARGET_JETSON_NANO=1 make -j

Long warm-up time and under-performance

By default the Jetson Nano enables a number of aggressive power-saving features that disable and slow down hardware detected as idle. In practice the GPU sometimes cannot power up fast enough, or stay on long enough, to deliver its best performance. You can run a script to force the Jetson Nano into maximum-performance mode.

ONLY DO THIS IF YOU ARE POWERING YOUR JETSON NANO FROM A DEDICATED POWER SUPPLY. DO NOT RUN THIS SCRIPT WHILE POWERING YOUR JETSON NANO THROUGH USB.

To enable maximum performance, run:

sudo /usr/bin/jetson_clocks

Building .eim files

To build Edge Impulse for Linux models (.eim files) that can be used by the Python, Node.js or Go SDKs, build with APP_EIM=1:

APP_EIM=1 make -j

The model will be placed in build/model.eim and can be used directly by your application.

Troubleshooting and more docs

A troubleshooting guide, covering errors such as "Failed to allocate TFLite arena" and "Make sure you apply/link the Flex delegate before inference", can be found in the docs for example-standalone-inferencing-linux.
