Linux C++ SDK



This library lets you run machine learning models and collect sensor data on Linux machines using C++. The SDK is open source and hosted on GitHub: edgeimpulse/example-standalone-inferencing-linux (https://github.com/edgeimpulse/example-standalone-inferencing-linux).

Installation guide

  1. Install GNU Make and a recent C++ compiler (tested with GCC 8 on the Raspberry Pi, and Clang on other targets).

  2. Clone this repository and initialize the submodules:

    git clone https://github.com/edgeimpulse/example-standalone-inferencing-linux
    cd example-standalone-inferencing-linux && git submodule update --init --recursive
  3. If you want to use the audio or camera examples, you'll need to install libasound2 and OpenCV. You can do so via:

    Linux

    sudo apt install libasound2
    sh build-opencv-linux.sh          # only needed if you want to run the camera example

    macOS

    sh build-opencv-mac.sh            # only needed if you want to run the camera example

    Note that you cannot run any of the audio examples on macOS, as these depend on libasound2, which is not available there.

Collecting data

Before you can classify data you'll first need to collect it. If you want to collect data from the camera or microphone on your system you can use the Edge Impulse CLI, and if you want to collect data from different sensors (like accelerometers or proprietary control systems) you can do so in a few lines of code.

Collecting data from the camera or microphone

To collect data from the camera or microphone, follow the getting started guide for your development board.

Collecting data from other sensors

To collect data from other sensors you'll need to write some code to collect the data from an external sensor, wrap it in the Edge Impulse Data Acquisition format, and upload it to the Ingestion service (a minimal sketch of the wrapping step follows the build command below). Here's an end-to-end example that you can build via:

APP_COLLECT=1 make -j
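For reference, the Data Acquisition format is plain JSON, so wrapping your own sensor values is a matter of emitting the right structure. Below is a minimal sketch, not the repository's example: the device type and accelerometer values are made up, and signing and the actual upload to the Ingestion service are omitted (depending on your project settings you may need an HMAC-SHA256 signature rather than "alg": "none").

// Minimal sketch: wrap raw accelerometer samples in the Edge Impulse
// Data Acquisition (JSON) format. Signing and uploading are omitted;
// see the full example in this repository for the complete flow.
#include <cstdio>

int main() {
    const int interval_ms = 16;               // one sample every 16 ms (62.5 Hz)
    const int sample_count = 3;
    const float samples[sample_count][3] = {  // placeholder accelerometer data
        { -9.81f, 0.03f, 0.21f },
        { -9.83f, 0.04f, 0.20f },
        { -9.80f, 0.01f, 0.23f },
    };

    printf("{\n");
    printf("  \"protected\": { \"ver\": \"v1\", \"alg\": \"none\" },\n");
    printf("  \"signature\": \"\",\n");
    printf("  \"payload\": {\n");
    printf("    \"device_type\": \"CUSTOM_SENSOR\",\n");  // hypothetical name
    printf("    \"interval_ms\": %d,\n", interval_ms);
    printf("    \"sensors\": [\n");
    printf("      { \"name\": \"accX\", \"units\": \"m/s2\" },\n");
    printf("      { \"name\": \"accY\", \"units\": \"m/s2\" },\n");
    printf("      { \"name\": \"accZ\", \"units\": \"m/s2\" }\n");
    printf("    ],\n");
    printf("    \"values\": [\n");
    for (int ix = 0; ix < sample_count; ix++) {
        printf("      [%.2f, %.2f, %.2f]%s\n",
               samples[ix][0], samples[ix][1], samples[ix][2],
               ix == sample_count - 1 ? "" : ",");
    }
    printf("    ]\n");
    printf("  }\n");
    printf("}\n");
    return 0;
}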

Classifying data

This repository comes with four classification examples:

  • custom - classify custom sensor data (APP_CUSTOM=1).
  • audio - realtime audio classification (APP_AUDIO=1).
  • camera - realtime image classification (APP_CAMERA=1).
  • .eim model - builds an .eim file to be used from Node.js, Go or Python (APP_EIM=1).

To build an application:

  1. Train an impulse.

  2. Export your trained impulse as a C++ Library from the Edge Impulse Studio (see the Deployment page) and copy the edge-impulse-sdk, model-parameters and tflite-model folders into this repository.

  3. Build the application via:

    APP_CUSTOM=1 make -j

    Replace APP_CUSTOM=1 with the application you want to build. See 'Hardware acceleration' below for the hardware-specific flags; you probably want these. A sketch of the inference code these applications share follows these steps.

  4. The application is in the build directory:

    ./build/custom
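Under the hood, each of these applications drives the same run_classifier() entry point from the exported C++ library. As a rough sketch of what the custom application does (this is not the repository's exact code; the zeroed features array is a stand-in for your own data, for example raw features copied from the 'Live classification' page in the Studio):

// Sketch of the core inference flow shared by the example applications.
// Assumes the exported C++ library has been copied into this repository.
#include "edge-impulse-sdk/classifier/ei_run_classifier.h"

// Placeholder: paste raw features here (e.g. from 'Live classification')
static float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE] = { 0 };

int main() {
    // Wrap the raw buffer in a signal_t so the SDK can read from it
    signal_t signal;
    numpy::signal_from_buffer(features, EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE, &signal);

    // Run the DSP pipeline and the model
    ei_impulse_result_t result = { 0 };
    EI_IMPULSE_ERROR res = run_classifier(&signal, &result, false /* debug */);
    if (res != EI_IMPULSE_OK) {
        ei_printf("ERR: run_classifier returned %d\n", res);
        return 1;
    }

    // Print the prediction for every label
    for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
        ei_printf("%s: %.5f\n", result.classification[ix].label,
                  result.classification[ix].value);
    }
    return 0;
}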

Hardware acceleration

For many targets, there is hardware acceleration available. See the TensorRT section below for information on enabling GPUs. To build with hardware extensions for running on the CPU, use the flags for your target below.

Raspberry Pi 4 (and other Armv7l Linux targets)

Build with the following flags:

APP_CUSTOM=1 TARGET_LINUX_ARMV7=1 USE_FULL_TFLITE=1 make -j

NVIDIA Jetson Orin / NVIDIA Jetson Nano (and other AARCH64 targets)

  1. Install Clang:

    sudo apt install -y clang
  2. Build with the following flags:

    APP_CUSTOM=1 TARGET_LINUX_AARCH64=1 USE_FULL_TFLITE=1 CC=clang CXX=clang++ make -j

Linux x86 targets

Build with the following flags:

APP_CUSTOM=1 TARGET_LINUX_X86=1 USE_FULL_TFLITE=1 make -j

Intel-based Macs

Build with the following flags:

APP_CUSTOM=1 TARGET_MAC_X86_64=1 USE_FULL_TFLITE=1 make -j

M1-based Macs

Build with the following flags:

APP_CUSTOM=1 TARGET_MAC_X86_64=1 USE_FULL_TFLITE=1 arch -x86_64 make -j

Note that this does build an x86 binary, but it runs very fast through Rosetta.

TensorRT

'NVIDIA Jetson' refers to the following devices:

  • NVIDIA Jetson Xavier NX Series, Jetson TX2 Series, Jetson AGX Xavier Series, Jetson Nano, Jetson TX1

'NVIDIA Jetson Orin' refers to the following devices:

  • NVIDIA Jetson AGX Orin Series, Jetson Orin NX Series, Jetson Orin Nano Series

'Jetson' refers to all NVIDIA Jetson devices.

On NVIDIA Jetson Orin and NVIDIA Jetson you can also build with support for TensorRT, which fully leverages the GPU on the Jetson device. This is not available for SSD object detection models, but is available for FOMO, YOLOv5 and TAO object detection models, and for regular classification/regression models.

To build with TensorRT:

  1. Go to the Deployment page in the Edge Impulse Studio.

  2. Select the 'TensorRT library' and the 'float32' optimizations.

  3. Build the library and copy the folders into this repository.

  4. Build your application with:

    1. NVIDIA Jetson Orin

      APP_CUSTOM=1 TARGET_JETSON_ORIN=1 make -j
    2. NVIDIA Jetson

      APP_CUSTOM=1 TARGET_JETSON=1 make -j

Note that there is significant ramp-up time required for TensorRT. The first time you run a new model it needs to be optimized, which might take up to 30 seconds; then on every startup the model needs to be loaded in, which might take up to 5 seconds. After this the GPU still seems to be warming up, so expect full performance about 2 minutes in. To do a fair performance comparison you probably want to use the custom application (no camera / microphone overhead) and run the classification in a loop, as sketched below.
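For instance, a loop along these lines (a sketch that reuses the signal set up in the earlier custom-application example) makes the warm-up visible and yields a stable steady-state number:

// Sketch: classify in a loop to measure steady-state TensorRT performance.
// Assumes `signal` has been prepared as in the earlier example.
#include <chrono>

static void benchmark(signal_t *signal) {
    for (int i = 0; i < 200; i++) {
        ei_impulse_result_t result = { 0 };

        auto start = std::chrono::steady_clock::now();
        EI_IMPULSE_ERROR res = run_classifier(signal, &result, false);
        auto end = std::chrono::steady_clock::now();
        if (res != EI_IMPULSE_OK) {
            ei_printf("ERR: run_classifier returned %d\n", res);
            return;
        }

        long long wall_ms = std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count();
        // result.timing breaks each run down into DSP and inference time (ms)
        ei_printf("run %3d: wall %lld ms (dsp %d ms, classification %d ms)\n",
                  i, wall_ms, result.timing.dsp, result.timing.classification);
    }
}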

You can also build .eim files for high-level languages using TensorRT via:

NVIDIA Jetson Orin:

APP_EIM=1 TARGET_JETSON_ORIN=1 make -j

NVIDIA Jetson:

APP_EIM=1 TARGET_JETSON=1 make -j

Long warm-up time and under-performance

By default, the Jetson enables a number of aggressive power-saving features that disable or slow down hardware detected to be idle. Experience indicates that sometimes the GPU cannot power up fast enough, or stay on long enough, to deliver best performance. You can run a script to enable maximum performance on your Jetson.

ONLY DO THIS IF YOU ARE POWERING YOUR JETSON FROM A DEDICATED POWER SUPPLY. DO NOT RUN THIS SCRIPT WHILE POWERING YOUR JETSON THROUGH USB.

To enable maximum performance, run:

sudo /usr/bin/jetson_clocks

Building .eim files

To build Edge Impulse for Linux models (.eim files) that can be used by the Python, Node.js or Go SDKs, build with APP_EIM=1:

APP_EIM=1 make -j

The model will be placed in build/model.eim and can be used directly by your application.

Troubleshooting and more docs

A troubleshooting guide, e.g. to deal with "Failed to allocate TFLite arena" or "Make sure you apply/link the Flex delegate before inference", is listed in the docs for example-standalone-inferencing-linux.
