Getting Started

Welcome to Edge Impulse! We enable developers to create the next generation of intelligent device solutions with embedded Machine Learning. In the documentation you'll find user guides, tutorials and API documentation. For support, visit the forums.

If you're new to the idea of embedded machine learning, or machine learning in general, you may enjoy our quick guide: What is embedded ML, anyway?

Get started with any device

Follow these three steps to build your first embedded Machine Learning model - no worries, you can use almost any device to get started.

  1. You'll need some data:

    • If you have an existing development board or device, you can collect data with a few lines of code using the Data forwarder or the Edge Impulse for Linux SDK.

    • If you want to collect live data from a supported development kit, select your board from the list of fully supported development boards and follow the instructions to connect your board to Edge Impulse.

    • If you already have a dataset, you can upload it via the Uploader.

    • If you have a mobile phone you can use it as a sensor to collect data, see Mobile phone.

  2. Try the tutorials on continuous motion recognition, responding to your voice, recognizing sounds from audio, adding sight to your sensors or object detection. These will let you build machine learning models that detect things in your home or office.

  3. After training your model you can run your model on your device:

    • If you want to integrate the model with your own firmware or project you can export your complete model (including all signal processing code and machine learning models) to a C++ or Arduino library with no external dependencies (open source and royalty-free), see Running your impulse locally.

    • If you have a fully supported development board (or your mobile phone) you can build new firmware - which includes your model - directly from the UI. It doesn't get easier than that!

    • If you have a gateway, a computer or a web browser where you want to run your model, you can export to WebAssembly and run it anywhere you can run JavaScript.

Suitable for any type of embedded ML application

We have some great tutorials, but you have full freedom in the models that you design in Edge Impulse. You can plug in new signal processing blocks, and completely new neural networks. See Building custom processing blocks, or click the three dots on a neural network page and select 'Switch to Keras (expert) mode'.

API Documentation

You can access any feature in the Edge Impulse Studio through the Edge Impulse API. We also have the Ingestion service if you want to send data directly, and we have an open Remote management protocol to control devices from the Studio.

Enterprise version

For larger teams, and companies with lots of data we offer an enterprise version of Edge Impulse. The enterprise version offers team collaboration on projects, a dataset builder that makes your internal data available to your whole team, integrations with your cloud buckets, transformation blocks that let you extract ML features from thousands of files in one go, and custom processing and deployment blocks for your organization. You can find documentation under Organizations or contact us via [email protected] for more information.

API and SDK references

The API references for the ingestion service, remote management service, and the studio API, plus SDK documentation for the acquisition and inferencing libraries, can be found below.

Services

SDK documentation

  • SDKs for Node.js, Python, Go and C++

API references
Studio API reference
Ingestion service
Remote management service
Data acquisition SDK
Inferencing SDK
Edge Impulse for Linux

IMU Syntiant

The IMU Syntiant block rescales raw data to 8-bit values to match the NDP101 chip input requirements.

Parameters

Scaling

  • Scale 16 bits to 8 bits: Scales data to 8-bit values in the [-1, 1] range; raw data is divided by 2G (2 * 9.80665). When using official Edge Impulse firmware, this parameter should be enabled, as raw data is not rescaled on the device. Disable this parameter only if your raw data samples are already normalized to the [-1, 1] range.

How does the IMU Syntiant block work?

The IMU Syntiant block retrieves raw samples and applies the Scale 16 bits to 8 bits parameter.
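
As a rough illustration of that scaling (a sketch based on the description above, not the exact firmware implementation):

    import numpy as np

    def scale_imu_to_8bit(raw_ms2):
        # Divide raw accelerometer readings (m/s^2) by 2G to reach [-1, 1],
        # clip, then quantize to signed 8-bit values for the NDP101 input
        TWO_G = 2 * 9.80665
        scaled = np.clip(np.asarray(raw_ms2) / TWO_G, -1.0, 1.0)
        return np.round(scaled * 127).astype(np.int8)

    print(scale_imu_to_8bit([9.80665, -19.6133, 4.9]))  # e.g. [ 64 -127 32]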

Devices

There is a wide variety of devices that you can connect to your Edge Impulse project. These devices can help you collect datasets for your project, test your trained ML model and even deploy your ML model directly to your development board with a pre-built binary application (for fully supported development platforms).

On the Devices tab, you'll find a list of all your connected devices and a guide on how to connect new devices that are currently supported by Edge Impulse.

Devices tab displaying connected devices.

To connect a new device, click on the Connect a new device button on the top right of your screen.

Connect a new device button.

You will get a pop-up with multiple options of devices you can connect to your Edge Impulse project. Available options include:

  • Connecting a fully supported development board.

  • Using your mobile phone.

  • Using your computer (Linux/Mac).

  • Using the serial port to connect devices that are currently not fully supported.

Uploader

You can upload an already existing dataset to your project directly through the Edge Impulse Studio. The data should be in the Data Acquisition Format (CBOR, JSON, CSV), or as WAV, JPG or PNG files.

To upload data using the uploader, go to the Data acquisition page and click on the uploader button as shown in the image below:

When uploading your data, you can choose the category your data should fall into (training set or testing set), or automatically split the dataset between training and testing. You can also choose whether to infer labels from the file names or to enter a label that will be applied to all uploaded files.

Spectrogram

The Spectrogram processing block extracts time and frequency features from a signal. It performs well on audio data for non-voice recognition use cases, or on any sensor data with continuous frequencies.

GitHub repository containing all DSP block code: edgeimpulse/processing-blocks.

Spectrogram parameters

Spectrogram

  • Frame length: The length of each frame in seconds

  • Frame stride: The step between successive frames in seconds

  • Frequency bands: The FFT size

Normalization

  • Noise floor (dB): Signals below this level will be dropped

How does the spectrogram block work?

It first divides the window into multiple overlapping frames. The size and number of frames can be adjusted with the Frame length and Frame stride parameters. For example, with a window of 1 second, a frame length of 0.02 s and a stride of 0.01 s, it will create 99 time frames.

Each time frame is then divided into frequency bins using an FFT (Fast Fourier Transform), and its power spectrum is computed. The number of frequency bins equals the Frequency bands parameter divided by 2, plus 1. We recommend keeping the Frequency bands (i.e. FFT size) value a power of 2 for performance reasons. Finally, the Noise floor value is applied to the power spectrum.

The number of features generated by the Spectrogram block equals the number of generated time frames times the number of frequency bins.
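
As a quick sanity check, here is a small sketch (an illustration of the arithmetic above, not the block's actual implementation) that computes these counts:

    def spectrogram_feature_count(window_s, frame_length_s, frame_stride_s, frequency_bands):
        # Number of overlapping time frames in the window (small epsilon guards
        # against floating-point rounding when the division is exact)
        num_frames = int((window_s - frame_length_s) / frame_stride_s + 1e-9) + 1
        # Number of frequency bins: FFT size / 2 + 1
        num_bins = frequency_bands // 2 + 1
        return num_frames, num_bins, num_frames * num_bins

    # 1-second window, 0.02 s frames, 0.01 s stride, 128 frequency bands
    print(spectrogram_feature_count(1.0, 0.02, 0.01, 128))  # -> (99, 65, 6435)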

Frequency bands and frame length

There is a correlation between the Frequency bands (FFT size) parameter and the frame length. The frame length will be cropped or padded to the Frequency bands value while applying the FFT. For example, with an 8 kHz sampling frequency and a time frame of 0.02 s, each time frame contains 160 samples (8k * 0.02). If your FFT size is set to 128, time frames will be cropped to 128 samples; if your FFT size is set to 256, time frames will be padded with zeros.

Audio Syntiant

The Audio Syntiant processing block extracts time and frequency features from a signal. It is similar to the Audio MFE block but performs additional processing specific to the Syntiant NDP101 chip. This block can be used only with Syntiant targets.

Audio Syntiant parameters

Log Mel-filterbank energy features

  • Frame length: The length of each frame in seconds

  • Frame stride: The step between successive frames in seconds

  • Filter number (fixed): The number of triangular filters applied to the spectrogram

  • FFT length (fixed): The FFT size

  • Low frequency (fixed): Lowest band edge of Mel-scale filterbanks

  • High frequency (fixed): Highest band edge of Mel-scale filterbanks

  • Coefficient: Pre-emphasis coefficient

How does the Syntiant block work?

The feature extraction is a proprietary algorithm from Syntiant; however, the parameters are very close to the Audio MFE block. The pre-emphasis coefficient is applied first to amplify higher frequencies. The signal is then divided into overlapping frames, defined by the Frame length and Frame stride, to extract speech features.

Sampling frequency

The Audio Syntiant block only supports a 16 kHz frequency. You can adjust the sampling frequency in the "Create Impulse" section.

Retrain model

Training and deploying high-performing ML models is usually a continuous process rather than a one-time exercise. When validating your model and discovering overfitting, you might consider adding more diverse data and then retraining the model while keeping the initially configured DSP and Neural Network blocks.

Also, if you find during inference that the data distribution has drifted significantly from the initial training distribution, it is good practice to retrain your model on the newer distribution to maintain model performance.

The "Retrain Model" feature in the Edge Impulse Studio is particularly useful when adding new data to your project. It takes the known parameters from your selected DSP and ML blocks and uses them to automatically regenerate features and retrain the Neural Network model in a single step. You can consider this a shortcut for retraining your model, since you don't need to go through all the blocks in your impulse one by one again.

To retrain your model after adding some data, navigate to "Retrain Model" in the Studio and click "Train model".


Processing Blocks

Extracting meaningful features from your data is crucial to building small and reliable machine learning models, and in Edge Impulse this is done through processing blocks. We ship a number of processing blocks for common sensor data (such as vibration and audio):

  • Raw Data

  • Flatten

  • Image

  • Spectral features

  • Spectrogram

  • Audio MFE

  • Audio MFCC

  • Audio Syntiant

  • IMU Syntiant

The source code of these blocks is available in the Edge Impulse processing blocks GitHub repository.

If you have a very specific sensor, want to apply custom filters, or are implementing the latest research in digital signal processing, follow our tutorial on Building custom processing blocks.

Creating transformation blocks

Transformation blocks take raw data from your organizational datasets and convert the data into files that can be loaded in an Edge Impulse project. You can use transformation blocks to only include certain parts of individual data files, calculate long-running features like a running mean or derivatives, or efficiently generate features with different window lengths. Transformation blocks can be written in any language, and run on the Edge Impulse infrastructure.

Only available for enterprise customers

Organizational features are only available for enterprise customers. View our pricing for more information.

Transformation blocks can output data to either:

  1. A project - here the data in your dataset is placed in an Edge Impulse project, and you'll have all the normal features from the studio available to build your machine learning models. This is great if you already have an idea what you want with the data, and are looking for a reproducible pipeline.

  2. Back in the dataset - here data is placed back in the dataset. This is great for extracting long running features, batch jobs, or combining data from multiple sources - even when you don't want to place the data in a project yet.

For both of these you can find tutorials here:

  • Project transformation block

  • Dataset transformation block

Porting Guide

If you have a development board that is not officially supported by Edge Impulse, no problem. This guide contains information on connecting any device to Edge Impulse.

Collecting data

Edge Impulse can handle data from any device, whether it's coming from a new development board or from a device that has been in production for years. Just post your data to the ingestion service and it will automatically show up in the studio. You can either do this directly from your device (if it has an IP connection) or through an intermediate protocol like a phone application. To deal with data that is already collected we have the Uploader tools, which can label and import data.

A quick way of getting data from devices is using the Data forwarder. This lets you forward data collected over a serial interface to the studio. This method only works for sensors with lower sampling frequencies (e.g. not audio), does not allow sensor selection, and does not sign data on the device. It is, however, a really easy way to collect data from existing devices with just a few lines of code.

Running impulses

The Inferencing SDK enables you to run impulses locally and on-device. The SDK contains efficient native implementations of all processing and learning blocks. The SDK is written in portable C++11 with as few dependencies as possible, and the best way of testing whether it works on your platform is through the Deployment page in the studio. From here you can export a library with all blocks, configuration and the SDK. See the Running your impulse locally tutorials.

If you need to make changes to the SDK to get it to run on your device we welcome contributions. We also welcome contributions which add optimized code paths for your specific hardware. The SDK documentation has more information on where to add these.

Controlling the device from the studio

Devices can be controlled from the studio through the Remote management interface. This is a service that devices connect to, either over a web socket or through a serial connection (with the help of the Serial daemon). The studio lists these devices, and can instruct them to start sampling straight from the UI.

To add full support for your development board you'll need to implement the serial protocol and (if your device has an IP connection) the websocket protocol. Alternatively you can also implement the web socket protocol through an intermediate layer (like a mobile phone app). There are end-to-end integration tests available at edgeimpulse/integration-tests-firmware which validate both the serial and websocket protocols on a development board.

Devices that connect through the data forwarder can be controlled by the studio, but have a limited integration. They don't support sensor or frequency selection.

Full integration in Edge Impulse, and help porting?

Do you want help porting? Or want to get the best integration in Edge Impulse, including full studio support, and want to let users build binaries directly from the UI? Let us know at [email protected] and we'll let you know the possibilities.

Edge Impulse for Linux

Edge Impulse for Linux is the easiest way to build Machine Learning solutions on real embedded hardware. It contains tools which let you collect data from any microphone or camera, can be used with the Node.js, Python, Go and C++ SDKs to collect new data from any sensor, and can run impulses with full hardware acceleration - with easy integration points to write your own applications.

Development boards

This is a list of development boards that are fully supported by Edge Impulse for Linux. Follow the instructions to get started:

  • Raspberry Pi 4.

  • NVIDIA Jetson Nano.

  • Mac.

  • Linux x86_64 devices.

Different development board? Probably no problem! You can use the Linux x86_64 getting started guide to set up the Edge Impulse for Linux CLI tool, and you can run your impulse on any x86_64, ARMv7 or AARCH64 Linux target. For support please head to the forums.

SDKs

To build your own applications, or collect data from new sensors, you can use the high-level language SDKs. These use full hardware acceleration, and let you integrate your Edge Impulse models in a few lines of code:

  • Node.js.

  • Python.

  • Go.

  • C++.

.eim models?

Edge Impulse for Linux models are delivered in .eim format. This is an executable that contains your signal processing and ML code, compiled with optimizations for your processor or GPU (e.g. NEON instructions on ARM cores), plus a very simple IPC layer (over a Unix socket). We do this because your model file is then completely self-contained: it does not depend on anything except glibc, so you don't need specific TensorFlow versions, you avoid Python dependency hell, and you never have to worry about why you're not running at full native speed.

The Node.js / Python / Go SDKs talk to the model through the IPC layer to run inference, so these SDKs are very thin, and just need the ability to spawn a binary. The SDKs are open source if you want to take a look, e.g. here is the Node.js IPC client.

You can download .eim files using the Edge Impulse for Linux CLI or from the Studio (go to Dashboard, then enable 'Show Linux deploy options', and they'll be listed under Deployment). You can also build the .eim files yourself with the Edge Impulse for Linux C++ SDK, see Building .eim files.

Object Detection (Images)

The two most common image processing problems are image classification and object detection.

Image classification takes an image as an input and outputs what type of object is in the image. This technique works great, even on microcontrollers, as long as we only need to detect a single object in the image.

On the other hand, object detection takes an image and outputs information about the class, number, position (and, potentially, size) of the objects in the image.

Edge Impulse provides two different methods to perform object detection:

  • Using MobileNetV2 SSD FPN

  • Using FOMO

| Specifications | MobileNetV2 SSD FPN | FOMO |
| --- | --- | --- |
| Labeling method | Bounding boxes | Bounding boxes |
| Input size | 320x320 | Square (any size) |
| Image format | RGB | Greyscale & RGB |
| Output | Bounding boxes | Centroids |
| MCU | ❌ | ✅ |
| CPU/GPU | ✅ | ✅ |
| Limitations | Works best with big objects; models use high compute resources (in the edge computing world); image size is fixed | Works best when objects have similar sizes & shapes; the size of the objects is not available; objects should not be too close to each other |

Image

The Image block is dedicated to computer vision applications. It normalizes image data and optionally reduces the color depth.

GitHub repository containing all DSP block code: edgeimpulse/processing-blocks.

Image parameters

  • Color depth: Color depth to use (RGB or grayscale)

How does the image block work?

The Image block performs normalization, converting each pixel's channel to a float value between 0 and 1. If Grayscale is selected, each pixel is converted to a single value following the ITU-R BT.601 conversion (Y' component only).
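
For illustration, here is a minimal sketch of that normalization (assuming an 8-bit RGB input array and the standard ITU-R BT.601 luma weights of 0.299, 0.587 and 0.114; this is not the block's exact implementation):

    import numpy as np

    def normalize_image(img_rgb_u8, grayscale=False):
        # Scale 8-bit pixels to the [0, 1] range
        img = img_rgb_u8.astype(np.float32) / 255.0
        if grayscale:
            # Y' = 0.299 R + 0.587 G + 0.114 B (ITU-R BT.601, luma only)
            img = img @ np.array([0.299, 0.587, 0.114], dtype=np.float32)
        return img

    frame = np.random.randint(0, 256, size=(96, 96, 3), dtype=np.uint8)
    print(normalize_image(frame, grayscale=True).shape)  # (96, 96)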

Audio MFE

Similarly to the Spectrogram block, the Audio MFE processing block extracts time and frequency features from a signal. However, it uses a non-linear scale in the frequency domain, called the Mel-scale. It performs well on audio data, mostly for non-voice recognition use cases where the sounds to be classified can be distinguished by the human ear.

GitHub repository containing all DSP block code: edgeimpulse/processing-blocks.

MFE spectrogram of an alarm (1-sec window)

Audio MFE parameters

Mel-filterbank energy features

  • Frame length: The length of each frame in seconds

  • Frame stride: The step between successive frames in seconds

  • Filter number: The number of triangular filters applied to the spectrogram

  • FFT length: The FFT size

  • Low frequency: Lowest band edge of Mel-scale filterbanks

  • High frequency: Highest band edge of Mel-scale filterbanks

Normalization

  • Noise floor (dB): Signals below this level will be dropped

How does the MFE block work?

The feature extraction is similar to the Spectrogram block (the Frame length, Frame stride, and FFT length parameters are the same), but it adds two extra steps.

After computing the spectrogram, triangular filters are applied on a Mel-scale to extract frequency bands. They are configured with the Filter number, Low frequency and High frequency parameters to select the frequency band and the number of frequency features to be extracted. The Mel-scale is a perceptual scale of pitches judged by listeners to be equal in distance from one another. The idea is to extract more features (more filter banks) in the lower frequencies and fewer in the high frequencies; thus it performs well on sounds that can be distinguished by the human ear.

The last step is to perform a local mean normalization of the signal, applying the Noise floor value to the power spectrum.
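
To see why this gives more resolution in the lower frequencies, here is a small sketch (using the common 2595 * log10(1 + f/700) mel formula; the block's exact filterbank construction may differ) that converts evenly mel-spaced filter centers back to Hertz:

    import numpy as np

    def hz_to_mel(f_hz):
        return 2595.0 * np.log10(1.0 + f_hz / 700.0)

    def mel_to_hz(m):
        return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

    # 10 filter centers evenly spaced on the mel scale between 300 Hz and 8 kHz
    centers_mel = np.linspace(hz_to_mel(300.0), hz_to_mel(8000.0), 10)
    print(np.round(mel_to_hz(centers_mel)))
    # The centers bunch up at low frequencies and spread out at high frequencies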

Audio MFCC

The Audio MFCC block extracts coefficients from an audio signal. Similarly to the Audio MFE block, it uses a non-linear scale called the Mel-scale. It is the reference block for speech recognition and can also perform well on some non-human voice use cases.

GitHub repository containing all DSP block code: edgeimpulse/processing-blocks.

Cepstral coefficients of an example sentence "Hello World" (1-sec window)

Audio MFCC parameters

Mel Frequency Cepstral Coefficients

  • Number of coefficients: Number of cepstral coefficients to keep after applying Discrete Cosine Transform

  • Frame length: The length of each frame in seconds

  • Frame stride: The step between successive frames in seconds

  • Filter number: The number of triangular filters applied to the spectrogram

  • FFT length: The FFT size

  • Low frequency: Lowest band edge of Mel-scale filterbanks

  • High frequency: Highest band edge of Mel-scale filterbanks

  • Window size: The size of the sliding window for local cepstral mean normalization. The window size must be odd.

Pre-emphasis

  • Coefficient: The pre-emphasis coefficient to apply to the input signal (0 equals no filtering)

  • Note: Shift has been removed and set to 1 for all future projects. Older & existing projects can still change this value or use an existing value.

How does the MFCC block work?

The feature extraction adds one extra step to the MFE block, resulting in a compressed representation of the filterbanks. A Discrete Cosine Transform is applied to each filterbank to extract cepstral coefficients. 13 coefficients are usually retained; the rest are discarded as they represent fast changes that are not useful for speech recognition.
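
As a rough sketch of that last step (assuming SciPy is available and log filterbank energies as input; the block's actual normalization details may differ):

    import numpy as np
    from scipy.fftpack import dct

    def mfcc_from_filterbanks(log_mel_energies, num_coefficients=13):
        # log_mel_energies: array of shape (num_frames, num_filters)
        # Apply a DCT along the filter axis and keep the first coefficients
        cepstra = dct(log_mel_energies, type=2, axis=1, norm='ortho')
        return cepstra[:, :num_coefficients]

    frames = np.random.rand(99, 40)  # e.g. 99 frames, 40 mel filters
    print(mfcc_from_filterbanks(np.log(frames + 1e-6)).shape)  # (99, 13)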

Spectral features

The Spectral features block extracts frequency and power characteristics of a signal. Low-pass and high-pass filters can also be applied to filter out unwanted frequencies. It is great for analyzing repetitive patterns in a signal, such as movements or vibrations from an accelerometer.

GitHub repository containing all DSP block code: edgeimpulse/processing-blocks.

Spectral analysis parameters

Scaling

  • Scale axes: Multiplies axes by this number to scale data from the sensor

Filter

  • Type: Type of filter to apply to the raw data (low-pass, high-pass, or none)

  • Cut-off frequency: Cut-off frequency of the filter in hertz

  • Order: Order of the Butterworth filter

Spectral power

  • FFT length: The FFT size

  • No. of peaks: Number of spectral power peaks to extract

  • Peaks threshold: Minimum threshold to extract a peak (frequency domain normalized to [0, 1])

  • Power edges: Splits the power spectral density in various buckets (V**2/Hz unit)

How does the spectral features block work?

The spectral analysis block generates 3 types of features per axis:

  • The root mean square of the filter output (1 scalar value)

  • The frequency and height of spectral power peaks

  • The average power spectral density for each bucket

The raw signal is first scaled up or down based on the Scale axes value and offset by its mean value. A Butterworth filter is then applied (unless None is selected); the order of the filter indicates how steep the slope is at the cut-off frequency.

At this point the root mean square of the filter output is added to the features' list.

The filter output is then used to extract:

  • Spectral power peaks: after performing the FFT, the No. of peaks peaks with the highest magnitude are stored in the features' list (frequency and height). Peaks threshold can be tuned to define a minimum value to extract a peak.

  • PSD for each bucket: after computing the power spectral density, the signal is divided into power buckets, defined by the Power edges parameter. Each sample frequency of the PSD is added to a power bucket. The power buckets are then averaged and added to the features' list. This process extracts a power representation of the signal.

Number of generated features

Let's consider an input signal with 3 axis and the following parameters:

  • No. of peaks = 3

  • Power edges = 0.1, 0.5, 1.0, 2.0, 5.0

The number of generated features per axis is:

  • 1 value for RMS of the filter output

  • 6 values for power peaks (frequency & height for each peak)

  • 4 values for power buckets (number of Power edges - 1)

33 features are generated in total for the input signal.
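
The same count can be reproduced with a small sketch (counting only, not the DSP itself):

    def spectral_feature_count(num_axes, num_peaks, power_edges):
        per_axis = (
            1                          # RMS of the filter output
            + 2 * num_peaks            # frequency + height for each peak
            + (len(power_edges) - 1)   # one averaged PSD value per bucket
        )
        return num_axes * per_axis

    print(spectral_feature_count(3, 3, [0.1, 0.5, 1.0, 2.0, 5.0]))  # -> 33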

Linux Node.js SDK

This library lets you run machine learning models and collect sensor data on machines using Node.js. The SDK is open source and hosted on GitHub: edgeimpulse/edge-impulse-linux-cli.

Installation guide

Add the library to your application via:
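
$ npm install edge-impulse-linux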

Collecting data

Before you can classify data you'll first need to collect it. If you want to collect data from the camera or microphone on your system you can use the Edge Impulse CLI, and if you want to collect data from different sensors (like accelerometers or proprietary control systems) you can do so in a few lines of code.

Collecting data from the camera or microphone

To collect data from the camera or microphone, follow the getting started guide for your development board.

Collecting data from other sensors

To collect data from other sensors you'll need to write some code where you instantiate a DataForwarder object, write data samples, and finally call finalize(), which uploads the data to Edge Impulse. There's an end-to-end example in the SDK repository.

Classifying data

To classify data (whether this is from the camera, the microphone, or a custom sensor) you'll need a model file. This model file contains all signal processing code, classical ML algorithms and neural networks - and typically contains hardware optimizations to run as fast as possible. To grab a model file:

  1. Train your model in Edge Impulse.

  2. Install the Edge Impulse for Linux CLI.

  3. Download the model file via:
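
    $ edge-impulse-linux-runner --download modelfile.eim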

    This downloads the file into modelfile.eim. (Want to switch projects? Add --clean)

Then you can start classifying realtime sensor data. We have examples for:

  • Audio - grabs data from the microphone and classifies it in realtime.

  • Audio (moving average filter) - as above, but shows how to use the moving-average filter to smooth your data and reduce false positives.

  • Camera - grabs data from a webcam and classifies it in realtime.

  • Custom data - classifies custom sensor data.

Himax flash tool

The Himax flash tool uploads new binaries to the Himax WE-I Plus over a serial connection.

You upload a new binary via:
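
$ himax-flash-tool -f path/to/a/firmware.img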

This will yield a response like this:
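
[HMX] Connecting to /dev/tty.usbserial-DT04551Q...
[HMX] Connected, press the **RESET** button on your Himax WE-I now
[HMX] Restarted into bootloader. Sending file.
[HMX] Sending 2964 blocks
 ████████████████████████████████████████ 100% | ETA: 0s | 2964/2964
[HMX] Firmware update complete
[HMX] Press **RESET** to start the application

Flashed your Himax WE-I Plus development board.
To set up your development with Edge Impulse, run 'edge-impulse-daemon'
To run your impulse on your development board, run 'edge-impulse-run-impulse'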

Other options

  • --baud-rate <n> - sets the baud rate of the bootloader. This should only be used during development.

  • --verbose - enable debug logs, including all communication received from the device.

Overview

This Edge Impulse CLI is used to control local devices, act as a proxy to synchronise data for devices that don't have an internet connection, and to upload and convert local files. The CLI consists of the following tools:

  • edge-impulse-daemon - configures devices over serial, and acts as a proxy for devices that do not have an IP connection.

  • edge-impulse-uploader - allows uploading and signing local files.

  • edge-impulse-data-forwarder - a very easy way to collect data from any device over a serial connection, and forward the data to Edge Impulse.

  • edge-impulse-run-impulse - shows the impulse running on your device.

  • edge-impulse-blocks - creates organizational transformation, custom DSP, custom deployment and custom transfer learning blocks.

  • himax-flash-tool - flashes the Himax WE-I Plus.

Did you know you can also connect devices directly to your browser?

Recent versions of Google Chrome and Microsoft Edge can connect directly to fully-supported development boards, without the CLI. See this blog post for more information.

Collaborating on projects

Within an organization you can work on one project with multiple people. These can be colleagues, outside researchers, or even members of the community. They will only get access to the specific data in the project, and not to any of the raw data in your organizational datasets.

Only available for enterprise customers

Organizational features are only available for enterprise customers. View our pricing for more information.

To give someone access, go to your project's dashboard, and find the "Collaborators" widget. Click the '+' icon, and type the username or e-mail address of the other user. This user needs to have an Edge Impulse account already.

Giving a user access to your Edge Impulse project through the Collaborators widget.

What is embedded ML, anyway?

A gentle introduction to the exciting field of embedded machine learning.

Machine learning (ML) is a way of writing computer programs. Specifically, it’s a way of writing programs that process raw data and turn it into information that is meaningful at an application level.

For example, one ML program might be designed to determine when an industrial machine has broken down based on readings from its various sensors, so that it can alert the operator. Another ML program might take raw audio data from a microphone and determine if a word has been spoken, so it can activate a smart home device.

Unlike normal computer programs, the rules of ML programs are not determined by a developer. Instead, ML uses specialized algorithms to learn rules from data, in a process known as training.

In a traditional piece of software, an engineer designs an algorithm that takes an input, applies various rules, and returns an output. The algorithm’s internal operations are planned out by the engineer and implemented explicitly through lines of code. To predict breakdowns in an industrial machine, the engineer would need to understand which measurements in the data indicate a problem and write code that deliberately checks for them.

This approach works fine for many problems. For example, we know that water boils at 100°C at sea level, so it’s easy to write a program that can predict whether water is boiling based on its current temperature and altitude. But in many cases, it can be difficult to know the exact combination of factors that predicts a given state. To continue with our industrial machine example, there might be various different combinations of production rate, temperature, and vibration level that might indicate a problem but are not immediately obvious from looking at the data.

To create an ML program, an engineer first collects a substantial set of training data. They then feed this data into a special kind of algorithm, and let the algorithm discover the rules. This means that as ML engineers, we can create programs that make predictions based on complex data without having to understand all of the complexity ourselves.

Through the training process, the ML algorithm builds a model of the system based on the data we provide. We run data through this model to make predictions, in a process called inference.

There are many different types of machine learning algorithms, each with their own unique benefits and drawbacks. Edge Impulse helps engineers select the right algorithm for a given task.

Where can machine learning help?

Machine learning is an excellent tool for solving problems that involve pattern recognition, especially patterns that are complex and might be difficult for a human observer to identify. ML algorithms excel at turning messy, high-bandwidth raw data into usable signals, especially combined with conventional signal processing.

For example, the average person might struggle to recognize the signs of a machine failure given ten different streams of dense, noisy sensor data. However, a machine learning algorithm can often learn to spot the difference.

But ML is not always the best tool for the job. If the rules of a system are well defined and can be easily expressed with hard-coded logic, it’s usually more efficient to work that way.

Limitations of machine learning

Machine learning algorithms are powerful tools, but they can have the following drawbacks:

  • They output estimates and approximations, not exact answers

  • ML models can be computationally expensive to run

  • Training data can be time consuming and expensive to obtain

It can be tempting to try and apply ML everywhere—but if you can solve a problem without ML, it is usually better to do so.

What is embedded ML?

Recent advances in microprocessor architecture and algorithm design have made it possible to run sophisticated machine learning workloads on even the smallest of microcontrollers. Embedded machine learning, also known as TinyML, is the field of machine learning when applied to embedded systems such as these.

There are some major advantages to deploying ML on embedded devices. The key advantages are neatly expressed in the unfortunate acronym BLERP, coined by Jeff Bier. They are:

Bandwidth—ML algorithms on edge devices can extract meaningful information from data that would otherwise be inaccessible due to bandwidth constraints.

Latency—On-device ML models can respond in real-time to inputs, enabling applications such as autonomous vehicles, which would not be viable if dependent on network latency.

Economics—By processing data on-device, embedded ML systems avoid the costs of transmitting data over a network and processing it in the cloud.

Reliability—Systems controlled by on-device models are inherently more reliable than those which depend on a connection to the cloud.

Privacy—When data is processed on an embedded system and is never transmitted to the cloud, user privacy is protected and there is less chance of abuse.

Learn more

The best way to learn about embedded machine learning is to see it for yourself. To train your own model and deploy it to any device, including your mobile phone, follow our Getting Started guide.

Flatten

The Flatten block performs statistical analysis on the signal. It is useful for slow-moving averages like temperature data, in combination with other blocks.

GitHub repository containing all DSP block code: edgeimpulse/processing-blocks.

Flatten parameters

Scaling

  • Scale axes: Multiplies axes by this number

Method

  • Average: Calculates the average value for the window

  • Minimum: Calculates the minimum value in the window

  • Maximum: Calculates the maximum value in the window

  • Root-mean square: Calculates the RMS value of the window

  • Standard deviation: Calculates the standard deviation of the window

  • Skewness: Calculates the skewness of the window

  • Kurtosis: Calculates the kurtosis of the window

How does the flatten block work?

The Flatten block first rescales the axes of the signal if the Scale axes value is different from 1. Statistical analysis is then performed on each window, computing between 1 and 7 features for each axis, depending on the number of selected methods.
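
A rough sketch of those statistics for a single axis (assuming NumPy and SciPy; the block's exact definitions of skewness and kurtosis may differ slightly):

    import numpy as np
    from scipy.stats import kurtosis, skew

    def flatten_features(window, scale_axes=1.0):
        # Per-axis statistics computed over one window of samples
        x = np.asarray(window, dtype=np.float32) * scale_axes
        return {
            "average": float(np.mean(x)),
            "minimum": float(np.min(x)),
            "maximum": float(np.max(x)),
            "rms": float(np.sqrt(np.mean(x ** 2))),
            "stdev": float(np.std(x)),
            "skewness": float(skew(x)),
            "kurtosis": float(kurtosis(x)),
        }

    print(flatten_features([20.1, 20.4, 20.3, 20.7, 21.0]))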

Learning Blocks

After extracting meaningful features from the raw signal using signal processing, you can now train your model using a learning block. We provide a number of pre-defined learning blocks:

  • Classification (Keras).

  • Regression (Keras).

  • Anomaly Detection (K-means).

  • Images Classification (using Transfer Learning).

  • Object Detection (using MobileNetV2 SSD FPN).

  • Object Detection (using FOMO).

  • Custom Transfer Learning (Enterprise feature).

For most of the learning blocks (except K-means Anomaly Detection), you can use the Switch to expert mode button to access the full Keras API for custom architectures, rebalancing your weights, and more.

Raw Data

The Raw Data block generates windows from data samples without any specific signal processing. It is great for signals that have already been pre-processed and only need to be fed into the Neural Network block.

GitHub repository containing all DSP block code: edgeimpulse/processing-blocks.

Raw data parameters

Scaling

  • Scale axes: Multiplies each axis by this number. This can be used to normalize your data between 0 and 1.

How does the raw data block work?

The Raw Data block retrieves raw samples and applies the Scaling parameter.

Organizations

Your Edge Impulse organization helps your team with the full lifecycle of your TinyML deployment. It contains tools to collect and maintain large datasets, allows your data scientists to quickly access relevant data through their familiar tools, adds versioning and traceability to your machine learning models, and lets you quickly create new Edge Impulse projects for on-device deployment.

Only available for enterprise customers

Organizational features are only available for enterprise customers. View our pricing for more information.

To get started, follow these tutorials:

  • Collaborating on projects - to work with your colleagues on one project.

  • Building your first dataset - to build up a shared dataset for your organization.

  • Upload portals - to allow external parties to securely contribute data to your datasets.

  • Creating a transformation block - to quickly extract features from your dataset.

  • Building deployment blocks - to create custom deployment targets for your products.

  • Hosting custom DSP blocks - to create and host your custom signal processing techniques and use it directly in your projects.

  • Adding custom transfer learning models - to use your custom neural networks architectures and load pre-trained weights.

Model Testing

When collecting data, we split the dataset into training and testing sets. The model is trained with only the training set, and the testing set is used to validate how well the model will perform on unseen data. This ensures that the model has not learned to overfit the training data, which is a common occurrence.

To test your model, go to Model testing, and click Test all. The model will classify all of the test set samples and give you an overall accuracy of how your model performed.

classify all test images

This is also accompanied by a confusion matrix that shows how your model performs for each class.

Model testing confusion matrix

Evaluating individual samples

To see a classification in detail, go to the individual sample you want to evaluate, click the three dots next to it, and select Show classification. This opens a new window that displays the expected outcome and the predicted output of your model with its accuracy. This detailed view can also give you a hint on why an item has been misclassified.

View of detailed classification

Setting confidence threshold

Every learning block has a threshold. This can be the minimum confidence that a neural network needs to have, or the maximum anomaly score before a sample is tagged as an anomaly. You can configure these thresholds to tweak the sensitivity of these learning blocks. This affects both live classification and model testing.

Setting confidence threshold

Transfer Learning (Images)

When creating an impulse to solve an image classification problem, you will most likely want to use 'transfer learning' as the learning block. This is particularly true when working with a relatively small dataset.

Transfer learning is the process of taking features learned from one problem and leveraging them on a new but related problem. Most of the time these features are learned from large-scale datasets with common objects, making it faster and more accurate to tune and adapt them to new tasks.

To choose transfer learning as your learning block, go to create impulse and click on 'Add a Learning Block', and select 'Transfer Learning'

Impulse set up for image classification

To choose your preferred pretrained network, go to Transfer learning on the left side of your screen and click 'choose a different model'. A pop up will appear on your screen with a list of models to choose from as shown in the image below.

Choose different model transfer learning model

Edge Impulse uses state-of-the-art MobileNetV1 & V2 architectures trained on the ImageNet dataset as its pretrained networks for you to fine-tune for your specific application. The pretrained networks come with varying input sizes ranging from 96x96 to 320x320, in both RGB & grayscale, for you to choose from depending on your application & target deployment hardware.

Transfer Learning available models

Neural Network Settings

Before you start training your model, you need to set the following neural network configurations:

NN Settings
  • Number of training cycles: The number of epochs to train for. Each complete pass of the training algorithm through all of the training data, updating the model's parameters via back-propagation, is known as an epoch or training cycle.

  • Learning rate: The learning rate controls how much the model's internal parameters are updated during each step of the training process; in other words, how fast the neural network learns. If the network overfits quickly, you can reduce the learning rate.

  • Validation set size: The percentage of your training set held apart for validation; a good default is 20%.

You might also need to enable auto balance to prevent model bias, or enable data augmentation to increase the size and diversity of your dataset and help prevent overfitting.

The preset configurations just don't work for your model? No worries, Expert Mode is for you! The Expert mode gives you full control of your model so that you can configure it however you want. To enable the expert mode, just click on the "⋮" button and toggle the expert mode.

Expert mode

You can use the expert mode to change your loss function, optimizer, print your model architecture and even set an early stopping callback to prevent overfitting your model.

Open MV Cam H7 Plus

The OpenMV Cam is a small and low-power development board with a Cortex-M7 microcontroller supporting MicroPython, a μSD card socket and a camera module capable of taking 5MP images - and it's fully supported by Edge Impulse. You'll be able to sample raw data, build models, and deploy trained machine learning models through the studio and the OpenMV IDE. It is available for 80 USD directly from OpenMV.

The OpenMV Cam H7 Plus

Installing dependencies

To set this device up in Edge Impulse, you will need to install the following software:

  1. Edge Impulse CLI.

  2. OpenMV IDE.

Problems installing the CLI?

See the installation and troubleshooting guide.

Connecting to Edge Impulse

With all the software in place it's time to connect the development board to Edge Impulse. To make this easy we've put some tutorials together which take you through all the steps to acquire data, train a model, and deploy this model back to your device.

  • Adding sight to your sensors - end-to-end tutorial.

  • Detecting objects using FOMO.

  • Collecting image data with the OpenMV Cam H7 Plus - collecting datasets using the OpenMV IDE.

  • Running your impulse on your OpenMV camera - run your trained impulse on the OpenMV Cam H7 Plus.

Upload portals

Upload portals are a secure way to let external parties upload data to your datasets. Through an upload portal they get an easy user interface to add data, but they have no access to the content of the dataset, nor can they delete any files. Data that is uploaded through the portal can be stored on-premise or in your own cloud infrastructure.

In this tutorial we'll set up an upload portal, show you how to add new data, and how to show this data in Edge Impulse for further processing.

Only available for enterprise customers

Organizational features are only available for enterprise customers. View our pricing for more information.

1. Configuring a storage bucket

Data is stored in storage buckets, which can either be hosted by Edge Impulse, or in your own infrastructure. Follow the Building your first dataset tutorial to set up your storage bucket.

2. Creating an upload portal

With your storage bucket configured you're ready to set up your first upload portal. In your organization go to Data > Upload portals and choose Create new upload portal. Here, select a name, a description, the storage bucket, and a path in the storage bucket.

Creating an upload portal

Note: You'll need to enable CORS headers on the bucket. If these are not configured you'll get prompted with instructions. Talk to your user success engineer (when your data is hosted by Edge Impulse), or your system administrator to configure this.

After your portal is created a link is shown. This link contains an authentication token, and can be shared directly with the third party.

An active upload portal

Click the link to open the portal. If you ever forget the link: no worries. Click the ⋮ next to your portal, and choose View portal.

3. Uploading data to the portal

An upload portal with two folders.

To upload data you can now drag & drop files or folders to the drop zone on the right, or use Create new folder to first create a folder structure. There's no limit to the number of files you can upload here, and all files are hashed, so if you upload a file that's already present it will be skipped.

Note: Files with the same name but with a different hash are overwritten.

4. Adding the data to your dataset

To view the uploaded data in your dataset now go to your organization, and select Data. Then select Add data > Add dataset from bucket. Here, enter a name for the dataset, and select the same bucket path as you used for the portal. Then click Add data.

Adding the data to your dataset

You now have the data in your Edge Impulse organization, ready to be applied to your next machine learning project.

The data in Edge Impulse

You can run the 'Add dataset from bucket' step any time new data is added. Data is automatically de-duplicated and new files will be picked up. You can also automate this through our API.

5. Recap

If you need a secure way for external parties to contribute data to your datasets then upload portals are the way to go. They offer a friendly user interface, upload data directly into your storage buckets, and give you an easy way to use the data directly in Edge Impulse. 🚀

Any questions, or interested in the enterprise version of Edge Impulse? Contact us for more information.

Create Impulse

After collecting data for your project, you can now create your Impulse. A complete Impulse will consist of 3 main building blocks: input block, processing block and a learning block.

This view is one of the most important: here you will build your own machine learning pipeline.

Impulse example for movement classification using accelerometer data:

Impulse example for object detection using images:

Input block

The input block indicates the type of input data you are training your model with. This can be time series (audio, vibration, movements) or images.

Time series (audio, vibration, movements)

  • The input axes field lists all the axes referenced in your training dataset

  • The window size is the size of the raw data window used for training

  • The window increase is used to artificially create more training windows (and feed the learning block with more information)

  • The frequency is automatically calculated based on your training samples. You can modify this value but you currently cannot use values lower than 0.000016 (less than 1 sample every 60s).

  • Zero-pad data: Adds zero values when raw data is missing

Below is a sketch summarizing the role of each parameter:

Images

  • Axes: Images

  • Image width & height: Most of our pre-trained models work with square images.

  • Resize mode: You have three options, Squash, Fit to the shortest axis, Fit to the longest axis

Processing Block

A processing block is basically a feature extractor. It consists of DSP (Digital Signal Processing) operations that are used to extract the features that our model learns on. These operations vary depending on the type of data used in your project.

You don't have much experience with DSP? No problem, Edge Impulse usually uses a star to indicate the most recommended processing block based on your input data as shown in the image below.

In case the available processing blocks aren't suitable for your application, you can build custom processing blocks and import them into your project.

Learning blocks

After adding your processing block, it is now time to add a learning block to make your impulse complete. A learning block is simply a neural network that is trained to learn on your data.

Learning blocks vary depending on what you want your model to do. It can be classification, regression, anomaly detection, image transfer learning, or object detection. It can also be a custom transfer learning block (enterprise feature).

Learning blocks available with time-series projects:

Learning blocks available with image projects:

Learning blocks available with object detection projects:

Linux x86_64

You can use your Linux x86_64 device or computer as a fully-supported development environment for Edge Impulse for Linux. This lets you sample raw data, build models, and deploy trained machine learning models directly from the Studio. If you have a webcam and microphone plugged into your system, they are automatically detected and can be used to build models.

Instruction set architectures

If you are not sure about your instruction set architectures, use:
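
$ uname -m

This prints, for example, x86_64, armv7l or aarch64.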

1. Installing dependencies

To set this device up in Edge Impulse, run the following commands:

Ubuntu/Debian:
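
For example (a sketch only; the official installation instructions list the full set of dependencies, including GStreamer for camera support), after installing a recent version of Node.js you would install the Edge Impulse for Linux CLI globally:

$ sudo apt install -y gcc g++ make build-essential sox
$ sudo npm install edge-impulse-linux -g --unsafe-perm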

3. Connecting to Edge Impulse

With all software set up, connect your camera and microphone to your operating system (see 'Next steps' further on this page if you want to connect a different sensor), and run:
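
$ edge-impulse-linux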

This will start a wizard which will ask you to log in, and choose an Edge Impulse project. If you want to switch projects run the command with --clean.

4. Verifying that your device is connected

That's all! Your machine is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.

Next steps: building a machine learning model

With everything set up you can now build your first machine learning model with these tutorials:

  • Looking to connect different sensors? Our Edge Impulse for Linux SDKs let you easily send data from any sensor and any programming language (with examples in Node.js, Python, Go and C++) into Edge Impulse.

Deploying back to device

To run your impulse locally, run the following on your Linux platform:
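
$ edge-impulse-linux-runner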

This will automatically compile your model with full hardware acceleration, download the model to your local machine, and then start classifying. The Edge Impulse for Linux SDKs have examples showing how to integrate the model with your favourite programming language.

Image model?

If you have an image model then you can get a peek of what your device sees by being on the same network as your device, and finding the 'Want to see a feed of the camera and live classification in your browser' message in the console. Open the URL in a browser and both the camera feed and the classification are shown:

Serial daemon

The serial daemon is used to onboard new devices, configure upload settings, and acts as a proxy for devices without an IP connection.

Recent versions of Google Chrome and Microsoft Edge can connect directly to fully-supported development boards, without the serial daemon. See this blog post for more information.

To use the daemon, connect a fully-supported development board to your computer and run:
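
$ edge-impulse-daemon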

The daemon will ask you for the server you want to connect to, prompt you to log in, and then configure the device. If your device does not have the right firmware yet, it will also prompt you to upgrade this.

This is an example of the output of the daemon:

Note: Your credentials are never stored. When you log in these are exchanged for a token. This token is used to further authenticate requests.

Clearing configuration

To clear the configuration, run:
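
$ edge-impulse-daemon --clean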

This resets both the daemon configuration as well as the on-device configuration. If you still run into issues, you can connect to the device using a serial monitor (on baud rate 115,200) and run AT+CLEARCONFIG. This removes all configuration from the device.

Devices without an IP connection

If your device is not connected to the remote management interface - for example because it does not have an IP connection, or because WiFi is out of range - the daemon will act as a proxy. It will register with Edge Impulse on behalf of the device, and proxy events through over serial. For this to work your device needs to support the Edge Impulse AT command set, please refer to the documentation for more information.

Silent mode

To skip any wizards (except for the login prompt) you can run the daemon in silent mode via:
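
$ edge-impulse-daemon --silent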

This is useful in environments where there is no internet connection, as the daemon won't prompt to connect to WiFi.

Switching projects

You can use one device for many projects. To switch projects run:
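
$ edge-impulse-daemon --clean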

And select the new project. The device will remain listed in the old project, and if you switch back will retain the same name and last seen date.

Troubleshooting

Unable to set up WiFi with ST B-L475E-IOT01A development board

If you are using the ST B-L475E-IOT01A development board, you may experience an error when attempting to connect to a WiFi network.

There is a known issue with the firmware for this development board's WiFi module that results in a timeout during network scanning if there are more than 20 WiFi access points detected. If you are experiencing this issue, you can work around it by attempting to reduce the number of access points within range of the device, or by skipping WiFi configuration.

Linux Python SDK

This library lets you run machine learning models and collect sensor data on Linux machines using Python. The SDK is open source and hosted on GitHub: edgeimpulse/linux-sdk-python.

Installation guide

  1. Install a recent version of Python 3 (>=3.7).

  2. Install the SDK:

    Raspberry Pi

    $ sudo apt-get install libatlas-base-dev libportaudio0 libportaudio2 libportaudiocpp0 portaudio19-dev
    $ pip3 install edge_impulse_linux -i https://pypi.python.org/simple

    Jetson Nano

    $ sudo apt-get install libatlas-base-dev libportaudio2 libportaudiocpp0 portaudio19-dev
    $ pip3 install edge_impulse_linux

    Other platforms

    $ pip3 install edge_impulse_linux

  3. Clone this repository to get the examples:

    $ git clone https://github.com/edgeimpulse/linux-sdk-python

Collecting data

Before you can classify data you'll first need to collect it. If you want to collect data from the camera or microphone on your system you can use the Edge Impulse CLI, and if you want to collect data from different sensors (like accelerometers or proprietary control systems) you can do so in a few lines of code.

Collecting data from the camera or microphone

To collect data from the camera or microphone, follow the getting started guide for your development board.

Collecting data from other sensors

To collect data from other sensors you'll need to write some code to collect the data from an external sensor, wrap it in the Edge Impulse Data Acquisition format, and upload the data to the Ingestion service. Here's an end-to-end example.
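
As a rough sketch of what that looks like in Python (the API key, HMAC key, device name and sensor values below are placeholders), you sign a sample in the Data Acquisition format and POST it to the ingestion API:

import json, time, hmac, hashlib
import requests

# Placeholders: use your own project's API key and HMAC key
API_KEY = "ei_..."
HMAC_KEY = "..."

# Fake 3-axis accelerometer data: 2 seconds at 100 Hz (interval_ms = 10)
values = [[0.1, 0.2, 9.8] for _ in range(200)]

data = {
    "protected": {"ver": "v1", "alg": "HS256", "iat": int(time.time())},
    "signature": "0" * 64,  # placeholder, filled in below
    "payload": {
        "device_name": "aa:bb:cc:dd:ee:ff",
        "device_type": "CUSTOM_SENSOR",
        "interval_ms": 10,
        "sensors": [
            {"name": "accX", "units": "m/s2"},
            {"name": "accY", "units": "m/s2"},
            {"name": "accZ", "units": "m/s2"},
        ],
        "values": values,
    },
}

# Sign the message with the HMAC key so the ingestion service can verify it
encoded = json.dumps(data)
data["signature"] = hmac.new(HMAC_KEY.encode(), encoded.encode(), hashlib.sha256).hexdigest()

res = requests.post(
    "https://ingestion.edgeimpulse.com/api/training/data",
    data=json.dumps(data),
    headers={
        "Content-Type": "application/json",
        "x-file-name": "custom-sensor-sample",
        "x-label": "idle",
        "x-api-key": API_KEY,
    },
)
print(res.status_code, res.text)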

Classifying data

To classify data (whether this is from the camera, the microphone, or a custom sensor) you'll need a model file. This model file contains all signal processing code, classical ML algorithms and neural networks - and typically contains hardware optimizations to run as fast as possible. To grab a model file:

  1. Train your model in Edge Impulse.

  2. Install the Edge Impulse for Linux CLI.

  3. Download the model file via:

    $ edge-impulse-linux-runner --download modelfile.eim

    This downloads the file into modelfile.eim. (Want to switch projects? Add --clean)

Then you can start classifying realtime sensor data. We have examples for:

  • Audio - grabs data from the microphone and classifies it in realtime.

  • Camera - grabs data from a webcam and classifies it in realtime.

  • Custom data - classifies custom sensor data (see the sketch below).
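
For custom sensor data, a minimal classification sketch with the Python SDK looks roughly like this; the model path and feature values are placeholders, and the feature array must match the window size your impulse expects:

from edge_impulse_linux.runner import ImpulseRunner

MODEL_PATH = "modelfile.eim"   # downloaded via edge-impulse-linux-runner --download modelfile.eim
runner = ImpulseRunner(MODEL_PATH)

try:
    model_info = runner.init()                     # loads the model, returns project/model metadata
    print("Loaded", model_info["project"]["name"])

    features = [0.0] * 375                         # placeholder: one full window of raw sensor values
    res = runner.classify(features)
    print(res["result"])                           # e.g. {'classification': {'idle': 0.91, ...}}
finally:
    runner.stop()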

Troubleshooting

[Errno -9986] Internal PortAudio error (macOS)

If you see this error you can re-install portaudio via:

brew uninstall --ignore-dependencies portaudio
brew install portaudio --HEAD

Abort trap (6) (macOS)

This error shows when you want to gain access to the camera or the microphone on macOS from a virtual shell (like the terminal in Visual Studio Code). Try to run the command from the normal Terminal.app.

Labeling Queue (Images)

If you are working on an object detection project, you will most likely see a "Labeling queue" bar on your Data acquisition page. The labeling queue shows you all the data in your dataset that has not yet been labeled.

Can't see the labeling queue? Go to Dashboard, and under 'Project info > Labeling method' select 'Bounding boxes (object detection)'.

In object detection, labeling is the process of adding a bounding box around specific objects in an image so that your machine learning model can learn and infer from it. Edge Impulse Studio has a built-in data annotation tool with AI-assisted labeling to help with your labeling workflows, as we will see.

In the Edge Impulse Studio, labeling your data is as easy as dragging a box around the object, entering a label, and saving, as shown below.

However, as simple as the manual labeling process might look, it can become tedious and time-consuming, especially when dealing with large datasets. To make your life easier, Edge Impulse Studio has a built-in AI-assisted labeling feature to automatically assist you in your labeling workflows.

AI Assisted labelling

There are three ways to perform AI-assisted labeling in the Edge Impulse Studio:

  • Using YOLOv5

  • Using your own model

  • Using object tracking

Using YOLOv5

By utilizing an existing library of pre-trained object detection models from YOLOv5 (trained with the COCO dataset), common objects in your images can quickly be identified and labeled in seconds without needing to write any code!
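
As an illustration of what such a COCO-trained YOLOv5 model can suggest (this is not how the Studio runs it internally, just a sketch you could run yourself; the image file name is a placeholder):

import torch

# Small YOLOv5 model pre-trained on the COCO dataset (weights download on first run)
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

# 'street.jpg' is a hypothetical image file on disk
results = model("street.jpg")
results.print()           # class names, confidences and counts
print(results.xyxy[0])    # boxes as [x1, y1, x2, y2, confidence, class]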

To label your objects with YOLOv5 classification, click the Label suggestions dropdown and select “Classify using YOLOv5.” If your object is more specific than what is auto-labeled by YOLOv5, e.g. “coffee” instead of the generic “cup” class, you can modify the auto-labels to the left of your image. These modifications will automatically apply to future images in your labeling queue.

Click Save labels to move on to your next raw image, and see your fully labeled dataset ready for training in minutes!

Using your own model

You can also use your own trained model to predict and label your new images. From an existing (trained) Edge Impulse object detection project, upload new unlabeled images from the Data acquisition tab. Then, from the "Labeling queue", click the Label suggestions dropdown and select the option to classify using your current project's model:

You can also upload a few samples to a new object detection project, train a model, then upload more samples to the Data Acquisition tab and use the AI-Assisted Labeling feature for the rest of your dataset. Classifying using your own trained model is especially useful for objects that are not in YOLOv5, such as industrial objects, etc.

Click Save labels to move on to your next raw image, and see your fully labeled dataset ready for training in minutes using your own pre-trained model!

Using Object tracking

If you have objects that are a similar size or common between images, you can also track your objects between frames within the Edge Impulse Labeling Queue, reducing the amount of time needed to re-label and re-draw bounding boxes over your entire dataset.

Draw your bounding boxes and label your images, then, after clicking Save labels, the objects will be tracked from frame to frame:

Now that your object detection project contains a fully labeled dataset, learn how to train and deploy your model to your edge device: check out our object detection tutorial!

We are excited to see what you build with the AI-Assisted Labeling feature in Edge Impulse! Please post your project on our forum or tag us on social media, @EdgeImpulse.

Blues Wireless Swan

Community board

This is a community board by Blues Wireless, and is not maintained by Edge Impulse. For support head to the Blues Wireless homepage.

The Blues Wireless Swan is a development board featuring a 120MHz ARM Cortex-M4 from STMicroelectronics with 2MB of flash and 640KB of RAM. Blues Wireless has created an in-depth tutorial on how to get started using the Swan with Edge Impulse, including how to collect new data from a triple-axis accelerometer and how to train and deploy your Edge Impulse models to the Swan. For more details and ordering information, visit the Blues Wireless Swan product page.

Setting up your development board

To set up your Blues Wireless Swan, follow this complete guide: Using Swan with Edge Impulse.

Next steps: building a machine learning model

The Blues Wireless Swan tutorial will guide you through creating a simple classification model with an accelerometer, designed to analyze movement over a brief period of time (2 seconds) and infer which of the following four states the motion corresponds to:

  1. Idle (no motion)

  2. Circle

  3. Slash

  4. An up-and-down motion in the shape of the letter "W"

For more insight into using a triple-axis accelerometer to build an embedded machine learning model, visit the Edge Impulse continuous motion recognition tutorial.

Looking to connect different sensors? The Data forwarder lets you easily send data from any sensor into Edge Impulse.

Deploying back to device

With the impulse designed, trained and verified you can deploy this model back to your Blues Wireless Swan. This makes the model run without an internet connection, minimizes latency, and runs with minimum power consumption. Edge Impulse can package the complete impulse - including the signal processing code, neural network weights, and classification code - into a single library that you can run on your development board. See the end of Blues Wireless' Using Swan with Edge Impulse tutorial (https://dev.blues.io/get-started/swan/using-swan-with-edge-impulse) for more information on deploying your model onto the device.

Live Classification

Live classification lets you validate your model with data captured directly from any device or supported development board. This gives you a picture of how your model will perform with real-world data. To use it, go to Live classification and connect the device or development board you want to capture data from.

Using a fully supported development board

All of your connected devices and sensors will appear under Devices as shown below. The devices can be connected through the Edge Impulse CLI or WebUSB:

Using your mobile phone

To perform live classification using your phone, go to Devices and click Connect a new device then select "Use your mobile phone". Scan the QR code using your phone then click Switch to classification mode and start sampling.

Using your computer

To perform live classification using your computer, go to Devices and click Connect a new device then select "Use your computer". Give permissions on your computer then click Switch to classification mode and start sampling.

Impulse runner

The impulse runner shows the results of your impulse running on your development board. This only applies to ready-to-go binaries built from the studio.

You start the impulse via:

edge-impulse-run-impulse

This will sample data from your real sensors, classify the data, then print the results. E.g.:

$ edge-impulse-run-impulse
Edge Impulse impulse runner v1.7.3
[SER] Connecting to /dev/tty.usbmodem401103
[SER] Serial is connected, trying to read config...
[SER] Retrieved configuration
[SER] Device is running AT command version 1.3.0
[SER] Started inferencing...
Inferencing settings:
        Interval: 16.00 ms.
        Frame size: 375
        Sample length: 2000 ms.
        No. of classes: 4
Starting inferencing, press 'b' to break
Sampling... Storing in file name: /fs/device-classification.4
Predictions (DSP: 16 ms., Classification: 1 ms., Anomaly: 2 ms.):
    idle: 0.91016
    snake: 0.08203
    updown: 0.00391
    wave: 0.00391
    anomaly score: -0.067
Finished inferencing, raw data is stored in '/fs/device-classification.4'. Use AT+UPLOADFILE to send back to Edge Impulse.

Other options

  • --debug - run the impulse in debug mode, this will print the intermediate DSP results. For image models, a live feed of the camera and inference results will also be locally hosted and available in your browser.

  • --continuous - run the impulse in continuous mode (not available on all platforms).

Seeed Wio Terminal

Community board

This is a community board by Seeed Studios, and it's not maintained by Edge Impulse. For support head to the Seeed Forum.

The Seeed Wio Terminal is a development board from Seeed Studios with a Cortex-M4 microcontroller, motion sensors, an LCD display, and Grove connectors to easily connect external sensors. Seeed Studio has added support for this development board to Edge Impulse, so you can sample raw data and build machine learning models from the studio. The board is available for 29 USD directly from Seeed.

Setting up your development board

To set up your Seeed Wio Terminal, follow this guide: Getting started with Edge Impulse - Seeed Wiki.

Next steps: building a machine learning model

With everything set up you can now build your first machine learning model with this full end-to-end course from Seeed's EDU team: TinyML with Wio Terminal Course.

Looking to connect different sensors? The Data forwarder lets you easily send data from any sensor into Edge Impulse.

Deploying back to device

With the impulse designed, trained and verified you can deploy this model back to your Wio Terminal. This makes the model run without an internet connection, minimizes latency, and runs with minimum power consumption. Edge Impulse can package the complete impulse - including the signal processing code, neural network weights, and classification code - into a single library that you can run on your development board.

The easiest way to deploy your impulse to the Seeed Wio Terminal is via an Arduino library. See Running your impulse locally on your Arduino for more information.


Overview

There is a list of development boards that are fully supported by Edge Impulse. These boards come with a special firmware which enables data collection from all their sensors, allows you to build new ready-to-go binaries that include your trained impulse, and come with examples on integrating your impulse with your custom firmware. These boards are the perfect way to start building Machine Learning solutions on real embedded hardware.

Officially supported MCU targets

  • Arduino Nano 33 BLE Sense

  • Arduino Nicla Sense ME

  • Arduino Nicla Vision

  • Arduino Portenta H7 + Vision Shield

  • Espressif ESP32

  • Himax WE-I Plus

  • Nordic Semi nRF52840 DK

  • Nordic Semi nRF5340 DK

  • Nordic Semi nRF9160 DK

  • Nordic Semi Thingy:91

  • Open MV Cam H7 Plus

  • Silicon Labs xG24 Dev Kit

  • Silicon Labs Thunderboard Sense 2

  • Sony's Spresense

  • ST B-L475E-IOT01A

  • Syntiant Tiny ML Board

  • TI CC1352P Launchpad

  • Raspberry Pi RP2040

Officially supported CPU/GPU targets

  • Intel Based Macs

  • Linux x86_64

  • NVIDIA Jetson Nano

  • Raspberry Pi 4

Community boards

  • Seeed Wio Terminal

  • Arducam Pico4ML TinyML Dev Kit

  • Blues Wireless Swan

Different development board? No problem, you can always collect data using the Data forwarder or the Edge Impulse for Linux SDK, and deploy your model back to the device with the Running your impulse locally tutorials. Also, if you feel like porting your board, use this Porting guide.

Just want to experience Edge Impulse? You can also use your Mobile phone!

Linux Go SDK

This library lets you run machine learning models and collect sensor data on Linux machines using Go. The SDK is open source and hosted on GitHub: edgeimpulse/linux-sdk-go.

Installation guide

  1. Install Go 1.15 or higher.

  2. Clone this repository:

    $ git clone https://github.com/edgeimpulse/linux-sdk-go
  3. Find the example that you want to build and run go build:

    $ cd cmd/eimclassify
    $ go build
  4. Run the example:

    $ ./eimclassify

    And follow instructions.

  5. This SDK is also published to pkg.go.dev, so you can pull the package from there too.

Collecting data

Before you can classify data you'll first need to collect it. If you want to collect data from the camera or microphone on your system you can use the Edge Impulse CLI, and if you want to collect data from different sensors (like accelerometers or proprietary control systems) you can do so in a few lines of code.

Collecting data from the camera or microphone

To collect data from the camera or microphone, follow the getting started guide for your development board.

Collecting data from other sensors

To collect data from other sensors you'll need to write some code to collect the data from an external sensor, wrap it in the Edge Impulse Data Acquisition format, and upload the data to the Ingestion service. Here's an end-to-end example.

Classifying data

To classify data (whether this is from the camera, the microphone, or a custom sensor) you'll need a model file. This model file contains all signal processing code, classical ML algorithms and neural networks - and typically contains hardware optimizations to run as fast as possible. To grab a model file:

  1. Train your model in Edge Impulse.

  2. Install the Edge Impulse for Linux CLI.

  3. Download the model file via:

    $ edge-impulse-linux-runner --download modelfile.eim

    This downloads the file into modelfile.eim. (Want to switch projects? Add --clean)

Then you can start classifying realtime sensor data. We have examples for:

  • Audio - grabs data from the microphone and classifies it in realtime.

  • Camera - grabs data from a webcam and classifies it in realtime.

  • Custom data - classifies custom sensor data.

Syntiant Tiny ML Board

The Syntiant TinyML Board is a tiny development board with a microphone and accelerometer, a USB host microcontroller, and an always-on Neural Decision Processor™ featuring ultra-low-power consumption and a fully connected neural network architecture - and it's fully supported by Edge Impulse. You'll be able to sample raw data, build models, and deploy trained embedded machine learning models directly from the Edge Impulse Studio to create the next generation of low-power, high-performance audio interfaces.

The Edge Impulse firmware for this development board is open source and hosted on GitHub.

Syntiant TinyML Board

IMU data acquisition - SD Card

An SD Card is required to use IMU data acquisition as the internal RAM of the MCU is too small. You don't need the SD Card for inferencing only or for audio projects.

Installing dependencies

To set this device up in Edge Impulse, you will need to install the following software:

  • Arduino CLI

  • Edge Impulse CLI

Connecting to Edge Impulse

1. Download the firmware

Select one of the 2 firmwares below for audio or IMU projects:

  • Audio firmware

  • IMU firmware

Insert SD Card if you need IMU data acquisition and connect the USB cable to your computer. Double-click on the script for your OS. The script will flash the Arduino firmware and a default model on the NDP101 chip.

Flashing issues

0x000000: read 0x04 != expected 0x01

Some flashing issues can occur on the Serial Flash. In this case, open a Serial Terminal on the TinyML board and send the command: :F. This will erase the Serial Flash and should fix the flashing issue.

2. Connect the development board to your computer

Connect the Syntiant TinyML Board directly to your computer's USB port. Linux, Mac OS, and Windows 10 platforms are supported.

3. Setup the Syntiant TinyML Board to collect data

Audio - USB microphone (macOS/Linux only)

Check that the Syntiant TinyML enumerates as "TinyML" or "Arduino MKRZero". For example, in Mac OS you'll find it under System Preferences/Sound:

Syntiant TinyML Board Enumerated as Arduino MKRZero

Audio acquisition - Windows OS

Using the Syntiant TinyML board as an external microphone for data collection doesn't currently work on Windows OS.

IMU

From a command prompt or terminal, run:

edge-impulse-daemon

This will start a wizard which will ask you to log in and choose an Edge Impulse project. If you want to switch projects run the command with --clean.

Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI. See this blog post for more information.

That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.

Device connected to Edge Impulse

Next steps: building a machine learning model

With everything set up you can now build your first machine learning model and evaluate it using the Syntiant TinyML Board with this tutorial:

  • Responding to your voice - Syntiant (RC Commands)

  • Motion recognition - Syntiant

FAQ

  • How to use Arduino-CLI with macOS M1 chip? You will need to install Rosetta2 to run the Arduino-CLI. See details on Apple website.

  • Board is detected as MKRZero and not TinyML: when compiling using the Arduino IDE, the board name will change from TinyML to MKRZero as it automatically retrieves the name from the board type. This doesn't affect the execution of the firmware.

  • How to label my classes? The NDP101 chip expects one and only one negative class, and it should be the last in the list. For instance, if your original dataset looks like: yes, no, unknown, noise and you only want to detect the keywords 'yes' and 'no', merge the 'unknown' and 'noise' labels into a single class such as z_openset (we prefix it with 'z' in order to get this class last in the list).

Himax WE-I Plus

The Himax WE-I Plus is a tiny development board with a camera, a microphone, an accelerometer and a very fast DSP - and it's fully supported by Edge Impulse. You'll be able to sample raw data, build models, and deploy trained machine learning models directly from the studio. It's available for around 65 USD from Sparkfun.

The Edge Impulse firmware for this development board is open source and hosted on GitHub: edgeimpulse/firmware-himax-we-i-plus.

Himax WE-I Plus

Installing dependencies

To set this device up in Edge Impulse, you will need to install the following software:

  1. Edge Impulse CLI.

  2. On Linux:

    • GNU Screen: install for example via sudo apt install screen.

Problems installing the CLI?

See the Installation and troubleshooting guide.

Connecting to Edge Impulse

With all the software in place it's time to connect the development board to Edge Impulse.

1. Connect the development board to your computer

Use a micro-USB cable to connect the development board to your computer.

2. Update the firmware

The development board does not come with the right firmware yet. To update the firmware:

  1. Download the latest Edge Impulse firmware, and unzip the file.

  2. Open the flash script for your operating system (flash_windows.bat, flash_mac.command or flash_linux.sh) to flash the firmware.

  3. Wait until flashing is complete, and press the RESET button once to launch the new firmware.

3. Setting keys

From a command prompt or terminal, run:

edge-impulse-daemon

This will start a wizard which will ask you to log in, and choose an Edge Impulse project. If you want to switch projects run the command with --clean.

Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI. See this blog post for more information.

4. Verifying that the device is connected

That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.

Device connected to Edge Impulse.

Next steps: building a machine learning model

With everything set up you can now build your first machine learning model with these tutorials:

  • Responding to your voice

  • Recognize sounds from audio

  • Adding sight to your sensors

  • Object detection

  • Counting objects using FOMO

Looking to connect different sensors? The Data forwarder lets you easily send data from any sensor into Edge Impulse.

Troubleshooting

All licenses are in use by other developers.

If you export to the Himax WE-I Plus you could receive the error: "All licenses are in use by other developers.". Unfortunately we have a limited number of licenses for the MetaWare compiler and these are shared between all Studio users. Try again in a little bit, or export your project as a C++ Library, add it to the edgeimpulse/firmware-himax-we-i-plus project and compile locally.

COM port not detected

If no device shows up in your OS (ie: COMxx, /dev/tty.usbxx) after connecting the board and your USB cable supports data transfer, you may need to install FTDI VCP driver.

On your Mbed-enabled development board

Impulses can be deployed as a C++ library. This packages all your signal processing blocks, configuration and learning blocks up into a single package. You can include this package in your own application to run the impulse locally. In this tutorial you'll export an impulse, and build an Mbed OS application to classify sensor data.

Knowledge required

This tutorial assumes that you're familiar with Mbed OS, and have installed Mbed CLI. If you're unfamiliar with these tools you can build binaries directly for your development board from the Deployment page in the studio.

Note: Are you looking for an example that has all sensors included? The Edge Impulse firmware for the ST IoT Discovery Kit has that. See edgeimpulse/firmware-st-b-l475e-iot01a.

Prerequisites

Make sure you followed the Continuous motion recognition tutorial, and have a trained impulse. Also install the following software:

  • Mbed CLI - make sure mbed is in your PATH.

  • GNU ARM Embedded Toolchain 9 - make sure arm-none-eabi-gcc is in your PATH.

Cloning the base repository

We created an example repository which contains a small Mbed OS application, which takes the raw features as an argument, and prints out the final classification. Import this repository using Mbed CLI:

$ mbed import https://github.com/edgeimpulse/example-standalone-inferencing-mbed

Deploying your impulse

Head over to your Edge Impulse project, and go to Deployment. From here you can create the full library which contains the impulse and all external required libraries. Select C++ library and click Build to create the library.

Download the .zip file and place the contents in the 'example-standalone-inferencing-mbed' folder (which you downloaded above). Your final folder structure should look like this:

example-standalone-inferencing-mbed
|_ Makefile
|_ README.md
|_ build.sh
|_ edge-impulse-sdk
|_ model-parameters
|_ source
|_ tflite-model

Running the impulse

With the project ready it's time to verify that the application works. Head back to the studio and click on Live classification. Then load a validation sample, and click on a row under 'Detailed result'.

Selecting the row with timestamp '320' under 'Detailed result'.

To verify that the local application classifies the same, we need the raw features for this timestamp. To do so click on the 'Copy to clipboard' button next to 'Raw features'. This will copy the raw values from this validation file, before any signal processing or inferencing happened.

Copying the raw features.

Open main.cpp and paste the raw features inside the static const float features[] definition, for example:

static const float features[] = {
    -19.8800, -0.6900, 8.2300, -17.6600, -1.1300, 5.9700, ...
};

Then build and flash the application to your development board with Mbed CLI:

$ mbed compile -t GCC_ARM -m auto -f

Seeing the output

To see the output of the impulse, connect to the development board over a serial port on baud rate 115,200 and reset the board (e.g. by pressing the black button on the ST B-L475E-IOT01A). You can do this with your favourite serial monitor or with the Edge Impulse CLI:

$ edge-impulse-run-impulse --raw

This will run the signal processing pipeline, and then classify the output:

Edge Impulse standalone inferencing (Mbed)
Running neural network...
Predictions (time: 0 ms.):
idle:   0.015319
snake:  0.000444
updown: 0.006182
wave:   0.978056
Anomaly score (time: 0 ms.): 0.133557
run_classifier_returned: 0
[0.01532, 0.00044, 0.00618, 0.97806, 0.134]

Which matches the values we just saw in the studio. You now have your impulse running on your Mbed-enabled development board!

Connecting sensors?

A demonstration on how to plug sensor values into the classifier can be found here: Data forwarder - classifying data (Mbed OS).

MobileNetV2 SSD FPN

It's very hard to build a good working computer vision model from scratch, as you need a wide variety of input data to make the model generalize well, and training such models can take days on a GPU. To make this easier and faster we are using transfer learning. This lets you piggyback on a well-trained model, only retraining the upper layers of a neural network, leading to much more reliable models that train in a fraction of the time and work with substantially smaller datasets.

Tutorial

Want to see the MobileNetV2 SSD FPN-Lite models in action? Check out our Detect objects with bounding boxes tutorial.

How to get started?

To build your first object detection models using MobileNetV2 SSD FPN-Lite:

  1. Create a new project in Edge Impulse.

  2. Make sure to set your labelling method to 'Bounding boxes (object detection)'.

  3. Collect and prepare your dataset as in Object detection

  4. Resize your images to fit 320x320 px

  5. Add an 'Object Detection (Images)' block to your impulse.

  6. Under Images, choose RGB.

  7. Under Object detection, select 'Choose a different model' and select 'MobileNetV2 SSD FPN-Lite 320x320'

  8. You can start your training with a learning rate of '0.15'

Select model
  9. Click on 'Start training'

Object Detection view

MobileNetV2 SSD FPN-Lite 320x320 is available with Edge Impulse for Linux

How does this 🪄 work?

Here, we are using the MobileNetV2 SSD FPN-Lite 320x320 pre-trained model. The model has been trained on the COCO 2017 dataset with images scaled to 320x320 resolution.

In the MobileNetV2 SSD FPN-Lite, we have a base network (MobileNetV2), a detection network (Single Shot Detector or SSD) and a feature extractor (FPN-Lite).

Base network:

MobileNet, like VGG-Net, LeNet, AlexNet, and others, is based on convolutional neural networks. The base network provides high-level features for classification or detection. If you add a fully connected layer and a softmax layer at the end of one of these networks, you have a classifier.

Example of a network composed of many convolutional layers. Filters are applied to each training image at different resolutions, and the output of each convolved image is used as input to the next layer (source Mathworks)

But you can remove the fully connected and softmax layers and replace them with a detection network, like SSD, Faster R-CNN, or others, to perform object detection.
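
As a small illustration (a Keras sketch, not the exact graph Edge Impulse trains), you can load MobileNetV2 without its classification top and use it either as a feature extractor for a detection head or turn it back into a classifier; the input size and class count below are placeholders:

import tensorflow as tf

# Base network only: MobileNetV2 without its classification "top".
# A detection head such as SSD would be attached to feature maps like these.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")

# Adding pooling + a fully connected softmax layer back turns it into a classifier.
classifier = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(3, activation="softmax"),  # e.g. 3 classes
])
classifier.summary()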

Detection network:

The most common detection networks are SSD (Single Shot Detection) and RPN (Regional Proposal Network).

When using SSD, we only need to take one single shot to detect multiple objects within the image. On the other hand, regional proposal networks (RPN) based approaches, such as R-CNN series, need two shots, one for generating region proposals, one for detecting the object of each proposal.

As a consequence, SSD is much faster than RPN-based approaches, but often trades accuracy for real-time processing speed. SSD-based models also tend to have issues detecting objects that are too close together or too small.

Feature Pyramid Network:

Detecting objects at different scales is challenging, in particular for small objects. A Feature Pyramid Network (FPN) is a feature extractor designed around the feature pyramid concept to improve both accuracy and speed.

Regression (Keras)

Solving regression problems is one of the most common applications for machine learning models, especially in supervised machine learning. Models are trained to understand the relationship between independent variables and an outcome or dependent variable. The model can then be leveraged to predict the outcome of new and unseen input data, or to fill a gap in missing data.

Prerequisites

Labelling

To build a regression model you collect data as usual, but rather than setting the label to a text value, you set it to a numeric value.

Regression data samples labelled with numerical values

Processing blocks

You can use any of the built-in signal processing blocks to pre-process your vibration, audio or image data, or use custom processing blocks to extract novel features from other types of sensor data.

An impulse with a regression block

Train your regression block

You have full freedom in modifying your neural network architecture - whether visually or through writing Keras code.

Regression view
  • Number of training cycles: Each time the training algorithm makes one complete pass through all of the training data with back-propagation and updates the model's parameters as it goes, it is known as an epoch or training cycle.

  • Learning rate: The learning rate controls how much the model's internal parameters are updated during each step of the training process. You can also think of it as how fast the neural network will learn. If the network overfits quickly, you can reduce the learning rate.

  • Validation set size: The percentage of your training set held apart for validation; a good default is 20%.

  • Auto-balance dataset: Mixes in more copies of data from classes that are uncommon. This might help make the model more robust against overfitting if you have little data for some classes (see the sketch after this list for how these parameters map onto code).
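
For reference, here is a minimal Keras sketch showing how those parameters map onto code; the layer sizes, feature count and data are placeholders, not the exact architecture the Studio generates:

import numpy as np
import tensorflow as tf

# Placeholder data: 100 samples with 33 DSP features each and a numeric label
X = np.random.rand(100, 33).astype(np.float32)
y = np.random.rand(100).astype(np.float32)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(20, activation="relu", input_shape=(33,)),
    tf.keras.layers.Dense(10, activation="relu"),
    tf.keras.layers.Dense(1),  # single linear output = the predicted value
])

# Learning rate -> optimizer, training cycles -> epochs, validation set size -> validation_split
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.0005),
              loss="mean_squared_error")
model.fit(X, y, epochs=30, validation_split=0.2, verbose=2)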

Test your regression model

If you want to see the accuracy of your model across your test dataset, go to Model testing. You can adjust the Maximum error percentage by clicking on "⋮" button.

Testing regression model

Additional resources

  • Predict the Future with Regression Models

  • Estimate Weight From a Photo Using Visual Regression in Edge Impulse

Arducam Pico4ML TinyML Dev Kit

Community board

This is a community board by Arducam, and it's not maintained by Edge Impulse. For support head to the Arducam support page.

The Arducam Pico4ML TinyML Dev Kit is a development board from Arducam with a RP2040 microcontroller, QVGA camera, Bluetooth module (depending on your version), LCD screen, onboard microphone, accelerometer, gyroscope, and compass. Arducam has created in-depth tutorials on how to get started using the Pico4ML Dev Kit with Edge Impulse, including how to collect new data and how to train and deploy your Edge Impulse models to the Pico4ML. The Arducam Pico4ML TinyML Dev Kit has two versions: the version with BLE is available for 55 USD and the version without BLE is available for 50 USD.

Arducam Pico4ML TinyML Dev Kit: RP2040 Board w/ QVGA Camera, Bluetooth module (with or without), LCD Screen, Onboard Audio, & IMU

Setting up your development board

To set up your Arducam Pico4ML TinyML Dev Kit, follow this guide: Arducam: How to use Edge Impulse to train machine learning models for Raspberry Pico.

Next steps: building a machine learning model

With everything set up you can now build your first machine learning model with the Edge Impulse continuous motion recognition tutorial.

Or you can follow Arducam's tutorial on How to build a Magic Wand with Edge Impulse for Arducam Pico4ML-BLE.

Looking to connect different sensors? The Data forwarder lets you easily send data from any sensor into Edge Impulse.

Deploying back to device

With the impulse designed, trained and verified you can deploy this model back to your Arducam Pico4ML TinyML Dev Kit. This makes the model run without an internet connection, minimizes latency, and runs with minimum power consumption. Edge Impulse can package the complete impulse - including the signal processing code, neural network weights, and classification code - into a single library that you can run on your development board. See the end of Arducam's How to use Edge Impulse to train machine learning models for Raspberry Pico tutorial for more information on deploying your model onto the device.

Anomaly Detection (K-means)

Neural networks are great, but they have one big flaw. They're terrible at dealing with data they have never seen before (like a new gesture). Neural networks cannot judge this, as they are only aware of the training data. If you give them something unlike anything they have seen before, they'll still classify it as one of the known classes.

Tutorial

Want to see the Anomaly Detection in action? Check out our Continuous Motion Recognition tutorial.

K-means clustering

This method looks at the data points in a dataset and groups those that are similar into a predefined number K of clusters. A threshold value can be added to detect anomalies: if the distance between a data point and its nearest centroid is greater than the threshold value, then it is an anomaly.

The main difficulty resides in choosing K, since data in a time series is always changing and different values of K might be ideal at different times. Moreover, in more complex scenarios where there are both local and global outliers, many outliers might pass under the radar and be assigned to a cluster.

Features importance (optional)

In most of your DSP blocks, you have an option to calculate the feature importance. Edge Impulse Studio will then output a Feature Importance graphic that will help you determine which axes and values generated from your DSP block are most significant to analyze when you want to do anomaly detection.

features importance

This process of generating features and determining the most important features of your data will further reduce the amount of signal analysis needed on the device with new and unseen data.

Setting up the anomaly detection block

In your anomaly detection block, you can click on the Select suggested axes button to harness the value of the feature importance output.

Anomaly detection view

Here is the process in the background:

  • Create X number of clusters and group all the data.

  • For each of these clusters we store the center and the size of the cluster.

  • During inference we calculate the closest cluster for a new data point, and show the distance from the edge of that cluster. If the data point is within a cluster (no anomaly), you thus get a value below 0. A minimal code sketch of this process follows below.
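
Here is that logic sketched with scikit-learn; the cluster count, the "size" definition and the data are assumptions for illustration, and the Studio's implementation may differ:

import numpy as np
from sklearn.cluster import KMeans

# Placeholder features (e.g. the most important axes selected from your DSP block)
rng = np.random.default_rng(0)
train = rng.normal(size=(500, 2))

# 1. Create a number of clusters and group all the training data
kmeans = KMeans(n_clusters=32, n_init=10, random_state=0).fit(train)

# 2. For each cluster, store its center and a "size" (here: mean member distance)
centers = kmeans.cluster_centers_
labels = kmeans.labels_
dists = np.linalg.norm(train - centers[labels], axis=1)
sizes = np.array([dists[labels == k].mean() for k in range(len(centers))])

def anomaly_score(x):
    # 3. Distance from the edge of the closest cluster:
    #    below 0 inside the cluster (no anomaly), above 0 outside (anomaly)
    d = np.linalg.norm(centers - x, axis=1)
    k = int(np.argmin(d))
    return d[k] - sizes[k]

print(anomaly_score(np.array([0.1, -0.2])))  # likely negative: inside a cluster
print(anomaly_score(np.array([8.0, 8.0])))   # clearly positive: anomaly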

Anomaly explorer

In the above picture, known clusters are in blue and newly classified data is in orange. It's clearly outside of any known cluster and can thus be tagged as an anomaly.

Additional resources

  • Tutorial: Continuous Motion Recognition

  • Blog post: Advanced Anomaly Detection with Feature Importance

Nordic Semi nRF5340 DK

The Nordic Semiconductor nRF5340 DK is a development board with dual Cortex-M33 microcontrollers, QSPI flash, and an integrated BLE radio - and it's fully supported by Edge Impulse. You'll be able to sample raw data, build models, and deploy trained machine learning models directly from the studio. As the nRF5340 DK does not have any built-in sensors we recommend you to pair this development board with the X-NUCLEO-IKS02A1 shield (with a MEMS accelerometer and a MEMS microphone). The nRF5340 DK is available for around 50 USD from a variety of distributors.

If you don't have the X-NUCLEO-IKS02A1 shield you can use the Data forwarder to capture data from any other sensor, and then follow the Running your impulse locally tutorial to run your impulse. Or, you can modify the example firmware (based on nRF Connect) to interact with other accelerometers or PDM microphones that are supported by Zephyr.

The Edge Impulse firmware for this development board is open source and hosted on GitHub.

Installing dependencies

To set this device up in Edge Impulse, you will need to install the following software:

  1. Edge Impulse CLI.

  2. On Linux:

    • GNU Screen: install for example via sudo apt install screen.

Problems installing the CLI?

See the Installation and troubleshooting guide.

Connecting to Edge Impulse

With all the software in place it's time to connect the development board to Edge Impulse.

1. Plugging in the X-NUCLEO-IKS02A1 MEMS expansion shield

Remove the pin header protectors on the nRF5340 DK and plug the X-NUCLEO-IKS02A1 shield into the development board.

Note: Make sure that the shield does not touch any of the pins in the middle of the development board. This might cause issues when flashing the board or running applications.

2. Connect the development board to your computer

Use a micro-USB cable to connect the development board to your computer. There are two USB ports on the development board, use the one on the short side of the board. Then, set the power switch to 'on'.

3. Update the firmware

The development board does not come with the right firmware yet. To update the firmware:

  1. The development board is mounted as a USB mass-storage device (like a USB flash drive), with the name JLINK. Make sure you can see this drive.

  2. Download the latest Edge Impulse firmware for this development board, and unzip the file.

  3. Drag the nrf5340-dk.bin file to the JLINK drive.

  4. Wait 20 seconds and press the BOOT/RESET button.

4. Setting keys

From a command prompt or terminal, run:

edge-impulse-daemon

This starts a wizard which asks you to log in and choose an Edge Impulse project. If you want to switch projects run the command with --clean.

The nRF5340 DK exposes multiple UARTs. If prompted, choose the bottom one:

Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI. See this blog post for more information.

5. Verifying that the device is connected

That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.

Next steps: building a machine learning model

With everything set up you can now build your first machine learning model with these tutorials:

  • Continuous motion recognition.

  • Responding to your voice.

  • Recognize sounds from audio.

Looking to connect different sensors? The Data forwarder lets you easily send data from any sensor into Edge Impulse.

Troubleshooting

Failed to flash

If your board fails to flash new firmware (a FAIL.txt file might appear on the JLINK drive) you can also flash using nrfjprog.

  1. Install the nRF Command Line Tools.

  2. Flash new firmware via:

Dashboard

After creating your Edge Impulse Studio project, you will be directed to the project's dashboard. The dashboard gives a quick overview of your project, such as your project ID, the number of devices connected, the amount of data collected, and the preferred labeling method, among other editable properties. You can also enable additional capabilities for your project, such as collaboration, making your project public, and showcasing your public projects using Markdown READMEs, as we will see.

The figure below shows the various sections and widgets of the dashboard that we will cover here.

1. Showcasing your public projects with Markdown READMEs

The project README enables you to explain the details of your project in a short way. Using this feature, you can add visualizations such as images, GIFs, code snippets, and text to your project in order to bring your colleagues and project viewers up to speed with the important details of your project. In your README you might want to add things like:

  • What the project does

  • Why the project is useful

  • Motivations of the project

  • How to get started with the project

  • What sensors and target deployment devices you used

  • How you plan to improve your project

  • Where users can get help with your project

To create your first README, navigate to the "about this project" widget and click "add README"

For more README inspiration, check out the public Edge Impulse project tutorials below:

  • .

  • .

  • .

  • .

2. Making your project public

To share your private project with the world, go to your project's dashboard and click Make this project public.

By doing this, all of your data, block configurations, intermediate results, and final models will be shared with the world. Your project will be publicly accessible and can be cloned with a single click with the provided URL:

3. Collaboration

You can invite up to three collaborators to join and contribute to your project. To have unlimited collaborators, your project needs to be part of an enterprise organization.

To add a collaborator, go to your project's dashboard and find the "Collaborators" widget. Click the '+' icon and type the username or e-mail address of the other user. The user will be invited to create an Edge Impulse account if it doesn't exist.

The user will be automatically added to the project and will get an email notification inviting them to start contributing to your project. To remove a user, simply click on the three dots beside the user, then click 'Delete', and they will be automatically removed.

4. Project info

The project info widget shows the project's specifications such as the project ID, labeling method, and latency calculations for your target device.

  • The project ID is a unique numerical value that identifies your project. Whenever you have any issue with your project in the Studio, you can always share your project ID on the forum for assistance from Edge Impulse staff.

  • On the labeling method dropdown, you need to specify the type of labeling your dataset and model expect. This can be either one label per data item or bounding boxes. Bounding boxes only work for object detection tasks in the studio. Note that if you interchange the labeling methods, learning blocks will appear to be hidden when building your impulse.

  • One of the amazing Edge Impulse superpowers is the latency calculation component. This is an approximate time in milliseconds that the trained model and DSP operations will take during inference on the selected target device. This hardware-in-the-loop approach ensures that the target deployment device's compute resources are neither under-utilized nor over-utilized. It also saves developers the time spent on numerous inference iterations back and forth with the Studio in search of optimum models.

5. Block Outputs

In the Block Output section, you can download the results of the DSP and ML operations of your impulse.

The downloadable assets include the extracted features, the TensorFlow SavedModel, and both quantized and unquantized TensorFlow Lite models. This is particularly helpful when you want to perform other operations on the output blocks outside the Edge Impulse Studio. For example, if you need a TensorFlow.js model, you just need to download the TensorFlow SavedModel from the dashboard and convert it to the TensorFlow.js model format to be served in a browser.

6. Performance Settings

Changing Performance Settings is only available for enterprise customers

Organizational features are only available for enterprise customers. View our pricing page for more information.

This section consists of editable parameters that directly affect the performance of the studio when building your impulse. Depending on the selected or available settings, your jobs can either be fast or slow.

The use of GPU for training and Parallel DSP jobs is currently an internal experimental feature that will be soon released.

7. Administrative Zone

To bring even more flexibility to projects, the administrative zone gives developers the power to enable additional features that are not found in Edge Impulse projects by default. Most of these are advanced features intended for organizations, or sometimes experimental features.

To activate these features you just need to check the boxes against the specific features you want to use and click save experiments.

8. Danger Zone

The danger zone widget consists of irrevocable actions that let you:

  • Delete your project. This action removes all devices, data, and impulses from your project.

  • Delete all data in this project.

  • Perform train/test split. This action re-balances your dataset by automatically splitting all your data between the training and testing sets, and resets the categories for all data.

  • Launch the getting started wizard. This will remove all data, and clear out your impulse.

Infineon CY8CKIT-062S2 Pioneer Kit

CY8CKIT-062S2 Pioneer Kit and CY8CKIT-028-SENSE expansion kit required

This guide assumes you have the CY8CKIT-028-SENSE expansion kit attached to a CY8CKIT-062S2 Pioneer Kit.

The Infineon CY8CKIT-062S2 Pioneer Kit enables the evaluation and development of applications using the PSoC 62 Series MCU. This low-cost hardware platform enables the design and debug of the PSoC 62 MCU and the Murata 1LV Module (CYW43012 Wi-Fi + Bluetooth Combo Chip). The PSoC 6 MCU is Infineon's latest, ultra-low-power PSoC, specifically designed for wearables and IoT products. The board features a PSoC 6 MCU and a CYW43012 Wi-Fi/Bluetooth combo module. The Infineon CYW43012 is a 28nm, ultra-low-power device that supports single-stream, dual-band IEEE 802.11n-compliant Wi-Fi MAC/baseband/radio and Bluetooth 5.0 BR/EDR/LE. When paired with the CY8CKIT-028-SENSE expansion kit, the PSoC® 62S2 Wi-Fi® BLUETOOTH® Pioneer Kit can be used to easily interface a variety of sensors with the PSoC™ 6 MCU platform, specifically targeted at audio and machine learning applications, which are fully supported by Edge Impulse! You'll be able to sample raw data as well as build and deploy trained machine learning models to your PSoC® 62S2 Wi-Fi® BLUETOOTH® Pioneer Kit, directly from the Edge Impulse Studio.

The Edge Impulse firmware for this development board is open source and hosted on GitHub.

Installing dependencies

To set this device up with Edge Impulse, you will need to install the following software:

  1. Infineon CyProgrammer. A utility program we will use to flash firmware images onto the target.

  2. The Edge Impulse CLI, which will enable you to connect your CY8CKIT-062S2 Pioneer Kit directly to Edge Impulse Studio, so that you can collect raw data and trigger in-system inferences.

Problems installing the CLI?

See the Installation and troubleshooting guide.

Updating the firmware

Edge Impulse Studio can collect data directly from your CY8CKIT-062S2 Pioneer Kit and also help you trigger in-system inferences to debug your model, but in order to allow Edge Impulse Studio to interact with your CY8CKIT-062S2 Pioneer Kit you first need to flash it with our base firmware image.

1. Download the base firmware image

Download the latest Edge Impulse firmware for this board, and unzip the file to obtain the firmware-infineon-cy8ckit-062s2.hex file, which we will be using in the following steps.

2. Connect the CY8CKIT-062S2 Pioneer Kit to your computer

Use a micro-USB cable to connect the CY8CKIT-062S2 Pioneer Kit to your development computer (where you downloaded and installed Infineon CyProgrammer).

3. Load the base firmware image with Infineon CyProgrammer

You can use Infineon CyProgrammer to flash your CY8CKIT-062S2 Pioneer Kit with our base firmware image. To do this, first select your board from the dropdown list on the top left corner. Make sure to select the item that starts with CY8CKIT-062S2-43012:

Then select the base firmware image file you downloaded in the first step above (i.e., the file named firmware-infineon-cy8ckit-062s2.hex). You can now press the Connect button to connect to the board, and finally the Program button to load the base firmware image onto the CY8CKIT-062S2 Pioneer Kit.

Keep Handy

Infineon CyProgrammer will be needed to upload any other project built on Edge Impulse, but the base firmware image only has to be loaded once.

Connecting to Edge Impulse

With all the software in place, it's time to connect the CY8CKIT-062S2 Pioneer Kit to Edge Impulse.

1. Connect the development board to your computer

Use a micro-USB cable to connect the development board to your computer.

2. Setting keys

From a command prompt or terminal, run:

edge-impulse-daemon

This will start a wizard which will ask you to log in, and choose an Edge Impulse project. If you want to switch projects run the command with --clean.

Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI. See this blog post for more information.

3. Verifying that the device is connected

That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices on the left sidebar. The device will be listed there:

Next steps: Build a machine learning model

With everything set up you can now build your first machine learning model with these tutorials:

  • Continuous motion recognition.

  • Responding to your voice.

  • Recognize sounds from audio.

Looking to connect different sensors? The Data forwarder lets you easily send data from any sensor into Edge Impulse.

NVIDIA Jetson Nano

The Jetson Nano is an embedded Linux dev kit featuring a GPU-accelerated processor (NVIDIA Tegra) targeted at edge AI applications. You can easily add a USB external microphone or camera - and it's fully supported by Edge Impulse. You'll be able to sample raw data, build models, and deploy trained machine learning models directly from the Studio. The Jetson Nano is available from 59 USD from a wide range of distributors.

In addition to the Jetson Nano we recommend that you also add a camera and / or a microphone. Most popular USB webcams work fine on the development board out of the box.

Powering your Jetson

Although powering your Jetson via USB is technically supported, some users report on forums that they have issues using USB power. If you have any issues, such as the board resetting or becoming unresponsive, consider powering via a 5V, 4A power supply on the DC barrel connector, and don't forget to change the power jumper! Suitable power supplies are readily available for sale online.

As an added bonus of powering via the DC barrel jack, you can carry out your first boot without an external monitor or keyboard.

1. Setting up your Jetson Nano

Depending on your hardware, follow NVIDIA's setup instructions for your Jetson Nano model for both "Write Image to SD Card" and "Setup and First Boot." When finished, you should have a bash prompt via the USB serial port, or using an external monitor and keyboard attached to the Jetson. You will also need to connect your Jetson to the internet via the Ethernet port (there is no WiFi on the Jetson). (After setting up the Jetson the first time via keyboard or the USB serial port, you can SSH in.)

2. Installing dependencies

Make sure your ethernet is connected to the Internet

Issue the following command to check:

The result should look similar to this:

Running the setup script

To set this device up in Edge Impulse, run the following commands (from any folder). When prompted, enter the password you created for the user on your Jetson in step 1. The entire script takes a few minutes to run (using a fast microSD card).

3. Connecting to Edge Impulse

With all software set up, connect your camera and microphone to your Jetson (see 'Next steps' further on this page if you want to connect a different sensor), and run:

edge-impulse-linux

This will start a wizard which will ask you to log in, and choose an Edge Impulse project. If you want to switch projects run the command with --clean.

4. Verifying that your device is connected

That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.

Next steps: building a machine learning model

With everything set up you can now build your first machine learning model with these tutorials:

  • Responding to your voice.

  • Recognize sounds from audio.

  • Adding sight to your sensors.

  • Object detection.

  • Counting objects using FOMO.

Looking to connect different sensors? Our Linux SDK lets you easily send data from any sensor and any programming language (with examples in Node.js, Python, Go and C++) into Edge Impulse.

Deploying back to device

To run your impulse locally, just connect to your Jetson again, and run:

edge-impulse-linux-runner

This will automatically compile your model with full hardware acceleration, download the model to your Jetson, and then start classifying. Our Linux SDK has examples on how to integrate the model with your favourite programming language.

Image model?

If you have an image model then you can get a peek of what your device sees by being on the same network as your device, and finding the 'Want to see a feed of the camera and live classification in your browser' message in the console. Open the URL in a browser and both the camera feed and the classification are shown:

Running models on the GPU

Due to some incompatibilities we don't run models on the GPU by default. You can enable this by following the TensorRT instructions in the C++ SDK.

Troubleshooting

edge-impulse-linux reports "[Error: Input buffer contains unsupported image format]"

This is probably caused by a missing dependency on libjpeg. If you run:

vips --vips-config

The end of the output should show support for file import/export with libjpeg, like so:

file import/export with libjpeg: yes (pkg-config)
image pyramid export: no
use libexif to load/save JPEG metadata: no

If you don't see jpeg support as "yes", rerun the setup script and take note of any errors.

edge-impulse-linux reports "Failed to start device monitor!"

If you encounter this error, ensure that your entire home directory is owned by you (especially the .config folder):

sudo chown -R $(whoami) $HOME

Long warm-up time and under-performance

By default, the Jetson Nano enables a number of aggressive power saving features to disable and slow down hardware that is detected to be not in use. Experience indicates that sometimes the GPU cannot power up fast enough, nor stay on long enough, to enjoy best performance. You can run a script to enable maximum performance on your Jetson Nano.

ONLY DO THIS IF YOU ARE POWERING YOUR JETSON NANO FROM A DEDICATED POWER SUPPLY. DO NOT RUN THIS SCRIPT WHILE POWERING YOUR JETSON NANO THROUGH USB.

To enable maximum performance, run:

sudo /usr/bin/jetson_clocks

Nordic Semi nRF9160 DK

The Nordic Semiconductor nRF9160 DK is a development board with an nRF9160 SIP incorporating a Cortex M-33 for your application, a full LTE-M/NB-IoT modem with GPS along with 1 MB of flash and 256 KB RAM. It also includes an nRF52840 board controller with Bluetooth Low Energy connectivity. The Development Kit is fully supported by Edge Impulse. You'll be able to sample raw data, build models, and deploy trained machine learning models directly from the studio. As the nRF9160 DK does not have any built-in sensors we recommend pairing this development board with the X-NUCLEO-IKS02A1 shield (with a MEMS accelerometer and a MEMS microphone). The nRF9160 DK is available for around 150 USD from a variety of distributors including Digikey.

If you don't have the X-NUCLEO-IKS02A1 shield you can use the Data forwarder to capture data from any other sensor, and then follow the Running your impulse locally: On your Zephyr-based Nordic Semiconductor development board tutorial to run your impulse. Or, you can modify the example firmware (based on nRF Connect) to interact with other accelerometers or PDM microphones that are supported by Zephyr.

The Edge Impulse firmware for this development board is open source and hosted on GitHub: edgeimpulse/firmware-nrf-91.

Installing dependencies

To set this device up in Edge Impulse, you will need to install the following software:

  1. Edge Impulse CLI.

  2. On Linux:

    • GNU Screen: install for example via sudo apt install screen.

Problems installing the CLI?

See the Installation and troubleshooting guide.

Connecting to Edge Impulse

With all the software in place it's time to connect the development board to Edge Impulse.

1. Plugging in the X-NUCLEO-IKS02A1 MEMS expansion shield

Remove the pin header protectors on the nRF9160 DK and plug the X-NUCLEO-IKS02A1 shield into the development board.

Note: Make sure that the shield does not touch any of the pins in the middle of the development board. This might cause issues when flashing the board or running applications. You can also remove the shield before flashing the board.

2. Connect the development board to your computer

Use a micro-USB cable to connect the development board to your computer. There are two USB ports on the development board, use the one on the short side of the board. Then, set the power switch to 'on'.

3. Update the firmware

The development board does not come with the right firmware yet. To update the firmware:

  1. The development board is mounted as a USB mass-storage device (like a USB flash drive), with the name JLINK. Make sure you can see this drive.

  2. Install the nRF Command Line Tools.

  3. Download the latest Edge Impulse firmware.

  4. Flash the board controller (you only need to do this once; skip to step 5 if you've performed this step before):

    • Ensure that the PROG/DEBUG switch is in nRF52 position.

    • Copy board-controller.bin to the JLINK mass storage device.

  5. Flash the application:

    • Ensure that the PROG/DEBUG switch is in nRF91 position.

    • Run the flash script for your Operating System.

  6. Wait 20 seconds and press the BOOT/RESET button.

4. Setting keys

From a command prompt or terminal, run:

edge-impulse-daemon

This starts a wizard which asks you to log in and choose an Edge Impulse project. If you want to switch projects run the command with --clean.

The nRF9160 DK exposes multiple UARTs. If prompted, choose the top one:

? Which device do you want to connect to? (Use arrow keys)
❯ /dev/tty.usbmodem0009601707951 (SEGGER)
  /dev/tty.usbmodem0009601707953 (SEGGER)
  /dev/tty.usbmodem0009601707955 (SEGGER)

Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI. See this blog post for more information.

5. Verifying that the device is connected

That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.

Next steps: building a machine learning model

With everything set up you can now build your first machine learning model with these tutorials:

  • Building a continuous motion recognition system.

  • Recognizing sounds from audio.

  • Responding to your voice.

Looking to connect different sensors? The Data forwarder lets you easily send data from any sensor into Edge Impulse.

Frequently asked questions

How can I share my Edge Impulse project?

The enterprise version of Edge Impulse offers team collaboration on projects: go to Dashboard, find the Collaborators section, and click the '+' icon. If you have an interesting research or community project we can enable collaboration on the free version of Edge Impulse as well, by emailing [email protected].

You can also create a public version of your Edge Impulse project. This makes your project available to the whole world - including your data, your impulse design, your models, and all intermediate information - and can easily be cloned by anyone in the community. To do so, go to Dashboard, and click Make this project public.

What are the minimum hardware requirements to run the Edge Impulse inferencing library on my embedded device?

The minimum hardware requirements for the embedded device depend on the use case: anything from a Cortex-M0+ for vibration analysis, to a Cortex-M4F for audio, a Cortex-M7 for image classification, or a Cortex-A for object detection in video. View our inference performance metrics for more details.

What frameworks does Edge Impulse use to train the machine learning models?

We use a wide variety of tools, depending on the machine learning model. For neural networks we typically use TensorFlow and Keras, for object detection models we use TensorFlow with Google's Object Detection API, and for 'classic' non-neural network machine learning algorithms we mainly use sklearn. For neural networks you can see (and modify) the Keras code by clicking ⋮, and selecting Switch to expert mode.
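
As a rough illustration (not the exact generated code), expert mode exposes a Keras model definition in roughly this style, which you are then free to edit; the data and layer sizes below are placeholders:

import numpy as np
import tensorflow as tf

# Placeholder data: 100 samples with 33 DSP features each, 3 classes.
num_classes = 3
X = np.random.rand(100, 33).astype(np.float32)
y = tf.keras.utils.to_categorical(np.random.randint(0, num_classes, 100), num_classes)

# A small fully-connected classifier, similar in spirit to the default
# architecture for motion data; in expert mode you can change any of this.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(20, activation='relu'),
    tf.keras.layers.Dropout(0.25),
    tf.keras.layers.Dense(10, activation='relu'),
    tf.keras.layers.Dense(num_classes, activation='softmax'),
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.0005),
              loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(X, y, epochs=10, batch_size=32, verbose=2)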

Another big part of Edge Impulse is the processing blocks, as they clean up the data and extract important features from your data before passing it to a machine learning model. The source code for these processing blocks can be found on GitHub: edgeimpulse/processing-blocks (and you can build your own processing blocks as well).

Is there a downside to enabling the EON Compiler?

The EON Compiler compiles your neural networks to C++ source code, which then gets compiled into your application. This is great if you need the lowest RAM and ROM possible (EON typically uses 30-50% less memory than TensorFlow Lite) but you also lose some flexibility to update your neural networks in the field - as it is now part of your firmware.

By disabling EON we place the full neural network (architecture and weights) into ROM, and load it on demand. This increases memory usage, but you could just update this section of the ROM (or place the neural network in external flash, or on an SD card) to make it easier to update.

Can I use a model that has been trained elsewhere in Edge Impulse?

You cannot import a pretrained model, but you can import your model architecture and then retrain. Add a neural network block to your impulse, go to the block, click ⋮, and select Switch to expert mode. You then have access to the full Keras API.

How does the feature explorer visualize data that has more than 3 dimensions?

Edge Impulse uses UMAP (a dimensionality reduction algorithm) to project high-dimensional input data into a 3-dimensional space. This even works for extremely high-dimensional data such as images.
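
To illustrate the idea (using the open-source umap-learn package rather than Edge Impulse's internal implementation), projecting high-dimensional features down to three dimensions looks like this:

import numpy as np
import umap  # pip install umap-learn

# Placeholder features: 500 samples with 128-dimensional embeddings.
features = np.random.rand(500, 128)

# Reduce to 3 dimensions so every sample becomes a point in 3D space.
reducer = umap.UMAP(n_components=3, random_state=42)
projected = reducer.fit_transform(features)
print(projected.shape)  # (500, 3)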

Does Edge Impulse integrate with other cloud services?

Yes. The enterprise version of Edge Impulse can integrate directly with your cloud service to access and transform data.

What is the typical power consumption of the Edge Impulse machine learning processes on my device?

Simple answer: To get an indication of time per inference we show performance metrics in every DSP and ML block in the Studio. Multiply this by active power consumption of your MCU to get an indication of power cost per inference.
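
For example, a back-of-the-envelope calculation with purely illustrative numbers:

# Illustrative numbers only: take the real values from the Studio's
# performance metrics and from your MCU's datasheet.
inference_time_ms = 15        # DSP + inference time per window
active_power_mw = 50          # MCU active power consumption
inferences_per_second = 1

# Energy per inference (mW * s = mJ), inference contribution only.
energy_per_inference_mj = active_power_mw * (inference_time_ms / 1000.0)
average_power_mw = energy_per_inference_mj * inferences_per_second
print(f"{energy_per_inference_mj:.2f} mJ per inference, ~{average_power_mw:.2f} mW average")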

More complicated answer: It depends. Normal techniques to conserve power still apply to ML, so try to do as little as possible (do you need to classify every second, or can you do it once a minute?), be smart about when to run inference (can there be an external trigger like a motion sensor before you run inference on a camera?), and collect data in a lower power mode (don't run at full speed when sampling low resolution data, and see if your sensor can use an interrupt to wake your MCU - rather than polling).

What is the .eim model format for Edge Impulse for Linux?

See the .eim models? section on the Edge Impulse for Linux pages.

How is the labeling of the data performed?

Using the Edge Impulse Studio data acquisition tools (like the serial daemon or data forwarder), you can collect data samples manually with a pre-defined label. If you have a dataset that was collected outside of Edge Impulse, you can upload your dataset using the Edge Impulse CLI, data ingestion API, web uploader, enterprise data storage bucket tools, or enterprise upload portals. You can then utilize the Edge Impulse Studio to split up your data into labeled chunks, crop your data samples, and more to create high quality machine learning datasets.
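
As an example of programmatic uploading with a pre-defined label, a minimal Python sketch against the ingestion API could look like the following (the API key and file name are placeholders; check the ingestion API documentation for the exact endpoint and headers for your data type):

import requests

API_KEY = 'ei_...'  # placeholder: your project API key from the Dashboard

# Upload a WAV file as training data, labeled 'noise'.
with open('noise.01.wav', 'rb') as f:
    res = requests.post(
        'https://ingestion.edgeimpulse.com/api/training/files',
        headers={'x-api-key': API_KEY, 'x-label': 'noise'},
        files={'data': ('noise.01.wav', f, 'audio/wav')},
    )
print(res.status_code, res.text)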

macOS

You can use your Intel or M1-based Mac computer as a fully-supported development environment for Edge Impulse for Linux. This lets you sample raw data, build models, and deploy trained machine learning models directly from the Studio. If you have a Macbook, the webcam and microphone of your system are automatically detected and can be used to build models.

1. Connecting to your Mac

To connect your Mac to Edge Impulse:

  1. Install Node.js.

  2. Install Homebrew.

  3. Open a terminal window and install the dependencies:

    $ brew install sox
    $ brew install imagesnap

  4. Last, install the Edge Impulse CLI:

    $ npm install edge-impulse-linux -g

Problems installing the CLI?

See the Installation and troubleshooting guide.

2. Connecting to Edge Impulse

With the software installed, open a terminal window and run:

edge-impulse-linux

This will start a wizard which will ask you to log in, and choose an Edge Impulse project. If you want to switch projects run the command with --clean.

3. Verifying that your device is connected

That's all! Your Mac is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.

Next steps: building a machine learning model

With everything set up you can now build your first machine learning model with these tutorials:

  • Responding to your voice.

  • Recognize sounds from audio.

  • Adding sight to your sensors.

  • Object detection.

  • Counting objects using FOMO.

Looking to connect different sensors? Our Linux SDK lets you easily send data from any sensor and any programming language (with examples in Node.js, Python, Go and C++) into Edge Impulse.

Deploying back to device

To run your impulse locally, just open a terminal and run:

edge-impulse-linux-runner

This will automatically compile your model with full hardware acceleration, download the model to your Mac, and then start classifying. Our Linux SDK has examples on how to integrate the model with your favourite programming language.

Image model?

If you have an image model then you can get a peek of what your device sees by being on the same network as your device, and finding the 'Want to see a feed of the camera and live classification in your browser' message in the console. Open the URL in a browser and both the camera feed and the classification are shown:

Mobile Phone

You can use any smartphone with a modern browser as a fully-supported client for Edge Impulse. You'll be able to sample raw data (from the accelerometer, microphone and camera), build models, and deploy machine learning models directly from the studio. Your phone will behave like any other device, and data and models that you create using your mobile phone can also be deployed to embedded devices.

The mobile client is open source and hosted on GitHub: edgeimpulse/mobile-client. As there are thousands of different phones and operating system versions we'd love to hear from you there if something is amiss.

There's also a video version of this tutorial:

Connecting to Edge Impulse

To connect your mobile phone to Edge Impulse, go to your Edge Impulse project, and head to the Devices page. Then click Connect a new device.

Select Mobile phone, and a QR code will appear. Either scan the QR code with the camera of your phone - many phones will automatically recognize the code and offer to open a browser window - or click on the link above the QR code to open the mobile client.

This opens the mobile client, and registers the device directly. On your phone you see a Connected message.

That's all! Your device is now connected to Edge Impulse. If you return to the Devices page in the studio, your phone now shows as connected. You can change the name of your device by clicking on ⋮.

Next steps: building a machine learning model

With everything set up you can now build your first machine learning model with these tutorials:

  • Building a continuous motion recognition system.

  • Responding to your voice.

  • Recognize sounds from audio.

  • Adding sight to your sensors.

  • Object detection.

  • Counting objects using FOMO.

Your phone will show up like any other device in Edge Impulse, and will automatically ask permission to use sensors.

No data (using Chrome on Android)?

You might need to enable motion sensors in the Chrome settings via Settings > Site settings > Motion sensors.

Deploying back to device

With the impulse designed, trained and verified you can deploy this model back to your phone. This makes the model run without an internet connection, minimizes latency, and runs with minimum power consumption. Edge Impulse can package up the complete impulse - including the signal processing code, neural network weights, and classification code - in a single WebAssembly package that you can run straight from the browser.

To do so, just click Switch to classification mode at the bottom of the mobile client. This will first build the impulse, then sample data from the sensor, run the signal processing code, and classify the data.

Victory! You're now running your machine learning model locally in your browser - you can even turn on airplane mode and the model will continue running. You can also download the WebAssembly package to include in your own website or Node.js application. 🚀

Linux C++ SDK

This library lets you run machine learning models and collect sensor data on Linux machines using C++. The SDK is open source and hosted on GitHub: edgeimpulse/example-standalone-inferencing-linux.

Installation guide

  1. Install GNU Make and a recent C++ compiler (tested with GCC 8 on the Raspberry Pi, and Clang on other targets).

  2. Clone this repository and initialize the submodules:

    $ git clone https://github.com/edgeimpulse/example-standalone-inferencing-linux
    $ cd example-standalone-inferencing-linux && git submodule update --init --recursive
  3. If you want to use the audio or camera examples, you'll need to install libasound2 and OpenCV 4. You can do so via:

    Linux

    $ sudo apt install libasound2
    $ sh build-opencv-linux.sh          # only needed if you want to run the camera example

    macOS

    $ sh build-opencv-mac.sh            # only needed if you want to run the camera example

    Note that you cannot run any of the audio examples on macOS, as these depend on libasound2, which is not available there.

Collecting data

Before you can classify data you'll first need to collect it. If you want to collect data from the camera or microphone on your system you can use the Edge Impulse CLI, and if you want to collect data from different sensors (like accelerometers or proprietary control systems) you can do so in a few lines of code.

Collecting data from the camera or microphone

To collect data from the camera or microphone, follow the getting started guide for your development board.

Collecting data from other sensors

To collect data from other sensors you'll need to write some code to collect the data from an external sensor, wrap it in the Edge Impulse Data Acquisition format, and upload the data to the Ingestion service. Here's an end-to-end example that you can build via:

$ APP_COLLECT=1 make -j

Classifying data

This repository comes with four classification examples:

  • custom - classify custom sensor data (APP_CUSTOM=1).

  • audio - realtime audio classification (APP_AUDIO=1).

  • camera - realtime image classification (APP_CAMERA=1).

  • .eim model - builds an .eim file to be used from Node.js, Go or Python (APP_EIM=1).

To build an application:

  1. Train an impulse.

  2. Export your trained impulse as a C++ Library from the Edge Impulse Studio (see the Deployment page) and copy the folders into this repository.

  3. Build the application via:

    $ APP_CUSTOM=1 make -j

    Replace APP_CUSTOM=1 with the application you want to build. See 'Hardware acceleration' below for the hardware specific flags. You probably want these.

  4. The application is in the build directory:

    $ ./build/custom

Hardware acceleration

For many targets there is hardware acceleration available. To enable this:

Raspberry Pi 4 (and other Armv7l Linux targets)

Build with the following flags:

$ APP_CUSTOM=1 TARGET_LINUX_ARMV7=1 USE_FULL_TFLITE=1 make -j

Jetson Nano (and other AARCH64 targets)

See the TensorRT section below for information on enabling GPUs. To build with hardware extensions for running on the CPU:

  1. Install Clang:

    $ sudo apt install -y clang
  2. Build with the following flags:

    $ APP_CUSTOM=1 TARGET_LINUX_AARCH64=1 USE_FULL_TFLITE=1 CC=clang CXX=clang++ make -j

Linux x86 targets

Build with the following flags:

APP_CUSTOM=1 TARGET_LINUX_X86=1 USE_FULL_TFLITE=1 make -j

Intel-based Macs

Build with the following flags:

$ APP_CUSTOM=1 TARGET_MAC_X86_64=1 USE_FULL_TFLITE=1 make -j

M1-based Macs

Build with the following flags:

$ APP_CUSTOM=1 TARGET_MAC_X86_64=1 USE_FULL_TFLITE=1 arch -x86_64 make -j

Note that this does build an x86 binary, but it runs very fast through Rosetta.

TensorRT

On the Jetson Nano you can also build with support for TensorRT, which fully leverages the GPU on the Jetson Nano. Unfortunately this is currently not available for object detection models - which is why this is not enabled by default. To build with TensorRT:

  1. Go to the Deployment page in the Edge Impulse Studio.

  2. Select the 'TensorRT library', and the 'float32' optimizations.

  3. Build the library and copy the folders into this repository.

  4. Download the shared libraries via:

    $ sh ./tflite/linux-jetson-nano/download.sh
  5. Build your application with:

    $ APP_CUSTOM=1 TARGET_JETSON_NANO=1 make -j

Note that there is significant ramp up time required for TensorRT. The first time you run a new model the model needs to be optimized - which might take up to 30 seconds, then on every startup the model needs to be loaded in - which might take up to 5 seconds. After this, the GPU seems to be warming up, so expect full performance about 2 minutes in. To do a fair performance comparison you probably want to use the custom application (no camera / microphone overhead) and run the classification in a loop.

You can also build .eim files for high-level languages using TensorRT via:

$ APP_EIM=1 TARGET_JETSON_NANO=1  make -j

Long warm-up time and under-performance

By default, the Jetson Nano enables a number of aggressive power saving features to disable and slow down hardware that is detected to be not in use. Experience indicates that sometimes the GPU cannot power up fast enough, nor stay on long enough, to enjoy best performance. You can run a script to enable maximum performance on your Jetson Nano.

ONLY DO THIS IF YOU ARE POWERING YOUR JETSON NANO FROM A DEDICATED POWER SUPPLY. DO NOT RUN THIS SCRIPT WHILE POWERING YOUR JETSON NANO THROUGH USB.

To enable maximum performance, run:

sudo /usr/bin/jetson_clocks

Building .eim files

To build Edge Impulse for Linux models (.eim files) that can be used by the Python, Node.js or Go SDKs, build with APP_EIM=1:

$ APP_EIM=1 make -j

The model will be placed in build/model.eim and can be used directly by your application.
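
As an illustration of how an .eim file is then consumed from a high-level language, here is a minimal sketch with the Edge Impulse Linux Python SDK (pip install edge_impulse_linux; the key names follow that SDK's examples and may vary):

from edge_impulse_linux.runner import ImpulseRunner

runner = ImpulseRunner('./build/model.eim')
try:
    model_info = runner.init()
    print('Loaded', model_info['project']['name'])

    # Placeholder input: a zero-filled window with the number of raw
    # features the impulse expects (replace with real sensor values).
    n = model_info['model_parameters']['input_features_count']
    result = runner.classify([0.0] * n)
    print(result['result'])
finally:
    runner.stop()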

Adding custom learning blocks

Enterprise customers can add fully custom learning models in Edge Impulse. These models can be represented in any deep learning framework as long as it can output TFLite files.

Only available for enterprise customers

Organizational features are only available for enterprise customers. View our pricing for more information.

Custom learning blocks for organizations go beyond a project's expert mode, which lets you design your own neural network architectures on a project-by-project basis, but has some caveats:

  • You can't load pretrained weights from expert mode, and

  • Your expert mode model needs to be representable in Keras / TensorFlow.

|  | Custom Learning Blocks (Organizations) | Expert Mode (Projects) |
| --- | --- | --- |
| Load pretrained weights | ✅ | ❌ |
| Use any ML framework to define your model | ✅ | Keras only |

This tutorial describes how to build these models. Alternatively, we've put together three example projects, which bring these models into Edge Impulse:

  • YOLOv5 - brings a YOLOv5 transfer learning model (trained with PyTorch) into Edge Impulse

  • Keras - shows how to bring custom Keras blocks into Edge Impulse.

  • PyTorch - shows how to bring custom PyTorch blocks into Edge Impulse.

Prerequisites

  • The Edge Impulse CLI

    • If you receive any warnings that's fine. Run edge-impulse-blocks afterwards to verify that the CLI was installed correctly.

  • Docker desktop - Custom learning models use Docker containers, a virtualization technique which lets developers package up an application with all dependencies in a single package.

Data formats

To bring custom learning models into Edge Impulse you'll need to encapsulate your training pipeline into a container. This container takes in the training data, trains the model and then spits out TFLite files.

To see the data that will be passed into the container:

  1. Create a project in Edge Impulse and add your data.

  2. Under Create impulse add the DSP block that you'll use (e.g. 'Image' for images, or 'Spectrogram' for audio) and a random neural network block.

  3. Generate features for the DSP block.

  4. Now go to Dashboard, and under 'Download block data' grab the two items marked 'Training data' / 'Training labels'.

Downloading DSP block output

This data is in the following formats:

  • Training data: a numpy matrix with one sample per row containing the output of the DSP block (use np.load('X_train_features.npy') to see).

  • Training labels

    • If you're using object detection: a JSON file (despite the '.npy' extension) containing structured information about the bounding boxes. Every item in the samples array maps to one row in the data matrix. The label is the index of the class, and is 1-based.

    • If you're not using object detection: a numpy matrix with one sample per row, and the first column of every row is the index of the class. The last three columns are the sampleId and the start / end time of the sample (when going through time series). During training you can typically discard this.

This data is passed into the container as files, located here:

  • Data: /home/X_train_features.npy

  • Labels: /home/y_train.npy

After training you need to output a TFLite file. You need to write these here:

  • Float32 (unquantized) model: /home/model.tflite

  • Int8 quantized model with int8 inputs: /home/model_quantized_int8_io.tflite
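
To make these formats concrete, here is a minimal (non-object-detection) training sketch that reads the input files, trains a placeholder fully-connected Keras model, and writes both required TFLite files to the locations listed above; replace the architecture and label handling with your own:

import numpy as np
import tensorflow as tf

# Load the files that Edge Impulse mounts into the container.
X = np.load('/home/X_train_features.npy')
y_raw = np.load('/home/y_train.npy')

# The first column is the class index; the sampleId and start/end times in the
# last columns are not needed for training. Shift labels so they start at 0.
labels = y_raw[:, 0].astype(int)
Y = tf.keras.utils.to_categorical(labels - labels.min())

# Placeholder architecture: replace with your own model.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation='relu', input_shape=(X.shape[1],)),
    tf.keras.layers.Dense(Y.shape[1], activation='softmax'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(X, Y, epochs=30, batch_size=32)

# Float32 (unquantized) model.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
open('/home/model.tflite', 'wb').write(converter.convert())

# Int8 quantized model with int8 inputs/outputs, using part of the
# training set as the representative dataset.
def representative_dataset():
    for row in X[:100]:
        yield [row.reshape(1, -1).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
open('/home/model_quantized_int8_io.tflite', 'wb').write(converter.convert())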

Wrapping your training scripts in a Docker container

Docker containers are a virtualization technique which lets developers package up an application with all dependencies in a single package. To train your custom model you'll need to wrap all the required packages, your scripts, and (if you use transfer learning) your pretrained weights into this container. When running in Edge Impulse the container does not have network access, so make sure you don't download dependencies while running (fine when building the container).

A typical Dockerfile might look like:

# syntax = docker/dockerfile:experimental
FROM ubuntu:20.04
WORKDIR /app

ARG DEBIAN_FRONTEND=noninteractive

# Install base packages (like Python and pip)
RUN apt update && apt install -y curl zip git lsb-release software-properties-common apt-transport-https vim wget python3 python3-pip
RUN python3 -m pip install --upgrade pip==20.3.4

# Copy Python requirements in and install them
COPY requirements.txt ./
RUN pip3 install -r requirements.txt

# Copy the rest of your training scripts in
COPY . ./

# And tell us where to run the pipeline
ENTRYPOINT ["python3", "-u", "train.py"]

It's important to create an ENTRYPOINT at the end of the Dockerfile to specify which file to run.

Arguments during training

The train script will receive the following arguments:

  • --epochs <epochs> - number of epochs to train (e.g. 50).

  • --learning-rate <lr> - learning rate (e.g. 0.001).

  • --validation-set-size <size> - size of the validation set (e.g. 0.2 for 20% of total training set).

  • --input-shape <shape> - shape of the training data (e.g. (320,320,3) for a 320x320 RGB image).
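
A minimal sketch of how train.py might parse these arguments:

import argparse
import ast

parser = argparse.ArgumentParser(description='Custom learning block training script')
parser.add_argument('--epochs', type=int, required=True)
parser.add_argument('--learning-rate', type=float, required=True)
parser.add_argument('--validation-set-size', type=float, required=True)
parser.add_argument('--input-shape', type=str, required=True)
args = parser.parse_args()

# --input-shape arrives as a string such as "(320,320,3)"; turn it into a tuple.
input_shape = ast.literal_eval(args.input_shape)

print('epochs:', args.epochs, 'lr:', args.learning_rate,
      'validation size:', args.validation_set_size, 'input shape:', input_shape)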

Running the training script

To run the container, first create a home folder in your script directory and copy the training data / training labels here (named X_train_features.npy and y_train.npy). Then build and run the container:

$ docker build -t mycontainer .
$ docker run --rm -v $PWD/home:/home mycontainer --epochs 50 --learning-rate 0.001 --validation-set-size 0.2 --input-shape "(320,320,3)"

This should train the model and spit out .tflite files.

Pushing the block to Edge Impulse

If your block works you can bring it into Edge Impulse via:

$ edge-impulse-blocks init
$ edge-impulse-blocks push

To edit the block, go to your organization, Custom blocks > Transfer learning models.

The block is now available for every Edge Impulse project under your organization.

Selecting the new learning model.

Object detection output layers

Unfortunately object detection models typically don't have a standard way to go from neural network output layer to bounding boxes. Currently we support the following types of output layers:

  • MobileNet SSD

  • Edge Impulse FOMO

  • YOLOv5

If you have an object detection model with a different output layer then please contact your user success engineer with an example on how to interpret the output, and we can add it.

Nordic Semi nRF52840 DK

The Nordic Semiconductor nRF52840 DK is a development board with a Cortex-M4 microcontroller, QSPI flash, and an integrated BLE radio - and it's fully supported by Edge Impulse. You'll be able to sample raw data, build models, and deploy trained machine learning models directly from the studio. As the nRF52840 DK does not have any built-in sensors we recommend pairing this development board with the X-NUCLEO-IKS02A1 shield (with a MEMS accelerometer and a MEMS microphone). The nRF52840 DK is available for around 50 USD from a variety of distributors including Digikey.

If you don't have the X-NUCLEO-IKS02A1 shield you can use the Data forwarder to capture data from any other sensor, and then follow the Running your impulse locally: On your Zephyr-based Nordic Semiconductor development board tutorial to run your impulse. Or, you can modify the example firmware (based on nRF Connect) to interact with other accelerometers or PDM microphones that are supported by Zephyr.

The Edge Impulse firmware for this development board is open source and hosted on GitHub: edgeimpulse/firmware-nrf52840-5340.

Nordic Semiconductors nRF52840 DK development board

Installing dependencies

To set this device up in Edge Impulse, you will need to install the following software:

  1. Edge Impulse CLI.

  2. On Linux:

    • GNU Screen: install for example via sudo apt install screen.

Problems installing the CLI?

See the Installation and troubleshooting guide.

Connecting to Edge Impulse

With all the software in place it's time to connect the development board to Edge Impulse.

1. Plugging in the X-NUCLEO-IKS02A1 MEMS expansion shield

Remove the pin header protectors on the nRF52840 DK and plug the X-NUCLEO-IKS02A1 shield into the development board.

X-NUCLEO-IKS02A1 shield plugged in to the nRF52840 DK

Note: Make sure that the shield does not touch any of the pins in the middle of the development board. This might cause issues when flashing the board or running applications.

Make sure the shield does not touch any of the pins in the middle of the development board.

2. Connect the development board to your computer

Use a micro-USB cable to connect the development board to your computer. There are two USB ports on the development board, use the one on the short side of the board. Then, set the power switch to 'on'.

Connect a micro USB cable to the short USB port on the short side of the board (red). Make sure the power switch is toggled on.

3. Update the firmware

The development board does not come with the right firmware yet. To update the firmware:

  1. The development board is mounted as a USB mass-storage device (like a USB flash drive), with the name JLINK. Make sure you can see this drive.

    • If this is not the case, see No JLINK drive at the bottom of this page.

  2. Download the latest Edge Impulse firmware.

  3. Drag the nrf52840-dk.bin file to the JLINK drive.

  4. Wait 20 seconds and press the BOOT/RESET button.

4. Setting keys

From a command prompt or terminal, run:

edge-impulse-daemon

This will start a wizard which will ask you to log in and choose an Edge Impulse project. If you want to switch projects run the command with --clean.

Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI. See this blog post for more information.

5. Verifying that the device is connected

That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.

Device connected to Edge Impulse.

Next steps: building a machine learning model

With everything set up you can now build your first machine learning model with these tutorials:

  • Building a continuous motion recognition system.

  • Recognizing sounds from audio.

  • Responding to your voice.

Looking to connect different sensors? The Data forwarder lets you easily send data from any sensor into Edge Impulse.

Troubleshooting

No JLINK drive

If you don't see the JLINK drive show up when you connect your nRF52840 DK you'll have to update the interface firmware.

  1. Set the power switch to 'off'.

  2. Hold BOOT/RESET while you set the power switch to 'on'.

  3. Your development board should be mounted as BOOTLOADER.

  4. Download the latest Interface MCU firmware and drag the .bin file onto the BOOTLOADER drive.

  5. After 20 seconds disconnect the USB cable, and plug the cable back in.

  6. The development board should now be mounted as JLINK.

Failed to flash

If your board fails to flash new firmware (a FAIL.txt file might appear on the JLINK drive) you can also flash using nrfjprog.

  1. Install the nRF Command Line Tools.

  2. Flash new firmware via:

nrfjprog --program path-to-your.bin -f NRF52 --sectoranduicrerase

On your desktop computer

Impulses can be deployed as a C++ library. This packages all your signal processing blocks, configuration and learning blocks up into a single package. You can include this package in your own application to run the impulse locally. In this tutorial you'll export an impulse, and build a desktop application to classify sensor data.

Even though this is a C++ library you can link to it from C applications. See 'Using the library from C' below.

Knowledge required

This tutorial assumes that you know how to build C++ applications, and works on macOS, Linux and Windows. If you're unfamiliar with these tools you can build binaries directly for your development board from the Deployment page in the studio.

Note: This tutorial provides the instructions necessary to build the C++ SDK library locally on your desktop. If you would like a full explanation of the Makefile and how to use the library, please see the deploy your model as a C++ library tutorial.

Looking for examples that integrate with sensors? See the Edge Impulse C++ SDK for Linux.

Prerequisites

Make sure you followed the Continuous motion recognition tutorial, and have a trained impulse. Also install the following software:

macOS, Linux

  • GNU Make - to build the application. make should be in your PATH.

  • A modern C++ compiler. The default LLVM version on macOS works, but on Linux upgrade to LLVM 9 (installation instructions).

Windows

  • MinGW-W64 which includes both GNU Make and a compiler. Make sure mingw32-make is in your PATH. See these instructions for more information.

Cloning the base repository

We created an example repository which contains a Makefile and a small CLI example application, which takes the raw features as an argument, and prints out the final classification. Clone or download this repository at example-standalone-inferencing.

Deploying your impulse

Head over to your Edge Impulse project, and go to Deployment. From here you can create the full library which contains the impulse and all external required libraries. Select C++ library, and click Build to create the library.

Download the .zip file and place the contents in the 'example-standalone-inferencing' folder (which you downloaded above). Your final folder structure should look like this:

example-standalone-inferencing
|_ build.bat
|_ build.sh
|_ CMakeLists.txt
|_ edge-impulse-sdk/
|_ LICENSE
|_ Makefile
|_ model-parameters/
|_ README.md
|_ README.txt
|_ source/
|_ tflite-model/

Add data sample to main.cpp

To get inference to work, we need to add raw data from one of our samples to main.cpp. Head back to the studio and click on Live classification. Then load a validation sample, and click on a row under 'Detailed result'. Make a note of the classification results, as we want our local application to produce the same numbers from inference.

Selecting the row with timestamp '320' under 'Detailed result'.

To verify that the local application classifies the same, we need the raw features for this timestamp. To do so click on the 'Copy to clipboard' button next to 'Raw features'. This will copy the raw values from this validation file, before any signal processing or inferencing happened.

Copying the raw features.

Open source/main.cpp in an editor of your choice. Find the following line:

// Raw features copied from test sample
static const float features[] = {
    // Copy raw features here (e.g. from the 'Model testing' page)
};

Paste in your raw sample data where you see // Copy raw features here:

// Raw features copied from test sample
static const float features[] = {
    -19.8800, -0.6900, 8.2300, -17.6600, -1.1300, 5.9700, ...
};

Note: the raw features will likely be longer than what is listed here (the ... will not compile; it is only there to show where the features go).

In a real application, you would want to make the features[] buffer non-const. You would fill it with samples from your sensor(s) and call run_classifier() or run_classifier_continuous(). See deploy your model as a C++ library tutorial for more information.

Save and exit.

Running the impulse

Open a terminal or command prompt, and build the project:

macOS, Linux

$ sh build.sh

Windows

$ build.bat

This will first build the inferencing engine, and then build the complete application. After the build succeeds you should have a binary in the build/ directory.

Then invoke the local application by calling the binary name:

macOS, Linux

./build/app

Windows

build\app

This will run the signal processing pipeline using the values you provided in the features[] buffer and then give you the classification output:

run_classifier_returned: 0
Timing: DSP 0 ms, inference 0 ms, anomaly 0 ms
Predictions (time: 0 ms.):
  idle:   0.015319
  snake:  0.000444
  updown: 0.006182
  wave:   0.978056
Anomaly score (time: 0 ms.): 0.133557

This matches the values we just saw in the studio. You now have your impulse running locally!

Using the library from C

Even though the impulse is deployed as a C++ application, you can link to it from C applications. This is done by compiling the impulse as a shared library with the EIDSP_SIGNAL_C_FN_POINTER=1 and EI_C_LINKAGE=1 macros, then linking to it from a C application. The run_classifier function can then be invoked from your application. An end-to-end application that demonstrates this and can be used with this tutorial is under example-standalone-inferencing-c.

ST B-L475E-IOT01A

The ST IoT Discovery Kit (also known as the B-L475E-IOT01A) is a development board with a Cortex-M4 microcontroller, MEMS motion sensors, a microphone and WiFi - and it's fully supported by Edge Impulse. You'll be able to sample raw data, build models, and deploy trained machine learning models directly from the studio. It's available for around 50 USD from a variety of distributors including Digikey.

The Edge Impulse firmware for this development board is open source and hosted on GitHub: edgeimpulse/firmware-st-b-l475e-iot01a.

Two variants of this board

There are two variants of this board, the B-L475E-IOT01A1 (US region) and the B-L475E-IOT01A2 (EU region) - the only difference is the sub-GHz radio. Both are usable in Edge Impulse.

ST B-L475E-IOT01A development board

Installing dependencies

To set this device up in Edge Impulse, you will need to install the following software:

  1. Edge Impulse CLI.

  2. On Windows:

    • ST Link - drivers for the development board. Run dpinst_amd64 on 64-bits Windows, or dpinst_x86 on 32-bits Windows.

  3. On Linux:

    • GNU Screen: install for example via sudo apt install screen.

Problems installing the CLI?

See the Installation and troubleshooting guide.

Connecting to Edge Impulse

With all the software in place it's time to connect the development board to Edge Impulse.

1. Connect the development board to your computer

Use a micro-USB cable to connect the development board to your computer. There are two USB ports on the development board, use the one furthest from the buttons.

2. Update the firmware

The development board does not come with the right firmware yet. To update the firmware:

  1. The development board is mounted as a USB mass-storage device (like a USB flash drive), with the name DIS_L4IOT. Make sure you can see this drive.

  2. Download the latest Edge Impulse firmware.

  3. Drag the DISCO-L475VG-IOT01A.bin file to the DIS_L4IOT drive.

  4. Wait until the LED stops flashing red and green.

3. Setting keys and WiFi credentials

From a command prompt or terminal, run:

edge-impulse-daemon

This will start a wizard which will ask you to log in, choose an Edge Impulse project, and set up your WiFi network. If you want to switch projects run the command with --clean.

Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI. See this blog post for more information.

4. Verifying that the device is connected

That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.

Device connected to Edge Impulse.

Next steps: building a machine learning model

With everything set up you can now build your first machine learning model with these tutorials:

  • Building a continuous motion recognition system.

  • Recognizing sounds from audio.

  • Responding to your voice.

Looking to connect different sensors? The Data forwarder lets you easily send data from any sensor into Edge Impulse.

Troubleshooting

Unable to set up WiFi with ST B-L475E-IOT01A development board

If you experience the following error when attempting to connect to a WiFi network:

? WiFi is not connected, do you want to set up a WiFi network now? Yes
Scanning WiFi networks...Error while setting up device Timeout

You have hit a known issue with the firmware for this development board's WiFi module that results in a timeout during network scanning if there are more than 20 WiFi access points detected. If you are experiencing this issue, you can work around it by attempting to reduce the number of access points within range of the device, or by skipping WiFi configuration.

My device is not responding, and nothing happens when I attempt to update the firmware

If the LED does not flash red and green when you copy the .bin file to the device and instead is a solid red color, and you are unable to connect the device with Edge Impulse, there may be an issue with your device's native firmware.

To restore functionality, use the following tool from ST to update your board to the latest version:

  • ST-LINK, ST-LINK/V2, ST-LINK/V2-1, STLINK-V3 boards firmware upgrade

I don't see the DIS_L4IOT drive, or cannot connect over serial to the board (Linux)

You might need to set up udev rules on Linux before being able to talk to the device. Create a file named /etc/udev/rules.d/50-stlink.rules and add the following content:

SUBSYSTEMS=="usb", ATTRS{idVendor}=="0483", ATTRS{idProduct}=="3748", MODE:="0666"
SUBSYSTEMS=="usb", ATTRS{idVendor}=="0483", ATTRS{idProduct}=="374b", MODE:="0666"

Then unplug the development board and plug it back in.

Raspberry Pi RP2040

The Raspberry Pi RP2040 is the debut microcontroller from Raspberry Pi - and it's fully supported by Edge Impulse. You'll be able to sample raw data, build models, and deploy trained machine learning models directly from the studio. It's available for around $4 from the Raspberry Pi Foundation and a wide range of distributors.

To get started with the Raspberry Pi RP2040 and Edge Impulse you'll need:

  • A Raspberry Pi RP2040 microcontroller. The pre-built firmware and Edge Impulse Studio exported binary are tailored for the Raspberry Pi Pico, but with a few simple steps you can collect data and run your models with other RP2040-based boards, such as the Arduino Nano RP2040 Connect. For more details, check out "Using with other RP2040 boards".

  • (Optional) If you are using the Raspberry Pi Pico, the Grove Shield for Pi Pico makes it easier to connect external sensors for data collection/inference.

The Edge Impulse firmware for this development board is open source and hosted on GitHub: edgeimpulse/firmware-pi-rp2040.

Two RP2040 microcontroller chips.

Installing dependencies

To set this device up in Edge Impulse, you will need to install the following software:

  1. Edge Impulse CLI.

  2. If you'd like to interact with the board using a set of pre-defined AT commands (not necessary for standard ML workflow), you will need to also install a serial communication program, for example minicom, picocom or use Serial Monitor from Arduino IDE (if installed).

  3. On Linux:

    • GNU Screen: install for example via sudo apt install screen.

Problems installing the CLI?

See the Installation and troubleshooting guide.

Connecting to Edge Impulse

With all the software in place, it's time to connect the development board to Edge Impulse.

1. Connect the development board to your computer

Use a micro-USB cable to connect the development board to your computer while holding down the BOOTSEL button, forcing the Raspberry Pi Pico into USB Mass Storage Mode.

Flashing firmware to the Raspberry Pi Pico.

2. Update the firmware

The development board does not come with the right firmware yet. To update the firmware:

  1. Download the latest Edge Impulse firmware, and unzip the file.

  2. Drag the ei_rp2040_firmware.uf2 file from the folder to the USB Mass Storage device.

  3. Wait until flashing is complete, unplug and replug in your board to launch the new firmware.

3. Setting keys

From a command prompt or terminal, run:

edge-impulse-daemon

This will start a wizard which will ask you to log in, and choose an Edge Impulse project. If you want to switch projects run the command with --clean.

Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI. See this blog post for more information.

4. Verifying that the device is connected

That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.

Raspberry Pi Pico board connected to Edge Impulse Studio.

Next steps: building a machine learning model

With everything set up you can now build your first machine learning model. Since Raspberry Pi Pico does not have any built-in sensors, we decided to add the following ones to be supported out of the box, with a pre-built firmware:

  • Grove Ultrasonic Ranger (GP16; pin D16 on Grove Shield for Pi Pico).

  • DHT11 Temperature & Humidity sensor (GP18; pin D18 on Grove Shield for Pi Pico).

  • LSM6DS3 Accelerometer & Gyroscope (I2C0).

  • Analog signal sensor (pin A0).

There is a vast variety of analog signal sensors that can take advantage of the RP2040's 12-bit ADC (Analog to Digital Converter), from common ones such as light sensors and sound level sensors, to more specialized ones, e.g. a carbon dioxide sensor, a natural gas sensor, or even an EMG detector.

Once you have the compatible sensors, you can then follow these tutorials:

  • Building a continuous motion recognition system.

  • Building a sensor fusion model.

Looking to connect different sensors? The Data forwarder lets you easily send data from any sensor into Edge Impulse.

Using with other RP2040 boards

While the RP2040 is a relatively new microcontroller, it has already been used to build several boards:

  • The official Raspberry Pi Pico RP2040

  • Arduino Nano RP2040 Connect (WiFi, Bluetooth, onboard sensors)

  • Seeed Studio XIAO RP2040 (extremely small footprint)

  • Adafruit Feather RP2040 (built-in LiPoly charger)

And others. While pre-built Edge Impulse firmware is mainly tested with Pico board, it is compatible with other boards, with the exception of I2C sensors - different boards use different pins for I2C, so if you’d like to use LSM6DS3 or LSM6DSOX accelerometer & gyroscope modules, you will need to change I2C pin values in Edge Impulse RP2040 firmware source code, recompile it and upload it to the board.

Hosting custom DSP blocks

Building custom processing blocks is available for everyone but has to be self-hosted. If you want to host it on Edge Impulse infrastructure, you can do that within your organization interface.

In this tutorial, you'll learn how to use the Edge Impulse CLI to push your custom DSP block to your organization and how to make this processing block available in the Studio for all users in the organization.

The custom processing block we are using for this tutorial can be found here: https://github.com/edgeimpulse/edge-detection-processing-block. It is written in Python. Note that one of the nice things about custom blocks is that you can write them in any language: we host a Docker container, so you are not tied to a specific runtime.

Only available for enterprise customers

Organizational features are only available for enterprise customers. View our pricing for more information.

Prerequisites

You'll need:

  • The Edge Impulse CLI. If you receive any warnings that's fine. Run edge-impulse-blocks afterwards to verify that the CLI was installed correctly.

  • Docker Desktop installed on your machine. Custom blocks use Docker containers, a virtualization technique which lets developers package up an application with all dependencies in a single package. If you want to test your blocks locally (this is not a requirement), you'll also need:

  • A Custom Processing block running with Docker.

Init and upload your custom DSP block

Inside your Custom DSP block folder, run the following command:

edge-impulse-blocks init --clean

The output will look like this:

? What is your user name or e-mail address (edgeimpulse.com)? 
? What is your password? [hidden]
Edge Impulse Blocks v1.14.3
Attaching block to organization 'Demo Team'

? Choose a type of block 
  Transformation block 
  Deployment block 
❯ DSP block 
  Transfer learning block 

? Enter the name of your block: Edge Detection

? Enter the description of your block: Edge Detection processing block using Canny filters in images

Creating block with config: {
  version: 1,
  config: {
    'edgeimpulse.com': {
      name: 'Edge Detection',
      type: 'dsp',
      description: 'Edge Detection processing block using Canny filters in images',
      organizationId: XXX,
      operatesOn: undefined,
      tlObjectDetectionLastLayer: undefined,
      tlOperatesOn: undefined
    }
  }
}
Your new block 'Edge Detection' has been created in '<PATH>'.
When you have finished building your dsp block, run 'edge-impulse-blocks push' to update the block in Edge Impulse.

Modify or update your custom code if needed and run the following command:

edge-impulse-blocks push            

The output will look similar to this:

Edge Impulse Blocks v1.14.3
? What port is your block listening on? 4446

Archiving 'edge-detection-processing-block'...
Archiving 'edge-detection-processing-block' OK (476 KB) /var/folders/7f/pfcmh61s3hg9c59qd0dkkw5w0000gn/T/ei-dsp-block-c729b4a3ff761b64629617c869e9d934.tar.gz

Uploading block 'Edge Detection' to organization 'Demo Team'...
Uploading block 'Edge Detection' to organization 'Demo Team' OK

Building dsp block 'Edge Detection'...
Job started
...
Building dsp block 'Edge Detection' OK

That's it, now your custom DSP block is hosted in your organization. To make sure it is up and running, go to Custom blocks > DSP in your organization and you will see the following screen:

DSP Blocks in organization

Use your custom hosted DSP block in your projects

To use your DSP block, simply add it as a processing block in the Create impulse view:

Custom processing block available in your organization's projects

Other resources

  • Full instruction on how to build processing blocks: Building custom processing blocks

  • Blog post: Utilize Custom Processing Blocks in Your Image ML Pipelines

Data Explorer

The data explorer is a visual tool to explore your dataset, find outliers or mislabeled data, and to help label unlabeled data. The data explorer first tries to extract meaningful features from your data (through signal processing and neural network embeddings) and then uses a dimensionality reduction algorithm to map these features to a 2D space. This gives you a one-look overview of your complete dataset.

Showing a keywords dataset, unlabeled data marked in gray.

Using the data explorer

To access the data explorer head to Data acquisition, click Data explorer, then select a way to generate the data explorer. Depending on your data you'll see three options:

  • Using a pre-trained model - here we use a large neural network trained on a varied dataset to generate the embeddings. This works very well if you don't have any labeled data yet, or want to look at new clusters of data. This option is available for keywords and for images.

  • Using your trained impulse - here we use the neural network block in your impulse to generate the embeddings. This typically creates even better visualizations, but will fail if you have completely new clusters of data as the neural network hasn't learned anything about them. This option is only available if you have a trained impulse.

  • Using the preprocessing blocks in your impulse - here we skip the embeddings, and just use your selected signal processing blocks to create the data explorer. This creates a similar visualization as the feature explorer but in a 2D space and with extra labeling tools. This is very useful if you don't have any labeled data yet, or if you have new clusters of data that your neural network hasn't learned yet.

Selecting a way to generate the data explorer

Then click Generate data explorer to create the data explorer. If you want to make a different choice after creating the data explorer click ⋮ in the top right corner and select Clear data explorer.

Want to see examples of the same dataset visualized in different ways? Scroll down!

Viewing and modifying data

To view an item in your dataset just click on any of the dots (some basic information appears on hover). Information about the sample, and a preview of the data item appears at the bottom of the data explorer. You can click Set label (or l on your keyboard) to set a new label for the data item, or press Delete item (or d on your keyboard) to remove the data item. These changes are queued until you click Save labels (at the top of the data explorer).

Changes are queued until you click 'Save labels'.

Assisted labeling

The data explorer marks unlabeled data in gray (with an 'Unlabeled' label). To label this data, click on any gray dot, then set a label by clicking the Set label button (or by pressing l on your keyboard) and entering a label. Other unlabeled data in the vicinity of this item will automatically be labeled as well. This way you can quickly label clustered data.

To upload unlabeled data you can either:

  • Use the upload UI and select the 'Leave data unlabeled' option.

  • Under Data acquisition, select all relevant items, click Edit labels and set the label to an empty string.

  • When uploading data through the ingestion API, set the x-no-label header to 1, and the x-label to an empty string.

Or, if you want to start from scratch, click the three dots on top of the data explorer, and select Clear all labels.
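
As an example, uploading a file without a label through the ingestion API (the third option above) could look like the sketch below. It assumes you have the Python requests package installed and an API key from your project's dashboard; check the ingestion API reference for the full list of endpoints and headers:

import requests

API_KEY = 'ei_...'  # your project API key (Dashboard > Keys)

with open('sample.wav', 'rb') as f:
    res = requests.post(
        'https://ingestion.edgeimpulse.com/api/training/files',
        headers={
            'x-api-key': API_KEY,
            'x-label': '',      # empty label...
            'x-no-label': '1',  # ...and explicitly mark the sample as unlabeled
        },
        files=[('data', ('sample.wav', f, 'audio/wav'))],
    )
print(res.status_code, res.text)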

Wait, how does this work?

The data explorer uses a three-stage process:

  1. It runs your data through an input and a DSP block - like any impulse.

  2. It passes the result of 1) through part of a neural network. This forces the neural network to compress the DSP output even further, into features that are highly specialized to distinguish the exact type of data in your dataset (called 'embeddings').

  3. The embeddings are passed through t-SNE, a dimensionality reduction algorithm.

So what are these embeddings actually? Let's imagine you have the model from the Continuous motion recognition tutorial. Here we slice data up in 2-second windows and run a signal processing step to extract features. Then we use a neural network to classify between motions. This network consists of:

  • 33 input features (from the signal processing step)

  • A layer with 20 neurons

  • A layer with 10 neurons

  • A layer with 4 neurons (the number of different classes)

While training the neural network we try to find the mathematical formula that best maps the input to the output. We do this by tweaking each neuron (each neuron is a parameter in our formula). The interesting part is that each layer of the neural network will start acting like a feature extracting step - just like our signal processing step - but highly tuned for your specific data. For example, in the first layer, it'll learn what features are correlated, in the second it derives new features, and in the final layer, it learns how to distinguish between classes of motions.

In the data explorer we now cut off the final layer of the neural network, and thus we get the derived features back - these are called "embeddings". Contrary to features we extract using signal processing we don't really know what these features are - they're specific to your data. In essence, they provide a peek into the brain of the neural network. Thus, if you see data in the data explorer that you can't easily separate, the neural network probably can't either - and that's a great way to spot outliers - or if there's unlabeled data close to a labeled cluster they're probably very similar - great for labeling unknown data!
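
To make this concrete, here is a minimal, self-contained sketch of the idea in Keras and scikit-learn. It uses a toy model with the same shape as the example above (33 features → 20 → 10 → 4 classes) and random stand-in data; it illustrates the technique, not the exact code the data explorer runs:

import numpy as np
import tensorflow as tf
from sklearn.manifold import TSNE

# toy stand-in for the trained classifier described above
model = tf.keras.Sequential([
    tf.keras.layers.Dense(20, activation='relu', input_shape=(33,)),
    tf.keras.layers.Dense(10, activation='relu'),
    tf.keras.layers.Dense(4, activation='softmax'),
])

X = np.random.rand(200, 33)  # stand-in for the DSP output of 200 windows

# cut off the final (softmax) layer: the 10 values per sample are the embeddings
embedding_model = tf.keras.Model(inputs=model.inputs, outputs=model.layers[-2].output)
embeddings = embedding_model.predict(X)

# reduce the 10-dimensional embeddings to 2D points that can be plotted
coords = TSNE(n_components=2).fit_transform(embeddings)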

Examples of different embeddings

Here's an example of using the data explorer to visualize a very complex computer vision dataset (distinguishing between the four cats of one of our infrastructure engineers).

No embeddings (just running t-SNE over the images)

Visualizing a complex dataset of cats without embeddings

With embeddings from a pretrained MobileNetV2 model

Visualizing a complex dataset of cats with embeddings from a pretrained MobileNetV2 model

With embeddings from a custom ML model

Visualizing a complex dataset of cats with embeddings from a custom ML model

For less complex datasets, or lower-dimensional data you'll typically see more separation, even without custom models.

Questions? Excited?

If you have any questions about the data explorer or embeddings, we'd be happy to help on the forums or reach out to your solutions engineer. Excited? Talk to us to get access to the data explorer, and finally be able to label all that sensor data you've collected!

Installation

The Edge Impulse CLI is used to control local devices, act as a proxy to synchronise data for devices that don't have an internet connection, and to upload and convert local files. The CLI consists of six tools:

  • edge-impulse-daemon - configures devices over serial, and acts as a proxy for devices that do not have an IP connection.

  • edge-impulse-uploader - allows uploading and signing local files.

  • edge-impulse-data-forwarder - a very easy way to collect data from any device over a serial connection, and forward the data to Edge Impulse.

  • edge-impulse-run-impulse - shows the impulse running on your device.

  • edge-impulse-blocks - creates organizational transformation, custom DSP, custom deployment and custom transfer learning blocks.

  • himax-flash-tool - flashes the Himax WE-I Plus.

Connect to devices without the CLI? Recent versions of Google Chrome and Microsoft Edge can connect directly to fully-supported development boards, without the CLI. See this blog post for more information.

Installation - macOS and Windows

  1. Install Python 3 on your host computer.

  2. Install Node.js v14 or higher on your host computer.

    • For Windows users, install the Additional Node.js tools (called Tools for Native Modules on newer versions) when prompted.

  3. Install the CLI tools via:

npm install -g edge-impulse-cli --force

You should now have the tools available in your PATH.

  4. If you haven't already, create an Edge Impulse account. Many of our CLI tools require the user to log in to connect with the Edge Impulse Studio.

Installation - Linux/Ubuntu and Raspbian OS

  1. Install Python 3 on your host computer.

  2. Install Node.js v14 or higher on your host computer.

    Alternatively, run the following commands:

curl -sL https://deb.nodesource.com/setup_14.x | sudo -E bash -
sudo apt-get install -y nodejs
node -v

    The last command should return the node version, v14 or above.

    Let's verify the node installation directory:

npm config get prefix

    If it returns /usr/local/, run the following commands to change npm's default directory:

mkdir ~/.npm-global
npm config set prefix '~/.npm-global'
echo 'export PATH=~/.npm-global/bin:$PATH' >> ~/.profile

  3. Install the CLI tools via:

npm install -g edge-impulse-cli

You should now have the tools available in your PATH.

  4. If you haven't already, create an Edge Impulse account. Many of our CLI tools require the user to log in to connect with the Edge Impulse Studio.

Troubleshooting

If you have issues installing the CLI you can also collect data from fully-supported development boards directly, using recent versions of Google Chrome and Microsoft Edge. See this blog post on how to get started.

Error: Could not locate the bindings file. (Windows)

This error indicates that an issue occurred when installing the edge-impulse-cli for the first time, or that you did not select the additional tools when installing Node.js (not selected by default).

Remove Node.js and install it again, making sure to select the additional tools option.

Re-install the CLI via:

npm uninstall -g edge-impulse-cli
npm install -g edge-impulse-cli

Tools version "2.0" is unrecognized (Windows)

If you receive the following error: The tools version "2.0" is unrecognized. Available tools versions are "4.0", launch a new command window as administrator and run:

npm install --global --production windows-build-tools
npm config set msvs_version 2015 --global

EACCES: permission denied, access '/usr/local/lib/node_modules' (macOS)

This is an indication that the node_modules directory is not owned by you, but rather by root. This is probably not what you want. To fix this, run:

sudo chown -R $USER /usr/local/lib/node_modules

EACCES user "nobody" does not have permission to access the dev dir (Linux)

Try to set the npm user to root and re-run the installation command. You can do this via:

npm config set user root

Error: Can’t find Python executable (Windows)

If you receive an error such as:

gyp ERR! stack Error: Can’t find Python executable “C:\Users\vale.windows-build-tools\python27\python.exe”, you can set the PYTHON env variable.

You're running an older version of node-gyp (a way to build binary packages). Upgrade via:

npm install node-gyp@latest -g

The module XXX was compiled against a different Node.js version

This error occurs when you have upgraded Node.js since installing the Edge Impulse CLI. Re-install the CLI via:

npm uninstall -g edge-impulse-cli
npm install -g edge-impulse-cli

Which will rebuild the dependencies.

Error: "gyp: No Xcode or CLT version detected!" (macOS)

This can happen even though you have Xcode CLT installed if you've updated macOS since your install. Follow this guide to reinstall Xcode CLT.

Failed to authenticate with Edge Impulse read ECONNRESET

If you see this error message and you're behind a proxy you will need to set your proxy settings via:

Windows

set HTTPS_PROXY=...
edge-impulse-daemon

macOS, Linux

HTTPS_PROXY=... edge-impulse-daemon

ENOENT: no such file or directory, access ‘~/.npm-global/lib/node_modules/edge-impulse-cli’ (Linux)

Manually delete the Edge Impulse directory from node_modules and reinstall:

cd ~/.npm-global/lib/node_modules
rm -rf edge-impulse-cli
npm install -g edge-impulse-cli

Classification (Keras)

If you have selected the Classification learning block in the Create impulse page, a NN Classifier page will show up in the menu on the left. This page becomes available after you've extracted your features from your DSP block.

Tutorials

Want to see the Classification block in action? Check out our tutorials:

  • Continuous Motion Recognition.

  • Responding to your voice.

  • Recognize sounds from audio.

  • Sensor Fusion.

The basic idea is that a neural network classifier will take some input data, and output a probability score that indicates how likely it is that the input data belongs to a particular class.

So how does a neural network know what to predict? The neural network consists of a number of layers, each of which is made up of a number of neurons. The neurons in the first layer are connected to the neurons in the second layer, and so on. The weight of each connection between two neurons is randomly determined at the beginning of the training process. The neural network is then given a set of training data: examples together with the answers it is supposed to predict. The network's output is compared to the correct answer and, based on the results, the weights of the connections between the neurons are adjusted. This process is repeated a number of times, until the network has learned to predict the correct answer for the training data.

A particular arrangement of layers is referred to as an architecture, and different architectures are useful for different tasks. This way, after a lot of iterations, the neural network learns; and will eventually become much better at predicting new data.

On this page, you can configure the model and the training process, and get an overview of your model's performance.

Neural Network settings

  • Number of training cycles: Each time the training algorithm makes one complete pass through all of the training data with back-propagation and updates the model's parameters as it goes, it is known as an epoch or training cycle.

  • Learning rate: The learning rate controls how much the model's internal parameters are updated during each step of the training process. You can also see it as how fast the neural network will learn. If the network overfits quickly, you can reduce the learning rate.

  • Validation set size: The percentage of your training set held apart for validation; a good default is 20%.

  • Auto-balance dataset: Mixes in more copies of data from classes that are uncommon. This might help make the model more robust against overfitting if you have little data for some classes.

Neural Network architecture

Depending on your project type, we may offer to choose between different architecture presets to help you get started.

The neural network architecture takes your extracted features as inputs and passes them through each layer of your architecture. In the classification case, the last layer is a softmax layer; it is this layer that gives the probability of the input belonging to each of the classes.

From the visual (simple) mode, you can add the following layers:

If you have advanced knowledge in machine learning and Keras, you can switch to the Expert mode and access the full Keras API to use custom architectures:
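
As an illustration of how the settings above map onto Keras, here is a minimal, self-contained sketch of a dense classifier trained on random stand-in data. It is not the exact code the Studio generates in expert mode, but the knobs are the same: the learning rate is passed to the optimizer, the number of training cycles becomes epochs, and the validation set size becomes validation_split:

import numpy as np
import tensorflow as tf

# stand-in data: 200 samples with 33 DSP features and 4 classes
X = np.random.rand(200, 33)
y = tf.keras.utils.to_categorical(np.random.randint(0, 4, 200), num_classes=4)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(20, activation='relu', input_shape=(33,)),
    tf.keras.layers.Dense(10, activation='relu'),
    tf.keras.layers.Dense(4, activation='softmax'),  # one output per class
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.0005),  # 'Learning rate'
    loss='categorical_crossentropy',
    metrics=['accuracy'],
)

# 'Number of training cycles' -> epochs, 'Validation set size' -> validation_split
model.fit(X, y, epochs=30, validation_split=0.2, batch_size=32)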

Training output

This panel displays the output logs during the training. The previous training logs can also be retrieved from the Jobs tab in the Dashboard page (enterprise feature).

Model performances

This section gives an overview of your model's performance and helps you evaluate your model. It can help you determine whether the model is capable of meeting your needs, or whether you need to test other hyperparameters and architectures.

From the Last training performance panel you can retrieve your validation accuracy and loss.

The Confusion matrix is one of the most useful tools to evaluate a model. It tabulates all of the correct and incorrect responses a model produces given a set of data. The labels on the side correspond to the actual labels of each sample, and the labels on the top correspond to the labels predicted by the model.

The Feature explorer, like in the processing block views, shows the spatial distribution of your input features. On this page, you can visualize which ones have been correctly classified and which ones have not.

On-device performance: Based on the target you chose in the Dashboard page, we will output estimations for the inferencing time, peak RAM usage and flash usage. This will help you validate that your model will be able to run on your device based on its constraints.

SiLabs xG24 Dev Kit

The Silicon Labs xG24 Dev Kit (xG24-DK2601B) is a compact, feature-packed development platform built for the EFR32MG24 Cortex-M33 microcontroller. It provides the fastest path to develop and prototype wireless IoT products. This development platform supports up to +10 dBm output power and includes support for the 20-bit ADC as well as the xG24's AI/ML hardware accelerator. The platform also features a wide variety of sensors, a microphone, Bluetooth Low Energy and a battery holder - and it's fully supported by Edge Impulse! You'll be able to sample raw data as well as build and deploy trained machine learning models directly from the Edge Impulse Studio - and even stream your machine learning results over BLE to a phone.

The Edge Impulse firmware for this development board is open source and hosted on GitHub: edgeimpulse/firmware-silabs-xg24.

Installing dependencies

To set this device up with Edge Impulse, you will need to install the following software:

  1. Simplicity Commander. A utility program we will use to flash firmware images onto the target.

  2. The Edge Impulse CLI, which will enable you to connect your xG24 Dev Kit directly to Edge Impulse Studio, so that you can collect raw data and trigger in-system inferences.

Problems installing the CLI?

See the Installation and troubleshooting guide.

Updating the firmware

Edge Impulse Studio can collect data directly from your xG24 Dev Kit and also help you trigger in-system inferences to debug your model, but in order to allow Edge Impulse Studio to interact with your xG24 Dev Kit you first need to flash it with our base firmware image.

1. Download the base firmware image

Download the latest Edge Impulse firmware and unzip it to obtain the firmware-xg24.hex file, which we will be using in the following steps.

2. Connect the xG24 Dev Kit to your computer

Use a micro-USB cable to connect the xG24 Dev Kit to your development computer (where you downloaded and installed Simplicity Commander).

3. Load the base firmware image with Simplicity Commander

You can use Simplicity Commander to flash your xG24 Dev Kit with our base firmware image. To do this, first select your board from the dropdown list in the top left corner:

Then go to the "Flash" section on the left sidebar, and select the base firmware image file you downloaded in the first step above (i.e., the file named firmware-xg24.hex). You can now press the Flash button to load the base firmware image onto the xG24 Dev Kit.
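
If you prefer working from a terminal, Simplicity Commander also ships a command-line interface. With a single xG24 Dev Kit attached, flashing the image should look roughly like this (the GUI flow above is the path this guide follows):

commander flash firmware-xg24.hex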

Keep Simplicity Commander Handy

Simplicity Commander will be needed to upload any other project built on Edge Impulse, but the base firmware image only has to be loaded once.

Connecting to Edge Impulse

With all the software in place, it's time to connect the xG24 Dev Kit to Edge Impulse.

1. Connect the development board to your computer

Use a micro-USB cable to connect the development board to your computer.

2. Setting keys

From a command prompt or terminal, run:

edge-impulse-daemon

This will start a wizard which will ask you to log in, and choose an Edge Impulse project. If you want to switch projects run the command with --clean.

Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI. See this blog post for more information.

3. Verifying that the device is connected

That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices on the left sidebar. The device will be listed there:

Next steps: Build a machine learning model

With everything set up you can now build your first machine learning model with these tutorials:

  • Object detection on the SiLabs xG24 Dev Kit.

  • Building a continuous motion recognition system.

  • Recognizing sounds from audio.

  • Responding to your voice.

Looking to connect different sensors? The Data forwarder lets you easily send data from any sensor into Edge Impulse.

Data Acquisition

All collected data for each project can be viewed on the Data acquisition tab. You can see how your data has been split for train/test set as well as the data distribution for each class in your dataset. You can also send new sensor data to your project either by file upload, WebUSB, Edge Impulse API, or Edge Impulse CLI.

Add data to your project

Record new data

The panel on the right allows you to collect data directly from any fully supported platform:

  • Through WebUSB.

  • Using the Edge Impulse CLI daemon.

  • From the Edge Impulse for Linux CLI.

The WebUSB and the Edge Impulse daemon work with any fully supported device by flashing the pre-built Edge Impulse firmware to your board. See the list of fully supported boards.

When using the Edge Impulse for Linux CLI, run edge-impulse-linux --clean and it will add your platform to the device list of your project. You will then be able to interact with it from the Record new data panel.

Other methods

  • Studio uploader.

  • CLI uploader.

  • CLI data forwarder.

  • Ingestion API.

  • Import from S3 buckets (Enterprise feature).

  • Upload portals (Enterprise feature).

Dataset train/test split ratio

The train/test split is a technique for training and evaluating the performance of machine learning algorithms. It indicates how your data is split between training and testing samples. For example, an 80/20 split indicates that 80% of the dataset is used for model training purposes while 20% is used for model testing.

This section also shows how your data samples in each class are distributed to prevent imbalanced datasets which might introduce bias during model training.
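
For intuition, the same idea expressed with scikit-learn looks like the sketch below. This is not how the Studio splits your data internally, just an illustration of an 80/20 split that also preserves the class distribution (stratification):

import numpy as np
from sklearn.model_selection import train_test_split

X = np.random.rand(100, 33)       # stand-in features
y = np.random.randint(0, 4, 100)  # stand-in labels for 4 classes

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y)  # 80% train / 20% test, balanced per class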

Data acquisition filter

Manually navigating to some categories of data can be time consuming, especially when dealing with a large dataset. The data acquisition filter enables the user to filter data samples based on some criteria of choice. This can be based on:

  • Label - the class that a sample represents.

  • Sample name - unique ID representing a sample.

  • Signature validity

  • Enabled and disabled samples

  • Length of sample - duration of a sample.

The filtered samples can then be manipulated by editing labels, deleting, or moving from the training set to the test set and vice versa, as shown in the image above.

The data manipulations above can also be applied at the data sample level by simply navigating to the individual data sample, clicking ⋮ and selecting the type of action you want to perform on that specific sample. This might be renaming, editing its label, disabling, cropping, splitting, downloading, or even deleting the sample when desired.

Cropping samples

To crop a data sample, go to the sample you want to crop and click ⋮, then select Crop sample. You can specify a length, or drag the handles to resize the window, then move the window around to make your selection.

Made a wrong crop? No problem, just click Crop sample again and you can move your selection around. To undo the crop, just set the sample length to a high number, and the whole sample will be selected again.

Splitting data sample

Besides cropping you can also split data automatically. Here you can perform one motion repeatedly, or say a keyword over and over again, and the events are detected and can be stored as individual samples. This makes it easy to very quickly build a high-quality dataset of discrete events. To do so head to Data acquisition, record some new data, click ⋮, and select Split sample. You can set the window length, and all events are automatically detected. If you're splitting audio data you can also listen to events by clicking on the window; the audio player is automatically populated with that specific split.

Samples are automatically centered in the window, which might lead to problems on some models (the neural network could learn a shortcut where data in the middle of the window is always associated with a certain label), so you can select "Shift samples" to automatically move the data a little bit around.

Splitting data is - like cropping data - non-destructive. If you're not happy with a split just click Crop sample and you can move the selection around easily.

Labelling Queue

The labelling queue will only appear on your data acquisition page if you are dealing with an object detection task. The labelling queue shows a list of images that have been staged for annotation for your project.

If you are not dealing with an object detection task, you can simply disable the labelling queue bar by going to Dashboard > Project info > Labeling method and clicking the dropdown and selecting "one label per data item" as shown in the image below.

For more information about the labelling queue and how to perform data annotation using AI assisted labelling on Edge Impulse, have a look at our documentation here.


Arduino Portenta H7 + Vision Shield

The Portenta H7 is a powerful development board from Arduino with both a Cortex-M7 microcontroller and a Cortex-M4 microcontroller, a BLE/WiFi radio, and an extension slot to connect the Portenta vision shield - which adds a camera and dual microphones. At the moment the Portenta H7 is partially supported by Edge Impulse, letting you collect data from the camera, build computer vision models, and deploy trained machine learning models back to the development board. The Portenta H7 and the vision shield are available directly from Arduino for ~$150 in total.

There are two versions of the vision shield: one that has an Ethernet connection and one with a LoRa radio. Both of these can be used with Edge Impulse.

The Edge Impulse firmware for this development board is open source and hosted on GitHub: edgeimpulse/firmware-arduino-portenta-h7.

Portenta H7 development board

Installing dependencies

To set this device up in Edge Impulse, you will need to install the following software:

  1. Edge Impulse CLI.

  2. Arduino CLI.

    • Here's an instruction video for Windows.

    • The Arduino website has instructions for macOS and Linux.

  3. On Linux:

    • GNU Screen: install for example via sudo apt install screen.

Problems installing the CLI?

See the Installation and troubleshooting guide.

Connecting to Edge Impulse

With all the software in place it's time to connect the development board to Edge Impulse.

1. Connect the vision shield

Connect the vision shield using the two edge connectors on the back of the Portenta H7.

Portenta vision shield (with a LoRa radio) connected to the Portenta H7.

2. Connect the development board to your computer

Use a USB-C cable to connect the development board to your computer. Then, double-tap the RESET button to put the device into bootloader mode. You should see the green LED on the front pulsating.

3. Update the firmware

The development board does not come with the right firmware yet. To update the firmware:

  1. Download the latest Edge Impulse firmware, and unzip the file.

  2. Double press on the RESET button on your board to put it in the bootloader mode.

  3. Open the flash script for your operating system (flash_windows.bat, flash_mac.command or flash_linux.sh) to flash the firmware.

  4. Wait until flashing is complete, and press the RESET button once to launch the new firmware.

4. Setting keys

From a command prompt or terminal, run:

edge-impulse-daemon

This will start a wizard which will ask you to log in, and choose an Edge Impulse project. If you want to switch projects run the command with --clean.

Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI. See this blog post for more information.

5. Verifying that the device is connected

That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.

Device connected to Edge Impulse.

Next steps: building a machine learning model

With everything set up you can now build your first machine learning model with these tutorials:

  • Responding to your voice

  • Recognize sounds from audio

  • Adding sight to your sensors

  • Object detection

  • Counting objects using FOMO

Looking to connect different sensors? The Data forwarder lets you easily send data from any sensor into Edge Impulse.

Deploying back to device

  • Download your custom firmware from the Deployment tab in the Studio, install the firmware using the same method as in the "Update the firmware" section above, and run the edge-impulse-run-impulse command:

edge-impulse-run-impulse

Note that it may take up to 10 minutes to compile the firmware for the Arduino Portenta H7

  • Use the Running your impulse locally: On your Arduino tutorial and select one of the Portenta examples:

  • For an end-to-end example that classifies data and then sends the result over LoRaWAN, see the example-portenta-lorawan example.

Troubleshooting

If you come across this issue:

Finding Arduino Mbed core...
arduino:mbed_portenta 2.6.1     2.6.1  Arduino Mbed OS Portenta Boards                                                  
Finding Arduino Mbed core OK
Finding Arduino Portenta H7...
Finding Arduino Portenta H7 OK at Arduino
dfu-util 0.10-dev

Copyright 2005-2009 Weston Schmidt, Harald Welte and OpenMoko Inc.
Copyright 2010-2021 Tormod Volden and Stefan Schmidt
This program is Free Software and has ABSOLUTELY NO WARRANTY
Please report bugs to http://sourceforge.net/p/dfu-util/tickets/

Warning: Invalid DFU suffix signature
A valid DFU suffix will be required in a future dfu-util release
No DFU capable USB device available
Error during Upload: uploading error: uploading error: exit status 74
Flashing failed. Here are some options:
If your error is 'incorrect FQBN' you'll need to upgrade the Arduino core via:
     $ arduino-cli core update-index
     $ arduino-cli core install arduino:mbed_portenta@2.6.1
Otherwise, double tap the RESET button to load the bootloader and try again
Press any key to continue . . .

You probably forgot to double press the RESET button before running the flash script.

SiLabs Thunderboard Sense 2

The Silicon Labs Thunderboard Sense 2 is a complete development board with a Cortex-M4 microcontroller, a wide variety of sensors, a microphone, Bluetooth Low Energy and a battery holder - and it's fully supported by Edge Impulse. You'll be able to sample raw data, build models, and deploy trained machine learning models directly from the studio - and even stream your machine learning results over BLE to a phone. It's available for around 20 USD directly from Silicon Labs.

The Edge Impulse firmware for this development board is open source and hosted on GitHub: edgeimpulse/firmware-silabs-thunderboard-sense-2.

Silicon Labs Thunderboard Sense 2

Installing dependencies

To set this device up in Edge Impulse, you will need to install the following software:

  1. Edge Impulse CLI.

  2. On Linux:

    • GNU Screen: install for example via sudo apt install screen.

Problems installing the CLI?

See the Installation and troubleshooting guide.

Connecting to Edge Impulse

With all the software in place it's time to connect the development board to Edge Impulse.

1. Connect the development board to your computer

Use a micro-USB cable to connect the development board to your computer. The development board should mount as a USB mass-storage device (like a USB flash drive), with the name TB004. Make sure you can see this drive.

2. Update the firmware

The development board does not come with the right firmware yet. To update the firmware:

  1. Download the latest Edge Impulse firmware.

  2. Drag the silabs-thunderboard-sense2.bin file to the TB004 drive.

  3. Wait 30 seconds.

3. Setting keys

From a command prompt or terminal, run:

edge-impulse-daemon

This will start a wizard which will ask you to log in and choose an Edge Impulse project. If you want to switch projects run the command with --clean.

Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI. See this blog post for more information.

4. Verifying that the device is connected

That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.

Device connected to Edge Impulse.

Next steps: building a machine learning model

With everything set up you can now build your first machine learning model with these tutorials:

  • Building a continuous motion recognition system.

  • Recognizing sounds from audio.

  • Responding to your voice.

Looking to connect different sensors? The Data forwarder lets you easily send data from any sensor into Edge Impulse.

Did you know? You can also stream the results of your impulse over BLE to a nearby phone or gateway: see Streaming results over BLE to your phone.

Troubleshooting

Dragging and dropping Edge Impulse .bin file results in FAIL.TXT

When dragging and dropping an Edge Impulse pre-built .bin firmware file, the binary seems to flash, but when the device reconnects a FAIL.TXT file appears with the contents "Error while connecting to CPU" and the following errors appear from the Edge Impulse CLI impulse runner:

$ edge-impulse-run-impulse
Edge Impulse impulse runner v1.12.5
[SER] Connecting to /dev/tty.usbmodem0004401612721
[SER] Serial is connected, trying to read config...
[SER] Failed to get info off device:undefined. Is this device running a binary built through Edge Impulse? Reconnecting in 5 seconds...
[SER] Serial is connected, trying to read config...

To fix this error, install the Simplicity Studio 5 IDE and flash the binary through the IDE's built-in "Upload application..." menu under "Debug Adapters", selecting your Edge Impulse firmware to flash:

Simplicity Studio 5 IDE Debug Adapters window

Your Edge Impulse inferencing application should then run successfully with edge-impulse-run-impulse.

Arduino Nano 33 BLE Sense

The Arduino Nano 33 BLE Sense is a tiny development board with a Cortex-M4 microcontroller, motion sensors, a microphone and BLE - and it's fully supported by Edge Impulse. You'll be able to sample raw data, build models, and deploy trained machine learning models directly from the studio. It's available for around 30 USD from Arduino and a wide range of distributors.

You can also use the Arduino Tiny Machine Learning Kit to run image classification models on the edge with the Arduino Nano and attached OV7675 camera module (or connect the hardware together via jumper wire and a breadboard if purchased separately).

The Edge Impulse firmware for this development board is open source and hosted on GitHub: edgeimpulse/firmware-arduino-nano-33-ble-sense.

Arduino Nano 33 BLE Sense

Installing dependencies

To set this device up in Edge Impulse, you will need to install the following software:

  1. Edge Impulse CLI.

  2. Arduino CLI.

    • Here's an instruction video for Windows.

    • The Arduino website has instructions for macOS and Linux.

  3. On Linux:

    • GNU Screen: install for example via sudo apt install screen.

Problems installing the CLI?

See the Installation and troubleshooting guide.

Connecting to Edge Impulse

With all the software in place it's time to connect the development board to Edge Impulse.

1. Connect the development board to your computer

Use a micro-USB cable to connect the development board to your computer. Then press RESET twice to launch into the bootloader. The on-board LED should start pulsating to indicate this.

Press RESET twice quickly to launch the bootloader on the Arduino Nano 33 BLE Sense.

2. Update the firmware

The development board does not come with the right firmware yet. To update the firmware:

  1. Download the latest Edge Impulse firmware, and unzip the file.

  2. Open the flash script for your operating system (flash_windows.bat, flash_mac.command or flash_linux.sh) to flash the firmware.

  3. Wait until flashing is complete, and press the RESET button once to launch the new firmware.

3. Setting keys

From a command prompt or terminal, run:

edge-impulse-daemon

This will start a wizard which will ask you to log in, and choose an Edge Impulse project. If you want to switch projects run the command with --clean.

Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI. See this blog post for more information.

4. Verifying that the device is connected

That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.

Device connected to Edge Impulse.

Next steps: building a machine learning model

With everything set up you can now build your first machine learning model with these tutorials:

  • Responding to your voice

  • Recognize sounds from audio

  • Adding sight to your sensors

  • Object detection

  • Counting objects using FOMO

Looking to connect different sensors? The Data forwarder lets you easily send data from any sensor into Edge Impulse.

Troubleshooting

Connecting an off-the-shelf OV7675 camera module

You will need the following hardware:

  • Arduino Nano 33 BLE Sense board with headers.

  • OV7675 camera module.

  • Micro-USB cable.

  • Solderless breadboard and female-to-male jumper wires.

First, slot the Arduino Nano 33 BLE Sense board into a solderless breadboard:

Arduino Nano 33 BLE Sense board with headers inserted into a solderless breadboard.

With female-to-male jumper wire, use the following wiring diagram, pinout diagrams, and connection table to link the OV7675 camera module to the microcontroller board via the solderless breadboard:

OV7675 camera module with female headers connected.
Wiring diagram showing the OV7675 connections to Arduino Nano 33 BLE Sense.

Download the full pinout diagram of the Arduino Nano 33 BLE Sense here.

Table with connections between the OV7675 camera module pins and the Arduino Nano 33 BLE Sense.

Finally, use a micro-USB cable to connect the Arduino Nano 33 BLE Sense development board to your computer.

Showing all connections between the OV7675 camera module and the Arduino Nano 33 BLE Sense.

Now build & train your own image classification model and deploy to the Arduino Nano 33 BLE Sense with Edge Impulse!

Deployment

After training and validating your model, you can now deploy it to any device. This makes the model run without an internet connection, minimizes latency, and runs with minimal power consumption.

The Deployment page consists of a variety of deploy options to choose from depending on your target device. Regardless of whether you are using a fully supported development board or not, Edge Impulse provides a C++ library deploy option which you can use to deploy your model on any target (as long as the target has enough compute to handle the task).

The following are the four main categories of deploy options currently supported by Edge Impulse:

  1. Deploy as a customizable library

  2. Deploy as a pre-built firmware - for fully supported development boards

  3. Run directly on your phone or computer

  4. Use Edge Impulse for Linux for Linux targets

Deploying as a customizable library

This deploy option lets you turn your impulse into a fully optimized source code that can be further customized and integrated with your application. This option supports the following libraries:

Available deployment libraries

Arduino Library

You can run your impulse locally as an Arduino library. This packages all of your signal processing blocks, configuration and learning blocks up into a single package.

To deploy as an Arduino library, select Arduino library on the Deployment page and click Build to create the library. Download the .ZIP file and import it as a library in your Arduino IDE, then run your application.

For a full tutorial on how to run your impulse locally as an Arduino library, have a look at Running your impulse locally - Arduino.

C++ Library

You can run your Impulse as a C++ library. This packages all of your signal processing blocks, configuration and learning blocks up into a single package that can be easily ported to your custom applications.

Visit Running your impulse locally for a deep dive on how to deploy your impulse as a C++ library.

Cube.MX CMSIS-PACK library

If you want to deploy your impulse to an STM32 MCU, you can use the Cube.MX CMSIS-PACK. This packages all your signal processing blocks, configuration and learning blocks up into a single package. You can include this package in any STM32 project with a single function call.

Have a look at Running your impulse locally - using CubeAI for a deep dive on how to deploy your impulse on STM32 based targets using the Cube.MX CMSIS-PACK.

WebAssembly Library

When you want to deploy your impulse to a web app you can use the WebAssembly library. This packages all your signal processing blocks, configuration and learning blocks up into a single package that can run without any compilation.

Have a look at Running your impulse locally - through WebAssembly (Browser) for a deep dive on how you can run your impulse to classify sensor data in your Node.js application.

Deploy as a pre-built firmware

For this option, you can use a ready-to-go binary for your development board that bundles signal processing blocks, configuration and learning blocks up into a single package. This option is currently only available for fully supported development boards as shown in the image below:

Pre-built firmware for fully supported development boards.

To deploy your model using ready-to-go binaries, select your target device and click Build. Flash the downloaded firmware to your device, then run the following command:

 $ edge-impulse-run-impulse

The impulse runner shows the results of your impulse running on your development board. This only applies to ready-to-go binaries built from the studio.

Edge Impulse for Linux

Deploying using Edge Impulse for Linux SDKs

If you are developing for Linux based devices, you can use Edge Impulse for Linux for deployment. It contains tools which let you collect data from any microphone or camera, can be used with the Node.js, Python, Go and C++ SDKs to collect new data from any sensor, and can run impulses with full hardware acceleration - with easy integration points to write your own applications.

For a deep dive on how to deploy your impulse to linux targets using Edge Impulse for linux, you can visit the Edge Impulse for Linux tutorial.
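
As an example, classifying one window of features from Python could look like the sketch below. It assumes you have installed the edge_impulse_linux Python SDK and downloaded a .eim model file for your device; see the Edge Impulse for Linux documentation for the exact API and for the camera and microphone helpers:

from edge_impulse_linux.runner import ImpulseRunner

runner = ImpulseRunner('modelfile.eim')   # path to the .eim model you deployed
try:
    model_info = runner.init()            # loads the model and returns its metadata
    print(model_info)
    features = [0.0] * 33                 # raw features matching your impulse's input size
    result = runner.classify(features)
    print(result)
finally:
    runner.stop()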

Deploy to your mobile phone/computer

Deploying to your mobile phone or computer

You can run your impulse directly on your computer or mobile phone without the need for an additional app. To run on your computer, simply select 'Computer' and click 'Switch to classification mode'. To run on your mobile phone, select 'Mobile phone', scan the QR code, and click 'Switch to classification mode'.

Optimizations

Enabling EON Compiler

When building your impulse for deployment, Edge Impulse gives you the option of adding another layer of optimization to your impulse using the EON compiler. The EON Compiler lets you run neural networks in 25-55% less RAM, and up to 35% less flash, while retaining the same accuracy, compared to TensorFlow Lite for Microcontrollers.

To activate the EON Compiler, select your preferred deployment option, enable the Enable EON™ Compiler option, and click Build to build your impulse for deployment.

Enabling EON Compiler

To give you a peek at how your impulse would utilize the compute resources of your target device, Edge Impulse also estimates the latency, flash and RAM that will be consumed on your target device, even before you deploy your impulse locally. This can save you a lot of engineering time otherwise spent on repeated iterations and experiments.

Changing precision of your model for deployment

You can also select whether to run the unquantized float32 or the quantized int8 version of your model, as shown in the image below.

The above confusion matrix is only based on the test data, to help you understand how your model performs on unseen real-world data. It can also help you determine whether your model has learned to overfit on your training data, which is a common occurrence.

Building your first dataset

Organizational datasets allow you to build a large collection of organized sensor data that is internal to your organization. This data can then be used to create new Edge Impulse projects, imported in Pandas or Matlab for internal exploration by your data scientists, or be processed and shared with partners. Data files within the datasets can be stored on-premise or in your own cloud infrastructure.

In this tutorial we'll set up a first dataset, explore the powerful query tool, and show how to create new Edge Impulse projects from raw data.

Only available for enterprise customers

Organizational features are only available for enterprise customers. View our pricing for more information.

1. Configuring a storage bucket

Data is stored in storage buckets, which can either be hosted by Edge Impulse, or in your own infrastructure. If you choose to host the data yourself your infrastructure should be available through the S3 API, and you are responsible for setting up proper backups. To configure a new storage bucket, head to your organization, choose Data > Buckets, click Add new bucket, and fill in your access credentials. Make sure to name your storage bucket Internal datasets, as we'll need it to upload data later.

2. Uploading your first dataset

2.1 About datasets

With the storage bucket in place you can upload your first dataset. Datasets in Edge Impulse have three layers: 1) the dataset, a larger set of data items, grouped together. 2) data item, an item with metadata and files attached. 3) data file, the actual files. For example, if we're collecting data on physical activities from many subjects, we can have:

  • Dataset: 'Activities Field Study September 1994'.

    • Data item: 'Forrest Gump Running', with metadata fields "name=Forrest Gump" and "activity=running".

      • Data file: 'running01.parquet', with raw sensor data.

      • Data file: 'running02.parquet', with raw sensor data.

From here you can query and group the data. For example, you can retrieve all data from the 'Activities Field Study September 1994' dataset that was tagged with the 'running' activity. Or, you can select all the files that are smaller than 1MB and were generated by 'Forrest Gump' over all datasets.

2.2 Importing the continuous gestures dataset

For this tutorial we'll use a dataset containing 9 minutes of accelerometer data for a gesture recognition system. Download and unzip it in a convenient location.

No required format for data files

There is no required format for data files. You can upload data in any format, whether it's CSV, Parquet, or a proprietary data format.

There are three ways of uploading data to your dataset. You can either:

  1. Upload the files directly with the UI (we'll do this in this tutorial).

  2. Upload data through the API.

  3. Or, upload data directly to the storage bucket (recommended for large datasets). In this case use Add data... > Add dataset from bucket and the data will be discovered automatically.

For this dataset we want to create four data items, one for every class ('idle', 'snake', 'updown', 'wave'). On the Data page, select Add data... > Add data item, set the name to 'Idle', the dataset to 'Gestures study', the metadata to { "gesture": "idle" }, and select all 'idle' files.

Do the same for the 'snake', 'updown' and 'wave' data, so you end up with four data items with 70 files in total.

3. Querying and downloading data

Organizational datasets contain a powerful query system which lets you explore and slice data. You control the query system through the 'Filter' text box, using a language which is very similar to SQL. For example, here are some queries that you can make:

  • dataset = 'Gestures study' - returns all items and files from the study.

  • bucket_name = 'Internal datasets' AND name IN ('Updown', 'Snake') - returns data whose name is either 'Updown' or 'Snake', and that is stored in the 'Internal datasets' bucket.

  • metadata->gesture = 'updown' - returns data that has a metadata field 'gesture' which contains 'updown'.

  • created > DATE('2020-03-01') - returns all data that was created after March 1, 2020.

After you've created a filter, you can select one or more data items, and select Download selected to create a ZIP file with the data files. The file count reflects the number of files returned by the filter.

The previous queries all returned all files for a data item. But you can also query files through the same filter. In that case the data item will be returned, but only with the files selected. For example:

  • file_name LIKE '%.0.cbor' - returns all files that end with .0.cbor.

If you have an interesting query that you'd like to share with your colleagues, you can just share the URL. The query is already added to it automatically.

3.1 All available fields

These are all the available fields in the query interface:

  • dataset - Dataset.

  • bucket_id - Bucket ID.

  • bucket_name - Bucket name.

  • bucket_path - Path of the data item within the bucket.

  • id - Data item ID.

  • name - Data item name.

  • total_file_count - Number of files for the data item.

  • total_file_size - Total size of all files for the data item.

  • created - When the data item was created.

  • metadata->key - Any item listed under 'metadata'.

  • file_name - Name of a file.

  • file_names - All filenames in the data item, that you can use in conjunction with CONTAINS. E.g. find all items with file X, but not file Y: file_names CONTAINS 'x' AND not file_names CONTAINS 'y'.

4. Importing data in an Edge Impulse project

If you have an interesting subset of data and want to train a machine learning model on this data, you can export the data into a new Edge Impulse project. This makes a copy of the data that you can then manipulate and explore like any other project, or share with outside researchers without any risk of leaking the rest of your dataset. Data is also stripped of any metadata, like the name of the data item, or any metadata that you attached to the files.

Edge Impulse data acquisition format

This section only applies if your data is already in the Edge Impulse data acquisition format (CBOR and JSON both work), or in WAV, JPG or PNG format. For other data you'll need to use a transformation block before being able to create a new project.

Let's put this in practice. You need to select some data for the new project. Go to the Data page and set the filter to:

dataset = 'Gestures study'

Then, select all items and click Transform selected (70 files).

This redirects you to the 'Transformation job' page. Under 'Import data into', select 'Project'. Under 'Project' select '+ Create new project', and enter a name. Next, select the category. This determines whether this is 'training' or 'testing' data, or that the data should be split up between these two categories. For now, select 'Split'. Then, click Create project to import the data.

This pulls down the gesture data from the bucket, and then imports it into the project. You don't need to stay on the page, the job will continue running in the background.

If you now go back to your project you have a copy of the organizational dataset at your disposal, ready to build your next machine learning model. You can also add colleagues or outside collaborators to this specific project by going to Dashboard, and selecting the "Collaborators" widget. And if you want to do another experiment with the same data, you can easily create a new project with the same flow without any fear of changing the source data. 🚀

Any questions, or interested in the enterprise version of Edge Impulse? Contact us for more information.

Appendix: advanced features

Checklists

You can optionally show a check mark in the list of data items, and show a check list for data items. This can be used to quickly view which data items are complete (if you need to capture data from multiple sources) or whether items are in the right format.

Checklists are driven by the metadata for a data item. Set the ei_check metadata item to either 0 or 1 to show a check mark in the list. Set an ei_check_KEYNAME metadata item to 0 or 1 to show the item in the check list.

To query for items with or without a check mark, use a filter in the form of metadata->ei_check = 1 (or = 0 for items without a check mark).

To make it easy to create these lists on the fly you can set these metadata items directly from a transformation block.

EON Tuner

The EON Tuner helps you find and select the best embedded machine learning model for your application within the constraints of your target device. The EON Tuner analyzes your input data, potential signal processing blocks, and neural network architectures - and gives you an overview of possible model architectures that will fit your chosen device's latency and memory requirements.

Getting Started

First, make sure you have an audio, motion, or image classification project in your Edge Impulse account to run the EON Tuner with. No projects yet? Follow one of our tutorials to get started:

  • Continuous motion recognition.

  • Responding to your voice.

  • Recognize sounds from audio.

  • Adding sight to your sensors.

  1. Log in to the Edge Impulse Studio and open a project.

  2. Select the EON Tuner tab.

  3. Click the Configure target button to select your model’s dataset category, target device, and time per inference (in ms).

  4. Click on the Dataset category dropdown and select the use case unique to your motion, audio, or image classification project.

  5. Click Save and then select Start EON Tuner

  6. Wait for the EON Tuner to finish running, then click the Select button next to your preferred DSP/neural network model architecture to save as your project’s primary blocks:

  7. Click on the DSP and Neural Network tabs within your Edge Impulse project to see the parameters the EON Tuner has generated and selected for your dataset, use case, and target device hardware

  8. Now you’re ready to deploy your automatically configured Edge Impulse model to your target edge device!

Features

The EON Tuner performs end-to-end optimizations, from the digital signal processing (DSP) algorithm to the machine learning model, helping you find the ideal trade-off between these two blocks to achieve optimal performance on your target hardware. The unique features and options available in the EON Tuner are described below.

Targets

The Tuner can directly analyze the performance on any device supported by Edge Impulse. If you are targeting a different device, select a similar class of processor or leave the target as the default. You'll have the opportunity to further refine the EON Tuner results to fit your specific target and application later.

Dataset Categories

The EON Tuner currently supports three different types of sensor data: motion, images, and audio. From these, the tuner can optimize for different types of common applications or dataset categories.

Input

The EON Tuner evaluates different configurations for creating samples from your dataset. For time series data, the tuner tests different sample window sizes and increment amounts. For image data, the tuner compares different image resolutions.

Processing Blocks

Depending on the selected dataset category, the EON Tuner considers a variety of processing blocks when evaluating model architectures. The EON Tuner will test different parameters and configurations of these processing blocks.

Learning Blocks

Different model architectures, hyper-parameters, and even data augmentation techniques are evaluated by the EON Tuner. The tuner combines these different neural networks with the processing and input options described above, and then compares the end-to-end performance.

Tuner Operation and Results

During operation, the tuner first generates many variations of input, processing, and learning blocks. It then schedules training and testing of each variation. The top level progress bar shows tests started (blue stripes) as well as completed tests (solid blue), relative to the total number of generated variations.

Detailed logs of the run are also available. To view them, click on the button next to Target shown below.

As results become available, they will appear in the tuner window. Each result shows the on-device performance and accuracy, as well as details on the input, processing, and learning blocks used. Clicking Select sets a result as your project's primary impulse, and from there you can view or modify the design in the Impulse Design tabs.

Filters

While the EON Tuner is running, you can filter results by job status, processing block, and learning block categories.

Views

View options control what information is shown in the tuner results. You can choose which dataset is used when displaying model accuracy, as well as whether to show the performance of the unoptimized float32, or the quantized int8, version of the neural network.

Sort

Sorting options are available to find the parameters best suited to a given application or hardware target. For constrained devices, sort by RAM to show options with the smallest memory footprint, or sort by latency to find models with the lowest number of operations per inference. It's also possible to sort by label, finding the best model for identifying a specific class.

The selected sorting criteria will be shown in the top left corner of each result.

Arduino Nicla Sense ME

The Nicla Sense ME is a tiny, low-power tool that sets a new standard for intelligent sensing solutions. With the simplicity of integration and scalability of the Arduino ecosystem, the board combines four state-of-the-art sensors from Bosch Sensortec:

  • BHI260AP motion sensor system with integrated AI.

  • BMM150 magnetometer.

  • BMP390 pressure sensor.

  • BME688 4-in-1 gas sensor with AI and integrated high-linearity, as well as high-accuracy pressure, humidity and temperature sensors.

Designed to easily analyze motion and the surrounding environment – hence the “M” and “E” in the name – it measures rotation, acceleration, pressure, humidity, temperature, air quality and CO2 levels by introducing completely new Bosch Sensortec sensors on the market.

Its tiny size and robust design make it suitable for projects that need to combine sensor fusion and AI capabilities on the edge, thanks to a strong computational power and low-consumption combination that can even lead to standalone applications when battery operated.

The Arduino Nicla Sense ME is available for around 55 USD from the Arduino Store.

Installing dependencies

To set this device up in Edge Impulse, you will need to install the following software:

  1. Edge Impulse CLI.

  2. Arduino CLI.

    • Here's an instruction video for Windows.

    • The Arduino website has instructions for macOS and Linux.

  3. On Linux:

    • GNU Screen: install for example via sudo apt install screen.

Problems installing the CLI?

See the Installation and troubleshooting guide.

Connecting to Edge Impulse

With all the software in place it's time to connect the development board to Edge Impulse.

1. Connect the development board to your computer

Use a micro-USB cable to connect the development board to your computer.

2. Update the firmware

The development board does not come with the right firmware yet. To update the firmware:

  1. Download the latest Edge Impulse ingestion sketch.

  2. Open the nicla_sense_ingestion.ino sketch in a text editor or the Arduino IDE.

  3. For data ingestion into your Edge Impulse project, at the top of the file, select 1 or multiple sensors by un-commenting the defines and select a desired sample frequency (in Hz). For example, for the Environmental sensors:

/**
 * @brief   Sample & upload data to Edge Impulse Studio.
 * @details Select 1 or multiple sensors by un-commenting the defines and select
 * a desired sample frequency. When this sketch runs, you can see raw sample
 * values outputted over the serial line. Now connect to the studio using the
 * `edge-impulse-data-forwarder` and start capturing data
 */
// #define SAMPLE_ACCELEROMETER
// #define SAMPLE_GYROSCOPE
// #define SAMPLE_ORIENTATION
#define SAMPLE_ENVIRONMENTAL
// #define SAMPLE_ROTATION_VECTOR

/**
 * Configure the sample frequency. This is the frequency used to send the data
 * to the studio regardless of the frequency used to sample the data from the
 * sensor. This differs per sensors, and can be modified in the API of the sensor
 */
#define FREQUENCY_HZ        10

  4. Then, from your sketch's directory, run the Arduino CLI to compile:

cd arduino-nicla-sense-me/nicla_sense_ingestion
arduino-cli compile --fqbn arduino:mbed_nicla:nicla_sense --output-dir .

  5. Then flash to your Nicla Sense using the Arduino CLI:

arduino-cli upload --fqbn arduino:mbed_nicla:nicla_sense

  6. Wait until flashing is complete, and press the RESET button once to launch the new firmware.

3. Data forwarder

From a command prompt or terminal, run:

edge-impulse-data-forwarder

This will start a wizard which will ask you to log in, and choose an Edge Impulse project. You will also name your sensor's axes (depending on which sensor you selected in your compiled nicla_sense_ingestion.ino sketch). If you want to switch projects/sensors run the command with --clean. Please refer to the following table for the names used for each axis corresponding to the type of sensor:

Sensor                          Axis names
#define SAMPLE_ACCELEROMETER    accX, accY, accZ
#define SAMPLE_GYROSCOPE        gyrX, gyrY, gyrZ
#define SAMPLE_ORIENTATION      heading, pitch, roll
#define SAMPLE_ENVIRONMENTAL    temperature, barometer, humidity, gas
#define SAMPLE_ROTATION_VECTOR  rotX, rotY, rotZ, rotW

Note: These exact axis names are required for the Edge Impulse Arduino library deployment example applications for the Nicla Sense.

4. Verifying that the device is connected

That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.

Next steps: building a machine learning model

With everything set up you can now build your first machine learning model with the Edge Impulse continuous motion recognition tutorial.

Looking to connect different sensors? Use the nicla_sense_ingestion sketch and the Edge Impulse Data forwarder to easily send data from any sensor on the Nicla Sense into your Edge Impulse project.

Deploying back to device

With the impulse designed, trained and verified you can deploy this model back to your Arduino Nicla Sense ME. This makes the model run without an internet connection, minimizes latency, and runs with minimum power consumption. Edge Impulse can package the complete impulse - including the signal processing code, neural network weights, and classification code - up into a single library that you can run on your development board.

Use the Running your impulse locally: On your Arduino tutorial and select one of the Nicla Sense examples.


Deployment metadata spec

This is the specification for the deployment-metadata.json file from Building deployment blocks.

export interface DeploymentMetadataV1 {
    version: 1;
    // Global deployment counter
    deployCounter: number;
    // The output classes (for classification)
    classes: string[];
    // The number of samples to be taken per inference (e.g. 100Hz data, 3 axis, 2 seconds => 200)
    samplesPerInference: number;
    // Number of axes (e.g. 100Hz data, 3 axis, 2 seconds => 3)
    axesCount: number;
    // Frequency of the data
    frequency: number;
    // TFLite models (already converted and quantized)
    tfliteModels: {
        // Information about the model type, e.g. quantization parameters
        details: KerasModelIODetails;
        // Name of the input tensor
        inputTensor: string;
        // Name of the output tensor
        outputTensor: string;
        // Path of the model on disk
        modelPath: string;
        // Calculated arena size when running TFLite in interpreter mode
        arenaSize: number;
        // Number of values to be passed into the model
        inputFrameSize: number;
    }[];
    // Project information
    project: {
        name: string;
        // API key, only set for deploy blocks with privileged flag and development keys set
        apiKey: string | undefined;
    };
    // Impulse information
    impulse: DeploymentMetadataImpulse;
    // Sensor guess based on the input
    sensor: 'camera' | 'microphone' | 'accelerometer' | undefined;
    // Folder locations
    folders: {
        // Input files are here, the input folder contains 'edge-impulse-sdk', 'model-parameters', 'tflite-model'
        input: string;
        // Write your output file here
        output: string;
    };
}

export type ResizeEnum = 'squash' | 'fit-short' | 'fit-long' | 'crop';
export type CropAnchorEnum = 'top-left' | 'top-center' | 'top-right' |
                             'middle-left' | 'middle-center' | 'middle-right' |
                             'bottom-left' | 'bottom-center' | 'bottom-right';

export interface CreateImpulseStateInput {
    id: number;
    type: 'time-series' | 'image';
    name: string;
    title: string;
    windowSizeMs?: number;
    windowIncreaseMs?: number;
    imageWidth?: number;
    imageHeight?: number;
    resizeMode?: ResizeEnum;
    cropAnchor?: CropAnchorEnum;
}

export interface CreateImpulseStateDsp {
    id: number;
    type: string | 'custom';
    name: string;
    axes: string[];
    title: string;
    customUrl?: string;
}

export interface CreateImpulseStateLearning {
    id: number;
    type: string;
    name: string;
    dsp: number[];
    title: string;
}

export interface CreateImpulseState {
    inputBlocks: CreateImpulseStateInput[];
    dspBlocks: CreateImpulseStateDsp[];
    learnBlocks: CreateImpulseStateLearning[];
}

export interface DSPConfig {
    options: { [k: string ]: string | number | boolean };
}

export type DSPFeatureMetadataOutput = {
    type: 'image',
    shape: { width: number, height: number, channels: number }
} | {
    type: 'spectrogram',
    shape: { width: number, height: number }
} | {
    type: 'flat',
    shape: { width: number }
};

export interface DSPFeatureMetadata {
    created: Date;
    dspConfig: DSPConfig;
    labels: string[];   // the training labels
    featureLabels: string[];
    valuesPerAxis: number;
    windowCount: number;
    windowSizeMs: number;
    windowIncreaseMs: number;
    frequency: number;
    includedSamples: { id: number, windowCount: number }[];
    outputConfig: DSPFeatureMetadataOutput;
}

/**
 * Information necessary to quantize or dequantize the contents of a tensor
 */
export type KerasModelTensorDetails = {
    dataType: 'float32'
} | {
    dataType: 'int8';
    // Scale and zero point are used only for quantized tensors
    quantizationScale?: number;
    quantizationZeroPoint?: number;
};

export type KerasModelTypeEnum = 'int8' | 'float32' | 'requiresRetrain';

/**
 * Information required to process a model's input and output data
 */
export interface KerasModelIODetails {
    modelType: KerasModelTypeEnum;
    inputs: KerasModelTensorDetails[];
    outputs: KerasModelTensorDetails[];
}

export interface DeploymentMetadataImpulse {
    inputBlocks: CreateImpulseStateInput[];
    dspBlocks: (CreateImpulseStateDsp & { metadata: DSPFeatureMetadata | undefined })[];
    learnBlocks: CreateImpulseStateLearning[];
}

C++ library

The provided methods package all your signal processing blocks, configuration and learning blocks up into a single package. You can include this package in your own application to run the impulse locally.

C++ Libraries

Impulses can be deployed as a C++ library. The library does not have any external dependencies and can be built with any C++11 compiler, see Running your impulse as a C++ library.

We have end-to-end guides for:

  • Running your impulse on your desktop

  • Running your impulse on an Mbed-enabled development board

  • Running your impulse on Zephyr on a Nordic semiconductor development board

  • Running your impulse in Simplicity Studio on the TB Sense 2

  • Running your impulse on STM32 using STM32Cube.MX

  • Running your impulse on the Himax WE-I Plus

  • Running your impulse on the Espressif ESP-EYE (ESP32)

  • Running your impulse on the Raspberry Pi RP2040

  • Running your impulse on the Sony Spresense

  • Running your impulse on the Syntiant TinyML Board

  • Running your impulse on the TI LaunchPad using GCC and the SimpleLink SDK

We also have tutorials for:

Using Arduino IDE

  • Running your impulse in an Arduino sketch

Using OpenMV IDE

  • Running your impulse on the OpenMV Cam H7 Plus

On Linux-based devices

  • Running your impulse on a Linux system with our C++, Node.js, Python or Go SDKs.

Using WebAssembly

  • Running your impulse in Node.js

  • Running your impulse in the browser

These tutorials show you how to run your impulse, but you'll need to hook in your sensor data yourself. We have a number of examples on how to do that in the Data forwarder documentation, or you can use the full firmware for any of the fully supported development boards as a starting point - they have everything (including sensor integration) already hooked up. Or keep reading for documentation about the sensor format and inputs that we expect.

Did you know?

You can build binaries for supported development boards straight from the studio. These will include your full impulse. See Edge Impulse Firmwares.

Input to the run_classifier function

The input to the run_classifier function is always a signal_t structure with raw sensor values. This structure has two properties:

  • total_length - the total number of values. This should be equal to EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE (from model_metadata.h). E.g. if you have 3 sensor axes, 100Hz sensor data, and 2 seconds of data this should be 600.

  • get_data - a function that retrieves slices of data required by the DSP process. This is used in some DSP algorithms (like all audio-based ones) to page in the required data, and thus saves memory. Using this function you can store (for example) the raw data in flash or external RAM, and page it in when required.

For example, this is how you would page in data from flash:

// this is placed in flash
const float features[300] = { 0 };

// function that pages the data in
int raw_feature_get_data(size_t offset, size_t length, float *out_ptr) {
    memcpy(out_ptr, features + offset, length * sizeof(float));
    return 0;
}

int main() {
    // construct the signal
    signal_t signal;
    signal.total_length = 300;
    signal.get_data = &raw_feature_get_data;
    // ... rest of the application

If you have your data already in RAM you can use the signal_from_buffer function to construct the signal:

float features[30] = { 0 };
signal_t signal;
numpy::signal_from_buffer(features, 30, &signal);
// ... rest of the application

The get_data function expects floats to be returned, but you can use the int8_to_float and int16_to_float helper functions if your own buffers are int8_t or int16_t (useful to save memory). E.g.:

const int16_t features[300] = { 0 };

int raw_feature_get_data(size_t offset, size_t length, float *out_ptr) {
    return numpy::int16_to_float(features + offset, out_ptr, length);
}

int main() {
    signal_t signal;
    signal.total_length = 300;
    signal.get_data = &raw_feature_get_data;
    // ... rest of the application
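Once the signal is constructed, it is passed to run_classifier together with a result structure. The following is a minimal sketch of that call, assuming the standard names from the exported C++ library (ei_run_classifier.h, ei_impulse_result_t, EI_IMPULSE_OK, EI_CLASSIFIER_LABEL_COUNT) - check the headers in your exported library if any names differ:

#include <cstdio>
#include <cstring>
#include "edge-impulse-sdk/classifier/ei_run_classifier.h"

// raw features, e.g. paged in from flash or filled by your sensor driver
static const float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE] = { 0 };

static int raw_feature_get_data(size_t offset, size_t length, float *out_ptr) {
    memcpy(out_ptr, features + offset, length * sizeof(float));
    return 0;
}

int main() {
    // construct the signal as shown above
    signal_t signal;
    signal.total_length = EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE;
    signal.get_data = &raw_feature_get_data;

    // run the DSP pipeline and the model (set the last argument to true for debug output)
    ei_impulse_result_t result = { 0 };
    EI_IMPULSE_ERROR res = run_classifier(&signal, &result, false);
    if (res != EI_IMPULSE_OK) {
        printf("run_classifier failed (%d)\n", (int)res);
        return 1;
    }

    // print the prediction for every class
    for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
        printf("%s: %.5f\n", result.classification[ix].label,
               result.classification[ix].value);
    }
    return 0;
}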

Signal layout for time-series data

Signals are always a flat buffer, so if you have multiple sensor data you'll need to flatten it. E.g. for sensor data with three axes:

Input data:
Axis 1:  9.8,  9.7,  9.6
Axis 2:  0.3,  0.4,  0.5
Axis 3: -4.5, -4.6, -4.8

Signal: 9.8, 0.3, -4.5, 9.7, 0.4, -4.6, 9.6, 0.5, -4.8
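As an illustration only (not part of the generated library), this is one way to interleave three per-axis buffers into the flat layout shown above:

#include <cstddef>

// hypothetical per-axis buffers, 3 values per axis as in the example above
const float axis1[] = {  9.8f,  9.7f,  9.6f };
const float axis2[] = {  0.3f,  0.4f,  0.5f };
const float axis3[] = { -4.5f, -4.6f, -4.8f };

// interleave the axes into the flat buffer that the signal expects
void flatten(float *out, size_t values_per_axis) {
    for (size_t ix = 0; ix < values_per_axis; ix++) {
        out[ix * 3 + 0] = axis1[ix];
        out[ix * 3 + 1] = axis2[ix];
        out[ix * 3 + 2] = axis3[ix];
    }
}

// float flat[9];
// flatten(flat, 3);
// => 9.8, 0.3, -4.5, 9.7, 0.4, -4.6, 9.6, 0.5, -4.8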

Signal layout for image data

The signal for image data is also flattened, starting with row 1, then row 2 etc. And every pixel is a single value in HEX format (RRGGBB). E.g.:

Input data (3x2 pixel image):
BLACK RED  RED
GREEN BLUE WHITE

Signal: 0x000000, 0xFF0000, 0xFF0000, 0x00FF00, 0x0000FF, 0xFFFFFF

We do have an end-to-end example on constructing a signal from a frame buffer in RGB565 format, which is easily adaptable to other image formats, see: example-signal-from-rgb565-frame-buffer.
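If your camera delivers interleaved RGB888 data instead, a conversion along these lines (an illustrative sketch, not part of the generated library) produces the flat, packed layout described above:

#include <cstddef>
#include <cstdint>

// pack an interleaved RGB888 frame buffer (3 bytes per pixel) into one
// value per pixel, encoded as 0xRRGGBB, ready to back a signal_t buffer
void rgb888_to_features(const uint8_t *rgb, float *out, size_t pixel_count) {
    for (size_t ix = 0; ix < pixel_count; ix++) {
        uint32_t packed = ((uint32_t)rgb[ix * 3 + 0] << 16) |   // R
                          ((uint32_t)rgb[ix * 3 + 1] << 8)  |   // G
                           (uint32_t)rgb[ix * 3 + 2];           // B
        out[ix] = (float)packed;
    }
}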

Directly quantize image data

If you're doing image classification and have a quantized model, the data is automatically quantized when reading the data from the signal to save memory. This is automatically enabled when you call run_impulse. To control the size of the buffer that's used to read from the signal in this case you can set the EI_DSP_IMAGE_BUFFER_STATIC_SIZE macro (which also allocates the buffer statically).

Static allocation

To statically allocate the neural network model, set this macro:

  • EI_CLASSIFIER_ALLOCATION_STATIC=1

Additionally we support full static allocation for quantized image models. To do so set this macro:

  • EI_DSP_IMAGE_BUFFER_STATIC_SIZE=1024

Static allocation is not supported for other DSP blocks at the moment.
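These macros are usually passed as compile-time definitions in your build system (for example as -D flags). As a sketch, and assuming they are only set in the translation unit that includes the classifier, defining them before the first include has the same effect:

// equivalent to -DEI_CLASSIFIER_ALLOCATION_STATIC=1 -DEI_DSP_IMAGE_BUFFER_STATIC_SIZE=1024
#define EI_CLASSIFIER_ALLOCATION_STATIC   1
#define EI_DSP_IMAGE_BUFFER_STATIC_SIZE   1024

#include "edge-impulse-sdk/classifier/ei_run_classifier.h"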


Sony's Spresense

Sony's Spresense is a small, but powerful development board with a 6 core Cortex-M4F microcontroller and integrated GPS, and a wide variety of add-on modules including an extension board with headphone jack, SD card slot and microphone pins, a camera board, a sensor board with accelerometer, pressure, and geomagnetism sensors, and Wi-Fi board - and it's fully supported by Edge Impulse. You'll be able to sample raw data, build models, and deploy trained machine learning models directly from the studio.

To get started with the Sony Spresense and Edge Impulse you'll need:

  • The Spresense main development board - available for around 55 USD from a wide range of distributors.

  • The Spresense extension board - to connect external sensors.

  • A micro-SD card to store samples.

In addition you'll want some sensors, these ones are fully supported (note that you can collect data from any sensor on the Spresense with the data forwarder):

  • For image models: the Spresense CXD5602PWBCAM1 camera add-on.

  • For accelerometer models: the Spresense Sensor EVK-70 add-on.

  • For audio models: an electret microphone and a 2.2K Ohm resistor, wired to the extension board's audio channel A, following this schema (picture here).

    • Note: for audio models you must also have a FAT formatted SD card for the extension board, with the Spresense's DSP files included in a BIN folder on the card, see instructions here and a screenshot of the SD card directory here.

The Edge Impulse firmware for this development board is open source and hosted on GitHub: edgeimpulse/firmware-sony-spresense.

The Spresense product family.

Installing dependencies

To set this device up in Edge Impulse, you will need to install the following software:

  1. Edge Impulse CLI.

  2. On Linux:

    • GNU Screen: install for example via sudo apt install screen.

Problems installing the CLI?

See the Installation and troubleshooting guide.

Connecting to Edge Impulse

With all the software in place it's time to connect the development board to Edge Impulse.

1. Connect the optional camera, sensor, extension board, Wi-Fi add-ons, and SD card

Spresense main board with attached camera, sensor add-on, Wi-Fi add-on, and extension board.

Make sure the SD card is formatted as FAT before inserting it into the Spresense.

2. Connect the development board to your computer

Use a micro-USB cable to connect the main development board (not the extension board) to your computer.

3. Update the bootloader and the firmware

The development board does not come with the right firmware yet. To update the firmware:

  1. Install Python 3.7 or higher.

  2. Download the latest Edge Impulse firmware, and unzip the file.

  3. Open the flash script for your operating system (flash_windows.bat, flash_mac.command or flash_linux.sh) to flash the firmware.

  4. Wait until flashing is complete. The on-board LEDs should stop blinking to indicate that the new firmware is running.

4. Setting keys

From a command prompt or terminal, run:

edge-impulse-daemon

Mac: Device choice

If you have a choice of serial ports and are not sure which one to use, pick /dev/tty.SLAB_USBtoUART or /dev/cu.usbserial-*

This will start a wizard which will ask you to log in and choose an Edge Impulse project. If you want to switch projects run the command with --clean.

Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI. See this blog post for more information.

5. Verifying that the device is connected

That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.

Device connected to Edge Impulse.

Next steps: building a machine learning model

With everything set up you can now build your first machine learning model with these tutorials:

  • Responding to your voice

  • Recognize sounds from audio

  • Adding sight to your sensors

  • Object detection

  • Counting objects using FOMO

Looking to connect different sensors? The Data forwarder lets you easily send data from any sensor into Edge Impulse.

Troubleshooting

Error when flashing

If you see:

ValueError: dlsym(RTLD_DEFAULT, kIOMasterPortDefault): symbol not found

Upgrade pyserial:

pip3 install --upgrade pyserial

Daemon does not start

If the edge-impulse-daemon or edge-impulse-run-impulse commands do not start it might be because of an error interacting with the SD card or because your board has an old version of the bootloader. To see the debug logs, run:

edge-impulse-run-impulse --raw

And press the RESET button on the board. If you see Welcome to nash you'll need to update the bootloader. To do so:

  1. Install and launch the Arduino IDE.

  2. Go to Preferences and under 'Additional Boards Manager URLs' add https://github.com/sonydevworld/spresense-arduino-compatible/releases/download/generic/package_spresense_index.json (if there's already text in this text box, add a , before adding the new URL).

  3. Then go to Tools > Boards > Board manager, search for 'Spresense' and click Install.

  4. Select the right board via: Tools > Boards > Spresense boards > Spresense.

  5. Select your serial port via: Tools > Port and selecting the serial port for the Spresense board.

  6. Select the Spresense programmer via: Tools > Programmer > Spresense firmware updater.

  7. Update the bootloader via Tools > Burn bootloader.

Then update the firmware again (from step 3: Update the bootloader and the firmware).

Raspberry Pi 4

The Raspberry Pi 4 is a versatile Linux development board with a quad-core processor running at 1.5GHz, a GPIO header to connect sensors, and the ability to easily add an external microphone or camera - and it's fully supported by Edge Impulse. You'll be able to sample raw data, build models, and deploy trained machine learning models directly from the Studio. The Raspberry Pi 4 is available from 35 USD from a wide range of distributors, including DigiKey.

In addition to the Raspberry Pi 4 we recommend that you also add a camera and / or a microphone. Most popular USB webcams and the Camera Module work fine on the development board out of the box.

Raspberry Pi 4

1. Connecting to your Raspberry Pi

Headless

You can set up your Raspberry Pi without a screen. To do so:

Raspberry Pi OS - Bullseye release

The latest release of the Raspberry Pi OS requires Edge Impulse Linux CLI version >= 1.3.0.

  1. Flash the Raspberry Pi OS image to an SD card.

  2. After flashing the OS, find the boot mass-storage device on your computer, and create a new file called wpa_supplicant.conf in the boot drive. Add the following code:

ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1
country=<Insert 2 letter ISO 3166-1 country code here>

network={
 ssid="<Name of your wireless LAN>"
 psk="<Password for your wireless LAN>"
}

(Replace the fields marked with <> with your WiFi credentials)

  3. Next, create a new file called ssh in the boot drive. You can leave this file empty.

  4. Plug the SD card into your Raspberry Pi 4, and let the device boot up.

  5. Find the IP address of your Raspberry Pi. You can either do this through the DHCP logs in your router, or by scanning your network. E.g. on macOS and Linux via:

$ arp -na | grep -i dc:a6:32
? (192.168.1.19) at dc:a6:32:f5:b6:7e on en0 ifscope [ethernet]

Here 192.168.1.19 is your IP address.

  6. Connect to the Raspberry Pi over SSH. Open a terminal window and run:

ssh pi@192.168.1.19

  7. Log in with password raspberry.

With a screen

If you have a screen and a keyboard / mouse attached to your Raspberry Pi:

  1. Flash the Raspberry Pi OS image to an SD card.

  2. Plug the SD card into your Raspberry Pi 4, and let the device boot up.

  3. Connect to your WiFi network.

  4. Click the 'Terminal' icon in the top bar of the Raspberry Pi.

2. Installing dependencies

To set this device up in Edge Impulse, run the following commands:

sudo apt update
sudo apt upgrade
curl -sL https://deb.nodesource.com/setup_12.x | sudo bash -
sudo apt install -y gcc g++ make build-essential nodejs sox gstreamer1.0-tools gstreamer1.0-plugins-good gstreamer1.0-plugins-base gstreamer1.0-plugins-base-apps
npm config set user root && sudo npm install edge-impulse-linux -g --unsafe-perm

If you have a Raspberry Pi Camera Module, you also need to activate it first. Run the following command:

sudo raspi-config

Use the cursor keys to select and open Interfacing Options, and then select Camera and follow the prompt to enable the camera. Then reboot the Raspberry.

Install with Docker

If you want to install Edge Impulse on your Raspberry Pi using Docker you can run the following commands:

docker run -it --rm --privileged --network=host -v /dev/:/dev/ --env UDEV=1 --device /dev:/dev --entrypoint /bin/bash ubuntu:20.04

Once on the Docker container, run:

apt-get update
apt-get install wget -y
wget https://deb.nodesource.com/setup_12.x
bash setup_12.x
apt install -y gcc g++ make build-essential nodejs sox gstreamer1.0-tools gstreamer1.0-plugins-good gstreamer1.0-plugins-base gstreamer1.0-plugins-base-apps vim v4l-utils usbutils udev
apt-get install npm -y
npm config set user root
npm install edge-impulse-linux -g --unsafe-perm

and

/lib/systemd/systemd-udevd --daemon

You should now be able to run Edge Impulse CLI tools from the container running on your Raspberry.

Note that this will only work using an external USB camera.

3. Connecting to Edge Impulse

With all software set up, connect your camera and microphone to your Raspberry Pi (see 'Next steps' further on this page if you want to connect a different sensor), and run:

edge-impulse-linux

This will start a wizard which will ask you to log in, and choose an Edge Impulse project. If you want to switch projects run the command with --clean.

4. Verifying that your device is connected

That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.

Device connected to Edge Impulse.

Next steps: building a machine learning model

With everything set up you can now build your first machine learning model with these tutorials:

  • Responding to your voice

  • Recognize sounds from audio

  • Adding sight to your sensors

  • Object detection

  • Counting objects using FOMO

Looking to connect different sensors? Our Linux SDK lets you easily send data from any sensor and any programming language (with examples in Node.js, Python, Go and C++) into Edge Impulse.

Deploying back to device

To run your impulse locally, just connect to your Raspberry Pi again, and run:

edge-impulse-linux-runner

This will automatically compile your model with full hardware acceleration, download the model to your Raspberry Pi, and then start classifying. Our Linux SDK has examples on how to integrate the model with your favourite programming language.

Image model?

If you have an image model then you can get a peek of what your device sees by being on the same network as your device, and finding the 'Want to see a feed of the camera and live classification in your browser' message in the console. Open the URL in a browser and both the camera feed and the classification are shown:

Live feed with classification results

Nordic Semi Thingy:91

The Nordic Semiconductor Thingy:91 is an easy-to-use battery-operated prototyping platform for cellular IoT using LTE-M, NB-IoT and GPS. It is ideal for creating Proof-of-Concept (PoC), demos and initial prototypes in your cIoT development phase. Thingy:91 is built around the nRF9160 SiP and is certified for a broad range of LTE bands globally, meaning the Nordic Thingy:91 can be used just about anywhere in the world. There is an nRF52840 multiprotocol SoC on the Thingy:91. This offers the option of adding Bluetooth Low Energy connectivity to your project.

Nordic's Thingy:91 is fully supported by Edge Impulse. You'll be able to sample raw data, build models, and deploy trained machine learning models directly from the studio. Thingy:91 is available for around 120 USD from a variety of distributors.

The Edge Impulse firmware for this development board is open source and hosted on GitHub: edgeimpulse/firmware-nordic-thingy91.

Thingy:91

Installing dependencies

To set this device up in Edge Impulse, you will need to install the following software:

  1. nRF Connect for Desktop v3.7.1 - install exactly version 3.7.1; follow the instructions below to downgrade an existing installation or newly install v3.7.1:

    • Instructions on how to downgrade existing nRF Connect installations to v3.7.1

  2. Edge Impulse CLI.

  3. On Linux:

    • GNU Screen: install for example via sudo apt install screen.

Problems installing the CLI?

See the Installation and troubleshooting guide.

Updating the firmware

Before you start a new project, you need to update the Thingy:91 firmware to our latest build.

1. Connect the development board to your computer

Use a micro-USB cable to connect the development board to your computer. Then, set the power switch to 'on'.

2. Download the firmware

Download the latest Edge Impulse firmware. The extracted archive contains the following files:

  1. firmware.hex: the Edge Impulse firmware image for the nRF9160 SoC, and

  2. connectivity-bridge.hex: a connectivity application for the nRF52840 that you only need on older boards (hardware version < 1.4)

3. Update the firmware

  1. Open nRF Connect for Desktop and launch the Programmer application.

  2. Scroll down in the menu on the right and make sure Enable MCUboot is selected.

Enable MCUboot

  3. Switch off the Nordic Thingy:91.

  4. Press the multi-function button (SW3) while switching SW1 to the ON position.

Switches

  5. In the Programmer navigation bar, click Select device.

  6. In the menu on the right, click Add HEX file > Browse, and select the firmware.hex file from the firmware previously downloaded.

  7. Scroll down in the menu on the right to Device and click Write:

Flash the firmware

  8. In the MCUboot DFU window, click Write. When the update is complete, a Completed successfully message appears.

  9. You can now disconnect the board.

Thingy:91 hardware version < 1.4.0

Updating the firmware with older hardware versions may fail. Moreover, even if the update works, the device may later fail to connect to Edge Impulse Studio:

[SER] Serial is connected, trying to read config...
[SER] Failed to get info off device Timeout when waiting for >  (timeout: 5000) onConnected

In these cases, you will also need to flash the connectivity-bridge.hex onto the nRF52840 in the Thingy:91. Follow the steps here to update the nRF52840 SOC application with the connectivity-bridge.hex file through USB.

If this method doesn't work, you will need to flash both hex files using an external probe.

Connecting to Edge Impulse

With all the software in place it's time to connect the development board to Edge Impulse. From a command prompt or terminal, run:

edge-impulse-daemon

This starts a wizard which asks you to log in and choose an Edge Impulse project. If you want to switch projects run the command with --clean.

The Thingy:91 exposes multiple UARTs. If prompted, choose the first one:

? Which device do you want to connect to? (Use arrow keys)
❯ /dev/tty.usbmodem14401 (Nordic Semiconductor)
  /dev/tty.usbmodem14403 (Nordic Semiconductor)

Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI. See this blog post for more information.

4. Verifying that the device is connected

That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.

Thingy:91 in Devices tab

Next steps: building a machine learning model

With everything set up you can now build your first machine learning model with this tutorial:

  • Building a continuous motion recognition system.

Looking to connect different sensors? The Data forwarder lets you easily send data from any sensor into Edge Impulse.

Espressif ESP-EYE (ESP32)

Espressif ESP-EYE (ESP32) is a compact development board based on Espressif's ESP32 chip, equipped with a 2-Megapixel camera and a microphone. ESP-EYE also offers plenty of storage, with 8 MB PSRAM and 4 MB SPI flash - and it's fully supported by Edge Impulse. You'll be able to sample raw data, build models, and deploy trained machine learning models directly from the studio. It's available for around 22 USD from Mouser and a wide range of distributors.

There are plenty of other boards built with the ESP32 chip - and of course there are custom designs utilizing the ESP32 SoM. The Edge Impulse firmware was tested with the ESP-EYE and ESP FireBeetle boards, but it is possible to modify the firmware to use it with other ESP32 designs. Read more on that in the Using with other ESP32 boards section of this documentation.

The Edge Impulse firmware for this development board is open source and hosted on GitHub: edgeimpulse/firmware-espressif-esp32.

Espressif ESP-EYE

Installing dependencies

To set this device up in Edge Impulse, you will need to install the following software:

  1. Edge Impulse CLI.

  2. Python 3.

  3. ESP Tool.

    • The ESP documentation website has instructions for macOS and Linux.

  4. On Linux:

    • GNU Screen: install for example via sudo apt install screen.

Problems installing the CLI?

See the Installation and troubleshooting guide.

Connecting to Edge Impulse

With all the software in place it's time to connect the development board to Edge Impulse.

1. Connect the development board to your computer

Use a micro-USB cable to connect the development board to your computer.

2. Update the firmware

The development board does not come with the right firmware yet. To update the firmware:

  1. Download the latest Edge Impulse firmware, and unzip the file.

  2. Open the flash script for your operating system (flash_windows.bat, flash_mac.command or flash_linux.sh) to flash the firmware.

  3. Wait until flashing is complete.

3. Setting keys

From a command prompt or terminal, run:

edge-impulse-daemon

This will start a wizard which will ask you to log in, and choose an Edge Impulse project. If you want to switch projects run the command with --clean.

Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI. See this blog post for more information.

4. Verifying that the device is connected

That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.

Device connected to Edge Impulse.

Next steps: building a machine learning model

With everything set up you can now build your first machine learning model with these tutorials:

  • Building a continuous motion recognition system.

  • Recognizing sounds from audio.

  • Responding to your voice.

  • Adding sight to your sensors.

  • Counting objects using FOMO.

Looking to connect different sensors? The Data forwarder lets you easily send data from any sensor into Edge Impulse.

Sensors available

The standard firmware supports the following sensors:

  • Camera: OV2640, OV3660, OV5640 modules from Omnivision

  • Microphone: I2S microphone on ESP-EYE (MIC8-4X3-1P0)

  • LIS3DHTR module connected to I2C (SCL pin 22, SDA pin 21)

  • Any analog sensor, connected to A0

The analog sensor and LIS3DHTR module were tested on ESP32 FireBeetle board and Grove LIS3DHTR module.

DFRobot FireBeetle ESP32

Using with other ESP32 boards

ESP32 is a very popular chip, both in community projects and in industry, due to its high performance, low price, and the large amount of documentation/support available. There are other camera-enabled development boards based on the ESP32 which can use the Edge Impulse firmware after applying certain changes, e.g.

  • AI-Thinker ESP-CAM

  • M5STACK ESP32 PSRAM Timer Camera X (OV3660)

  • M5STACK ESP32 Camera Module Development Board (OV2640)

The pins used for camera connection on different development boards are not the same, therefore you will need to change the #define here to fit your development board, compile and flash the firmware. Specifically for AI-Thinker ESP-CAM, since this board needs an external USB to TTL Serial Cable to upload the code/communicate with the board, the data transfer baud rate must be changed to 115200 here.

The analog sensor and LIS3DH accelerometer can be used on any other development board without changes, as long as the interface pins are not changed. If the I2C/ADC pins that the accelerometer/analog sensor are connected to differ from those described in the Sensors available section, you will need to change the values in the LIS3DHTR component for ESP32, compile, and flash it to your board.

Additionally, since the Edge Impulse firmware is open source and available to the public, if you have made modifications or added new sensor capabilities, we encourage you to make a PR in the firmware repository!

Arduino Nicla Vision

The Nicla Vision is a ready-to-use, standalone camera for analyzing and processing images on the Edge. Thanks to its 2MP color camera, smart 6-axis motion sensor, integrated microphone, and distance sensor, it is suitable for asset tracking, object recognition, and predictive maintenance. Some of its key features include:

  • Powerful microcontroller equipped with a 2MP color camera

  • Tiny form factor of 22.86 x 22.86 mm

  • Integrated microphone, distance sensor, and intelligent 6-axis motion sensor

  • Onboard Wi-Fi and Bluetooth® Low Energy connectivity

  • Standalone when battery-powered

  • Expand existing projects with sensing capabilities

  • Enable fast Machine Vision prototyping

  • Compatible with Nicla, Portenta, and MKR products

Its exceptional capabilities are supported by a powerful STMicroelectronics STM32H747AII6 Dual ARM® Cortex® processor, combining an M7 core running at up to 480 MHz and an M4 core running at up to 240 MHz. Despite its industrial strength, it keeps energy consumption low for battery-powered standalone applications.

The Arduino Nicla Vision is available for around 95 EUR from the Arduino Store.

Arduino Nicla Vision

Installing dependencies

To set this device up in Edge Impulse, you will need to install the following software:

  1. Edge Impulse CLI.

  2. Arduino CLI.

    • Here's an instruction video for Windows.

    • The Arduino website has instructions for macOS and Linux.

  3. On Linux:

    • GNU Screen: install for example via sudo apt install screen.

Problems installing the CLI?

See the Installation and troubleshooting guide.

Connecting to Edge Impulse

With all the software in place it's time to connect the development board to Edge Impulse.

1. Connect the development board to your computer

Use a micro-USB cable to connect the development board to your computer.

2. Update the firmware

The development board does not come with the right firmware yet. To update the firmware:

  1. Download the latest Edge Impulse ingestion sketches and unzip the file.

  2. Open the nicla_vision_ingestion.ino (for IMU/proximity sensor) or nicla_vision_ingestion_mic.ino (for microphone) sketch in a text editor or the Arduino IDE.

  3. For IMU/proximity sensor data ingestion into your Edge Impulse project, at the top of the file, select 1 or multiple sensors by un-commenting the defines and select a desired sample frequency (in Hz). For example, for the accelerometer sensor:

/**
 * @brief   Sample & upload data to Edge Impulse Studio.
 * @details Select 1 or multiple sensors by un-commenting the defines and select
 * a desired sample frequency. When this sketch runs, you can see raw sample
 * values outputted over the serial line. Now connect to the studio using the
 * `edge-impulse-data-forwarder` and start capturing data
 */
#define SAMPLE_ACCELEROMETER
//#define SAMPLE_GYROSCOPE
//#define SAMPLE_PROXIMITY

/**
 * Configure the sample frequency. This is the frequency used to send the data
 * to the studio regardless of the frequency used to sample the data from the
 * sensor. This differs per sensors, and can be modified in the API of the sensor
 */
#define FREQUENCY_HZ        10

For microphone data ingestion, you do not need to change the default parameters in nicla_vision_ingestion_mic.ino sketch.

  4. Then, from your sketch's directory, run the Arduino CLI to compile:

    arduino-cli compile --fqbn arduino:mbed_nicla:nicla_vision --output-dir .
  5. Then flash to your Nicla Vision using the Arduino CLI:

    arduino-cli upload --fqbn arduino:mbed_nicla:nicla_vision

Alternatively, if you opened the sketch in Arduino IDE, you can compile and upload the sketch from there.

  6. Wait until flashing is complete, and press the RESET button once to launch the new firmware.

3a. Data forwarder (Fusion sensors)

From a command prompt or terminal, run:

edge-impulse-data-forwarder

This will start a wizard which will ask you to log in, and choose an Edge Impulse project. You will also name your sensor's axes (depending on which sensor you selected in your compiled nicla_vision_ingestion.ino sketch). If you want to switch projects/sensors run the command with --clean. Please refer to the following table for the names used for each axis corresponding to the type of sensor:

Sensor                        Axis names
#define SAMPLE_ACCELEROMETER  accX, accY, accZ
#define SAMPLE_GYROSCOPE      gyrX, gyrY, gyrZ
#define SAMPLE_PROXIMITY      cm

Note: These exact axis names are required for the Edge Impulse Arduino library deployment example applications for the Nicla Vision.

3b. Data forwarder (Microphone)

From a command prompt or terminal, run:

edge-impulse-data-forwarder --baud 1000000 --frequency 8000

This will start a wizard which will ask you to log in, and choose an Edge Impulse project. You will also name your sensor axes - in the case of the microphone, enter audio. If you want to switch projects/sensors run the command with --clean.

Note: This exact axis name is required for the Edge Impulse Arduino library deployment example application for the Nicla Vision microphone ingestion.

4. Verifying that the device is connected

That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.

Configuring the data forwarder for the Nicla Vision.
The Nicla Vision Data forwarder in the Devices tab.

Next steps: building a machine learning model

With everything set up you can now build your first machine learning model with these tutorials:

  • Building a continuous motion recognition system.

  • Recognizing sounds from audio.

Looking to connect different sensors? Use the nicla_vision_ingestion sketch and the Edge Impulse Data forwarder to easily send data from any sensor on the Nicla Vision into your Edge Impulse project.

Deploying back to device

With the impulse designed, trained and verified you can deploy this model back to your Arduino Nicla Vision. This makes the model run without an internet connection, minimizes latency, and runs with minimum power consumption. Edge Impulse can package the complete impulse - including the signal processing code, neural network weights, and classification code - up into a single library that you can run on your development board.

Use the Running your impulse locally: On your Arduino tutorial and select one of the Nicla Vision examples.

Uploader

The uploader signs local files and uploads them to the ingestion service. This is useful to upload existing data sets, or to migrate data between Edge Impulse instances. The uploader currently handles these types of files:

  1. .cbor - Files in the Edge Impulse Data Acquisition format. The uploader will not resign these files, only upload them.

  2. .json - Files in the Edge Impulse Data Acquisition format. The uploader will not resign these files, only upload them.

  3. .wav - Lossless audio files. It's recommended to use the same frequency for all files in your data set, as signal processing output might be dependent on the frequency.

  4. .jpg - Image files. It's recommended to use the same ratio for all files in your data set.

Upload data from the studio

You can now also upload data directly from the studio. Go to the Data acquisition page, and click the 'upload' icon. You can select files, the category and the label directly from here.

Uploading via the CLI

You can upload files via the Edge Impulse CLI via:

$ edge-impulse-uploader path/to/a/file.wav

You can upload multiple files in one go via:

$ edge-impulse-uploader path/to/many/*.wav

The first time you'll be prompted for a server, and your login credentials (see Edge Impulse Daemon for more information).

Category

Files are automatically uploaded to the training category, but you can override the category with the --category option. E.g.:

$ edge-impulse-uploader --category testing path/to/a/file.wav

Or set the category to split to automatically split data between training and testing sets (recommended for a balanced dataset). This is based on the hash of the file, so this is a deterministic process.

Labeling

A label is automatically inferred from the file name, see the Ingestion service documentation. You can override this with the --label option. E.g.:

$ edge-impulse-uploader --label noise path/to/a/file.wav

Clearing configuration

To clear the configuration, run:

$ edge-impulse-uploader --clean

This resets the uploader configuration and will prompt you to log in again.

API Key

You can use an API key to authenticate with:

$ edge-impulse-uploader --api-key ei_...

Note that this resets the uploader configuration and automatically configures the uploader's account and project.

Bounding boxes

If you want to upload data for object detection, the uploader can label the data for you as it uploads it. In order to do this, all you need is to create a bounding_boxes.labels file in the same folder as your image files. The contents of this file are formatted as JSON with the following structure:

{
    "version": 1,
    "type": "bounding-box-labels",
    "boundingBoxes": {
        "mypicture.jpg": [{
            "label": "jan",
            "x": 119,
            "y": 64,
            "width": 206,
            "height": 291
        }, {
            "label": "sami",
            "x": 377,
            "y": 270,
            "width": 158,
            "height": 165
        }]
    }
}

You can have multiple keys under the boundingBoxes object, one for each file name. If you have data in multiple folders, you can create a bounding_boxes.labels in each folder.

You don't need to upload bounding_boxes.labels

When uploading one or more images, we check whether a labels file is present in the same folder, and automatically attach the bounding boxes to the image.

So you can just do:

edge-impulse-uploader yourimage.jpg

or

edge-impulse-uploader *

Also note that this feature is currently only supported by the uploader, you cannot yet upload object detection data via the studio.

Let the Studio do the work for you!

Unsure about the structure of the bounding boxes file? Label some data in the studio, then export this data by selecting Dashboard > Export. The bounding_boxes.labels file will be included in the exported archive.

Upload data from OpenMV datasets

The uploader also handles data in the OpenMV dataset format. Pass in the option --format-openmv and pass the folder of your dataset in to automatically upload data. Data is automatically split between testing and training sets. E.g.:

$ edge-impulse-uploader --format-openmv path/to/your-openmv-dataset

Other options

  • --silent - omits information on startup. Still prints progress information.

  • --dev - lists development servers, use in conjunction with --clean.

  • --hmac-key <key> - set the HMAC key, only used for files that need to be signed such as wav files.

  • --concurrency <count> - number of files to upload in parallel (default: 20).

  • --progress-start-ix <index> - when set, the progress index will start at this number. Useful to split up large uploads in multiple commands while the user still sees this as one command.

  • --progress-end-ix <index> - when set, the progress index will end at this number. Useful to split up large uploads in multiple commands while the user still sees this as one command.

  • --progress-interval <interval> - when set, the uploader will not print an update for every line, but every interval period (in ms.).

  • --allow-duplicates - to avoid pollution of your dataset with duplicates, the hash of a file is checked before uploading against known files in your dataset. Enable this flag to skip this check.

Uploading large datasets

When using command line wildcards to upload large datasets you may encounter an error similar to this one:

edge-impulse-uploader *.wav
zsh: argument list too long: edge-impulse-uploader

This happens if the number of .wav files exceeds the total number of arguments allowed for a single command on your shell. You can easily work around this shell limitation by using the find command to call the uploader for manageable batches of files:

find . -name "*.wav" -print0 | xargs -0 edge-impulse-uploader

You can include any necessary flags by appending them to the xargs portion, for example if you wish to specify a category:

find training -name "*.wav" -print0 | xargs -0 edge-impulse-uploader --category training


Blocks

The blocks CLI tool creates different blocks types that are used in organizational features such as:

  • Transformation blocks - to transform large sets of data efficiently.

  • Deployment blocks - to build personalized firmware using your own data or to create custom libraries.

  • Custom DSP blocks - to create and host your custom signal processing techniques and use it directly in your projects.

  • Custom learning models - to use your custom neural networks architectures and load pretrained weights.

With the blocks CLI tool, you can create new blocks, run them locally, and push them to the Edge Impulse infrastructure so we can host them for you. Edge Impulse blocks can be written in any language, and are based on Docker containers for maximum flexibility.

As an example here, we will show how to create a transformation block.

You can create a new block by running:

$ edge-impulse-blocks init
? What is your user name or e-mail address (edgeimpulse.com)? [email protected]
? What is your password? [hidden]
? In which organization do you want to create this block? EdgeImpulse Inc.
Attaching block to organization 'EdgeImpulse Inc.'
? Choose a type of block Transformation block
? Choose an option Create a new block
? Enter the name of your block Extract voice
? Enter the description of your block Extracts voice from video files
Creating block with config: {
  name: 'Extract voice',
  type: 'transform',
  description: 'Extracts voice from video files',
  organizationId: 4
}
? Would you like to download and load the example repository (Python)? yes
Template repository fetched!
Your new block 'Extract voice' has been created in '/Users/janjongboom/repos/custom-transform-block'.
When you have finished building your transformation block, run "edge-impulse-blocks push" to update the block in Edge Impulse.

When you're done developing the block you can push it to Edge Impulse via:

$ edge-impulse-blocks push
Archiving 'custom-transform-block'...
Archiving 'custom-transform-block' OK (2 KB)

Uploading block 'Extract voice' to organization 'EdgeImpulse Inc.'...
Uploading block 'Extract voice' to organization 'EdgeImpulse Inc.' OK

Building transformation block 'Extract voice'...
INFO[0000] Retrieving image manifest python:3.7.5-stretch
INFO[0000] Retrieving image python:3.7.5-stretch

...

Building transformation block 'Extract voice' OK

Your block has been updated, go to https://studio.edgeimpulse.com/organization/4/data to run a new transformation

The metadata about the block (which organization it belongs to, block ID) is saved in .ei-block-config, which you should commit. To view this data in a convenient format, run:

$ edge-impulse-blocks info
Name: TestDataItemTransform
Description: Data item transformation example
Organization ID: 1
Not pushed
Block type: transform
Operates on: dataitem
Bucket mount points:
    - ID: 1, Mount point: /path/to/bucket

Block runner

Rather than only running custom blocks in the cloud, the edge-impulse-blocks runner command lets developers download, configure, and run custom blocks entirely on their local machine, making testing and development much faster. The options depend on the type of block being run, and they can be viewed by using the help menu:

$ edge-impulse-blocks runner -h
Usage: edge-impulse-blocks runner [options]
Run the current block locally
Options:
  --data-item <dataItem>          Tranformation block: Name of data item
  --file <filename>               File tranformation block: Name of file in data item
  --epochs <number>               Transfer learning: # of epochs to train
  --learning-rate <learningRate>  Transfer learning: Learning rate while training
  --validation-set-size <size>    Transfer learning: Size of validation set
  --input-shape <shape>           Transfer learning: List of axis dimensions. Example: "(1, 4, 2)"
  --download-data                 Transfer learning or deploy: Only download data and don't run the block
  --port <number>                 DSP: Port to host DSP block on
  --extra-args <args>             Pass extra arguments/options to the Docker container
  -h, --help                      display help for command

As seen above, the runner accepts a list of relevant option flags along with a variable number of extra arguments that get passed to the Docker container at runtime for extra flexibility. As an example, here is what happens when edge-impulse-blocks runner is used on a file transformation block:

$ edge-impulse-blocks runner --data-item item1 --file sample_1.cbor
Found data item item1 with id=1, metadata={}
Downloading file sample_1.cbor to /path/to/block/data/dataset_1/item1...
File downloaded
...

Best of all, the runner only downloads data when it isn't present locally, thus saving time and bandwidth.

$ edge-impulse-blocks runner --data-item item1 --file sample_1.cbor
Found data item item1 with id=1, metadata={}
File already present; skipping download...
...

Block structure

Transformation blocks use Docker containers, a virtualization technique which lets developers package up an application with all dependencies in a single package. Thus, every block needs at least a Dockerfile. This is a file describing how to build the container that powers the block, and it has information about the dependencies for the block - like a list of Python packages your block needs. This Dockerfile needs to declare an ENTRYPOINT: a command that needs to run when the container starts.

An example of a Python container is:

FROM python:3.7.5-stretch

WORKDIR /app

# Python dependencies
COPY requirements.txt ./
RUN pip3 --no-cache-dir install -r requirements.txt

COPY . ./

ENTRYPOINT [ "python3",  "transform.py" ]

This takes a base image with Python 3.7.5, installs all dependencies listed in requirements.txt, and finally starts a script called transform.py.

Note: Do not use a WORKDIR under /home! The /home path will be mounted in by Edge Impulse, making your files inaccessible.

Note: If you use a different programming language, make sure to use ENTRYPOINT to specify the application to execute, rather than RUN or CMD.

Besides your Dockerfile you'll also need the application files, in the example above transform.py and requirements.txt. You can place these in the same folder.
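As a minimal sketch (assuming a transformation block that operates on individual files, which is invoked with the --in-file and --out-directory arguments described in the dataset transformation tutorial later in this document), transform.py could look like:

import argparse, os, shutil

# parse the arguments passed in by the transformation job
parser = argparse.ArgumentParser(description='Example transformation block')
parser.add_argument('--in-file', type=str, required=True)
parser.add_argument('--out-directory', type=str, required=True)
args, unknown = parser.parse_known_args()

# make sure the output directory exists
if not os.path.exists(args.out_directory):
    os.makedirs(args.out_directory)

# do something useful here; as a placeholder we just copy the input file over
shutil.copy(args.in_file, args.out_directory)
print('Written output to', args.out_directory, flush=True)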

Excluding files

When pushing a new block all files in your folder are archived and sent to Edge Impulse, where the container is built. You can exclude files by creating a file called .ei-ignore in the root folder of your block. You can either set absolute paths here, or use wildcards to exclude many files. For example:

a-large-folder/*
some-path-to-a-text-file.txt

Clearing configuration

To clear the configuration, run:

$ edge-impulse-blocks --clean

This resets the CLI configuration and will prompt you to log in again.

API Key

You can use an API key to authenticate with:

$ edge-impulse-blocks --api-key ei_...

Note that this resets the CLI configuration and automatically configures your organization.

Other options

  • --dev - lists development servers, use in conjunction with --clean.

Building deployment blocks

One of the most powerful features in Edge Impulse is the built-in deployment targets (under Deployment in the Studio), which let you create ready-to-go binaries for development boards, or custom libraries for a wide variety of targets that incorporate your trained impulse. You can also create custom deployment blocks for your organization. This lets developers quickly iterate on products without getting your embedded engineers involved, lets your customers build personalized firmware using their own data, or lets you create custom libraries.

In this tutorial you'll learn how to use custom deployment blocks to create a new deployment target, and how to make this target available in the Studio for all users in the organization.

Only available for enterprise customers

Organizational features are only available for enterprise customers. View our pricing for more information.

Prerequisites

You'll need:

  • The Edge Impulse CLI.

    • If you receive any warnings, that's fine. Run edge-impulse-blocks afterwards to verify that the CLI was installed correctly.

Deployment blocks use Docker containers, a virtualization technique which lets developers package up an application with all dependencies in a single package. If you want to test your blocks locally (this is not a requirement), you'll also need:

  • Docker desktop installed on your machine.

Then, create a new folder on your computer named custom-deploy-block.

1. Getting basic deployment info

When a user deploys with a custom deployment block two things happen:

  1. A package is created that contains information about the deployment (like the sensors used, frequency of the data, etc.), any trained neural network in .tflite and SavedModel formats, the Edge Impulse SDK, and all DSP and ML blocks as C++ code.

  2. This package is then consumed by the custom deployment block, which can incorporate it with a base firmware, or repackage it into a new library.

To obtain this package go to your project's Dashboard, look for Administrative zone, enable Custom deploys, and click Save.

Enabling custom deploys in your project

If you now go to the Deployment page, a new option appears under 'Create library':

Downloading the base package for a custom deployment block

Once you click Build you'll receive a ZIP file containing six items:

  • deployment-metadata.json - this contains all information about the deployment, like the names of all classes, the frequency of the data, full impulse configuration, and quantization parameters. A specification can be found here: Deployment metadata spec.

  • trained.tflite - if you have a neural network in the project, this contains the neural network in .tflite format. This network is already fully quantized if you choose the int8 optimization, otherwise this is the float32 model.

  • trained.savedmodel.zip - if you have a neural network in the project this contains the full TensorFlow SavedModel. Note that we might update the TensorFlow version used to train these networks at any time, so rely on the compiled model or the TFLite file where possible.

  • edge-impulse-sdk - a copy of the latest Inferencing SDK.

  • model-parameters - impulse and block configuration in C++ format. Can be used by the SDK to quickly run your impulse.

  • tflite-model - neural network as source code in a way that can be used by the SDK to quickly run your impulse.

Unzip the file and store its contents under custom-deploy-block/input.

2. Building a new binary

With the basic information in place we can create a new deployment block. Here we'll build a standalone application that runs our impulse on Linux, very useful when running your impulse on a gateway or desktop computer. First, open a command prompt or terminal window, navigate to the custom-deploy-block folder (that you created under Prerequisites), and run:

$ edge-impulse-blocks init

This will prompt you to log in, and enter the details for your block.

Next, we'll add the application. The base application can be found at edgeimpulse/example-standalone-inferencing.

  1. Download the base application.

  2. Unzip under custom-deploy-block/app.

To build this application we need to combine the application with the edge-impulse-sdk, model-parameters and tflite-model folder, and invoke the (already included) Makefile.

2.1 Creating a build script

To build the application we use Docker, a virtualization technique which lets developers package up an application with all dependencies in a single package. In this container we'll place the build tools required for this application, and scripts to combine the trained impulse with the base application.

First, let's create a small build script. As a parameter you'll receive --metadata which points to the deployment information. In here you'll also get information on the input and output folders where you need to read from and write to.
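For reference, a heavily trimmed sketch of this metadata is shown below - only the folders object (which the build script relies on) is included, with illustrative paths; see the Deployment metadata spec for the full format:

{
  "folders": {
    "input": "/home/input",
    "output": "/home/output"
  }
}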

Create a new file called custom-deploy-block/build.py and add:

build.py

import argparse, json, os, shutil, zipfile, threading

# parse arguments (--metadata FILE is passed in)
parser = argparse.ArgumentParser(description='Custom deploy block demo')
parser.add_argument('--metadata', type=str)
args = parser.parse_args()

# load the metadata.json file
with open(args.metadata) as f:
    metadata = json.load(f)

# now we have two folders 'metadata.folders.input' - this is where all the SDKs etc are,
# and 'metadata.folders.output' - this is where we need to write our output
input_dir = metadata['folders']['input']
app_dir = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'app')
output_dir = metadata['folders']['output']

print('Copying files to build directory...')

is_copying = True
def print_copy_progress():
    if (is_copying):
        threading.Timer(2.0, print_copy_progress).start()
        print("Still copying...")
print_copy_progress()

# create a build directory, the input / output folders are on network storage so might be very slow
build_dir = '/tmp/build'
if os.path.exists(build_dir):
    shutil.rmtree(build_dir)
os.makedirs(build_dir)

# copy in the data from both 'input' and 'app' folders
os.system('cp -r ' + input_dir + '/* ' + build_dir)
os.system('cp -r ' + app_dir + '/* ' + build_dir)

is_copying = False

print('Copying files to build directory OK')
print('')

print('Compiling application...')

is_compiling = True
def print_compile_progress():
    if (is_compiling):
        threading.Timer(2.0, print_compile_progress).start()
        print("Still compiling...")
print_compile_progress()

# then invoke Make
os.chdir(build_dir)
os.system('make -f Makefile.tflite')

is_compiling = False

print('Compiling application OK')

# ZIP the build folder up, and copy to output dir
if not os.path.exists(output_dir):
    os.makedirs(output_dir)
shutil.make_archive(os.path.join(output_dir, 'deploy'), 'zip', os.path.join(build_dir, 'build'))

Next, we need to create a Dockerfile, which contains all dependencies for the build. These include GNU Make, a compiler, and both the build script and the base application.

Create a new file called custom-deploy-block/Dockerfile and add:

Dockerfile

FROM ubuntu:18.04

WORKDIR /ei

# Install base dependencies
RUN apt update && apt install -y build-essential software-properties-common wget

# Install LLVM 9
RUN wget https://apt.llvm.org/llvm.sh && chmod +x llvm.sh && ./llvm.sh 9
RUN rm /usr/bin/gcc && rm /usr/bin/g++ && ln -s $(which clang-9) /usr/bin/gcc && ln -s $(which clang++-9) /usr/bin/g++

# Install Python 3.7
RUN apt install -y python3.7

# Copy the base application in
COPY app ./app

# Copy any scripts in that we have
COPY *.py ./

# This is the script our application should run (-u to disable buffering)
ENTRYPOINT [ "python3", "-u", "build.py" ]

2.2 Testing the build script with Docker

To test the build script we first build the container, then invoke it with the files from the input directory. Open a command prompt or terminal, navigate to the custom-deploy-block folder and:

  1. Build the container:

$ docker build -t cdb-demo .
  2. Invoke the build script - this mounts the current directory in the container under /home, and then passes the downloaded metadata file to the container:

$ docker run --rm -it -v $PWD:/home cdb-demo --metadata /home/input/deployment-metadata.json
  3. Voila. You now have an output folder which contains a ZIP file. Unzip output/deploy.zip and now you have a standalone application which runs your impulse. If you run Linux you can invoke this application directly (grab some data from 'Live classification' for the features, see Running your impulse locally):

$ ./output/edge-impulse-standalone "RAW FEATURES HERE"

Or if you run Windows or macOS, you can use Docker to run this application:

$ docker run --rm -v $PWD/output:/home ubuntu:18.04 /home/edge-impulse-standalone "RAW FEATURES HERE"

3. Uploading the deployment block to Edge Impulse

With the deployment block ready you can make it available in Edge Impulse. Open a command prompt or terminal window, navigate to the folder you created earlier, and run:

$ edge-impulse-blocks push

This packages up your folder, sends it to Edge Impulse where it'll be built, and finally adds it to your organization. The deployment block is now available in Edge Impulse under Deployment blocks. You can go here to set the logo, update the description, and set extra command line parameters.

Managing the deployment block in Edge Impulse

Privileged mode

Deployment blocks do not have access to the internet by default. If you need this, or if you need to pull additional information from the project (e.g. access to DSP blocks) you can set the 'privileged' flag on a deployment block. This will enable outside internet access, and will pass in the project.apiKey parameter in the metadata (if a development API key is set) that you can use to authenticate with the Edge Impulse API.
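As a rough sketch (assuming a Python block that uses the requests package; the route shown is for illustration only, see the Edge Impulse API documentation for the exact endpoints), you could read the key from the metadata and authenticate like this:

import argparse, json, requests

# read the metadata file passed in via --metadata (same pattern as build.py above)
parser = argparse.ArgumentParser(description='Privileged deployment block sketch')
parser.add_argument('--metadata', type=str)
args = parser.parse_args()

with open(args.metadata) as f:
    metadata = json.load(f)

# project.apiKey is only present when the block runs in privileged mode
# and a development API key is set on the project
api_key = metadata['project']['apiKey']

# authenticate against the Edge Impulse API with the key
res = requests.get('https://studio.edgeimpulse.com/v1/api/projects',
                   headers={'x-api-key': api_key})
print(res.status_code)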

4. Using the deployment block

The deployment block is automatically available for all organizational projects. Go to the Deployment page on a project, and you'll find a new section 'Custom targets'. Select your new deployment target and click Build.

Your new deployment target is now automatically available for all organizational projects

And now you'll have a freshly built binary from your own deployment block!

Freshly minted deployment block

5. Conclusion

Custom deployment blocks are a powerful tool for your organization. They let you build binaries for unreleased products, let you package up impulses as custom libraries, or can let your customers deploy to private targets (if you add an external collaborator to a project they'll have access to the blocks as well). Because deployment blocks are integrated with your project and hosted by Edge Impulse, everyone, from FAE to R&D developer, can now iterate on on-device models without getting your embedded engineers involved.

You can also use custom deployment blocks with the other organizational features, and can use this to set up powerful pipelines automating data ingestion from your cloud services, transforming raw data into ML-suitable data, training new impulses and then deploying back to your device - either through the UI, or via the API. If you're interested in deployment blocks or any of the other enterprise features, let us know!

🚀

TI CC1352P Launchpad

The Texas Instruments CC1352P Launchpad is a development board equipped with the multiprotocol wireless CC1352P microcontroller. The Launchpad, when paired with the BOOSTXL-SENSORS and CC3200AUDBOOST booster packs, is fully supported by Edge Impulse, and is able to sample accelerometer & microphone data, build models, and deploy directly to the device without any programming required. The CC1352P Launchpad, BOOSTXL-SENSORS, and CC3200AUDBOOST boards are available for purchase directly from Texas Instruments.

If you don't have either booster pack or are using different sensing hardware, you can use the Data forwarder to capture data from any other sensor type, and then follow the Running your impulse locally tutorial to run your impulse. Or, you can clone and modify the open source firmware-ti-launchxl project on GitHub.

The Edge Impulse firmware for this development board is open source and hosted on GitHub: edgeimpulse/firmware-ti-launchxl.

Installing dependencies

To set this device up in Edge Impulse, you will need to install the following software:

  1. Edge Impulse CLI.

  2. Texas Instruments UniFlash

    • Install the desktop version for your operating system here

    • Add the installation directory to your PATH

    • See Troubleshooting for more details

  3. On Linux:

    • GNU Screen: install for example via sudo apt install screen.

Problems installing the Edge Impulse CLI?

See the Installation and troubleshooting guide.

Connecting to Edge Impulse

With all the software in place it's time to connect the development board to Edge Impulse.

1. Configure your hardware

To interface the Launchpad with sensor hardware, you will need to either connect the BOOSTXL-SENSORS to collect accelerometer data, or the CC3200AUDBOOST to collect audio data. Follow the guides below based on what data you want to collect.

Before you start

The Launchpad jumper connections should be in their original configuration out of the box. If you have already modified the jumper connections, see the Launchpad's User Guide for the original configuration.

Accelerometer Hardware Configuration Guide

Connecting the BOOSTXL-SENSORS board to the Launchpad is simple. Just orient the sensor board such that the 3V3 and GND markings on the booster pack line up with the Launchpad, and then attach the booster pack to the top header pins of the Launchpad, as shown below:

Audio Hardware Configuration Guide

Extra Hardware Required

You will need five extra 0.1" jumper wires to connect the CC3200AUDBOOST to the Launchpad, as described in the Texas Instruments documentation.

The CC3200AUDBOOST board requires modifications to interface properly with the CC1352P series of Launchpads. The full documentation regarding these modifications is available from Texas Instruments in their Quick Start Guide, and a summary of the steps to configure the board is shown below.

  1. Disconnect conflicting pins on the Launchpad.

Pins 26-30 on header J3 conflict with the CC3200AUDBOOST pins and need to be disconnected. To do this easily, TI recommends bending the pins down as shown below.

Launchpad modifications are compatible across booster packs

None of the Edge Impulse supported booster packs use pins 26-30 on header J3. If you have modified your Launchpad to interface with the audio booster pack, you can leave these pins disconnected when connecting other boards.

  2. Connect jumper wires to required pins

The pin connections shown below are required by TI to interface between the two boards. Connect the pins by using jumper wires and following the diagram. For more information see the CC3200AUDBOOST User Guide and Quick Start Guide

With the pins connected, your board should appear as shown below.

  3. Align the P1 pin on the booster pack with the 3V3 pin on the Launchpad, and connect the two boards together.

Using Audio and Accelerometer Hardware Simultaneously

In most cases it is possible to connect the sensor and audio booster packs at the same time, allowing you to quickly switch between accelerometer and audio data collection. The primary constraint is that the BOOSTXL-SENSORS board must not have the TMP007 temperature sensor soldered on, as this conflicts with the audio interface when both booster packs are connected.

  1. Ensure that the TMP007 temperature sensor is not present on the sensor booster pack. The board should have an unpopulated footprint for U5 as shown below:

  2. Perform all modifications to the Launchpad and audio booster pack described in the Audio Hardware Configuration Guide

  3. Connect the BOOSTXL-SENSORS booster pack directly to the Launchpad.

  4. Connect the audio booster pack on top of the sensors booster pack. The final board should appear as shown below:

2. Connect the development board to your computer

Use a micro-USB cable to connect the development board to your computer.

3. Update the firmware

The development board does not come with the right firmware yet. To update the firmware:

  1. Download the latest Edge Impulse firmware, and unzip the file.

  2. Open the flash script for your operating system (flash_windows.bat, flash_mac.command or flash_linux.sh) to flash the firmware.

  3. Wait until flashing is complete, and press the RESET button once to launch the new firmware.

Problems flashing firmware onto the Launchpad?

See the Troubleshooting section for more information.

4. Setting keys

From a command prompt or terminal, run:

edge-impulse-daemon

This will start a wizard which will ask you to log in, and choose an Edge Impulse project. If you want to switch projects run the command with --clean.

Which device do you want to connect to?

The Launchpad enumerates two serial ports. The first is the Application/User UART, which the edge-impulse firmware communicates through. The other is an Auxiliary Data Port, which is unused.

When running the edge-impulse-daemon you will be prompted on which serial port to connect to. On Mac & Linux, this will appear as:

? Which device do you want to connect to? (Use arrow keys) 
❯ /dev/tty.usbmodemL42003QP1 (Texas Instruments) 
 /dev/tty.usbmodemL42003QP4 (Texas Instruments)

Generally, select the lower numbered serial port. This usually corresponds with the Application/User UART. On Windows, the serial port may also be verified in the Device Manager.

If the selected serial port fails to connect, test the other port before checking the Troubleshooting section for other common issues.

Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI. See this blog post for more information.

5. Verifying that the device is connected

That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.

Device connected to Edge Impulse

Next steps: building a machine learning model

With everything set up you can now build and run your first machine learning model with these tutorials:

  • Building a continuous motion recognition system.

  • Recognize sounds from audio

Looking to connect different sensors? The Data forwarder lets you easily send data from any sensor into Edge Impulse, and you can run your impulse locally with custom firmware or sensor data.

Troubleshooting

Failed to flash

If the UniFlash CLI is not added to your PATH, the flash scripts will fail. To fix this, add the installation directory of UniFlash (for example /Applications/ti/uniflash_6.4.0 on macOS) to your PATH on:

  • Windows

  • macOS

  • Linux
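For example, on macOS or Linux you could add something like the following to your shell profile (the path is illustrative; adjust it to match your UniFlash version and install location):

$ export PATH=$PATH:/Applications/ti/uniflash_6.4.0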

If during flashing you encounter further issues, ensure:

  • The device is properly connected and/or the cable is not damaged.

  • You have the proper permissions to access the USB device and run scripts. On macOS you can manually approve blocked scripts via System Preferences->Security Settings->Unlock Icon

  • On Linux, you may want to try copying tools/71-ti-permissions.rules to /etc/udev/rules.d/, then re-attach the USB cable and try again.

Alternatively, the gcc/build/edge-impulse-standalone.out binary file may be flashed to the Launchpad using the UniFlash GUI or web-app. See the Texas Instruments Quick Start Guide for more info.

Building custom processing blocks

Extracting meaningful features from your data is crucial to building small and reliable machine learning models, and in Edge Impulse this is done through processing blocks. We ship a number of processing blocks for common sensor data (such as vibration and audio), but they might not be suitable for all applications. Perhaps you have a very specific sensor, want to apply custom filters, or are implementing the latest research in digital signal processing. In this tutorial you'll learn how to support these use cases by adding custom processing blocks to the studio.

Prerequisites

Make sure you followed the Continuous motion recognition tutorial, and have a trained impulse.

Development flow

This tutorial shows you the development flow of building custom processing blocks, and requires you to run the processing block on your own machine or server. Enterprise customers can share processing blocks within their organization, and run these on our infrastructure. See Hosting custom DSP blocks for more details.

1. Building your first custom processing block

Processing blocks take data and configuration parameters in, and return features and visualizations like graphs or images. To communicate with custom processing blocks, Edge Impulse studio will make HTTP calls to the block, and then use the response in the UI, while generating features, and when training a machine learning model. Thus, to load a custom processing block we'll need to run a small server that responds to these HTTP calls. You can write this in any language, but we have created an example in Python. To load this example, open a terminal and run:

This creates a copy of the example project locally. Then, you can run the example either through Docker or locally via:

Docker

Locally

Then go to http://localhost:4446 and you should be shown some information about the block.

Exposing the processing block to the world

As this block is running locally the studio cannot reach the block. To resolve this we can use ngrok, which can make a local port accessible from a public URL. After you've finished development you can move the processing block to a server with a publicly accessible address (or run it on our infrastructure through your enterprise account). To set up a tunnel:

  1. Sign up for ngrok.

  2. Install the ngrok binary for your platform.

  3. Get a URL to access the processing block from the outside world via:

This yields a public URL for your block under Forwarding. Note down the address that includes https://.

Adding the custom block to Edge Impulse

Now that the custom processing block was created, and you've made it accessible to the outside world, you can add this block to Edge Impulse. In a project, go to Create Impulse, click Add a processing block, choose Add custom block (in the bottom left corner of the modal), and paste in the public URL of the block:

After you click Add block, the block will show up like any other processing block.

Add a learning block, then click Save impulse to store the impulse.

2. Adding configuration options

Processing blocks have configuration options which are rendered on the block parameter page. These could be filter configurations, scaling options, or control which visualizations are loaded. These options are defined in the parameters.json file. Let's add an option to smooth raw data. Open example-custom-processing-block-python/parameters.json and add a new section under parameters:

Then, open example-custom-processing-block-python/dsp.py and replace its contents with:

Restart the Python script, and then click Custom block in the studio (in the navigation bar). You now have a new option 'Smooth'. Every time an option changes we'll re-run the block, but as we have not written any code to respond to these changes nothing will happen.

2.1 Valid configuration types

We support a number of different types for configuration fields. These are:

  • int - renders a numeric textbox that expects integers.

  • float - renders a numeric textbox that expects floating point numbers.

  • string - renders a textbox that expects a string.

  • boolean - renders a checkbox.

  • select - renders a dropdown box. This also requires the parameter valid which should be an array of valid values. E.g. this renders a dropdown box with options 'low', 'high' and 'none':

3. Implementing smoothing and drawing graphs

To show the user what is happening we can also draw visuals in the processing block. Right now we support graphs (linear and logarithmic) and arbitrary images. By showing a graph of the smoothed sample we can quickly identify what effect the smooth option has on the raw signal. Open dsp.py and replace the content with the following script. It contains a very basic smoothing algorithm and draws a graph:

Restart the script, and click the Smooth toggle to observe the difference. Congratulations! You have just created your first custom processing block.

3.1 Adding features to labels

If you extract a set of features from the signal that you return, like the mean, you can also label these features. These labels will be used in the feature explorer. To do so, add a labels array that contains strings that map back to the features you return (labels and features should have the same length).
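For example (a minimal sketch, reusing the generate_features signature from the example above), a block that returns three summary features could label them like this:

import numpy as np

def generate_features(draw_graphs, raw_data, axes, sampling_freq, scale_axes, smooth):
    # return three summary features, plus one label per feature
    features = [ float(np.mean(raw_data)), float(np.min(raw_data)), float(np.max(raw_data)) ]
    return {
        'features': features,
        'graphs': [],
        # shown in the feature explorer; must have the same length as 'features'
        'labels': [ 'Mean', 'Minimum', 'Maximum' ]
    }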

4. Other types of graphs

In the previous step we drew a linear graph, but you can also draw logarithmic graphs or even full images. This is done through the type parameter:

4.1 Logarithmic graphs

This draws a graph with a logarithmic scale:

4.2 Images

To show an image you should return the base64 encoded image and its MIME type. Here's how you draw a small PNG image:

4.3 Dimensionality reduction

If you output high-dimensional data (like a spectrogram or an image) you can enable dimensionality reduction for the feature explorer. This will run UMAP over the data to compress the features into three dimensions. To do so, set:

On the info object in parameters.json.

4.4 Full documentation

For all options that you can return in a graph, see the return types of the Run DSP call in the API documentation.

5. Running on device

Your custom block behaves exactly the same as any of the built-in blocks. You can process all your data, train neural networks or anomaly blocks, and validate that your model works. However, we cannot automatically generate optimized native code for the block, like we do for built-in processing blocks, but we try to help you write this code as much as possible. When you export your project to a C++ library we generate struct's for all the configuration options, and you only need to implement the extract_custom_block_features function (you can change this name through the cppType parameter in parameters.json).

An example of this function for the spectral analysis block is listed in the inferencing SDK.

6. Other resources

Blog post: Utilize Custom Processing Blocks in Your Image ML Pipelines

7. Conclusion

With good feature extraction you can make your machine learning models smaller and more reliable, which are both very important when you want to deploy your model on embedded devices. With custom processing blocks you can now develop new feature extraction pipelines straight from Edge Impulse, whether you're following the latest research, implementing proprietary algorithms, or just exploring data.

For inspiration we have published all our own blocks here: edgeimpulse/processing-blocks. If you've made an interesting block that you think is valuable for the community, please let us know on the forums or by opening a pull request. We'd be happy to help write efficient native code for the block, and then publish it as a standard block!

$ git clone https://github.com/edgeimpulse/example-custom-processing-block-python
$ docker build -t custom-blocks-demo .
$ docker run -p 4446:4446 -it --rm custom-blocks-demo
$ pip3 install -r requirements.txt
$ python3 dsp-server.py
$ ngrok http 4446
# or
$ ./ngrok http 4446
Session Status                online
Account                       Edge Impulse (Plan: Free)
Version                       2.3.35
Region                        United States (us)
Web Interface                 http://127.0.0.1:4040
Forwarding                    http://4d48dca5.ngrok.io -> http://localhost:4446
Forwarding                    https://4d48dca5.ngrok.io -> http://localhost:4446
        {
            "group": "Filter",
            "items": [
                {
                    "name": "Smooth",
                    "value": false,
                    "type": "boolean",
                    "help": "Whether to smooth the data",
                    "param": "smooth"
                }
            ]
        }
import numpy as np

def generate_features(draw_graphs, raw_data, axes, sampling_freq, scale_axes, smooth):
    return { 'features': raw_data * scale_axes, 'graphs': [] }
{
    "name": "Type",
    "value": "low",
    "help": "Type of filter to apply to the raw data",
    "type": "select",
    "valid": [ "low", "high", "none" ],
    "param": "filter-type"
}
import numpy as np

def smoothing(y, box_pts):
    box = np.ones(box_pts) / box_pts
    y_smooth = np.convolve(y, box, mode='same')
    return y_smooth

def generate_features(draw_graphs, raw_data, axes, sampling_freq, scale_axes, smooth):
    # features is a 1D array, reshape so we have a matrix with one raw per axis
    raw_data = raw_data.reshape(int(len(raw_data) / len(axes)), len(axes))

    features = []
    smoothed_graph = {}

    # split out the data from all axes
    for ax in range(0, len(axes)):
        X = []
        for ix in range(0, raw_data.shape[0]):
            X.append(raw_data[ix][ax])

        # X now contains only the current axis
        fx = np.array(X)

        # first scale the values
        fx = fx * scale_axes

        # if smoothing is enabled, do that
        if (smooth):
            fx = smoothing(fx, 5)

        # we save bandwidth by only drawing graphs when needed
        if (draw_graphs):
            smoothed_graph[axes[ax]] = list(fx)

        # we need to return a 1D array again, so flatten here again
        for f in fx:
            features.append(f)

    # draw the graph with time in the window on the Y axis, and the values on the X axes
    # note that the 'suggestedYMin/suggestedYMax' names are incorrect, they describe
    # the min/max of the X axis
    graphs = []
    if (draw_graphs):
        graphs.append({
            'name': 'Smoothed',
            'X': smoothed_graph,
            'y': np.linspace(0.0, raw_data.shape[0] * (1 / sampling_freq) * 1000, raw_data.shape[0] + 1).tolist(),
            'suggestedYMin': -20,
            'suggestedYMax': 20
        })

    return { 'features': features, 'graphs': graphs }
    graphs.append({
        'name': 'Logarithmic example',
        'X': {
            'Axis title': [ pow(10, i) for i in range(10) ]
        },
        'y': np.linspace(0, 10, 10).tolist(),
        'suggestedXMin': 0,
        'suggestedXMax': 10,
        'suggestedYMin': 0,
        'suggestedYMax': 1e+10,
        'type': 'logarithmic'
    })
    from PIL import Image, ImageDraw, ImageFont, ImageFilter

    # create a new image, and draw some text on it
    im = Image.new ('RGB', (438, 146), (248, 86, 44))
    draw = ImageDraw.Draw(im)
    draw.text((10, 10), 'Hello world!', fill=(255, 255, 255))

    # save the image to a buffer, and base64 encode the buffer
    with io.BytesIO() as buf:
        im.save(buf, format='png', bbox_inches='tight', pad_inches=0)
        buf.seek(0)
        image = (base64.b64encode(buf.getvalue()).decode('ascii'))

        # append as a new graph
        graphs.append({
            'name': 'Image from custom block',
            'image': image,
            'imageMimeType': 'image/png',
            'type': 'image'
        })
"visualization": "dimensionalityReduction"
Running your first custom block locally
Adding a custom processing block from an ngrok URL
An impulse with a custom processing block and a neural network.
Custom processing block with a "smooth" option that shows a graph of the processed features.
Launchpad connected with sensor booster pack
Disconnected pins on the CC1352P Launchpad
Jumper connections for CC3200AUDBOOST
Properly configured CC3200AUDBOOST
TI Launchpad connected with audio booster pack
Unpopulated TMP007 footprint on sensor booster pack
Launchpad connected with sensor booster pack
Launchpad connected with sensor and audio booster packs

Dataset transformation block

Transformation blocks take raw data from your organizational datasets (see Building your first dataset) and convert the data into files that can be loaded in an Edge Impulse project. You can use transformation blocks to only include certain parts of individual data files, calculate long-running features like a running mean or derivatives, or efficiently generate features with different window lengths. Transformation blocks can be written in any language, and run on the Edge Impulse infrastructure.

In this tutorial we build a Python-based transformation block that loads Parquet files, calculates features from the Parquet file, and then writes a new file back to your dataset. If you haven't done so, go through Building your first dataset first.

From dataset to project

You can also transform data in your organizational dataset into an Edge Impulse project. See Creating a transformation block (project).

1. Prerequisites

You'll need:

  • The Edge Impulse CLI.

    • If you receive any warnings, that's fine. Run edge-impulse-blocks afterwards to verify that the CLI was installed correctly.

  • The gestures.parquet file, which you can use to test the transformation block. This contains some data from the Continuous gestures dataset in Parquet format.

Transformation blocks use Docker containers, a virtualization technique which lets developers package up an application with all dependencies in a single package. If you want to test your blocks locally (this is not a requirement), you'll also need:

  • Docker desktop installed on your machine.

1.1 - Parquet schema

This is the Parquet schema for the gestures.parquet file which we'll transform:

2. Building your first transformation block

To build a transformation block open a command prompt or terminal window, create a new folder, and run:

This will prompt you to log in, and enter the details for your block. E.g.:

Then, create the following files in this directory:

2.1 - Dockerfile

We're building a Python based transformation block. The Dockerfile describes our base image (Python 3.7.5), our dependencies (in requirements.txt) and which script to run (transform.py).

Note: Do not use a WORKDIR under /home! The /home path will be mounted in by Edge Impulse, making your files inaccessible.

ENTRYPOINT vs RUN / CMD

If you use a different programming language, make sure to use ENTRYPOINT to specify the application to execute, rather than RUN or CMD.

2.2 - requirements.txt

This file describes the dependencies for the block. We'll be using pandas and pyarrow to parse the Parquet file, and numpy to do some calculations.

2.3 - transform.py

This file includes the actual application. Transformation blocks are invoked with the following parameters (as command line arguments):

  • --in-file - A file from the organizational dataset. In this case the gestures.parquet file.

  • --out-directory - Directory to write files to.

  • --hmac-key - You can use this HMAC key to sign the output files. This is not used in this tutorial (see the short sketch after this list for what signing could look like).

  • --metadata - Additional key/value pairs defined for the incoming item(s). This is not used in this tutorial.
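Purely as an illustration of what signing could look like (this is not part of the tutorial, and it assumes you also add an --hmac-key argument to the argument parser):

import hmac, hashlib

def sign_file(path, hmac_key):
    # compute an HMAC-SHA256 signature over the file contents
    with open(path, 'rb') as f:
        return hmac.new(hmac_key.encode('utf-8'), f.read(), hashlib.sha256).hexdigest()

# e.g. signature = sign_file(out_file, args.hmac_key)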

Add the following content. This takes in the Parquet file, groups data by their label, and then calculates the RMS over the X, Y and Z axes of the accelerometer.

2.4 - Building and testing the container

On your local machine

To test the transformation block locally, if you have Python and all dependencies installed, just run:

Docker

You can also build the container locally via Docker, and test the block. The added benefit is that you don't need any dependencies installed on your local computer, and can thus test that you've included everything that's needed for the block. This requires Docker desktop to be installed.

To build the container and test the block, open a command prompt or terminal window and navigate to the source directory. First, build the container:

Then, run the container (make sure gestures.parquet is in the same directory):

Seeing the output

This process has generated a new Parquet file in the out/ directory containing the RMS of the X, Y and Z axes. If you inspect the content of the file (e.g. using parquet-tools) you'll see the output:

Success!

3. Pushing the transformation block to Edge Impulse

With the block ready we can push it to your organization. Open a command prompt or terminal window, navigate to the folder you created earlier, and run:

This packages up your folder, sends it to Edge Impulse where it'll be built, and finally is added to your organization.

The transformation block is now available in Edge Impulse under Data transformation > Transformation blocks.

If you make any changes to the block, just re-run edge-impulse-blocks push and the block will be updated.

4. Uploading gestures.parquet to Edge Impulse

Next, upload the gestures.parquet file, by going to Data > Add data... > Add data item, setting name as 'Gestures', dataset to 'Transform tutorial', and selecting the Parquet file.

This makes the gestures.parquet file available from the Data page.

5. Starting the transformation

With the Parquet file in Edge Impulse and the transformation block configured you can now create a new job. Go to Data, and select the Parquet file by setting the filter to dataset = 'Transform tutorial'.

Click the checkbox next to the data item, and select Transform selected (1 file). On the 'Create transformation job' page select 'Import data into Dataset'. Under 'output dataset', select 'Same dataset as source', and under 'Transformation block' select the new transformation block.

Click Start transformation job to start the job. This pulls the data in, starts a transformation job and finally uploads the data back to your dataset. If you have multiple files selected the transformations will also run in parallel.

You can now find the transformed file back in your dataset:

6. Next steps

Transformation blocks are a powerful feature which let you set up a data pipeline to turn raw data into actionable machine learning features. It also gives you a reproducible way of transforming many files at once, and is programmable through the Edge Impulse API so you can automatically convert new incoming data. If you're interested in transformation blocks or any of the other enterprise features, let us know!

🚀

Appendix: Advanced features

Updating metadata from a transformation block

You can update the metadata of data items directly from a transformation block by creating an ei-metadata.json file in the output directory. The metadata is then applied to the new data item automatically when the transform job finishes. The ei-metadata.json file has the following structure:

Some notes:

  • If action is set to add, the metadata keys are added to the data item. If action is set to replace, all existing metadata keys are removed.
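As a minimal sketch, a block could write this file at the end of transform.py from the tutorial above (the 'processed-by' key is just an illustrative value):

import json, os

# write ei-metadata.json next to the other output files, so the keys are applied
# to the new data item when the transform job finishes
ei_metadata = {
    'version': 1,
    'action': 'add',
    'metadata': {
        'processed-by': 'rms-feature-block'
    }
}
with open(os.path.join(args.out_directory, 'ei-metadata.json'), 'w') as f:
    json.dump(ei_metadata, f)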

Environmental variables

Transformation blocks get access to the following environmental variables, which let you authenticate with the Edge Impulse API. This way you don't have to inject these credentials into the block. The variables are:

  • EI_API_KEY - an API key with 'member' privileges for the organization.

  • EI_ORGANIZATION_ID - the organization ID that the block runs in.

  • EI_API_ENDPOINT - the API endpoint (default: https://studio.edgeimpulse.com/v1).
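As a sketch of how a block could pick these up (using the requests package; the route shown is for illustration only, see the Edge Impulse API documentation for the actual endpoints):

import os, requests

api_key = os.environ['EI_API_KEY']
organization_id = os.environ['EI_ORGANIZATION_ID']
api_endpoint = os.environ.get('EI_API_ENDPOINT', 'https://studio.edgeimpulse.com/v1')

# illustrative call: fetch information about the organization the block runs in
res = requests.get('{}/api/organizations/{}'.format(api_endpoint, organization_id),
                   headers={'x-api-key': api_key})
print(res.status_code)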

Data Sources

The data sources page does much more than just adding data from external sources. It lets you create complete automated data pipelines so you can work on your active learning strategies.

From there, you can import datasets from existing cloud storage buckets, automate and schedule the imports, and trigger actions such as exploring and labeling your new data, retraining your model, automatically building a new deployment task, and more.

Add a data source

Click on + Add new data source and select where your data lives:

Click on Next, provide credentials:

Click on Verify credentials:

Here, you have several options to automatically label your data:

  • Infer from folder name

In the example above, the structure of the folder is the following:

The labels will be picked from the folder names, and the samples will be split between your training and testing sets using an 80/20 ratio.

Note that the samples present in an unlabeled/ folder will be kept unlabeled in Edge Impulse Studio.

Alternatively, you can also organize your folder using the following structure to automatically split your dataset between training and testing sets:

  • Infer from file name:

When using this option, only the file name is taken into account. The part before the first . will be used to set the label. E.g. cars.01741.jpg will set the label to cars (see the short illustration after this list).

  • Keep the data unlabeled:

All the data samples will be unlabeled; you will need to label them manually before using them.
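As a quick illustration of the file name rule above (plain Python, just to show the split on the first dot):

filename = 'cars.01741.jpg'
label = filename.split('.')[0]   # -> 'cars'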

Finally, click on Next, post-sync actions.

From this view, you can automate several actions:

  • Recreate data explorer

    The data explorer gives you a one-look view of your dataset, letting you quickly label unknown data. If you enable this you'll also get an email with a screenshot of the data explorer whenever there's new data.

  • Retrain model

    If needed, this will retrain your model with the same impulse. If you enable this you'll also get an email with the new validation and test set accuracy.

    Note: You will need to have set up and trained your project at least once.

  • Create new version

    Store all data, configuration, intermediate results and final models.

  • Create new deployment

    Builds a new library or binary with your updated model. Requires 'Retrain model' to also be enabled.

Run the pipeline

Once your pipeline is set, you can run it directly from the UI, from external sources or by scheduling the task.

Run the pipeline from the UI

To run your pipeline from Edge Impulse studio, click on the ⋮ button and select Run pipeline now.

Run the pipeline from code

To run your pipeline from Edge Impulse studio, click on the ⋮ button and select Run pipeline from code. This will display an overlay with curl, Node.js and Python code samples.

Note that you will need to create an API key to run the pipeline from code.

Schedule your pipeline jobs

By default, your pipeline will run every day. To schedule your pipeline jobs, click on the ⋮ button and select Edit pipeline.

Note that free users can only run the pipeline every 4 hours. If you are an enterprise customer, you can run this pipeline up to every minute.

Once the pipeline has successfully finished, you will receive an email like the following:

Note that you can also define who receives the email. The users have to be part of your project. See Dashboard -> Collaboration.

Webhooks

Another useful feature is to create a webhook to call a URL when the pipeline has run. It will send a POST request containing information about the pipeline run (see the example payload at the end of this section).
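A minimal sketch of a webhook receiver (using Python's Flask purely as an illustration; the field names follow the example payload at the end of this section) could look like:

from flask import Flask, request

app = Flask(__name__)

@app.route('/ei-pipeline-webhook', methods=['POST'])
def pipeline_webhook():
    body = request.get_json()
    # log a few of the fields from the payload
    print('Pipeline', body['pipelineName'], 'finished, success =', body['success'],
          ', new items =', body['newItems'])
    return '', 200

if __name__ == '__main__':
    app.run(port=3000)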

Edit your pipeline

As of today, if you want to update your pipeline, you need to edit the configuration JSON available in ⋮ -> Run pipeline from code.

Here is an example of what you can get if all the actions have been selected:

Free projects only have access to the builtinTransformationBlock steps shown in the example.

If you are part of an organization, you can use your custom transformation jobs in the pipeline. In your organization workspace, go to "Custom blocks -> Transformation" and select "Run job" on the job you want to add.

Select "Copy as pipeline step" and paste it to the configuration json file.

message root {
  required binary sampleName (UTF8);
  required int64 timestamp (TIMESTAMP_MILLIS);
  required int64 added (TIMESTAMP_MILLIS);
  required boolean signatureValid;
  required binary device (UTF8);
  required binary label (UTF8);
  required float accX;
  required float accY;
  required float accZ;
}
$ edge-impulse-blocks init
Edge Impulse Blocks v1.9.0
? What is your user name or e-mail address (edgeimpulse.com)? [email protected]
? What is your password? [hidden]
Attaching block to organization 'Demo org Inc.'
? Choose a type of block Transformation block
? Choose an option Create a new block
? Enter the name of your block Demo dataset transformation
? Enter the description of your block Reads a Parquet file, extracts features, and writes the block back to the dataset
Creating block with config: {
  name: 'Demo dataset transformation',
  type: 'transform',
  description: 'Reads a Parquet file and splits it up in labeled data',
  organizationId: 34
}
Your new block 'Demo dataset transformation' has been created in '~/repos/tutorial-processing-block'.
When you have finished building your transformation block, run "edge-impulse-blocks push" to update the block in Edge Impulse.
FROM python:3.7.5-stretch

WORKDIR /app

# Python dependencies
COPY requirements.txt ./
RUN pip3 --no-cache-dir install -r requirements.txt

COPY . ./

ENTRYPOINT [ "python3",  "transform.py" ]
numpy==1.16.4
pandas==0.23.4
pyarrow==0.16.0
import pyarrow.parquet as pq
import numpy as np
import math, os, sys, argparse, json, hmac, hashlib, time
import pandas as pd

# these are the three arguments that we get in
parser = argparse.ArgumentParser(description='Organization transformation block')
parser.add_argument('--in-file', type=str, required=True)
parser.add_argument('--out-directory', type=str, required=True)

args, unknown = parser.parse_known_args()

# verify that the input file exists and create the output directory if needed
if not os.path.exists(args.in_file):
    print('--in-file argument', args.in_file, 'does not exist', flush=True)
    exit(1)

if not os.path.exists(args.out_directory):
    os.makedirs(args.out_directory)

# load and parse the input file
print('Loading parquet file', args.in_file, flush=True)
table = pq.read_table(args.in_file)
data = table.to_pandas()

features = []

# we group by label and then extract some metrics
for label in data.label.unique():
    data_per_label = data[data.label == label]

    # calculate the RMS per axis
    features.append({
        'label': label,
        'rmsX': np.sqrt(np.mean(data_per_label.accX**2)),
        'rmsY': np.sqrt(np.mean(data_per_label.accY**2)),
        'rmsZ': np.sqrt(np.mean(data_per_label.accZ**2))
    })

# and store as new file in the output directory
out_file = os.path.join(args.out_directory, os.path.splitext(os.path.basename(args.in_file))[0] + '_features.parquet')
pd.DataFrame(features).to_parquet(out_file)

print('Written features file', out_file, flush=True)
$ python3 transform.py --in-file gestures.parquet --out-directory out/
$ docker build -t test-org-transform-parquet-dataset .
$ docker run --rm -v $PWD:/data test-org-transform-parquet-dataset --in-file /data/gestures.parquet --out-directory /data/out
$ parquet-tools head -n5 out/gestures_features.parquet 
label = wave
rmsX = 11.424144744873047
rmsY = 4.73303747177124
rmsZ = 2.944265842437744

label = updown
rmsX = 3.899503231048584
rmsY = 3.9587674140930176
rmsZ = 10.34404468536377

label = circle
rmsX = 6.263721942901611
rmsY = 7.0987162590026855
rmsZ = 6.159618854522705

label = idle
rmsX = 3.714001178741455
rmsY = 3.4940428733825684
rmsZ = 8.6710205078125

label = snake
rmsX = 1.282995581626892
rmsY = 1.8830623626708984
rmsZ = 9.597149848937988
$ edge-impulse-blocks push
Edge Impulse Blocks v1.9.0
Archiving 'tutorial-processing-block'...
Archiving 'tutorial-processing-block' OK (2 KB) /var/folders/3r/fds0qzv914ng4t17nhh5xs5c0000gn/T/ei-transform-block-7812190951a6038c2f442ca02d428c59.tar.gz

Uploading block 'Demo dataset transformation' to organization 'Demo org Inc.'...
Uploading block 'Demo dataset transformation' to organization 'Demo org Inc.' OK

Building transformation block 'Demo dataset transformation'...
Job started
...
Building transformation block 'Demo dataset transformation' OK

Your block has been updated, go to https://studio.edgeimpulse.com/organization/34/data to run a new transformation
{
    "version": 1,
    "action": "add",
    "metadata": {
        "some-key": "some-value"
    }
}
The transformation block in Edge Impulse
Selecting the transform tutorial dataset
Configuring the transformation job
Dataset transformation running
Dataset transformation successful
.
├── cars
│   ├── cars.01741.jpg
│   ├── cars.01743.jpg
│   ├── cars.01745.jpg
│   ├── ... (400 items)
├── unknown
│   ├── unknown.test_2547.jpg
│   ├── unknown.test_2548.jpg
│   ├── unknown.test_2549.jpg
│   ├── ... (400 items)
└── unlabeled
    ├── cars.02066.jpg
    ├── cars.02067.jpg
    ├── cars.02068.jpg
    └── ... (14 items)

3 directories, 814 files
.
├── testing
│   ├── cars
│   │   ├── cars.00012.jpg
│   │   ├── cars.00031.jpg
│   │   ├── cars.00035.jpg
│   │   └── ... (~150 items)
│   └── unknown
│       ├── unknown.test_1012.jpg
│       ├── unknown.test_1026.jpg
│       ├── unknown.test_1027.jpg
│       ├── ... (~150 items)
├── training
│   ├── cars
│   │   ├── cars.00006.jpg
│   │   ├── cars.00025.jpg
│   │   ├── cars.00065.jpg
│   │   └── ... (~600 items)
│   └── unknown
│       ├── unknown.test_1002.jpg
│       ├── unknown.test_1005.jpg
│       └── unknown.test_46.jpg
│       └── ... (~600 items)
└── unlabeled
    ├── cars.02066.jpg
    ├── cars.02067.jpg
    ├── cars.02068.jpg
    └── ... (14 items)

7 directories, 1512 files
{
    "organizationId":XX,
    "pipelineId":XX,
    "pipelineName":"Import data from portal \"Data sources demo\"",
    "projectId":XXXXX,
    "success":true,
    "newItems":0,
    "newChecklistOK":0,
    "newChecklistFail":0
}
[
    {
        "name": "Fetch data from s3://data-pipeline/data-pipeline-example/infer-from-folder/",
        "builtinTransformationBlock": {
            "type": "s3-to-project",
            "endpoint": "https://s3.your-endpoint.com",
            "path": "s3://data-pipeline/data-pipeline-example/infer-from-folder/",
            "region": "fr-par",
            "accessKey": "XXXXX",
            "category": "split",
            "labelStrategy": "infer-from-folder-name",
            "secretKeyEncrypted": "xxxxxx"
        }
    },
    {
        "name": "Refresh data explorer",
        "builtinTransformationBlock": {
            "type": "project-action",
            "refreshDataExplorer": true
        }
    },
    {
        "name": "Retrain model",
        "builtinTransformationBlock": {
            "type": "project-action",
            "retrainModel": true
        }
    },
    {
        "name": "Create new version",
        "builtinTransformationBlock": {
            "type": "project-action",
            "createVersion": true
        }
    },
    {
        "name": "Create on-device deployment (C++ library)",
        "builtinTransformationBlock": {
            "type": "project-action",
            "buildBinary": "zip",
            "buildBinaryModelType": "int8"
        }
    }
]
Data sources
Add new data source
Provide your credentials
Automatically label your data
Trigger actions
Run your pipeline
Run the pipeline from code
Edit pipeline
Email example containing the full results
Data sources webhooks
Transformation blocks

FOMO: Object detection for constrained devices

Edge Impulse FOMO (Faster Objects, More Objects) is a novel machine learning algorithm that brings object detection to highly constrained devices. It lets you count objects, find the location of objects in an image, and track multiple objects in real-time using up to 30x less processing power and memory than MobileNet SSD or YOLOv5.

Tutorial

Want to see FOMO in action? Check out our Detect objects with centroids (FOMO) tutorial.

For example, FOMO lets you do 60 fps object detection on a Raspberry Pi 4:

And here's FOMO doing 30 fps object detection on an Arduino Nicla Vision (Cortex-M7 MCU), using 245K of RAM.

You can find the complete Edge Impulse project with the beers vs. cans model, including all data and configuration here: https://studio.edgeimpulse.com/public/89078/latest.

How does this 🪄 work?

So how does that work? First, a small primer. Let's say you want to detect whether you see a face in front of your sensor. You can approach this in two ways. You can train a simple binary classifier, which says either "face" or "no face", or you can train a complex object detection model which tells you "I see a face at this x, y point and of this size". Object detection is thus great when you need to know the exact location of something, or if you want to count multiple things (the simple classifier cannot do that) - but it's computationally much more intensive, and you typically need much more data for it.

Image classification vs object detection

The design goal for FOMO was to get the best of both worlds: the computational power required for simple image classification, but with the additional information on location and object count that object detection gives us.

Heat maps

The first thing to realize is that while the output of the image classifier is "face" / "no face" (and thus no locality is preserved in the outcome) the underlying neural network architecture consists of a number of convolutional layers. A way to think about these layers is that every layer creates a diffused lower-resolution image of the previous layer. E.g. if you have a 16x16 image the width/height of the layers may be:

  1. 16x16

  2. 4x4

  3. 1x1

Each 'pixel' in the second layer maps roughly to a 4x4 block of pixels in the input layer, and the interesting part is that locality is somewhat preserved. The 'pixel' in layer 2 at (0, 0) will roughly map back to the top left corner of the input image. The deeper you go in a normal image classification network, the less of this locality (or "receptive field") is preserved until you finally have just 1 outcome.

FOMO uses the same architecture, but cuts off the last layers of a standard image classification model and replaces this layer with a per-region class probability map (e.g. a 4x4 map in the example above). It then has a custom loss function which forces the network to fully preserve the locality in the final layer. This essentially gives you a heatmap of where the objects are.
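
To make this concrete, here's a minimal Keras sketch of the idea (an illustration, not Edge Impulse's exact implementation): take a MobileNetV2 trunk, cut it at an intermediate layer so locality is preserved, and attach a 1x1 convolution that outputs one class probability per cell of the heat map. The alpha, input size and cut layer below are just example values.

# Hedged sketch of the FOMO idea - not the exact Edge Impulse implementation.
import tensorflow as tf

NUM_CLASSES = 3  # e.g. "lamp", "plant" and "background"

# MobileNetV2 trunk without its classification head
base = tf.keras.applications.MobileNetV2(
    input_shape=(160, 160, 3), alpha=0.35, weights=None, include_top=False)

# cut the network at an intermediate layer so spatial locality is still preserved
trunk_output = base.get_layer('block_6_expand_relu').output  # 160x160 input -> 20x20 feature map

# per-region class probability map: one softmax per cell of the heat map
heat_map = tf.keras.layers.Conv2D(NUM_CLASSES, kernel_size=1, activation='softmax')(trunk_output)

model = tf.keras.Model(base.input, heat_map)
model.summary()  # final output shape: (None, 20, 20, NUM_CLASSES)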

From input image to heat map (cup in red, lamp in green)

The resolution of the heat map is determined by where you cut off the layers of the network. For the FOMO model trained above (on the beer bottles) we do this when the size of the heat map is 8x smaller than the input image (input image of 160x160 will yield a 20x20 heat map), but this is configurable. When you set this to 1:1 this actually gives you pixel-level segmentation and the ability to count a lot of small objects.

Here's a former iteration of the FOMO approach used to count individual bees (heat map 2x smaller than the input size).

Training on centroids

A difference between FOMO and other object detection algorithms is that it does not output bounding boxes, but it's easy to go from heat map to bounding boxes. Just draw a box around a highlighted area.

From heat map to bounding boxes

However, when working with early customers we realized that bounding boxes are merely an implementation detail of other object detection networks, and are not a typical requirement. Very often the size of objects is not important as cameras are in fixed locations (and objects thus fixed size), but rather you just want the location and the count of objects.

Thus, we now train on the centroids of objects. This makes it much easier to count objects that are close (every activation in the heat map is an object), and the convolutional nature of the neural network ensures we look around the centroid for the object anyway.

Training on the centroids of beer bottles. On top the source labels, at the bottom the inference result.

A downside of the heat map approach is that each cell acts as its own classifier. E.g. if your classes are "lamp", "plant" and "background" each cell will be either lamp, plant, or background. It's thus not possible to detect objects with overlapping centroids. You can see this in the Raspberry Pi 4 video above at 00:18 where the beer bottles are too close together. This can be solved by using a higher resolution heat map.

Flexible and very, very fast

A really cool benefit of FOMO is that it's fully convolutional. If you set an image:heat map factor of 8 you can throw in a 96x96 image (outputs 12x12 heat map), a 320x320 image (outputs 40x40 heat map), or even a 1024x1024 image (outputs 128x128 heat map). This makes FOMO incredibly flexible, and useful even if you have very large images that need to be analyzed (e.g. in fault detection where the faults might be very, very small). You can even train on smaller patches, and then scale up during inference.

Additionally FOMO is compatible with any MobileNetV2 model. Depending on where the model needs to run you can pick a model with a higher or lower alpha, and transfer learning also works (although you need to train your base models specifically with FOMO in mind). This makes it easy for end customers to use their existing models and fine-tune them with FOMO to also add locality (e.g. we have customers with large transfer learning models for wildlife detection).

Together this gives FOMO the capabilities to scale from the smallest microcontrollers all the way to full gateways or GPUs. Just some numbers:

  1. The video on the top classifies 60 times / second on a stock Raspberry Pi 4 (160x160 grayscale input, MobileNetV2 0.1 alpha). This is 20x faster than MobileNet SSD which does ~3 frames/second.

  2. The second video on the top classifies 30 times / second on an Arduino Nicla Vision board (Cortex-M7 MCU running at 480MHz) in ~240K of RAM (96x96 grayscale input, MobileNetV2 0.35 alpha).

  3. During Edge Impulse Imagine we demonstrated a FOMO model running on a Himax WE-I Plus doing 14 frames per second on a DSP (video). This model ran in under 150KB of RAM (96x96 grayscale input, MobileNetV2 0.1 alpha). [1]

  4. The smallest version of FOMO (96x96 grayscale input, MobileNetV2 0.05 alpha) runs in <100KB RAM at ~10 fps on a Cortex-M4F at 80MHz. [1]

[1] Models compiled using EON Compiler.

How to get started?

To build your first FOMO models:

  1. Create a new project in Edge Impulse.

  2. Make sure to set your labeling method to 'Bounding boxes (object detection)'.

  3. Collect and prepare your dataset as described in Object detection.

  4. Add an 'Object Detection (Images)' block to your impulse.

  5. Under Images, select 'Grayscale'.

  6. Under Object detection, select 'Choose a different model' and select one of the FOMO models.

  7. Make sure to lower the learning rate to 0.001 to start.

Selecting a FOMO model in Edge Impulse

FOMO is currently compatible with all fully-supported development boards that have a camera, and with Edge Impulse for Linux (any client). Of course, you can also export your model as a C++ library and integrate it as usual on any device or development board; the output format is compatible with normal object detection models, and our SDK runs on almost anything under the sun (see Running your impulse locally for an overview), from RTOSes to bare metal to special accelerators and GPUs.

Expert mode tips

Additional configuration for FOMO can be accessed via expert mode.

Accessing expert mode

Object weighting

FOMO is sensitive to the ratio of objects to background cells in the labeled data. By default, object output cells are weighted 100x in the loss function (object_weight=100) to balance against what is usually a large majority of background cells. This value was chosen as a sweet spot for a number of example use cases. In scenarios where the objects to detect are relatively rare this value can be increased, e.g. to 1000, to have the model focus even more on object detection (at the expense of potentially more false detections).
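
As an illustration of what such a weighting does (a sketch, not the exact loss Edge Impulse uses), a weighted per-cell cross entropy might look like this, with background assumed at class index 0:

# Hedged sketch of an object-weighted, per-cell cross entropy.
# Assumptions: one-hot labels per heat map cell, background at class index 0.
import tensorflow as tf

def weighted_cell_crossentropy(object_weight=100.0, background_index=0):
    def loss(y_true, y_pred):
        # y_true, y_pred: (batch, grid_h, grid_w, num_classes)
        xent = tf.keras.losses.categorical_crossentropy(y_true, y_pred)  # (batch, grid_h, grid_w)
        is_object = 1.0 - y_true[..., background_index]                  # 1.0 where the cell holds an object
        weights = 1.0 + (object_weight - 1.0) * is_object                # 1 for background, object_weight for objects
        return tf.reduce_mean(xent * weights)
    return loss

# model.compile(optimizer='adam', loss=weighted_cell_crossentropy(object_weight=100.0))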

MobileNet cut point

FOMO uses MobileNetV2 as a base model for its trunk and by default does a spatial reduction of 1/8th from input to output (e.g. a 96x96 input results in a 12x12 output). This is implemented by cutting MobileNet off at the intermediate layer block_6_expand_relu.

MobileNetV2 cut point

Choosing a different cut_point results in a different spatial reduction; e.g. if we cut higher at block_3_expand_relu FOMO will instead only do a spatial reduction of 1/4 (i.e. a 96x96 input results in a 24x24 output).

Note, though, that this means taking much less of the MobileNet backbone, and results in a model with only half the parameters. Switching to a higher alpha may counteract this parameter reduction. Later FOMO releases will counter this parameter reduction with a UNet-style architecture.
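
You can get a feel for the effect of the cut point by inspecting the layer shapes of a stock Keras MobileNetV2 (a quick inspection sketch; the alpha and input size are arbitrary):

# Hedged sketch: how the cut point changes the spatial reduction.
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), alpha=0.35, weights=None, include_top=False)

for cut_point in ['block_3_expand_relu', 'block_6_expand_relu']:
    print(cut_point, base.get_layer(cut_point).output.shape)
# block_3_expand_relu -> (None, 24, 24, ...)  i.e. 1/4 spatial reduction
# block_6_expand_relu -> (None, 12, 12, ...)  i.e. 1/8 spatial reduction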

FOMO classifier capacity

FOMO can be thought of logically as the first section of MobileNetV2 followed by a standard classifier where the classifier is applied in a fully convolutional fashion.

In the default configuration this FOMO classifier is equivalent to a single dense layer with 32 nodes followed by a classifier with num_classes outputs.

FOMO uses a convolutional classifier.

For a three way classifier, using the default cut point, the result is a classifier head with ~3200 parameters.

 LAYER                          SHAPE                NUMBER OF PARAMETERS
 block_6_expand_relu (ReLU)     (None, 20, 20, 96)   0                                         
 head (Conv2D)                  (None, 20, 20, 32)   3104                                            
 logits (Conv2D)                (None, 20, 20, 3)    99

We have the option of increasing the capacity of this classifier head by either 1) increasing the number of filters in the Conv2D layer, 2) adding additional layers or 3) doing both.

For example we might change the number of filters from 32 to 16, and add another convolutional layer, as follows.

Adding an additional layer to the classifier of FOMO
 LAYER                          SHAPE                NUMBER OF PARAMETERS
 block_6_expand_relu (ReLU)     (None, 20, 20, 96)   0
 head_1 (Conv2D)                (None, 20, 20, 16)   1552                                         
 head_2 (Conv2D)                (None, 20, 20, 16)   272                                          
 logits (Conv2D)                (None, 20, 20, 3)    51

For some problems an additional layer can improve performance, and in this case it actually uses fewer parameters. It may take longer to train and require more data, though. In future releases the tuning of this aspect of FOMO can be handled by the EON Tuner.
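
For reference, here's a hedged Keras sketch that reproduces the layer shapes and parameter counts from the table above (the head_1, head_2 and logits layers mirror the table; everything else is an illustration, not the exact expert-mode code):

# Hedged sketch of the larger FOMO classifier head shown in the table above.
import tensorflow as tf

num_classes = 3
base = tf.keras.applications.MobileNetV2(
    input_shape=(160, 160, 3), alpha=0.35, weights=None, include_top=False)
cut = base.get_layer('block_6_expand_relu').output                        # (None, 20, 20, 96)

x = tf.keras.layers.Conv2D(16, 1, activation='relu', name='head_1')(cut)  # 96*16 + 16 = 1552 params
x = tf.keras.layers.Conv2D(16, 1, activation='relu', name='head_2')(x)    # 16*16 + 16 = 272 params
logits = tf.keras.layers.Conv2D(num_classes, 1, activation='softmax', name='logits')(x)  # 16*3 + 3 = 51 params

model = tf.keras.Model(base.input, logits)
model.summary()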

Project transformation block

Transformation blocks take raw data from your organizational datasets and convert the data into files that can be loaded in an Edge Impulse project. You can use transformation blocks to only include certain parts of individual data files, calculate long-running features like a running mean or derivatives, or efficiently generate features with different window lengths. Transformation blocks can be written in any language, and run on the Edge Impulse infrastructure.

In this tutorial we build a Python-based transformation block that loads Parquet files, splits the data into labeled windows, and uploads the data to a new project.

Want more? We also have an end-to-end example transformation block that mixes noise into an audio dataset: edgeimpulse/example-transform-block-mix-noise.

1. Prerequisites

You'll need:

  • The Edge Impulse CLI.

    • If you receive any warnings, that's fine. Run edge-impulse-blocks afterwards to verify that the CLI was installed correctly.

  • The gestures.parquet file which you can use to test the transformation block. This contains some data from the Continuous gestures dataset in Parquet format.

Transformation blocks use Docker containers, a virtualization technique which lets developers package up an application with all dependencies in a single package. If you want to test your blocks locally (this is not a requirement) you'll also need:

  • Docker desktop installed on your machine.

1.1 - Parquet schema

This is the Parquet schema for the gestures.parquet file which we'll want to transform into data for a project:

message root {
  required binary sampleName (UTF8);
  required int64 timestamp (TIMESTAMP_MILLIS);
  required int64 added (TIMESTAMP_MILLIS);
  required boolean signatureValid;
  required binary device (UTF8);
  required binary label (UTF8);
  required float accX;
  required float accY;
  required float accZ;
}
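
If you want to peek at the file yourself before writing any transformation code, pyarrow (which we'll also use in the block below) can print the schema and the first rows:

# Quick inspection of gestures.parquet (assumes pyarrow and pandas are installed)
import pyarrow.parquet as pq

print(pq.read_schema('gestures.parquet'))
print(pq.read_table('gestures.parquet').to_pandas().head())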

2. Building your first transformation block

To build a transformation block open a command prompt or terminal window, create a new folder, and run:

$ edge-impulse-blocks init

This will prompt you to log in, and enter the details for your block. E.g.:

Edge Impulse Blocks v1.9.0
? What is your user name or e-mail address (edgeimpulse.com)? [email protected]
? What is your password? [hidden]
Attaching block to organization 'Demo org Inc.'
? Choose a type of block Transformation block
? Choose an option Create a new block
? Enter the name of your block Demo project transformation
? Enter the description of your block Reads a Parquet file and splits it up in labeled data
Creating block with config: {
  name: 'Demo project transformation',
  type: 'transform',
  description: 'Reads a Parquet file and splits it up in labeled data',
  organizationId: 34
}
Your new block 'Demo project transformation' has been created in '~/repos/tutorial-processing-block'.
When you have finished building your transformation block, run "edge-impulse-blocks push" to update the block in Edge Impulse.

Then, create the following files in this directory:

2.1 - Dockerfile

We're building a Python based transformation block. The Dockerfile describes our base image (Python 3.7.5), our dependencies (in requirements.txt) and which script to run (transform.py).

FROM python:3.7.5-stretch

WORKDIR /app

# Python dependencies
COPY requirements.txt ./
RUN pip3 --no-cache-dir install -r requirements.txt

COPY . ./

ENTRYPOINT [ "python3",  "transform.py" ]

Note: Do not use a WORKDIR under /home! The /home path will be mounted in by Edge Impulse, making your files inaccessible.

ENTRYPOINT vs RUN / CMD

If you use a different programming language, make sure to use ENTRYPOINT to specify the application to execute, rather than RUN or CMD.

2.2 - requirements.txt

This file describes the dependencies for the block. We'll be using pandas and pyarrow to parse the Parquet file, and numpy to do some calculations.

numpy==1.16.4
pandas==0.23.4
pyarrow==0.16.0

2.3 - transform.py

This file includes the actual application. Transformation blocks are invoked with three parameters (as command line arguments):

  • --in-file - A file from the organizational dataset. In this case the gestures.parquet file.

  • --out-directory - Directory to write files to.

  • --hmac-key - You can use this HMAC key to sign the output files.

Add the following content:

import pyarrow.parquet as pq
import numpy as np
import math, os, sys, argparse, json, hmac, hashlib, time
import pandas as pd

# these are the three arguments that we get in
parser = argparse.ArgumentParser(description='Organization transformation block')
parser.add_argument('--in-file', type=str, required=True)
parser.add_argument('--out-directory', type=str, required=True)
parser.add_argument('--hmac-key', type=str, required=True)

args, unknown = parser.parse_known_args()

# verify that the input file exists and create the output directory if needed
if not os.path.exists(args.in_file):
    print('--in-file argument', args.in_file, 'does not exist', flush=True)
    exit(1)

if not os.path.exists(args.out_directory):
    os.makedirs(args.out_directory)

# load and parse the input file
print('Loading parquet file', args.in_file, flush=True)
table = pq.read_table(args.in_file)
data = table.to_pandas()

# we'll split data based on the "label" Parquet column
# keep track of the last label and if it changes we write a file
window = []
last_label = data['label'][0]
interval_ms = (data['timestamp'][1] - data['timestamp'][0]).microseconds / 1000

# turn a window into a file that is in the Edge Impulse Data Acquisition format
def window_to_file(window):
    # window under 5 seconds: skip
    if (len(window) * interval_ms < 5000):
        return

    # take the label, timestamp and sensors
    label = window[0]['label']
    timestamp = int(pd.to_datetime(window[0]['timestamp']).value / 100000000)
    sensors = ['accX', 'accY', 'accZ']
    values = []
    for w in window:
        values.append([ w[x] for x in sensors ])

    # basic structure
    struct = {
        'protected': {
            'ver': 'v1',
            'alg': 'HS256',
            'iat': int(timestamp)
        },
        'signature': '0000000000000000000000000000000000000000000000000000000000000000',
        'payload': {
            'device_type': 'Importer',
            'interval_ms': interval_ms,
            'sensors': [ { 'name': x, 'units': 'm/s2' } for x in sensors ],
            'values': values
        }
    }

    # sign the structure
    encoded = json.dumps(struct)
    signature = hmac.new(bytes(args.hmac_key, 'utf-8'), msg = encoded.encode('utf-8'), digestmod = hashlib.sha256).hexdigest()
    struct['signature'] = signature

    # and write to the output directory
    file_name = os.path.join(args.out_directory, label + '.' + str(timestamp) + '.json')
    with open(file_name, 'w') as f:
        json.dump(struct, f)

# loop over all rows in the Parquet file
for index, row in data.iterrows():
    # file changes, or longer than 10 seconds? write file
    if ((last_label != row['label']) or (len(window) * interval_ms > 10000)):
        print('writing file', str(index) + '/' + str(len(data)), row['label'], flush=True)
        window_to_file(window)
        window = []

    last_label = row['label']
    window.append(row)

# write the last window
window_to_file(window)

2.4 - Testing the transformation block locally

On your local machine

To test the transformation block locally, if you have Python and all dependencies installed, just run:

$ python3 transform.py --in-file gestures.parquet --out-directory out/ --hmac-key 123

This generates a number of JSON files in the out/ directory. You can test the import into an Edge Impulse project via:

$ edge-impulse-uploader --clean out/*.json

Docker

You can also build the container locally via Docker, and test the block. The added benefit is that you don't need any dependencies installed on your local computer, and can thus test that you've included everything that's needed for the block. This requires Docker desktop to be installed.

First, build the container:

$ docker build -t test-org-transform-parquet .

Then, run the container (make sure gestures.parquet is in the same directory):

$ docker run --rm -v $PWD:/data test-org-transform-parquet --in-file /data/gestures.parquet --out-directory /data/out --hmac-key 0123

This generates a number of JSON files in the out/ directory. You can test the import into an Edge Impulse project via:

$ edge-impulse-uploader --clean out/*.json

3. Pushing the transformation block to Edge Impulse

With the block ready we can push it to your organization. Open a command prompt or terminal window, navigate to the folder you created earlier, and run:

$ edge-impulse-blocks push

This packages up your folder, sends it to Edge Impulse where it'll be built, and finally is added to your organization.

Edge Impulse Blocks v1.9.0
Archiving 'tutorial-processing-block'...
Archiving 'tutorial-processing-block' OK (2 KB) /var/folders/3r/fds0qzv914ng4t17nhh5xs5c0000gn/T/ei-transform-block-7812190951a6038c2f442ca02d428c59.tar.gz

Uploading block 'Demo project transformation' to organization 'Demo org Inc.'...
Uploading block 'Demo project transformation' to organization 'Demo org Inc.' OK

Building transformation block 'Demo project transformation'...
Job started
...
Building transformation block 'Demo project transformation' OK

Your block has been updated, go to https://studio.edgeimpulse.com/organization/34/data to run a new transformation

The transformation block is now available in Edge Impulse under Data transformation > Transformation blocks.

The transformation block in Edge Impulse

If you make any changes to the block, just re-run edge-impulse-blocks push and the block will be updated.

4. Uploading gestures.parquet to Edge Impulse

Next, upload the gestures.parquet file by going to Data > Add data... > Add data item, setting name as 'Gestures', dataset to 'Transform tutorial', and selecting the Parquet file.

This makes the gestures.parquet file available from the Data page.

5. Starting the job

With the Parquet file in Edge Impulse and the transformation block configured you can now create a new job. Go to Data, and select the Parquet file by setting the filter to dataset = 'Transform tutorial'.

Selecting the transform tutorial dataset

Click the checkbox next to the data item, and select Transform selected (1 file). On the 'Create transformation job' page, select 'Import data into Project'. Then under 'Project', select '+ Create new project' and enter a name. Under 'Transformation block' select the new transformation block.

Configuring the transformation job

Click Start transformation job to start the job. This pulls the data in, starts a transformation job and finally imports the data into the new project. If you have multiple files selected the transformations will also run in parallel.

A transformation job with a transformation block

6. Next steps

Transformation blocks are a powerful feature which let you set up a data pipeline to turn raw data into actionable machine learning features. It also gives you a reproducible way of transforming many files at once, and is programmable through the Edge Impulse API so you can automatically convert new incoming data. Want more? We also have an end-to-end example transformation block that mixes noise into an audio dataset: edgeimpulse/example-transform-block-mix-noise.

If you're interested in transformation blocks or any of the other enterprise features, let us know!

🚀

Appendix: Advanced features

Environmental variables

Transformation blocks get access to the following environmental variables, which let you authenticate with the Edge Impulse API. This way you don't have to inject these credentials into the block. The variables are:

  • EI_API_KEY - an API key with 'member' privileges for the organization.

  • EI_ORGANIZATION_ID - the organization ID that the block runs in.

  • EI_API_ENDPOINT - the API endpoint (default: https://studio.edgeimpulse.com/v1).
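
For example, here is a minimal sketch of how you might use these variables from Python inside a transformation block. The requests dependency and the exact API route are assumptions here - check the Edge Impulse API documentation for the endpoints you need, and add requests to requirements.txt if you use it.

# Hedged sketch: authenticating against the Edge Impulse API from inside a block.
import os
import requests

api_key = os.environ['EI_API_KEY']                 # 'member' privileges for the organization
organization_id = os.environ['EI_ORGANIZATION_ID']
api_endpoint = os.environ.get('EI_API_ENDPOINT', 'https://studio.edgeimpulse.com/v1')

# example call - replace the route with the API endpoint you actually need
res = requests.get(f'{api_endpoint}/api/organizations/{organization_id}',
                   headers={'x-api-key': api_key})
res.raise_for_status()
print(res.json())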

Data forwarder

The data forwarder is used to easily relay data from any device to Edge Impulse over serial. Devices write sensor values over a serial connection, and the data forwarder collects the data, signs the data and sends the data to the ingestion service. The data forwarder is useful to quickly enable data collection from a wide variety of development boards without having to port the full remote management protocol and serial protocol, but only supports collecting data at relatively low frequencies.

To use the data forwarder, load an application (examples for Arduino, Mbed OS and Zephyr below) on your development board, and run:

$ edge-impulse-data-forwarder

The data forwarder will ask you for the server you want to connect to, prompt you to log in, and then configure the device.

This is an example of the output of the forwarder:

Edge Impulse data forwarder v1.5.0
? What is your user name or e-mail address (edgeimpulse.com)? [email protected]
? What is your password? [hidden]
Endpoints:
    Websocket: wss://remote-mgmt.edgeimpulse.com
    API:       https://studio.edgeimpulse.com
    Ingestion: https://ingestion.edgeimpulse.com

[SER] Connecting to /dev/tty.usbmodem401203
[SER] Serial is connected
[WS ] Connecting to wss://remote-mgmt.edgeimpulse.com
[WS ] Connected to wss://remote-mgmt.edgeimpulse.com
? To which project do you want to add this device? accelerometer-demo-1
? 3 sensor axes detected. What do you want to call them? Separate the names with ',': accX, accY, accZ
? What name do you want to give this device? Jan's DISCO-L475VG
[WS ] Authenticated

Note: Your credentials are never stored. When you log in these are exchanged for a token. This token is used to further authenticate requests.

Clearing configuration

To clear the configuration, run:

$ edge-impulse-data-forwarder --clean

Overriding the frequency

To override the frequency, use:

$ edge-impulse-data-forwarder --frequency 100

Overriding the baud rate

To set a different baud rate, use:

$ edge-impulse-data-forwarder --baud-rate 460800

Protocol

The protocol is very simple. The device should send data on baud rate 115,200 with one line per reading, and individual sensor data should be split with either a , or a TAB. For example, this is data from a 3-axis accelerometer:

-0.12,-6.20,7.90
-0.13,-6.19,7.91
-0.14,-6.20,7.92
-0.13,-6.20,7.90
-0.14,-6.20,7.91

The data forwarder will automatically determine the sampling rate and the number of sensors based on the output. If you load a new application where the sampling frequency or the number of axes changes, the data forwarder will automatically be reconfigured.

Example (Arduino)

This is an example of a sketch that reads data from an accelerometer (tested on the Arduino Nano 33 BLE):

#include <Arduino_LSM9DS1.h>

#define CONVERT_G_TO_MS2    9.80665f
#define FREQUENCY_HZ        50
#define INTERVAL_MS         (1000 / (FREQUENCY_HZ + 1))

static unsigned long last_interval_ms = 0;

void setup() {
    Serial.begin(115200);
    Serial.println("Started");

    if (!IMU.begin()) {
        Serial.println("Failed to initialize IMU!");
        while (1);
    }
}

void loop() {
    float x, y, z;

    if (millis() > last_interval_ms + INTERVAL_MS) {
        last_interval_ms = millis();

        IMU.readAcceleration(x, y, z);

        Serial.print(x * CONVERT_G_TO_MS2);
        Serial.print('\t');
        Serial.print(y * CONVERT_G_TO_MS2);
        Serial.print('\t');
        Serial.println(z * CONVERT_G_TO_MS2);
    }
}

Example (Mbed OS)

This is an example of an Mbed OS application that reads data from an accelerometer (tested on the ST IoT Discovery Kit):

#include "mbed.h"
#include "stm32l475e_iot01_accelero.h"

static int64_t sampling_freq = 104; // in Hz.
static int64_t time_between_samples_us = (1000000 / (sampling_freq - 1));

// set baud rate of serial port to 115200
static BufferedSerial serial_port(USBTX, USBRX, 115200);
FileHandle *mbed::mbed_override_console(int fd) {
    return &serial_port;
}

int main()
{
    int16_t pDataXYZ[3] = {0};
    Timer t;
    t.start();

    BSP_ACCELERO_Init();

    while(1) {
        int64_t next_tick = t.read_us() + time_between_samples_us;

        BSP_ACCELERO_AccGetXYZ(pDataXYZ);
        printf("%d\t%d\t%d\n", pDataXYZ[0], pDataXYZ[1], pDataXYZ[2]);

        while (t.read_us() < next_tick) {
            /* busy loop */
        }
    }
}

There's also a complete example that samples data from both the accelerometer and the gyroscope here: edgeimpulse/example-dataforwarder-mbed.

Example (Zephyr)

This is an example of a Zephyr application that reads data from an accelerometer (tested on the Nordic Semiconductor nRF52840 DK with ST X-NUCLEO-IKS02A1 shield), based on the sensorhub example:

#include <zephyr.h>
#include <sys/printk.h>
#include <drivers/sensor.h>
#include <stdio.h>
#include <stdlib.h>

static int64_t sampling_freq = 104; // in Hz.
static int64_t time_between_samples_us = (1000000 / (sampling_freq - 1));

int main() {
    // output immediately without buffering
    setvbuf(stdout, NULL, _IONBF, 0);

    // get driver for the accelerometer
    const struct device *iis2dlpc = device_get_binding(DT_LABEL(DT_INST(0, st_iis2dlpc)));
    if (iis2dlpc == NULL) {
        printf("Could not get IIS2DLPC device\n");
        return 1;
    }

    struct sensor_value accel[3];

    while (1) {
        // start a timer that expires when we need to grab the next value
        struct k_timer next_val_timer;
        k_timer_init(&next_val_timer, NULL, NULL);
        k_timer_start(&next_val_timer, K_USEC(time_between_samples_us), K_NO_WAIT);

        // read data from the sensor
        if (sensor_sample_fetch(iis2dlpc) < 0) {
            printf("IIS2DLPC Sensor sample update error\n");
            return 1;
        }

        sensor_channel_get(iis2dlpc, SENSOR_CHAN_ACCEL_XYZ, accel);

        // print over stdout
        printf("%.3f\t%.3f\t%.3f\r\n",
            sensor_value_to_double(&accel[0]),
            sensor_value_to_double(&accel[1]),
            sensor_value_to_double(&accel[2]));

        // busy loop until next value should be grabbed
        while (k_timer_status_get(&next_val_timer) <= 0);
    }
}

There's also a complete example that samples data from the accelerometer here: edgeimpulse/example-dataforwarder-zephyr.

Sensor fusion

Using the Data Forwarder, you can relay data from multiple sensors. You can check Benjamin Cabe's artificial nose for a complete example using NO2, CO, C2H5OH and VOC sensors on a WIO Terminal.

You may also have sensors with different sampling frequencies, such as:

  • accelerometer: 3 axis sampled at 100Hz

  • RMS current sensor: 1 axis sampled at 5Hz

In this case, you should first upsample to the highest frequency to keep the finest granularity: upsample the RMS sensor to 100 Hz by duplicating each value 20 times (100/5). You could also interpolate to smooth the values between samples, as sketched below.
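
A minimal sketch of this upsampling step (assuming you pre-process the data in Python before uploading; the signals here are random placeholders):

# Hedged sketch: upsample a 5 Hz RMS current signal to 100 Hz so it lines up
# with a 100 Hz, 3-axis accelerometer stream.
import numpy as np

acc = np.random.randn(100, 3)   # 1 second of accelerometer data @ 100 Hz (accX, accY, accZ)
rms = np.random.rand(5)         # 1 second of RMS current data @ 5 Hz

rms_100hz = np.repeat(rms, 100 // 5)  # duplicate each value 20 times -> 100 samples

# or interpolate for a smoother signal:
# rms_100hz = np.interp(np.linspace(0, 1, 100, endpoint=False),
#                       np.linspace(0, 1, 5, endpoint=False), rms)

fused = np.column_stack([acc, rms_100hz])  # shape (100, 4): accX, accY, accZ, rms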

Classifying data

To classify data you first deploy your project by following the steps in Running your impulse locally - which contains examples for a wide variety of platforms. Then, declare a features array, fill it with sensor data, and run the classifier. Here are examples for Arduino, Mbed and Zephyr - but the same applies to any other platform.

Note: These examples collect a full frame of data, then classify this data. This might not be what you want (as classification blocks the collection thread). See Continuous audio sampling for an example on how to implement continuous classification.

Classifying data (Arduino)

// Include the Arduino library here (something like your_project_inference.h) 
// In the Arduino IDE see **File > Examples > Your project name - Edge Impulse > Static buffer** to get the exact name
#include <your_project_inference.h>
#include <Arduino_LSM9DS1.h>

#define CONVERT_G_TO_MS2    9.80665f
#define FREQUENCY_HZ        EI_CLASSIFIER_FREQUENCY
#define INTERVAL_MS         (1000 / (FREQUENCY_HZ + 1))

static unsigned long last_interval_ms = 0;
// to classify 1 frame of data you need EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE values
float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE];
// keep track of where we are in the feature array
size_t feature_ix = 0;

void setup() {
    Serial.begin(115200);
    Serial.println("Started");

    if (!IMU.begin()) {
        Serial.println("Failed to initialize IMU!");
        while (1);
    }
}

void loop() {
    float x, y, z;

    if (millis() > last_interval_ms + INTERVAL_MS) {
        last_interval_ms = millis();

        // read sensor data in exactly the same way as in the Data Forwarder example
        IMU.readAcceleration(x, y, z);

        // fill the features buffer
        features[feature_ix++] = x * CONVERT_G_TO_MS2;
        features[feature_ix++] = y * CONVERT_G_TO_MS2;
        features[feature_ix++] = z * CONVERT_G_TO_MS2;

        // features buffer full? then classify!
        if (feature_ix == EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE) {
            ei_impulse_result_t result;

            // create signal from features frame
            signal_t signal;
            numpy::signal_from_buffer(features, EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE, &signal);

            // run classifier
            EI_IMPULSE_ERROR res = run_classifier(&signal, &result, false);
            ei_printf("run_classifier returned: %d\n", res);
            if (res != 0) return;

            // print predictions
            ei_printf("Predictions (DSP: %d ms., Classification: %d ms., Anomaly: %d ms.): \n",
                result.timing.dsp, result.timing.classification, result.timing.anomaly);

            // print the predictions
            for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
                ei_printf("%s:\t%.5f\n", result.classification[ix].label, result.classification[ix].value);
            }
        #if EI_CLASSIFIER_HAS_ANOMALY == 1
            ei_printf("anomaly:\t%.3f\n", result.anomaly);
        #endif

            // reset features frame
            feature_ix = 0;
        }
    }
}

void ei_printf(const char *format, ...) {
    static char print_buf[1024] = { 0 };

    va_list args;
    va_start(args, format);
    int r = vsnprintf(print_buf, sizeof(print_buf), format, args);
    va_end(args);

    if (r > 0) {
        Serial.write(print_buf);
    }
}

Classifying data (Mbed OS)

#include "mbed.h"
#include "stm32l475e_iot01_accelero.h"
#include "ei_run_classifier.h"

static int64_t sampling_freq = EI_CLASSIFIER_FREQUENCY; // in Hz.
static int64_t time_between_samples_us = (1000000 / (sampling_freq - 1));

// to classify 1 frame of data you need EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE values
static float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE];

// set baud rate of serial port to 115200
static BufferedSerial serial_port(USBTX, USBRX, 115200);
FileHandle *mbed::mbed_override_console(int fd) {
    return &serial_port;
}

int main()
{
    int16_t pDataXYZ[3] = {0};
    Timer t;
    t.start();

    BSP_ACCELERO_Init();

    while (1) {
        // fill the features array
        for (size_t ix = 0; ix < EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE; ix += EI_CLASSIFIER_RAW_SAMPLES_PER_FRAME) {
            int64_t next_tick = t.read_us() + time_between_samples_us;
            BSP_ACCELERO_AccGetXYZ(pDataXYZ);

            // copy accelerometer data into the features array
            features[ix + 0] = (float)pDataXYZ[0];
            features[ix + 1] = (float)pDataXYZ[1];
            features[ix + 2] = (float)pDataXYZ[2];

            while (t.read_us() < next_tick) {
                /* busy loop */
            }
        }

        // frame full? then classify
        ei_impulse_result_t result = { 0 };

        // create signal from features frame
        signal_t signal;
        numpy::signal_from_buffer(features, EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE, &signal);

        // run classifier
        EI_IMPULSE_ERROR res = run_classifier(&signal, &result, false);
        ei_printf("run_classifier returned: %d\n", res);
        if (res != 0) return 1;

        // print predictions
        ei_printf("Predictions (DSP: %d ms., Classification: %d ms., Anomaly: %d ms.): \n",
            result.timing.dsp, result.timing.classification, result.timing.anomaly);

        // print the predictions
        for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
            ei_printf("%s:\t%.5f\n", result.classification[ix].label, result.classification[ix].value);
        }
    #if EI_CLASSIFIER_HAS_ANOMALY == 1
        ei_printf("anomaly:\t%.3f\n", result.anomaly);
    #endif
    }
}

Classifying (Zephyr)

Before adding the classifier in Zephyr:

  1. Copy the extracted C++ library into your Zephyr project, and add the following to your CMakeLists.txt file (where ./model is where you extracted the library).

set(CMAKE_CXX_STANDARD 11)
set(CMAKE_CXX_STANDARD_REQUIRED ON)

add_subdirectory(./model)

  2. Enable C++ and set the stack size of the main thread to at least 4K, by adding the following to prj.conf:

CONFIG_CPLUSPLUS=y
CONFIG_LIB_CPLUSPLUS=y
CONFIG_NEWLIB_LIBC=y
CONFIG_NEWLIB_LIBC_FLOAT_PRINTF=y

CONFIG_MAIN_STACK_SIZE=8192

  3. If you're on a Cortex-M target, enable hardware acceleration by adding the following defines to your CMakeLists.txt file:

add_definitions(-DEIDSP_USE_CMSIS_DSP=1
                -DEIDSP_LOAD_CMSIS_DSP_SOURCES=1
                -DEI_CLASSIFIER_TFLITE_ENABLE_CMSIS_NN=1
                -DARM_MATH_LOOPUNROLL)

Then, run the following application:

#include <zephyr.h>
#include <sys/printk.h>
#include <drivers/sensor.h>
#include <stdio.h>
#include <stdlib.h>
#include "ei_run_classifier.h"

static int64_t sampling_freq = EI_CLASSIFIER_FREQUENCY; // in Hz.
static int64_t time_between_samples_us = (1000000 / (sampling_freq - 1));

// to classify 1 frame of data you need EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE values
static float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE];

int main() {
    // output immediately without buffering
    setvbuf(stdout, NULL, _IONBF, 0);

    // get driver for the accelerometer
    const struct device *iis2dlpc = device_get_binding(DT_LABEL(DT_INST(0, st_iis2dlpc)));
    if (iis2dlpc == NULL) {
        printf("Could not get IIS2DLPC device\n");
        return 1;
    }

    struct sensor_value accel[3];

    while (1) {
        for (size_t ix = 0; ix < EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE; ix += EI_CLASSIFIER_RAW_SAMPLES_PER_FRAME) {
            // start a timer that expires when we need to grab the next value
            struct k_timer next_val_timer;
            k_timer_init(&next_val_timer, NULL, NULL);
            k_timer_start(&next_val_timer, K_USEC(time_between_samples_us), K_NO_WAIT);

            // read data from the sensor
            if (sensor_sample_fetch(iis2dlpc) < 0) {
                printf("IIS2DLPC Sensor sample update error\n");
                return 1;
            }

            sensor_channel_get(iis2dlpc, SENSOR_CHAN_ACCEL_XYZ, accel);

            // fill the features array
            features[ix + 0] = sensor_value_to_double(&accel[0]);
            features[ix + 1] = sensor_value_to_double(&accel[1]);
            features[ix + 2] = sensor_value_to_double(&accel[2]);

            // busy loop until next value should be grabbed
            while (k_timer_status_get(&next_val_timer) <= 0);
        }

        // frame full? then classify
        ei_impulse_result_t result = { 0 };

        // create signal from features frame
        signal_t signal;
        numpy::signal_from_buffer(features, EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE, &signal);

        // run classifier
        EI_IMPULSE_ERROR res = run_classifier(&signal, &result, false);
        printf("run_classifier returned: %d\n", res);
        if (res != 0) return 1;

        // print predictions
        printf("Predictions (DSP: %d ms., Classification: %d ms., Anomaly: %d ms.): \n",
            result.timing.dsp, result.timing.classification, result.timing.anomaly);

        // print the predictions
        for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
            printf("%s:\t%.5f\n", result.classification[ix].label, result.classification[ix].value);
        }
    #if EI_CLASSIFIER_HAS_ANOMALY == 1
        printf("anomaly:\t%.3f\n", result.anomaly);
    #endif
    }
}

Troubleshooting

"The execution of scripts is disabled on this system" (Windows)

If you are running the data forwarder on a Windows system, you need to update PowerShell's execution policy to allow running scripts:

Set-ExecutionPolicy unrestricted