The data forwarder is used to easily relay data from any device to Edge Impulse over serial. Devices write sensor values over a serial connection, and the data forwarder collects, signs, and sends the data to the ingestion service. The data forwarder is useful for quickly enabling data collection from a wide variety of development boards without having to port the full remote management protocol and serial protocol, but it only supports collecting data at relatively low frequencies.
To use the data forwarder, load an application (examples for Arduino, Mbed OS and Zephyr below) on your development board, and run:
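    edge-impulse-data-forwarder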
The data forwarder will ask you for the server you want to connect to, prompt you to log in, and then configure the device.
This is an example of the output of the forwarder:
Note: Your credentials are never stored. When you log in these are exchanged for a token. This token is used to further authenticate requests.
To clear the configuration, run:
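    edge-impulse-data-forwarder --clean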
To override the frequency, use:
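For example, to force a 100Hz sampling rate (the flag name assumes the current CLI; check edge-impulse-data-forwarder --help if it differs):

    edge-impulse-data-forwarder --frequency 100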
To set a different baud rate, use:
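For example (the value shown is illustrative; your device firmware must use the same rate):

    edge-impulse-data-forwarder --baud-rate 460800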
The protocol is very simple. The device should send data at a baud rate of 115,200 with one line per reading, and individual sensor values should be separated by either a comma or a TAB. For example, this is data from a 3-axis accelerometer:
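Each line contains one reading, with the three axis values separated by commas (illustrative values):

    -0.12,9.80,0.03
    -0.13,9.81,0.04
    -0.11,9.79,0.02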
The data forwarder will automatically determine the sampling rate and the number of sensors based on the output. If you load a new application where the sampling frequency or the number of axes changes, the data forwarder will automatically be reconfigured.
This is an example of a sketch that reads data from an accelerometer (tested on the Arduino Nano 33 BLE):
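A minimal sketch along these lines (assuming the Arduino_LSM9DS1 library that ships for the Nano 33 BLE; the sampling frequency is illustrative):

    #include <Arduino_LSM9DS1.h>

    // Send one accelerometer reading per line at roughly 100Hz,
    // with the three axis values separated by commas.
    #define FREQUENCY_HZ   100
    #define INTERVAL_MS    (1000 / FREQUENCY_HZ)

    static unsigned long last_interval_ms = 0;

    void setup() {
        Serial.begin(115200);
        while (!Serial);

        if (!IMU.begin()) {
            Serial.println("Failed to initialize IMU!");
            while (1);
        }
    }

    void loop() {
        float x, y, z;

        if (millis() > last_interval_ms + INTERVAL_MS) {
            last_interval_ms = millis();

            if (IMU.accelerationAvailable()) {
                IMU.readAcceleration(x, y, z);

                Serial.print(x);
                Serial.print(',');
                Serial.print(y);
                Serial.print(',');
                Serial.println(z);
            }
        }
    }

The data forwarder picks up the sampling frequency and the number of axes automatically from this output.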
This is an example of an Mbed OS application that reads data from an accelerometer (tested on the ST IoT Discovery Kit):
There's also a complete example that samples data from both the accelerometer and the gyroscope here: edgeimpulse/example-dataforwarder-mbed.
This is an example of a Zephyr application that reads data from an accelerometer (tested on the Nordic Semiconductor nRF52840 DK with ST X-NUCLEO-IKS02A1 shield), based on the sensorhub example:
There's also a complete example that samples data from the accelerometer here: edgeimpulse/example-dataforwarder-zephyr.
Using the Data Forwarder, you can relay data from multiple sensors. You can check Benjamin Cabe's artificial nose for a complete example using NO2, CO, C2H5OH and VOC sensors on a WIO Terminal.
You may also have sensors with different sampling frequencies, such as:
accelerometer: 3 axis sampled at 100Hz
RMS current sensor: 1 axis sampled at 5Hz
In this case, you should first upsample to the highest frequency to keep the finest granularity: upsample the RMS sensor to 100 Hz by duplicating each value 20 times (100 / 5). You could also interpolate between samples to smooth the values.
To classify data you first deploy your project by following the steps in Running your impulse locally - which contains examples for a wide variety of platforms. Then, declare a features array, fill it with sensor data, and run the classifier. Here are examples for Arduino, Mbed and Zephyr - but the same applies to any other platform.
Note: These examples collect a full frame of data, then classify this data. This might not be what you want (as classification blocks the collection thread). See Continuous audio sampling for an example on how to implement continuous classification.
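As a rough sketch of what this looks like with the Edge Impulse C++ SDK (function and constant names follow the standalone inferencing examples; how you fill the buffer depends on your sensor):

    #include <string.h>
    #include "edge-impulse-sdk/classifier/ei_run_classifier.h"

    // One full frame of sensor data (fill this from your sensor)
    static float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE];

    // Callback the classifier uses to read the raw features
    static int get_feature_data(size_t offset, size_t length, float *out_ptr) {
        memcpy(out_ptr, features + offset, length * sizeof(float));
        return 0;
    }

    void classify() {
        signal_t signal;
        signal.total_length = EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE;
        signal.get_data = &get_feature_data;

        ei_impulse_result_t result = { 0 };
        EI_IMPULSE_ERROR err = run_classifier(&signal, &result, false);
        if (err != EI_IMPULSE_OK) {
            return;
        }

        // Print the score for each label
        for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
            ei_printf("%s: %.5f\n", result.classification[ix].label,
                      result.classification[ix].value);
        }
    }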
Before adding the classifier in Zephyr:
Copy the extracted C++ library into your Zephyr project, and add the following to your CMakeLists.txt file (where ./model is where you extracted the library).
Enable C++ and set the stack size of the main thread to at least 4K, by adding the following to prj.conf:
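A hedged example of what this can look like (Kconfig symbol names vary between Zephyr versions; newer releases use CONFIG_CPP instead of CONFIG_CPLUSPLUS):

    CONFIG_CPLUSPLUS=y
    CONFIG_LIB_CPLUSPLUS=y
    CONFIG_MAIN_STACK_SIZE=4096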
If you're on a Cortex-M target, enable hardware acceleration by adding the following defines to your CMakeLists.txt file:
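For example (these define names come from the Edge Impulse SDK; verify them against the README shipped with your exported library):

    add_definitions(-DEI_CLASSIFIER_TFLITE_ENABLE_CMSIS_NN=1
                    -DEIDSP_USE_CMSIS_DSP=1
                    -DEIDSP_LOAD_CMSIS_DSP_SOURCES=1)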
Then, run the following application:
If you are running the data forwarder on a Windows system, you need to update PowerShell's execution policy to allow running scripts:
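For example, to allow locally signed scripts for the current user (adjust the policy to your organization's requirements):

    Set-ExecutionPolicy -Scope CurrentUser -ExecutionPolicy RemoteSigned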
The serial daemon is used to connect fully-supported devices to Edge Impulse so that data from their on-board sensors can be uploaded directly into Edge Impulse Studio. This is particularly helpful for devices without an IP connection, for which the serial daemon acts as a data upload proxy. You can also use the serial daemon to configure the upload parameters.
Recent versions of Google Chrome and Microsoft Edge can connect directly to fully-supported development boards, without the serial daemon. See this blog post for more information.
The serial daemon is part of the Edge Impulse CLI. In order to use the daemon, you first have to install the CLI.
To use the daemon, connect a fully-supported development board to your computer and run:
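    edge-impulse-daemon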
The daemon will prompt you to log in, and then configure the device. If your device does not have the right firmware yet, it will also prompt you to upgrade it.
This is an example of the output of the daemon:
Note: Your credentials are never stored. When you log in, the serial daemon exchanges your credentials for a session token, which is used to further authenticate requests.
You can use one device for many projects. To switch projects run:
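    edge-impulse-daemon --clean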
And select the new project. The device will remain listed in the old project, and if you switch back it will retain the same name and last-seen date.
Running --clean resets both the daemon configuration and the on-device configuration. If you run into issues, you can connect to the device using a serial console (with a baud rate of 115,200) and send the AT+CLEARCONFIG command to the device to remove its configuration.
Serial daemon options can be invoked as follows:
--api-key
Enables authentication using a project API key. API keys are long strings of random characters that start with ei_ and can be obtained from the project's dashboard on Edge Impulse Studio. Example:
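    edge-impulse-daemon --api-key ei_your_project_api_key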
--baud-rate
Change the rate of the communication between the device and Edge Impulse Studio. Default is 115,200 baud. Example:
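For example (the value shown is illustrative; the device firmware must support the chosen rate):

    edge-impulse-daemon --baud-rate 460800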
--clean
Clears (resets) the daemon and device configurations.
--silent
Skip all wizards (except for the login prompt). This is useful in headless environments where the session token has already been obtained, or authentication is requested via the --api-key option.
--verbose
Print additional information during execution. Useful for debugging.
--version
Prints the version of the Edge Impulse CLI (and therefore, the serial daemon) installed.
If you are using the ST B-L475E-IOT01A development board, you may experience the following error when attempting to connect to a WiFi network:
There is a known issue with the firmware for this development board's WiFi module that results in a timeout during network scanning if there are more than 20 WiFi access points detected. If you are experiencing this issue, you can work around it by attempting to reduce the number of access points within range of the device, or by skipping WiFi configuration.
The impulse runner shows the results of your impulse running on your development board. This only applies to ready-to-go binaries built from the studio.
You start the impulse via:
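    edge-impulse-run-impulse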
This will sample data from your real sensors, classify the data, then print the results. E.g.:
--debug - run the impulse in debug mode; this will print the intermediate DSP results. For image models, a live feed of the camera and inference results will also be locally hosted and available in your browser (more on this below).
--continuous - run the impulse in continuous mode (not available on all platforms).
The Linux CLI Runner has an embedded API server that allows you to interact with the model easily from any application, environment, or framework that implements an HTTP client. This feature is started with the runner using the --run-http-server option.
To start the API server, run the runner with the --run-http-server option. This will share the link to a web page where you can see the live feed of the camera and inference results. You can also pass a specific port number.
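For example, to use port 3000 (assuming the Linux runner is installed as edge-impulse-linux-runner):

    edge-impulse-linux-runner --run-http-server 3000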
This will start the API server on port 3000. If you don't have an image model, you will not see the HTTP server web page.
API Endpoints
Once the server is running, you can send HTTP requests to interact with the model. Here is a simple example using Python:
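A minimal sketch using the requests library is shown below. The endpoint path and payload shape are placeholders - check the URL and routes that the runner prints when it starts, and adapt accordingly:

    import requests

    # Placeholder endpoint: replace with the route printed by the runner on startup
    RUNNER_URL = "http://localhost:3000/api/features"

    # Raw features for one window of data (placeholder values)
    payload = {"features": [0.1, 0.2, 0.3]}

    # Send the data to the runner and print the returned classification result
    response = requests.post(RUNNER_URL, json=payload, timeout=10)
    response.raise_for_status()
    print(response.json())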
How would you use this?
Here are a few examples of how you could use this embedded API server:
Custom Applications: A custom app running on the same Linux device can interact with the model using an HTTP client, simplifying the integration process.
IoT Devices: Small IoT devices with an HTTP client in the firmware can send data to the inference server (the runner) in the local network, get results, and leverage powerful ML models without the need for local model storage and inference.
Web Applications: Web applications can interact with the model running on the Linux device using the HTTP client, enabling powerful ML models in web applications without the need for cloud services.
Mobile Applications: Mobile applications can interact with the model running on the Linux device using the HTTP client, enabling powerful ML models in mobile applications without the need for cloud services.
The impulse runner is a powerful tool that allows you to run your impulse on your development board and interact with it using an embedded API server. This feature is useful for custom applications, IoT devices, web applications, and mobile applications that need to interact with the model running on the Linux device.
For more information on the impulse runner, or to discuss how you may use it, please reach out to us on the forum.
The uploader signs local files and uploads them to the ingestion service. This is useful to upload existing data samples and entire datasets, or to migrate data between Edge Impulse instances.
The uploader currently handles these types of files:
.cbor - Files in the Edge Impulse Data Acquisition format. The uploader will not resign these files, only upload them.
.json - Files in the Edge Impulse Data Acquisition format. The uploader will not resign these files, only upload them.
.csv - Files in the Edge Impulse Comma Separated Values (CSV) format. If you have configured the "CSV wizard", the settings will be used to parse your CSV files.
.wav - Lossless audio files. It's recommended to use the same frequency for all files in your data set, as signal processing output might be dependent on the frequency.
.jpg and .png - Image files. It's recommended to use the same aspect ratio for all files in your data set.
.mp4 and .avi - Video files. From the studio you can then split a video file into individual images at a configurable number of frames per second.
The uploader currently handles these types of dataset annotation formats:
Need more?
If none of the above choices are suitable for your project, you can also have a look at Transformation blocks to parse your data samples and create a dataset supported by Edge Impulse. See Building your Transformation Blocks.
You can also upload data directly from the studio, see Studio uploader. Go to the Data acquisition page, and click the 'upload' icon. You can select files or folders, the category and the label directly from here.
You can upload files via the Edge Impulse CLI via:
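    edge-impulse-uploader path/to/a/file.wav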
You can upload multiple files in one go via:
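For example, using a shell wildcard:

    edge-impulse-uploader path/to/many/*.wav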
Or by specifying the directory:
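For example (the --directory flag is assumed here; see edge-impulse-uploader --help):

    edge-impulse-uploader --directory path/to/your/dataset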
The first time you'll be prompted for a server, and your login credentials (see Edge Impulse Daemon for more information).
Files are automatically uploaded to the training category, but you can override the category with the --category option. E.g.:
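    edge-impulse-uploader --category testing path/to/a/file.wav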
Or set the category to split to automatically split data between training and testing sets (recommended for a balanced dataset). This is based on the hash of the file, so this is a deterministic process.
A label is automatically inferred from the file name, see the Ingestion service documentation. You can override this with the --label option. E.g.:
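    edge-impulse-uploader --label noise path/to/a/file.wav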
When a labeling method is not provided, the labels are automatically inferred from the filename through the following regex: ^[a-zA-Z0-9\s-_]+. For example: idle.01 will yield the label idle.
Thus, if you want to use labels (string values) containing float values (e.g. "0.01", "5.02", etc...), automatic labeling won't work.
To bypass this limitation, you can make a JSON file containing your dataset files' info. We also support adding metadata to your samples:
info.labels
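A sketch of what such a file can look like (field names follow the info.labels files exported by the Studio; double-check against an export from your own project):

    {
        "version": 1,
        "files": [
            {
                "path": "sample.01.wav",
                "category": "training",
                "label": { "type": "label", "label": "0.01" },
                "metadata": { "site": "factory-floor" }
            }
        ]
    }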
Metadata field is optional
To upload unlabeled data use "label": { "type": "unlabeled" }
To upload multi-label data use
And then use:
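For example (the --info-file flag is assumed here; check edge-impulse-uploader --help for the exact name in your CLI version):

    edge-impulse-uploader --info-file info.labels path/to/your/files/*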
(available since Edge Impulse CLI v1.21)
You can upload directories of data in a range of different formats:
By default, we try to automatically detect your dataset annotation format from the supported ones. If we cannot detect it, the uploader will output the list of formats. You can then use:
To clear the configuration, run:
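    edge-impulse-uploader --clean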
This resets the uploader configuration and will prompt you to log in again.
You can use an API key to authenticate with:
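    edge-impulse-uploader --api-key ei_your_project_api_key path/to/a/file.wav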
Note that this resets the uploader configuration and automatically configures the uploader's account and project.
In July 2023, we added support for many other image dataset annotation formats. Below is an example of the default Edge Impulse object detection format.
If you want to upload data for object detection, the uploader can label the data for you as it uploads it. To do this, all you need is to create a bounding_boxes.labels file in the same folder as your image files. The contents of this file are formatted as JSON with the following structure:
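A sketch of that structure (label names and coordinates are illustrative; export a labeled project via Dashboard > Export for an authoritative example):

    {
        "version": 1,
        "type": "bounding-box-labels",
        "boundingBoxes": {
            "my-image.jpg": [
                {
                    "label": "screw",
                    "x": 105,
                    "y": 12,
                    "width": 35,
                    "height": 35
                }
            ]
        }
    }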
You can have multiple keys under the boundingBoxes object, one for each file name. If you have data in multiple folders, you can create a bounding_boxes.labels file in each folder.
You don't need to upload bounding_boxes.labels
When uploading one or more images, we check whether a labels file is present in the same folder, and automatically attach the bounding boxes to the image.
So you can just do:
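    edge-impulse-uploader path/to/your/images/*.jpg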
or
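(using the --directory flag, assumed from the current uploader):

    edge-impulse-uploader --directory path/to/your/images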
Let the Studio do the work for you!
Unsure about the structure of the bounding boxes file? Label some data in the studio, then export this data by selecting Dashboard > Export. The bounding_boxes.labels file will be included in the exported archive.
The uploader can also upload data in the OpenMV dataset format. Pass in the option --format-openmv and point it at the folder of your dataset to automatically upload the data. Data is automatically split between testing and training sets. E.g.:
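    edge-impulse-uploader --format-openmv path/to/your/openmv-dataset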
--silent - omits information on startup. Still prints progress information.
--dev - lists development servers, use in conjunction with --clean.
--hmac-key <key> - set the HMAC key, only used for files that need to be signed such as wav files.
--concurrency <count> - number of files to upload in parallel (default: 20).
--progress-start-ix <index> - when set, the progress index will start at this number. Useful to split up large uploads in multiple commands while the user still sees this as one command.
--progress-end-ix <index> - when set, the progress index will end at this number. Useful to split up large uploads in multiple commands while the user still sees this as one command.
--progress-interval <interval> - when set, the uploader will not print an update for every line, but every interval period (in ms).
--allow-duplicates - to avoid pollution of your dataset with duplicates, the hash of a file is checked before uploading against known files in your dataset. Enable this flag to skip this check.
When using command line wildcards to upload large datasets you may encounter an error similar to this one:
This happens if the number of .wav files exceeds the total number of arguments allowed for a single command on your shell. You can easily work around this shell limitation by using the find command to call the uploader for manageable batches of files:
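A hedged sketch using find and xargs (the batch size and file pattern are illustrative):

    find . -type f -name "*.wav" -print0 | xargs -0 -n 50 edge-impulse-uploader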
You can include any necessary flags by appending them to the xargs portion, for example if you wish to specify a category:
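    find . -type f -name "*.wav" -print0 | xargs -0 -n 50 edge-impulse-uploader --category testing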
This Edge Impulse CLI is used to control local devices, act as a proxy to synchronise data for devices that don't have an internet connection, and to upload and convert local files. The CLI consists of the following tools:
edge-impulse-daemon - configures devices over serial, and acts as a proxy for devices that do not have an IP connection.
edge-impulse-uploader - allows uploading and signing local files.
edge-impulse-data-forwarder - a very easy way to collect data from any device over a serial connection, and forward the data to Edge Impulse.
edge-impulse-run-impulse - show the impulse running on your device.
edge-impulse-blocks - create organizational transformation, custom dsp, custom deployment and custom transfer learning blocks.
himax-flash-tool - to flash the Himax WE-I Plus.
Connect to devices without the CLI? Recent versions of Google Chrome and Microsoft Edge can connect directly to fully-supported development boards, without the CLI. See this blog post for more information.
Create an Edge Impulse account.
Install Python 3 on your host computer.
Install Node.js v20 or above on your host computer.
For Windows users, install the Additional Node.js tools (called Tools for Native Modules on newer versions) when prompted.
Install the CLI tools via:
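    npm install -g edge-impulse-cli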
You should now have the tools available in your PATH.
Create an Edge Impulse account.
Install Python 3 on your host computer.
Install Node.js v20 or above on your host computer.
Alternatively, run the following commands:
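On Debian/Ubuntu-based Linux, one common approach is the NodeSource setup script (on macOS you could use the official installer or Homebrew instead):

    curl -sL https://deb.nodesource.com/setup_20.x | sudo -E bash -
    sudo apt-get install -y nodejs
    node -v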
The last command should return the node version, v20 or above.
Let's verify the node installation directory:
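    npm config get prefix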
If it returns /usr/local/, run the following commands to change npm's default directory:
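A commonly used sequence (following the standard npm guidance for changing the global prefix; adjust the directory to taste):

    mkdir ~/.npm-global
    npm config set prefix '~/.npm-global'
    echo 'export PATH=~/.npm-global/bin:$PATH' >> ~/.profile
    source ~/.profile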
On macOS you might be using zsh by default, so you will want to update the correct profile file instead (for example ~/.zprofile):
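    echo 'export PATH=~/.npm-global/bin:$PATH' >> ~/.zprofile
    source ~/.zprofile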
Install the CLI tools via:
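    npm install -g edge-impulse-cli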
You should now have the tools available in your PATH.
If you have issues installing the CLI you can also collect data from fully-supported development boards directly using recent versions of Google Chrome and Microsoft Edge. See this blog post on how to get started.
This error indicates that an issue occurred when installing the Edge Impulse CLI for the first time, or that you did not select the additional tools when installing Node.js (they are not selected by default).
Remove Node.js and install it again, this time selecting the additional tools option (Tools for Native Modules) when prompted.
Re-install the CLI via:
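    npm install -g edge-impulse-cli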
If you receive the following error: The tools version "2.0" is unrecognized. Available tools versions are "4.0", launch a new command window as administrator and run:
This is an indication that the node_modules folder is not owned by you, but rather by root. This is probably not what you want. To fix this, run:
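A common fix (a generic sketch, not necessarily the exact command from the original guide) is to give your user ownership of the global node_modules directory:

    sudo chown -R $(whoami) $(npm config get prefix)/lib/node_modules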
Try to set the npm user to root and re-run the installation command. You can do this via:
If you receive an error such as:
You're running an older version of node-gyp (a way to build binary packages). Upgrade via:
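    npm install -g node-gyp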
This error occurs when you have upgraded Node.js since installing the Edge Impulse CLI. Re-install the CLI via:
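    npm uninstall -g edge-impulse-cli
    npm install -g edge-impulse-cli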
Which will rebuild the dependencies.
This can happen even though you have Xcode CLT installed if you've updated macOS since your install. Follow this guide to reinstall Xcode CLT.
If you see this error message and you're behind a proxy you will need to set your proxy settings via:
Windows
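A generic sketch using environment variables (the proxy address is a placeholder):

    set HTTP_PROXY=http://proxy.example.com:8080
    set HTTPS_PROXY=http://proxy.example.com:8080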
macOS, Linux
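    export HTTP_PROXY=http://proxy.example.com:8080
    export HTTPS_PROXY=http://proxy.example.com:8080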
Manually delete the Edge Impulse directory from node_modules and reinstall:
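For example (the global module path may differ on your system):

    rm -rf "$(npm config get prefix)/lib/node_modules/edge-impulse-cli"
    npm install -g edge-impulse-cli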
The Himax flash tool uploads new binaries to the Himax WE-I Plus over a serial connection.
You upload a new binary via:
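For example (the --firmware-path flag is assumed here; run himax-flash-tool --help to confirm):

    himax-flash-tool --firmware-path path/to/firmware.img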
This will yield a response like this:
--baud-rate <n> - sets the baud rate of the bootloader. This should only be used during development.
--verbose - enable debug logs, including all communication received from the device.
The blocks CLI tool creates different block types that are used in organizational features such as:
Transformation blocks - to transform large sets of data efficiently.
Deployment blocks - to build personalized firmware using your own data or to create custom libraries.
Custom DSP blocks - to create and host your custom signal processing techniques and use them directly in your projects.
Custom transfer learning blocks - to use your custom neural network architectures and load pre-trained weights, with Keras, PyTorch and scikit-learn.
With the blocks CLI tool, you can create new blocks, run them locally, and push them to the Edge Impulse infrastructure so we can host them for you. Edge Impulse blocks can be written in any language, and are based on Docker containers for maximum flexibility.
As an example here, we will show how to create a transformation block.
You can create a new block by running:
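    edge-impulse-blocks init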
When you're done developing the block you can push it to Edge Impulse via:
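    edge-impulse-blocks push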
The metadata about the block (which organization it belongs to, block ID) is saved in .ei-block-config, which you should commit. To view this data in a convenient format, run:
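(the info subcommand is assumed here; edge-impulse-blocks --help lists the available commands)

    edge-impulse-blocks info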
Rather than only running custom blocks in the cloud, the edge-impulse-blocks runner command lets developers download, configure, and run custom blocks entirely on their local machine, making testing and development much faster. The options depend on the type of block being run, and they can be viewed by using the help menu:
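    edge-impulse-blocks runner --help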
As seen above, the runner accepts a list of relevant option flags along with a variable number of extra arguments that get passed to the Docker container at runtime for extra flexibility. As an example, here is what happens when edge-impulse-blocks runner is used on a file transformation block:
Best of all, the runner only downloads data when it isn't present locally, thus saving time and bandwidth.
Transformation blocks use Docker containers, a virtualization technique which lets developers package up an application with all dependencies in a single package. Thus, every block needs at least a Dockerfile. This is a file describing how to build the container that powers the block, and it has information about the dependencies for the block - like a list of Python packages your block needs. This Dockerfile needs to declare an ENTRYPOINT: a command that needs to run when the container starts.
An example of a Python container is:
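A minimal sketch matching the description below (file names and flags are illustrative):

    FROM python:3.7.5

    # Install the Python dependencies for the block
    COPY requirements.txt ./
    RUN pip3 install --no-cache-dir -r requirements.txt

    # Copy the rest of the block's source code
    COPY . ./

    # The command that runs when the container starts
    ENTRYPOINT [ "python3", "transform.py" ]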
Which takes a base image with Python 3.7.5, then installs all dependencies listed in requirements.txt, and finally starts a script called transform.py.
Note: Do not use a WORKDIR under /home! The /home path will be mounted in by Edge Impulse, making your files inaccessible.
Note: If you use a different programming language, make sure to use ENTRYPOINT to specify the application to execute, rather than RUN or CMD.
Besides your Dockerfile you'll also need the application files, in the example above transform.py and requirements.txt. You can place these in the same folder.
When pushing a new block all files in your folder are archived and sent to Edge Impulse, where the container is built. You can exclude files by creating a file called .ei-ignore in the root folder of your block. You can either set absolute paths here, or use wildcards to exclude many files. For example:
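(the patterns below are illustrative)

    /test-data
    *.tmp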
To clear the configuration, run:
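    edge-impulse-blocks --clean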
This resets the CLI configuration and will prompt you to log in again.
You can use an API key to authenticate with:
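    edge-impulse-blocks --api-key ei_your_organization_api_key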
Note that this resets the CLI configuration and automatically configures your organization.
--dev - lists development servers, use in conjunction with --clean.