You can use your Linux x86_64 device or computer as a fully-supported development environment for Edge Impulse for Linux. This lets you sample raw data, build models, and deploy trained machine learning models directly from the Studio. If you have a webcam and microphone plugged into your system, they are automatically detected and can be used to build models.
Instruction set architectures
If you are not sure about your instruction set architecture, use:
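```bash
uname -m
```

This prints `x86_64` on Intel/AMD machines and `aarch64` on 64-bit Arm.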
To set this device up in Edge Impulse, run the following commands:
Ubuntu/Debian:
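A typical sequence looks like this (the Node.js major version shown is an example; check the Edge Impulse documentation for the currently recommended version):

```bash
# Install Node.js plus the build tools and multimedia packages the CLI relies on
curl -sL https://deb.nodesource.com/setup_20.x | sudo bash -
sudo apt install -y gcc g++ make build-essential nodejs sox gstreamer1.0-tools gstreamer1.0-plugins-good gstreamer1.0-plugins-base gstreamer1.0-plugins-base-apps

# Install the Edge Impulse CLI for Linux
sudo npm install edge-impulse-linux -g --unsafe-perm
```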
With all software set up, connect your camera or microphone to your operating system (see 'Next steps' further on this page if you want to connect a different sensor), and run:
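```bash
# Starts the connection wizard from the Edge Impulse CLI
edge-impulse-linux
```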
This will start a wizard which will ask you to log in, and choose an Edge Impulse project. If you want to switch projects, run the command with `--clean`.
That's all! Your machine is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
With everything set up you can now build your first machine learning model with these tutorials:
Counting objects using FOMO

Looking to connect different sensors? Our [Linux SDK](/tools/edge-impulse-for-linux/) lets you easily send data from any sensor and any programming language (with examples in Node.js, Python, Go and C++) into Edge Impulse.
To run your impulse locally, run the following on your Linux platform:
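```bash
# Builds, downloads and runs the trained model locally
edge-impulse-linux-runner
```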
This will automatically compile your model with full hardware acceleration, download the model to your local machine, and then start classifying. Our Linux SDK has examples on how to integrate the model with your favourite programming language.
If you have an image model then you can get a peek of what your device sees by being on the same network as your device, and finding the 'Want to see a feed of the camera and live classification in your browser' message in the console. Open the URL in a browser and both the camera feed and the classification are shown:
With 2–3× the speed of the previous generation, and featuring silicon designed in-house for the best possible performance, the Raspberry Pi 5 redefines the Raspberry Pi experience. It is a versatile Linux development board with a quad-core processor running at 2.4GHz, a GPIO header to connect sensors, and the ability to easily add an external microphone or camera - and it's fully supported by Edge Impulse. You'll be able to sample raw data, build models, and deploy trained machine learning models directly from the Studio.
In addition to the Raspberry Pi 5 we recommend that you also add a camera and / or a microphone. Most popular USB webcams and the Camera Module work fine on the development board out of the box.
In this documentation, we will detail the steps to set up your Raspberry Pi 5 with the new Bookworm release OS for Edge Impulse. This guide includes headless setup instructions and how to connect to Edge Impulse, along with troubleshooting tips.
You can set up your Raspberry Pi without a screen. To do so:
Download the Raspberry Pi OS - Bookworm Release
Ensure you have the latest Raspberry Pi OS which supports the new Edge Impulse Linux CLI version >= 1.3.0.
Flash the Raspberry Pi OS Image
Prepare the SD Card
When flashing the OS image, access the advanced options menu in the Raspberry Pi Imager to preconfigure your WiFi and enable SSH.
`wpa_supplicant.conf` cannot be used from Bookworm onward. You must use the `rpi-imager` tool or the advanced menu of `raspi-config` to set up WiFi.
Create an empty file called `ssh` in the boot drive to enable SSH.
Boot the Raspberry Pi
Insert the SD card into your Raspberry Pi 5 and power it on.
Find the IP Address
Locate the IP address of your Raspberry Pi using your router's DHCP logs or a network scanning tool. On macOS or Linux, use:
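```bash
# Assumes the default hostname; mDNS resolves it to the Pi's IP address
ping raspberrypi.local
```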
This will display the IP address, e.g., `192.168.1.19`.
Connect via SSH
Open a terminal and connect to the Raspberry Pi:
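```bash
# Replace the IP address with the one you found above
ssh pi@192.168.1.19
```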
Log in with the default username `pi` and password `raspberry`.
If you have a screen and a keyboard/mouse attached to your Raspberry Pi:
Flash the Raspberry Pi OS Image
Flash the OS image to an SD card as described above.
Boot the Raspberry Pi
Insert the SD card into your Raspberry Pi 5 and power it on.
Connect to WiFi
Use the graphical interface to connect to your WiFi network.
Open a Terminal
Click the 'Terminal' icon in the top bar of the Raspberry Pi desktop.
To set this device up in Edge Impulse, run the following commands:
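A typical sequence on Raspberry Pi OS (Bookworm) looks like this (the Node.js major version shown is an example; check the Edge Impulse documentation for the currently recommended version):

```bash
# Install Node.js plus the build tools and multimedia packages the CLI relies on
curl -sL https://deb.nodesource.com/setup_20.x | sudo bash -
sudo apt install -y gcc g++ make build-essential nodejs sox gstreamer1.0-tools gstreamer1.0-plugins-good gstreamer1.0-plugins-base gstreamer1.0-plugins-base-apps

# Install the Edge Impulse CLI for Linux
sudo npm install edge-impulse-linux -g --unsafe-perm
```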
Then to update npm packages:
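```bash
# Update the globally-installed Edge Impulse CLI package
sudo npm update -g edge-impulse-linux
```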
If you have a Raspberry Pi Camera Module, you also need to activate it first. Run the following command:
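```bash
sudo raspi-config
```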
Use the cursor keys to select and open Interfacing Options, then select Camera, and follow the prompt to enable the camera. Reboot the Raspberry Pi.
If you want to install Edge Impulse on your Raspberry Pi using Docker, run the following commands:
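One possible approach is sketched below; the image name and flags are illustrative, not an official Edge Impulse container. Passing `/dev` through lets the container reach a USB camera or microphone:

```bash
docker run -it --privileged -v /dev:/dev debian:bookworm /bin/bash
```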
Once in the Docker container, run:
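A sketch, assuming a Debian-based container as above:

```bash
apt update && apt install -y curl gcc g++ make build-essential sox
curl -sL https://deb.nodesource.com/setup_20.x | bash -
apt install -y nodejs
npm install edge-impulse-linux -g --unsafe-perm
```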
With all software set up, connect your camera or microphone to your Raspberry Pi (see 'Next steps' further on this page if you want to connect a different sensor).
To connect your Raspberry Pi 5 to Edge Impulse, run the following command:
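```bash
edge-impulse-linux
```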
You can now sample raw data, build models, and deploy trained machine learning models directly from the Studio. If you have any questions or need further assistance, please let us know on the forum: forum.edgeimpulse.com.
Wrong OS bits
If you see the following error when trying to deploy a .eim model to your Raspberry Pi:
It likely means you are attempting to deploy a .eim Edge Impulse model file to a 32-bit operating system running on a 64-bit CPU. To check your hardware architecture and OS in Linux, please run the following commands:
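```bash
uname -m          # CPU / kernel architecture, e.g. aarch64
getconf LONG_BIT  # word size of the userland: 32 or 64
```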
You can use your Intel or M1-based Mac computer as a fully-supported development environment for Edge Impulse for Linux. This lets you sample raw data, build models, and deploy trained machine learning models directly from the Studio. If you have a MacBook, the webcam and microphone of your system are automatically detected and can be used to build models.
To connect your Mac to Edge Impulse:
Open a terminal window and install the dependencies:
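For example, via Homebrew (assumes Homebrew is installed):

```bash
brew install node
```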
Lastly, install the Edge Impulse CLI:
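```bash
npm install edge-impulse-linux -g
```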
Problems installing the CLI?
With the software installed, open a terminal window and run:
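```bash
edge-impulse-linux
```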
This will start a wizard which will ask you to log in, and choose an Edge Impulse project. If you want to switch projects, run the command with `--clean`.
With everything set up you can now build your first machine learning model with these tutorials:
To run your impulse locally, just open a terminal and run:
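```bash
edge-impulse-linux-runner
```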
If you have an image model then you can get a peek of what your device sees by being on the same network as your device, and finding the 'Want to see a feed of the camera and live classification in your browser' message in the console. Open the URL in a browser and both the camera feed and the classification are shown:
You must use a 64-bit OS, as 32-bit OS is no longer supported. The Raspberry Pi 5 uses the Broadcom BCM2712 (Arm Cortex-A76), which is a 64-bit CPU. If you are installing Raspberry Pi OS for the RPi 5, make sure you use the 64-bit version; the Raspberry Pi 5 cannot run armv7 images.
Flash the OS image to an SD card using a tool like the Raspberry Pi Imager.
See the installation and troubleshooting guide.
That's all! Your Mac is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
Looking to connect different sensors? Our Linux SDK lets you easily send data from any sensor and any programming language (with examples in Node.js, Python, Go and C++) into Edge Impulse.
This will automatically compile your model with full hardware acceleration, download the model to your Raspberry Pi, and then start classifying. Our Linux SDK has examples on how to integrate the model with your favourite programming language.
The AM62X Starter Kit from Texas Instruments is a development platform for the AM62X with quad-core Arm Cortex-A53s at 1.4GHz. This general-purpose microprocessor supports 1080p displays through HDMI, 5MP camera input through MIPI-CSI2 (including Raspberry Pi camera support), and multichannel audio. The Linux distribution for this device comes with TensorFlow Lite, ONNX Runtime, OpenCV, and GStreamer, all with Python bindings and C++ libraries.
Please visit Texas Instruments' website for instructions on how to use this board with Edge Impulse.
The SAMA7G54 is a high-performance, Arm Cortex-A7 CPU-based embedded microprocessor (MPU) running up to 1 GHz. It supports multiple memories such as 16-bit DDR2, DDR3, DDR3L, LPDDR2, LPDDR3 with flexible boot options from octal/quad SPI, SD/eMMC as well as 8-bit SLC/MLC NAND Flash.
The SAMA7G54 integrates complete imaging and audio subsystems with 12-bit parallel and/or MIPI-CSI2 camera interfaces supporting up to 8 Mpixels and 720p @ 60 fps, up to four I2S, one SPDIF transmitter and receiver and a 4-stereo channel audio sample rate converter.
The device also features a large number of connectivity options including Dual Ethernet (one Gigabit Ethernet and one 10/100 Ethernet), six CAN-FD and three high-speed USB. Advanced security functions like secure boot, secure key storage, high-performance crypto accelerators for AES, SHA, RSA and ECC are also supported.
Microchip provides an optimized power management solution for the SAMA7G54. The MCP16502 has been fully tested and optimized to provide the best power vs. performance for the SAMA7G54.
Set these jumpers to the default settings:
Provide power to the board as described in the Microchip documentation.
Use the pre-built image - This has edge-impulse-linux preinstalled
OR
Use Docker & Buildroot to build your own custom image by following the Readme instructions in this repository.
Use BalenaEtcher to flash the .img file to an SD card.
The Microchip Developer Help portal has documentation for serial communications to the SAMA7G54-EK. Once your serial terminal is connected, make sure the device has power and press the `nStart` button; you should see messages appearing over the serial console.
Log in with the `root` user and `edgeimpulse` password.
If you would like to use SSH to connect to the board, some additional steps are necessary:
1. `cd /etc/ssh/`
2. `nano sshd_config`
3. Uncomment and change `PermitRootLogin prohibit-password` to `PermitRootLogin yes`
4. Uncomment `PasswordAuthentication yes`
5. Save and exit with `CTRL+X`, then `Y`, then `Enter`
6. `reboot` to restart SSH
7. `ifconfig` to get the IP address
8. On your host machine: `ssh root@www.xxx.yyy.zzz`
With all software set up, connect your webcam to your operating system and run:
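```bash
edge-impulse-linux
```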
This will start a wizard which will ask you to log in, and choose an Edge Impulse project. If you want to switch projects, run the command with `--clean`.
That's all! Your machine is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
With everything set up you can now build your first machine learning model with these tutorials:
Looking to connect different sensors? Our Linux SDK lets you easily send data from any sensor and any programming language (with examples in Node.js, Python, Go and C++) into Edge Impulse.
To run your impulse locally, run the following on your Linux platform:
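```bash
edge-impulse-linux-runner
```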
This will automatically compile your model with full hardware acceleration, download the model to your local machine, and then start classifying. Our Linux SDK has examples on how to integrate the model with your favourite programming language.
Another option is to download the .eim file directly from Studio, copy it to the device filesystem, and run it with the `--model-file` argument. In this case `chmod +x` will be required to give the .eim file executable permissions.
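For example (`modelfile.eim` is a placeholder for your downloaded model):

```bash
chmod +x modelfile.eim
edge-impulse-linux-runner --model-file ./modelfile.eim
```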
The main route for deploying an Edge Impulse project with the SAMA7G54-EK Evaluation Kit is using .eim files. However, it is also possible to build the example-standalone-inferencing-linux package and run it on the device.
To do that, run `make menuconfig`, go to Target packages -> Miscellaneous, and choose the Example Standalone Inferencing Linux package. Paste the project deployment files (edge-impulse-sdk, model-parameters, tflite-model) into the buildroot-microchip/buildroot-at91/package/example-standalone-inferencing-linux folder. Proceed to building the image with ``make -j $((`nproc` - 1))``. You will find the `custom` application file in /home on your target. Run it with `./custom features.txt`, where features.txt is a file with raw features.
Note: When using the .eim method it's important to ensure the file has appropriate permissions, so use `chmod` to set these if needed.
The Raspberry Pi 4 is a versatile Linux development board with a quad-core processor running at 1.5GHz, a GPIO header to connect sensors, and the ability to easily add an external microphone or camera - and it's fully supported by Edge Impulse. You'll be able to sample raw data, build models, and deploy trained machine learning models directly from the Studio.
In addition to the Raspberry Pi 4 we recommend that you also add a camera and / or a microphone. Most popular USB webcams and the Camera Module work fine on the development board out of the box.
You can set up your Raspberry Pi without a screen. To do so:
Flash the Raspberry Pi OS image to an SD card using the Raspberry Pi Imager.
You must use a 64-bit OS with `aarch64` and a 32-bit OS with `armv7l`.
During the flashing process, access the advanced options menu in the Raspberry Pi Imager to preconfigure your WiFi and enable SSH.
`wpa_supplicant.conf` cannot be used from Bookworm onward. You must use the Pi Imager or the advanced menu of `raspi-config` to set up WiFi.
Insert the SD card into your Raspberry Pi 4, and let the device boot up.
Find the IP address of your Raspberry Pi. You can either do this through the DHCP logs in your router or by scanning your network. E.g., on macOS and Linux via:
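```bash
# Assumes the default hostname; mDNS resolves it to the Pi's IP address
ping raspberrypi.local
```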
Here `192.168.1.19` is your IP address.
Connect to the Raspberry Pi over SSH. Open a terminal window and run:
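```bash
# Replace the IP address with the one you found above
ssh pi@192.168.1.19
```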
Log in with the default username `pi` and password `raspberry`.
If you have a screen and a keyboard/mouse attached to your Raspberry Pi:
Flash the Raspberry Pi OS image to an SD card.
Insert the SD card into your Raspberry Pi 4, and let the device boot up.
Connect to your WiFi network.
Click the 'Terminal' icon in the top bar of the Raspberry Pi desktop.
You can set up your Raspberry Pi without a screen. To do so:
Flash the Raspberry Pi OS image to an SD card.
You must use a 64-bit OS with `aarch64` and a 32-bit OS with `armv7l`.
After flashing the OS, find the `boot` mass-storage device on your computer, and create a new file called `wpa_supplicant.conf` in the `boot` drive. Add the following code:
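```
# Standard headless-setup template; replace the <> fields below
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1
country=<Insert 2 letter ISO 3166-1 country code here>

network={
    ssid="<Name of your wireless LAN>"
    psk="<Password for your wireless LAN>"
}
```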
(Replace the fields marked with `<>` with your WiFi credentials.)
Next, create a new file called `ssh` in the `boot` drive. You can leave this file empty.
Insert the SD card into your Raspberry Pi 4, and let the device boot up.
Find the IP address of your Raspberry Pi. You can either do this through the DHCP logs in your router, or by scanning your network. E.g., on macOS and Linux via:
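```bash
# Assumes the default hostname; mDNS resolves it to the Pi's IP address
ping raspberrypi.local
```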
Here `192.168.1.19` is your IP address.
Connect to the Raspberry Pi over SSH. Open a terminal window and run:
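```bash
# Replace the IP address with the one you found above
ssh pi@192.168.1.19
```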
Log in with the default username `pi` and password `raspberry`.
If you have a screen and a keyboard/mouse attached to your Raspberry Pi:
Flash the Raspberry Pi OS image to an SD card.
Insert the SD card into your Raspberry Pi 4, and let the device boot up.
Connect to your WiFi network.
Click the 'Terminal' icon in the top bar of the Raspberry Pi desktop.
To set this device up in Edge Impulse, run the following commands:
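A typical sequence looks like this (the Node.js major version shown is an example; check the Edge Impulse documentation for the currently recommended version):

```bash
# Install Node.js plus the build tools and multimedia packages the CLI relies on
curl -sL https://deb.nodesource.com/setup_20.x | sudo bash -
sudo apt install -y gcc g++ make build-essential nodejs sox gstreamer1.0-tools gstreamer1.0-plugins-good gstreamer1.0-plugins-base gstreamer1.0-plugins-base-apps

# Install the Edge Impulse CLI for Linux
sudo npm install edge-impulse-linux -g --unsafe-perm
```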
If you have a Raspberry Pi Camera Module, you also need to activate it first. Run the following command:
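```bash
sudo raspi-config
```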
Use the cursor keys to select and open Interfacing Options, and then select Camera and follow the prompt to enable the camera. Then reboot the Raspberry Pi.
If you want to install Edge Impulse on your Raspberry Pi using Docker you can run the following commands:
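One possible approach is sketched below; the image name and flags are illustrative, not an official Edge Impulse container. Passing `/dev` through lets the container reach a USB camera:

```bash
docker run -it --privileged -v /dev:/dev debian:bookworm /bin/bash
```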
Once in the Docker container, install the dependencies and the Edge Impulse CLI:
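A sketch, assuming a Debian-based container as above:

```bash
apt update && apt install -y curl gcc g++ make build-essential sox
curl -sL https://deb.nodesource.com/setup_20.x | bash -
apt install -y nodejs
npm install edge-impulse-linux -g --unsafe-perm
```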
You should now be able to run the Edge Impulse CLI tools from the container running on your Raspberry Pi. Note that this will only work with an external USB camera.
With all software set up, connect your camera or microphone to your Raspberry Pi (see 'Next steps' further on this page if you want to connect a different sensor), and run:
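```bash
edge-impulse-linux
```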
This will start a wizard which will ask you to log in, and choose an Edge Impulse project. If you want to switch projects, run the command with `--clean`.
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
With everything set up you can now build your first machine learning model with these tutorials:
Looking to connect different sensors? Our Linux SDK lets you easily send data from any sensor and any programming language (with examples in Node.js, Python, Go and C++) into Edge Impulse.
To run your impulse locally, just connect to your Raspberry Pi again, and run:
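```bash
edge-impulse-linux-runner
```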
This will automatically compile your model with full hardware acceleration, download the model to your Raspberry Pi, and then start classifying. Our Linux SDK has examples on how to integrate the model with your favourite programming language.
If you have an image model then you can get a peek of what your device sees by being on the same network as your device, and finding the 'Want to see a feed of the camera and live classification in your browser' message in the console. Open the URL in a browser and both the camera feed and the classification are shown:
If you see the following error when trying to deploy a .eim model to your Raspberry Pi:
It likely means you are attempting to deploy a .eim Edge Impulse model file to a 32-bit operating system running on a 64-bit CPU. To check your hardware architecture and OS in Linux, please run the following commands:
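```bash
uname -m          # CPU / kernel architecture, e.g. aarch64
getconf LONG_BIT  # word size of the userland: 32 or 64
```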
If you see something like this:
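```
aarch64
32
```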
It means that you are running a 32-bit OS on a 64-bit CPU. To run .eim models on aarch64 CPUs, you must use a 64-bit operating system. Please download and install the 64-bit version of Raspberry Pi OS if you see `aarch64` when you run `uname -m`.
The Renesas RZ/G2L is a state-of-the-art general-purpose 64-bit Linux MPU with a dual-core ARM Cortex-A55 processor running at 1.2GHz.
The RZ/G2L EVK consists of a SMARC SOM module and an I/O carrier board that provides a USB serial interface, 2 channel Ethernet interfaces, a camera and an HDMI display interface, in addition to many other interfaces (PMOD, microphone, audio output, etc.). The RZ/G2L EVK can be acquired directly through the Renesas website.
For more technical information about RZ/G2L, please refer to the Renesas RZ/G2L documentation.
Please create an account on Renesas' website to be able to download the packages and files outlined in the subsequent sections.
Renesas provides a Yocto build system to build all the necessary packages and create the Linux image. In this section, we will build the Linux image with the nodejs/npm packages required by the Edge Impulse CLI tools. Renesas requires the Ubuntu 20.04 Linux distribution to build the Linux image. Please follow the instructions provided here to set up your build environment for your G2L Yocto build. These instructions will also provide you with the necessary bootloader settings required to boot your G2L from SD card.
In order to use the Edge Impulse CLI tools, Node.js v18 needs to be installed into the Yocto image that you build. Per the instructions called out here, the following files must first be downloaded from Renesas (the specific versions specified are required):
Once these files are downloaded and extracted, you also need to download the patch file RZG2L_VLP306u1_switch_to_nodejs_18.17.1.patch and place it in the same directory.
After putting all of these files and the patch file into a single directory, you will need to create and patch your G2L Yocto build environment as follows (this can be exported into a script that can be run):
You can then invoke your G2L yocto build process via:
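For a VLP-based build this is typically something like the following (the machine and image names are common examples from Renesas' VLP documentation; adjust to your environment):

```bash
MACHINE=smarc-rzg2l bitbake core-image-weston
```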
The Renesas documentation then shows you the different build options and how to flash your compiled images onto your G2L board. Once your build completes, the files used in those subsequent flashing instructions can be found here:
The easiest way is to connect through serial to the RZ/G2L board using the USB mini-B port.
After connecting the board with a USB-C cable, please power the board with the red power button.
Please install `screen` on the host machine and then execute the following command from Linux to access the board:
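For example (the device node may differ on your machine; the RZ/G2L serial console runs at 115200 baud):

```bash
sudo screen /dev/ttyUSB0 115200
```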
You will see the boot process, then you will be asked to log in:
Log in with username `root`. There is no password.
Note that it should be possible to use an Ethernet cable and log in via SSH if the daemon is installed on the image. However, for simplicity, we do not cover that here.
Once you have logged in to the board, please run the following command to install the Edge Impulse Linux CLI:
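```bash
# Run as root on the board
npm install edge-impulse-linux -g --unsafe-perm
```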
With all software set up, connect your Google Coral camera to your Renesas board (see 'Next steps' further on this page if you want to connect a different sensor), and run:
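```bash
edge-impulse-linux
```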
This will start a wizard which will ask you to log in and choose an Edge Impulse project. If you want to switch projects, run the command with `--clean`.
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
Currently, all Edge Impulse models can run on the RZ/G2L CPU, which is a dedicated Arm Cortex-A55. In addition, you can bring your own model to Edge Impulse and use it on the device. However, if you would like to benefit from the DRP-AI hardware acceleration support, including higher performance and power efficiency, please use one of the following models:
For object detection:
YOLOv5 (v5)
FOMO (Faster objects More Objects)
For Image classification:
MobileNet v1, v2
It also supports models built within the Studio using the available layers on the training page.
Note that on the training page you have to select the target before starting the training, in order to tell the Studio that you are training the model for the RZ/G2L. This can be done at the top right of the training page.
If you would like to do object detection with YOLOv5 (v5), you need to fix the image resolution in the impulse design to 320x320; otherwise, the training might fail.
With everything set up you can now build your first machine learning model with these tutorials:
If you are interested in using the EON Tuner to improve the accuracy of the model, this is currently possible only for image classification. EON Tuner support for object detection is arriving soon.
If you use the EON Tuner with image classification, you need to filter out the `int8` models, since they are not supported by the DRP-AI, and filter out the grayscale models as well. Note that if you leave the EON Tuner page, the filter will reset to the default settings, which means you need to re-filter the above models.
To run your impulse locally, just connect to your Renesas RZ/G2L and run:
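```bash
edge-impulse-linux-runner
```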
This will automatically compile your model with full hardware acceleration and download the model to your Renesas board, and then start classifying.
Or you can select the RZ/G2L board from the deployment page; this will download an `.eim` model that you can use with the above runner as follows:
Go to the deployment page and select:
Then run the following on the RZ/G2L:
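```bash
# <path-to-model.eim> is the .eim file downloaded from the deployment page
edge-impulse-linux-runner --model-file <path-to-model.eim>
```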
You will see the model inferencing results in the terminal, and the results are also streamed to the local network. This allows you to see the output of the model in real time in your web browser: open the URL shown when you start the runner and you will see both the camera feed and the classification results.