Discover how to use the Experiments feature to test and improve machine learning model accuracy.
Created By: Adam Milton-Barker
Public Project Link: https://studio.edgeimpulse.com/public/521263/latest
Edge Impulse Experiments are a powerful new feature that allows users to run multiple active Impulses within a single project. This enables seamless experimentation with various model configurations on the same dataset, offering a more efficient way to compare results.
The updated interface includes a new "Experiments" section, which centralizes Impulse management and integrates the EON Tuner for enhanced trial handling. Along with API enhancements and streamlined processes, these changes significantly accelerate development and improve project organization, making it easier to transition from data collection to deployment.
This project provides a walkthrough of the Experiments feature, along with a tutorial to help you get started with Edge Impulse Experiments.
The Arduino Nano RP2040 Connect is a highly versatile development board, bringing the power of the Raspberry Pi RP2040 microcontroller to the compact Nano form factor. Equipped with dual-core 32-bit Arm Cortex-M0+ processors, it enables seamless creation of IoT projects with built-in Wi-Fi and Bluetooth support via the U-blox Nina W102 module. The board includes an accelerometer, gyroscope, RGB LED, and omnidirectional microphone, making it ideal for real-time data collection and embedded AI applications.
The Nano RP2040 Connect is fully compatible with the Arduino Cloud platform, allowing users to rapidly prototype IoT solutions. It also supports MicroPython for those who prefer Python for programming. With a clock speed of 133 MHz, the board is well-suited for machine learning tasks, offering support for frameworks like TinyML and TensorFlow Lite. Additionally, its 6-axis IMU and temperature sensor expand the board's capability for advanced real-world applications.
To begin working with the Edge Impulse platform and the Nano RP2040 Connect, follow this tutorial to connect your device.
Now it's time to create your Edge Impulse project. Head over to Edge Impulse, log in, and create your new project.
Edge Impulse offers Experiments to all users, with the Community tier allowing up to three simultaneous Experiments. Users on the Professional Plan and Enterprise tiers enjoy unlimited access to Experiments. You can explore all the platform's advanced features by signing up for an Enterprise Trial.
Once your project is created, you will see the project dashboard which will show you new additions to the platform.
Next you need to connect your device to the Edge Impulse platform. With the Nano connected to your computer, open a command line or terminal and run edge-impulse-daemon.
You will be prompted for your Edge Impulse login details to proceed. Once authenticated you will need to choose the COM port that your device is connected to, and then select the Edge Impulse project you want to connect your device to.
If you now head over to your project and go to the Devices tab, you will see your device is now connected.
Now that your device is connected to Edge Impulse, it is time to collect some data. Head over to the Data acquisition tab and select the RP2040.
First we will create the Normal data. This data will represent when a machine is running normally with no abnormal vibrations. Select the Inertial sensor and use Normal as the label. Next, record about 3 minutes of data, collected in 10-second samples from the device.
Next we will collect some Vibrations data. Change the label to Vibrations and record 3 more minutes of samples, but this time shake the Arduino around while the samples are being recorded.
You should now have about 6 minutes of data. Note that at this point the data is not split into Training and Testing groups.
Head to the project dashboard and scroll to the Danger Zone at the bottom. Click on the Perform train/test split button to split the data.
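The effect of the split button can be sketched in plain Python; the 80/20 ratio and sample names here are illustrative, not Edge Impulse internals:

```python
import random

def train_test_split(samples, test_ratio=0.2, seed=42):
    """Shuffle samples and hold out a fraction for testing."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_ratio)
    return shuffled[n_test:], shuffled[:n_test]  # (train, test)

# 36 ten-second samples is roughly 6 minutes of data
samples = [f"sample_{i:02d}" for i in range(36)]
train, test = train_test_split(samples)
print(len(train), len(test))  # 29 7
```

The key point is that the held-out samples never take part in training, which is what makes the later Model testing step a fair evaluation.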
Back on the Data acquisition tab, you will now see that the data has been split.
Now it is time to create your Impulse. Head over to the Create Impulse tab and you should see the configuration for your Nano RP2040. You can accept the defaults here.
First we will use the Spectral Analysis Processing block. Spectral Analysis is ideal for examining repetitive movements, particularly using accelerometer data. This tool breaks down signals to reveal their frequency and power patterns over time. Click Add to add the Spectral Analysis Processing block to your Impulse.
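To illustrate what a spectral-analysis block computes, here is a minimal power-spectrum sketch using a naive DFT. The 62.5 Hz sampling rate is an assumption for illustration, and the real block adds filtering and feature scaling on top of this idea:

```python
import math

def power_spectrum(signal, sample_rate):
    """Naive DFT: return (frequency, power) for each bin up to Nyquist."""
    n = len(signal)
    spectrum = []
    for k in range(n // 2):
        re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        power = (re * re + im * im) / n
        spectrum.append((k * sample_rate / n, power))
    return spectrum

# Synthetic 5 Hz vibration, standing in for one axis of accelerometer data
rate = 62.5
sig = [math.sin(2 * math.pi * 5 * t / rate) for t in range(64)]
peak_freq = max(power_spectrum(sig, rate), key=lambda fp: fp[1])[0]
print(peak_freq)  # nearest bin to 5 Hz
```

A strong, repeatable peak like this is exactly the kind of frequency-domain feature that separates the Vibrations class from Normal.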
For the Learning block, we will use Classification to classify between Normal and Vibrations. Click Add to add the Classification block to your Impulse. Next, click Save Impulse.
Now we will generate the features that the AI model will use to learn. Head over to the Spectral Features tab and click on Autotune parameters. An autotune job will start and you will see the output on the right-hand side of the UI. Once the job is complete, click Save parameters. You will be redirected to the Generate features tab.
A feature generation job will start, and once finished you will see the features on the right-hand side. The features should be nicely clustered; if you notice features that are not clustered correctly, you can click on them, review the samples, and update your dataset or settings to fix the issue.
Now it is time to train our model. Head over to the Classifier tab, leave the default settings intact, and click on Save and train.
A training job will start, and once completed you will see the results on the right hand side of the UI.
If you now head over to the Model testing tab, you will be able to use your newly trained model on the Test data that was set aside. The Test data was not shown to the model during training, so this will help to evaluate how well the model performs on unseen data.
The testing process will start and you will see the results once complete.
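The accuracy figure reported here is simply the fraction of held-out windows classified correctly; a sketch with hypothetical predictions:

```python
def accuracy(predictions, labels):
    """Fraction of test windows classified correctly."""
    correct = sum(p == l for p, l in zip(predictions, labels))
    return correct / len(labels)

# Hypothetical results on held-out Normal / Vibrations windows
labels      = ["Normal"] * 5 + ["Vibrations"] * 5
predictions = ["Normal"] * 5 + ["Vibrations"] * 4 + ["Normal"]
print(accuracy(predictions, labels))  # 0.9
```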
If you head to the Experiments tab, you will see that you now have your first Experiment listed.
You are now able to deploy your model to your Arduino. Head over to the Deployment tab and search for Arduino, then follow the steps provided to deploy the model to your device.
As this tutorial is specifically related to Experiments, we will continue straight to EON Tuner and creating our next Experiment.
The EON™ Tuner simultaneously tests multiple model architectures, chosen based on your device and latency needs, to identify the best one for your application. The tuning process can take a while, but you can monitor its progress at any point during the search.
On the Experiments tab, select EON Tuner. For the Search space configuration, select Classification in the Usecase templates drop-down, then click Start tuning to run.
At this point, it is time to grab a coffee and put your feet up, as this will take some time to complete.
If at any time during the EON tuning process you see a configuration you would like to try, you can simply click the Add button for that configuration.
Here we see a configuration with a considerable reduction in latency, RAM, and ROM, so we will use this configuration for our next Experiment.
The platform will create the blocks for your new Impulse and add the features automatically for you. If you head back to the Experiments tab, you will now see your new model waiting for you to test or deploy.
While the EON Tuner helps identify the best architectures and configurations automatically, you can also add a new Experiment manually, going through the block and neural network setup process again, by clicking the "Create new impulse" button on the Experiments page.
At this point, once you have trained a third model and tested its results, you have three different models to choose from and can select the best one to deploy to your device. On the Professional and Enterprise tiers, you can continue to evaluate and iterate with even more Experiments.
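Choosing between the three Experiments is a trade-off between accuracy and on-device cost. A sketch of that selection logic, with all numbers hypothetical:

```python
def pick_best(experiments, max_ram_kb, max_latency_ms):
    """Return the highest-accuracy experiment that fits the device budget."""
    feasible = [e for e in experiments
                if e["ram_kb"] <= max_ram_kb and e["latency_ms"] <= max_latency_ms]
    return max(feasible, key=lambda e: e["accuracy"]) if feasible else None

# Hypothetical figures for the three Impulses built in this tutorial
experiments = [
    {"name": "baseline",  "accuracy": 0.97, "ram_kb": 32, "latency_ms": 12},
    {"name": "eon_tuned", "accuracy": 0.96, "ram_kb": 12, "latency_ms": 4},
    {"name": "manual",    "accuracy": 0.98, "ram_kb": 48, "latency_ms": 20},
]
print(pick_best(experiments, max_ram_kb=40, max_latency_ms=15)["name"])  # baseline
```

Tightening the RAM or latency budget would push the choice toward the EON-Tuned model, which is the whole point of keeping multiple Experiments side by side.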
In this tutorial, we demonstrated how to build a defect detection system with Edge Impulse and the Arduino Nano RP2040, and how to leverage the EON Tuner to optimize your model. From there, we built a second and third model with the new Experiments feature in Edge Impulse to allow us to evaluate different options and results. With this, you can easily refine and enhance your models, showcasing the power and simplicity of Edge Impulse's new Experiments feature for continuous improvement in machine learning projects.
Getting Started with machine learning on the TI CC1352P Launchpad and Edge Impulse.
Created By: Swapnil Verma
Public Project Link:
This is a Getting Started Guide for the TI LAUNCHXL-CC1352P development board with Edge Impulse. Here we will connect the board to the Edge Impulse Studio, collect sensor data directly from the board, prepare a machine learning model using the collected data, deploy the model back to the board, and perform inferencing locally on the board. Let's get started!
The Launchpad Kit comes with the following items in the box:
The LAUNCHXL-CC1352P development board
Micro-USB to USB-A Cable
Documentation
The BOOSTXL sensor comes with:
BOOSTXL sensor board
Documentation
Each LAUNCHXL-CC1352P board comes preinstalled with a quick-start project called Project Zero. Let's run the quick-start project to verify our board is working properly.
Connect the board to the computer using the provided micro-USB cable.
Download the SimpleLink Starter smartphone app on your smartphone. This app lets you control the LEDs, see the state of the buttons and send data to the UART.
Open the app, select Project Zero from the list of available devices, and click on Sensor View to get the GUI.
In the Project Zero GUI, tap on the lights to turn them On/Off on the board. Press the user buttons on the board to see their status in the app.
In this section, we will upgrade the firmware of the development board so we can connect it to the Edge Impulse Studio.
Please follow this official guide to update the firmware:
To begin, you'll need to create an Edge Impulse account and a project in the Edge Impulse Studio. Please follow the below steps to do so:
After login, please create a new project, give it a suitable name, and select an appropriate Project type.
After creating a new project, let's connect the development board to the Studio.
The next step is connecting our TI LAUNCHXL board to the Edge Impulse Studio, so we can ingest sensor data for the machine learning model. Please follow the below steps to do so:
Open a terminal or command prompt and type edge-impulse-daemon. The [Edge Impulse daemon](https://docs.edgeimpulse.com/docs/edge-impulse-cli/cli-daemon) will start and prompt for user credentials. If you have not installed the Edge Impulse CLI as part of the Update the Firmware section, then please install it now.
After providing user credentials, it will prompt you to select an Edge Impulse project. Please navigate and select the project created in the previous steps, using the arrow keys.
After selecting the project, it will ask you to give the connected board a name. It is useful when you want to connect multiple boards to the same project.
Now the board should be connected to the selected project. The edge-impulse-daemon will tell you which project the board is connected to. We can also verify by checking the Devices tab of that project.
Navigate to the Data Acquisition tab in the Edge Impulse Studio.
Here you will find the Device we connected in the previous step and the sensor list. Please select the suitable sensor from the drop-down menu. For this project, I have selected the Accelerometer sensor and used default parameters.
Add a Label name for the sample you are about to collect. I am collecting up-down, side-to-side, and circular motion data, so I will use up_down, side_to_side, and circle as labels. As a default motion, I will also collect stationary data.
Clicking Start Sampling will start the sample collection process. Once the sample is collected, it will be automatically uploaded to the Edge Impulse Studio.
When enough samples are collected, [balance the data](https://docs.edgeimpulse.com/docs/edge-impulse-studio/data-acquisition#dataset-train-test-split-ratio) and, if required, [clean the data](https://docs.edgeimpulse.com/docs/edge-impulse-studio/data-acquisition#cropping-samples) as well.
After Impulse design is complete, save the design and navigate to the preprocessing tab (Spectral features in this case) for the feature generation.
Click on the Save parameters button, then navigate to the Generate features tab and click Generate features button for data preprocessing.
Once the training is complete, please navigate to the [Model testing](https://docs.edgeimpulse.com/docs/edge-impulse-studio/model-testing) tab, and click the Classify all button.
After testing is finished, the Edge Impulse Studio will show the model accuracy, and other parameters.
Even though it is a simple example, the Edge Impulse Studio prepared an excellent machine learning model just by using the default recommended parameters, in just a couple of minutes.
In this step, we will deploy our prepared model to the TI LAUNCHXL-CC1352P development board, so we can perform inference locally on the board.
Please navigate to the [Deployment](https://docs.edgeimpulse.com/docs/edge-impulse-studio/deployment) tab, select the TI LAUNCHXL-CC1352P board using the search bar, and click on the Build button.
After the build is finished, the new firmware will be downloaded automatically to your computer, and the Edge Impulse Studio will provide next-step instructions.
Please extract the folder and double-click the flash_<operating-system> file. This will flash the newly created firmware to the TI LAUNCHXL-CC1352P board. This firmware contains the machine learning model we prepared in the above steps.
The next step is testing! Let's see how well our model performs when run locally on the LAUNCHXL-CC1352P board. To start local inferencing, type edge-impulse-run-impulse in your terminal or command prompt.
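The running Impulse prints one confidence score per label; a small sketch of how you might parse that output on the host side (the exact line format shown here is illustrative, not the documented CLI output):

```python
def parse_result(line):
    """Parse one 'label: score' console line into a (label, score) tuple."""
    label, score = line.split(":")
    return label.strip(), float(score)

# Hypothetical console lines from the on-device classifier
lines = ["up_down: 0.95", "side_to_side: 0.03", "circle: 0.01", "stationary: 0.01"]
scores = dict(parse_result(l) for l in lines)
best = max(scores, key=scores.get)
print(best)  # up_down
```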
And, that's it. I hope this Getting Started Guide will be useful for you when using the TI LAUNCHXL-CC1352P with Edge Impulse.
Getting Started with image classification on the OpenMV RT1062 Camera and Edge Impulse.
Created By: Roni Bandini
GitHub Repository:
The open source, MCU-based OpenMV cameras and shields are popular devices for computer vision projects due to their low cost, user-friendly documentation, and excellent community. They are also easy to program, with Python and the OpenMV IDE. The new OpenMV RT1062 is the most powerful version yet; its features are truly outstanding:
ARM Cortex M7 processor running at 600 MHz
OV5640 image sensor capable of taking 2592x1944 (5MP) images
32 MB of SDRAM, 1 MB of SRAM, and 16 MB of program/storage flash
I/O pins
microSD slot capable of 25MB/s reads/writes
An SPI bus that can run up to 60 Mb/s
An onboard RTC which keeps running when the system is in low-power mode
A 12-bit X/Y/Z accelerometer (2/4/8g)
Onboard WiFi (a/b/g/n - 11/54/65 Mb/s) and Bluetooth (v5.1 – BR/EDR/BLE)
Onboard 10/100 Mb/s Ethernet
A LiPo battery connector
But how easy is it to work with this OpenMV RT1062? And how well does it perform in terms of inference time and power consumption when running machine learning vision projects built with Edge Impulse? Let’s find out.
For this project I have trained a simple classification model using Edge Impulse. I took 30 pictures of a Lego figure and 30 pictures of a small blue ball. I have created an Impulse with 96x96 px images and a Classification Learning Block. These items are very distinct from one another so the classification is quite easy from a machine learning perspective, but that is because we are more interested in evaluating the performance and ease of use of the OpenMV RT1062 here. In the future we could always explore more difficult ML tasks.
I have trained the model with 10 cycles at 0.0005 learning rate, which was enough again due to the disparity between the chosen objects. After testing I chose “OpenMV Library” on the Deployment page.
Next I extracted the .zip file that was generated and downloaded. I connected the OpenMV camera to my computer with a USB-C cable, and copied the labels.txt and trained .tflite files to the camera, which conveniently exposes itself as a USB drive when connected.
I have created a MicroPython script based on the example classification script included in the .zip file, but with small modifications to measure inference time.
I have clicked the Connection icon, and then executed the script with “Ctrl+R”. In the upper right window, a live view of the camera was presented, and I was also able to view console messages like inference time and FPS.
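The timing modification itself is simple: wrap the classify call and report the elapsed time. A plain-Python sketch of that logic (on the camera, MicroPython's time.ticks_ms()/time.ticks_diff() would replace time.perf_counter(), and the workload below is a stand-in for the real model call):

```python
import time

def timed(fn, *args):
    """Run fn(*args) and return (result, elapsed milliseconds)."""
    t0 = time.perf_counter()
    result = fn(*args)
    elapsed_ms = (time.perf_counter() - t0) * 1000.0
    return result, elapsed_ms

# Stand-in workload; on the OpenMV camera you would time the model call instead
result, ms = timed(lambda: sum(i * i for i in range(10000)))
print(f"inference took {ms:.2f} ms")
```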
The default script has a configured resolution of 240x240. In order to use higher resolutions, I had to change the sensor frame size to SXGA.
After this change I was able to increase the resolution up to 1024x1024.
The OpenMV FAQs state that the OpenMV Cam consumes about 100 mA while idle, and 140 mA when processing images. However, it’s important to note that there are several OpenMV Cam models (such as the RT1062, H7 Plus, and H7 R2), and ML inference could increase the power consumption, so I soldered two headers to the VIN and GND pins and put a multimeter in series with the power lines to run my own tests.
Given that I couldn't use the USB cable for this step, I needed some way to automatically execute the MicroPython script at startup. I clicked Tools, Save Open Script to OpenMV Cam, then selected Tools, Reset OpenMV Cam. At this point the script ran automatically when powering on the device, and I was able to measure idle and inference power consumption for every resolution.
As expected, inference time varies with higher frame sizes, but there is not much difference in power consumption.
Note: Because I used the low-resolution ammeter feature of a general-purpose multimeter, the power consumption measurements may not be entirely accurate; precision equipment should be used for a more detailed investigation if needed.
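When comparing configurations, energy per inference (E = V × I × t) can be more telling than current draw alone, since a faster model finishes sooner. A quick arithmetic sketch with illustrative figures only; substitute your own multimeter readings:

```python
def energy_per_inference_mj(voltage_v, current_ma, inference_ms):
    """Energy in millijoules: E = V * I * t."""
    return voltage_v * (current_ma / 1000.0) * (inference_ms / 1000.0) * 1000.0

# Illustrative figures: 5 V supply, 140 mA during inference, 50 ms per frame
e = energy_per_inference_mj(voltage_v=5.0, current_ma=140.0, inference_ms=50.0)
print(round(e, 3))  # 35.0 mJ per inference
```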
The OpenMV Cam has an on-board LED with blinking patterns related to different board states.
Green: Bootloader is running
Blue: The board is executing the main.py script
White: Firmware is panicking from a hardware failure
If for any reason your OpenMV RT1062 becomes bricked during firmware updates, you have to make a small bridge between the SBL pin and the 3.3V pin. Then you will be able to re-flash functional firmware via the OpenMV IDE.
Edge Impulse recently added Visual Anomaly Detection, but for now it is not supported on the OpenMV RT1062. Perhaps this functionality can be added in the future.
The new OpenMV RT1062 camera is an excellent choice for edge AI projects, because the 5MP resolution allows you not only to make inferences but also to capture high-quality snapshots for your dataset, and store them on a microSD card or send them to your Edge Impulse project via WiFi. Additionally, its power configuration can be fine-tuned for efficiency (with low consumption to start) thanks to the board’s RTC/sleep features. And of course, using a high-level programming language like MicroPython simplifies any development considerably.
Getting Started with machine learning on the Renesas CK-RA6M5 Cloud Kit and Edge Impulse.
Created By: Swapnil Verma
Public Project Link:
The Renesas CK-RA6M5 Cloud Kit enables users to securely connect to the cloud and explore the features of the Cortex-M33-based Renesas RA6M5 group of MCUs and cloud services. This development board can run machine-learning models and is fully supported by Edge Impulse.
This is a Getting Started Guide for the Renesas CK-RA6M5 board with Edge Impulse. Here we will connect the board to the Edge Impulse Studio, collect sensor data directly from the board, prepare a machine learning model using the collected data, deploy the model back to the board, and perform inferencing locally on the board. Let's get started!
The Cloud Kit comes with the following items in the box:
The CK-RA6M5 development board
RYZ014A PMOD (CAT-M1 Cellular Module)
SIM card
Antenna
2 Micro USB to A cables
Micro USB A/B to A adapter cable
Documentation
Each CK-RA6M5 board comes preinstalled with a quick-start project. Let's run that quick-start project to verify our board is working properly.
Make sure that (a) J22 is set to link pins 2-3 (b) J21 link is closed and (c) J16 link is open.
Connect J14 and J20 on the CK-RA6M5 board to USB ports on the host PC using the 2 micro USB cables supplied.
The power LED (LED6) on the CK-RA6M5 board lights up white, indicating that the CK-RA6M5 board is powered on.
Immediately after power-on, the four user LEDs will take on the following states:
LED1 Red – Off
LED2 RGB – Off
LED3 Green – Steady, full intensity
LED4 Blue – Blinking at 1 Hz frequency
Press the user button (S2) on the board to change the blinking frequency of the user LED4 (blue). With every press of the first user button (S2), the frequency will switch from 1 Hz to 5 Hz to 10 Hz and cycle back.
In order to connect the CK-RA6M5 board to the Edge Impulse Studio, we need to upgrade the board's firmware. Please follow the official Edge Impulse guide to update its firmware:
Once the board is flashed with Edge Impulse firmware, the real magic starts.
To begin, you'll need to create an Edge Impulse account and a project in the Edge Impulse Studio. Please follow the below steps to do so:
After login, please create a new project, give it a suitable name, and select an appropriate Project type.
After creating a new project, navigate to the Devices Tab.
The next step is connecting our Renesas CK-RA6M5 board to the Edge Impulse Studio, so we can ingest sensor data for the machine learning model. Please follow the below steps to do so:
Connect the Renesas CK-RA6M5 board to the computer by following the steps mentioned in the Quick Start section.
After providing user credentials, it will prompt you to select an Edge Impulse project. Please navigate and select the project created in the previous steps, using the arrow keys.
After selecting the project, it will ask you to give the connected board a name. It is useful when you want to connect multiple boards to the same project.
Now the board should be connected to the selected project. The edge-impulse-daemon will tell you which project the board is connected to. We can also verify by checking the Devices tab of that project.
It will also list all the sensors available for data gathering.
Navigate to the Data Acquisition tab in the Edge Impulse Studio.
Here you will find the Device we connected in the previous step and the sensor list. Please select the suitable sensor from the drop-down menu. For this project, I have selected the Microphone sensor and used default parameters.
Add a Label name for the sample you are about to collect. I am collecting clap and whistle sounds, so I will use clap and whistle as labels.
Clicking Start Sampling will start the sample collection process. Once the sample is collected, it will be automatically uploaded to the Edge Impulse Studio.
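Behind the scenes, each audio recording is later processed in fixed-size windows when building the Impulse; a sketch of that slicing (the 16 kHz rate and window sizes here are illustrative assumptions):

```python
def split_into_windows(samples, window_size, stride):
    """Slice a recording into fixed-size, possibly overlapping windows."""
    return [samples[i:i + window_size]
            for i in range(0, len(samples) - window_size + 1, stride)]

# 2 s of audio at 16 kHz, 1 s windows with a 0.5 s stride
audio = list(range(32000))
windows = split_into_windows(audio, window_size=16000, stride=8000)
print(len(windows))  # 3
```

Overlapping windows give the model more training examples from the same recording, which helps with short sounds like a clap.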
After Impulse design is complete, save the design and navigate to the preprocessing tab (MFE in this case) for the feature generation.
Click on the Save parameters button, then navigate to the Generate features tab and click Generate features button for data preprocessing.
After testing is finished, the Edge Impulse Studio will show the model accuracy, and other parameters.
Even though it is a simple example, the Edge Impulse Studio prepared an excellent machine learning model just by using the default recommended parameters, in just a couple of minutes.
In this step, we will deploy our prepared model to the Renesas CK-RA6M5 board, so we can perform inference locally on the board.
After the build is finished, the new firmware will be downloaded automatically to your computer, and the Edge Impulse Studio will provide next-step instructions.
Please extract the folder and double-click the flash_<operating-system> file. This will flash the newly created firmware to the CK-RA6M5 board. This firmware contains the machine learning model we prepared in the above steps.
The next step is testing! Let's see how well our model performs when run locally on the Renesas CK-RA6M5 board:
And, that's it. I hope this Getting Started Guide will be useful for you when using the Renesas CK-RA6M5 with Edge Impulse.
The TI LAUNCHXL-CC1352P is a development board equipped with the multiprotocol wireless CC1352P microcontroller. The Launchpad, when paired with the BOOSTXL sensor board, is fully supported by Edge Impulse, and can sample sensor data, build models, and deploy directly to the device without any programming required.
To learn more about Project Zero, please refer to the official TI user guide.
Firmware Update Guide -
Navigate to the Edge Impulse Studio and create an account. If you already have an account, then please login using your credentials.
Edge Impulse provides multiple options for data acquisition. In this Getting Started Guide, we will look at direct data ingestion from the board using edge-impulse-daemon. Please follow the below steps for data acquisition:
After data collection, the next step is machine learning model preparation. To do so, please navigate to the Create impulse tab and add relevant processing and learning blocks to the pipeline.
Edge Impulse Studio will automatically add an input block, and it will recommend a suitable preprocessing block and a learning block based on the data type. I have used the recommended ones in this project with the default arguments.
After feature generation, please navigate to the Learning tab (Classifier in this case) to design the neural network architecture. I have used the default architecture and parameters recommended by the Edge Impulse Studio. After selecting a suitable configuration, click on the Start training button.
If you have any questions, please check out the Edge Impulse documentation.
I downloaded the OpenMV IDE from the OpenMV website. The IDE is available for Windows, Mac, and Ubuntu Linux.
The MicroPython script used for this project is available in this GitHub repository:
Firmware Update Guide -
Navigate to the Edge Impulse Studio and create an account. If you already have an account, then please login using your credentials.
Open a terminal or command prompt and type edge-impulse-daemon. The Edge Impulse daemon will start and prompt for user credentials.
Edge Impulse provides multiple options for data acquisition. In this Getting Started Guide, we will look at direct data ingestion from the board using edge-impulse-daemon. Please follow the below steps for data acquisition:
When enough samples are collected, balance the data and, if required, clean the data as well.
After data collection, the next step is machine learning model preparation. To do so, please navigate to the Create impulse tab and add relevant processing and learning blocks to the pipeline.
Edge Impulse Studio will automatically add an input block, and it will recommend a suitable preprocessing block and a learning block based on the data type. I have used the recommended ones in this project with the default arguments.
After feature generation, please navigate to the Learning tab (Classifier in this case) to design the neural network architecture. I have used the default architecture and parameters recommended by the Edge Impulse Studio. After selecting a suitable configuration, click on the Start training button.
Once the training is complete, please navigate to the Model testing tab, and click the Classify all button.
Please navigate to the Deployment tab, select the Renesas CK-RA6M5 board using the search bar, and click on the Build button.
If you have any questions, please check out the Edge Impulse documentation.