API references for the ingestion service, remote management service, and the Studio API, plus SDK documentation for the acquisition and inferencing libraries, can all be found in the API reference section.
Edge Impulse for Linux SDKs for Node.js, Python, Go and C++
A gentle introduction to the exciting field of embedded machine learning.
Machine learning (ML) is a way of writing computer programs. Specifically, it’s a way of writing programs that process raw data and turn it into information that is meaningful at an application level.
For example, one ML program might be designed to determine when an industrial machine has broken down based on readings from its various sensors, so that it can alert the operator. Another ML program might take raw audio data from a microphone and determine if a word has been spoken, so it can activate a smart home device.
Unlike normal computer programs, the rules of ML programs are not determined by a developer. Instead, ML uses specialized algorithms to learn rules from data, in a process known as training.
In a traditional piece of software, an engineer designs an algorithm that takes an input, applies various rules, and returns an output. The algorithm’s internal operations are planned out by the engineer and implemented explicitly through lines of code. To predict breakdowns in an industrial machine, the engineer would need to understand which measurements in the data indicate a problem and write code that deliberately checks for them.
This approach works fine for many problems. For example, we know that water boils at 100°C at sea level, so it’s easy to write a program that can predict whether water is boiling based on its current temperature and altitude. But in many cases, it can be difficult to know the exact combination of factors that predicts a given state. To continue with our industrial machine example, there might be various different combinations of production rate, temperature, and vibration level that might indicate a problem but are not immediately obvious from looking at the data.
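For instance, the boiling-water case above only needs a couple of hand-written rules. A minimal sketch (the altitude adjustment is a rough illustrative approximation, not a physical model):

```python
def is_boiling(temperature_c: float, altitude_m: float) -> bool:
    """Hard-coded rule: boiling point drops roughly 1°C per 300 m of
    elevation (an illustrative approximation, not a physical model)."""
    boiling_point_c = 100.0 - altitude_m / 300.0
    return temperature_c >= boiling_point_c

print(is_boiling(95.0, 2000.0))  # True - boiling point is ~93.3°C at 2000 m
```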
To create an ML program, an engineer first collects a substantial set of training data. They then feed this data into a special kind of algorithm, and let the algorithm discover the rules. This means that as ML engineers, we can create programs that make predictions based on complex data without having to understand all of the complexity ourselves.
Through the training process, the ML algorithm builds a model of the system based on the data we provide. We run data through this model to make predictions, in a process called inference.
There are many different types of machine learning algorithms, each with their own unique benefits and drawbacks. Edge Impulse helps engineers select the right algorithm for a given task.
Machine learning is an excellent tool for solving problems that involve pattern recognition, especially patterns that are complex and might be difficult for a human observer to identify. ML algorithms excel at turning messy, high-bandwidth raw data into usable signals, especially combined with conventional signal processing.
For example, the average person might struggle to recognize the signs of a machine failure given ten different streams of dense, noisy sensor data. However, a machine learning algorithm can often learn to spot the difference.
But ML is not always the best tool for the job. If the rules of a system are well defined and can be easily expressed with hard-coded logic, it’s usually more efficient to work that way.
Limitations of machine learning
Machine learning algorithms are powerful tools, but they can have the following drawbacks:
They output estimates and approximations, not exact answers
ML models can be computationally expensive to run
Training data can be time consuming and expensive to obtain
It can be tempting to try and apply ML everywhere—but if you can solve a problem without ML, it is usually better to do so.
Recent advances in microprocessor architecture and algorithm design have made it possible to run sophisticated machine learning workloads on even the smallest of microcontrollers. Embedded machine learning, also known as TinyML, is the field of machine learning when applied to embedded systems such as these.
There are some major advantages to deploying ML on embedded devices. The key advantages are neatly expressed in the unfortunate acronym BLERP, coined by Jeff Bier. They are:
Bandwidth—ML algorithms on edge devices can extract meaningful information from data that would otherwise be inaccessible due to bandwidth constraints.
Latency—On-device ML models can respond in real-time to inputs, enabling applications such as autonomous vehicles, which would not be viable if dependent on network latency.
Economics—By processing data on-device, embedded ML systems avoid the costs of transmitting data over a network and processing it in the cloud.
Reliability—Systems controlled by on-device models are inherently more reliable than those which depend on a connection to the cloud.
Privacy—When data is processed on an embedded system and is never transmitted to the cloud, user privacy is protected and there is less chance of abuse.
The best way to learn about embedded machine learning is to see it for yourself. To train your own model and deploy it to any device, including your mobile phone, follow our Getting Started guide.
The enterprise version of Edge Impulse offers team collaboration on projects: go to Dashboard, find the Collaborators section, and click the '+' icon. If you have an interesting research or community project we can enable collaboration on the free version of Edge Impulse as well; just email hello@edgeimpulse.com.
You can also create a public version of your Edge Impulse project. This makes your project available to the whole world - including your data, your impulse design, your models, and all intermediate information - and can easily be cloned by anyone in the community. To do so, go to Dashboard, and click Make this project public.
The minimum hardware requirements for the embedded device depend on the use case: anything from a Cortex-M0+ for vibration analysis, to a Cortex-M4F for audio, a Cortex-M7 for image classification, or a Cortex-A for object detection in video. View our inference performance metrics for more details.
We use a wide variety of tools, depending on the machine learning model. For neural networks we typically use TensorFlow and Keras, for object detection models we use TensorFlow with Google's Object Detection API, and for 'classic' non-neural network machine learning algorithms we mainly use sklearn. For neural networks you can see (and modify) the Keras code by clicking ⋮ and selecting Switch to expert mode.
Another big part of Edge Impulse are the processing blocks, as they clean up the data, and already extract important features from your data before passing it to a machine learning model. The source code for these processing blocks can be found on GitHub: edgeimpulse/processing-blocks (and you can build your own processing blocks as well).
It depends on the hardware.
For general-purpose MCUs we typically use EON Compiler with TFLite Micro kernels (including hardware optimization, e.g. via CMSIS-NN, ESP-NN).
On Linux, if you run the Impulse on CPU, we use TensorFlow Lite.
For accelerators we use a wide variety of other runtimes, e.g. a network hardcoded in silicon for Syntiant, a custom SNN-based inference engine for BrainChip Akida, DRP-AI for the Renesas RZ/V2L, and so on.
The EON Compiler compiles your neural networks to C++ source code, which then gets compiled into your application. This is great if you need the lowest RAM and ROM possible (EON typically uses 30-50% less memory than TensorFlow Lite) but you also lose some flexibility to update your neural networks in the field - as it is now part of your firmware.
By disabling EON we place the full neural network (architecture and weights) into ROM, and load it on demand. This increases memory usage, but you could just update this section of the ROM (or place the neural network in external flash, or on an SD card) to make it easier to update.
Yes you can! Check out our documentation on Bringing your own model (BYOM) into your Edge Impulse project, and using the Edge Impulse Python SDK!
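As a quick illustration, the Edge Impulse Python SDK can profile a local model file against a target device. The API key, file name and device identifier below are placeholders, and the exact method names may differ between SDK versions, so check the Python SDK documentation:

```python
import edgeimpulse as ei

# Placeholder API key - find yours under your project's Keys tab
ei.API_KEY = "ei_xxxxxxxx"

# Profile a locally trained TFLite model for an example target device;
# the device string is an assumption - list valid targets with
# ei.model.list_profile_devices() (check the SDK docs for your version)
profile = ei.model.profile(model="my_model.tflite", device="cortex-m4f-80mhz")
print(profile.summary())
```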
Edge Impulse uses UMAP (a dimensionality reduction algorithm) to project high dimensionality input data into a 3 dimensional space. This even works for extremely high dimensionality data such as images.
Yes. The enterprise version of Edge Impulse can integrate directly with your cloud service to access and transform data.
Simple answer: To get an indication of time per inference we show performance metrics in every DSP and ML block in the Studio. Multiply this by the active power consumption of your MCU to get an indication of power cost per inference.
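A back-of-the-envelope sketch of that calculation (all figures below are illustrative placeholders, not measured values):

```python
# Rough energy-per-inference estimate from the Studio's latency numbers.
# Every figure here is an illustrative placeholder - substitute your own.
dsp_ms = 8                 # DSP block latency shown in the Studio
nn_ms = 4                  # ML block latency shown in the Studio
active_current_a = 0.010   # MCU active current (10 mA), from its datasheet
supply_v = 3.3             # supply voltage

energy_mj = (dsp_ms + nn_ms) / 1000 * active_current_a * supply_v * 1000
print(f"~{energy_mj:.2f} mJ per inference")  # ~0.40 mJ with these numbers
```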
More complicated answer: It depends. Normal techniques to conserve power still apply to ML, so try to do as little as possible (do you need to classify every second, or can you do it once a minute?), be smart about when to run inference (can there be an external trigger like a motion sensor before you run inference on a camera?), and collect data in a lower power mode (don't run at full speed when sampling low-resolution data, and see if your sensor can use an interrupt to wake your MCU - rather than polling).
Also see Analyse Power Consumption in Embedded ML Solutions.
See ".eim models?" on the Edge Impulse for Linux pages.
Using the Edge Impulse Studio data acquisition tools (like the serial daemon or data forwarder), you can collect data samples manually with a pre-defined label. If you have a dataset that was collected outside of Edge Impulse, you can upload your dataset using the Edge Impulse CLI, data ingestion API, web uploader, enterprise data storage bucket tools or enterprise upload portals. You can then utilize the Edge Impulse Studio to split up your data into labeled chunks, crop your data samples, and more to create high quality machine learning datasets.
Yes! A "supported board" simply means that there is an official or community-supported firmware that has been developed specifically for that board that helps you collect data and run impulses. Edge Impulse is designed to be extensible to computers, smartphones, and a nearly endless array of microcontroller build systems.
You can collect data and upload it to Edge Impulse in a variety of ways. For example:
Transmitting data to the Data forwarder
Using the Edge Impulse for Linux SDK
By uploading files directly (e.g. CBOR, JSON, CSV, WAV, JPG, PNG)
Your trained model can be deployed as part of a C++ library. It requires some effort, but most build systems will work with our C++ library, as long as the build system has a C++ compiler and there is enough flash/RAM on your device to run the library (which includes the DSP block and model).
After creating your Edge Impulse Studio project, you will be directed to the project's dashboard. The dashboard gives a quick overview of your project such as your project ID, the number of devices connected, the amount of data collected, the preferred labeling method, among other editable properties. You can also enable some additional capabilities to your project such as collaboration, making your project public, and showcasing your public projects using Markdown READMEs as we will see.
The figure below shows the various sections and widgets of the dashboard that we will cover here.
The Getting Started section is here to help. You can choose from 3 different options to get started:
Add existing data: When selecting this option, you can then choose to Upload data from your computer or to Add storage bucket
Collect new data: When selecting this option, the getting started guide will ask you to either Scan QR code to connect to your phone, Connect to your computer, or to Connect your device or developer board. Make sure that your device or development board is flashed with the Edge Impulse official firmware.
Upload your model: This option will change the default workflow. See BYOM to learn more about how to import your existing model to Edge Impulse studio.
To share your private project with the world, go to Dashboard and click Make this project public.
By doing this, all of your data, block configurations, intermediate results, and final models will be shared with the world. Your project will be publicly accessible and can be cloned with a single click with the provided URL:
When you have a trained model available in your project, this card will appear to let you test your model with your phone or your computer. This is particularly useful to validate the behaviour of your model before integrating it into your targeted embedded firmware.
You can invite up to three collaborators to join and contribute to your project. To have unlimited collaborators, your project needs to be part of an organization.
To add a collaborator, go to your project's dashboard and find the "Collaborators" widget. Click the '+' icon and type the username or e-mail address of the other user. The user will be invited to create an Edge Impulse account if it doesn't exist.
The user will be automatically added to the project and will get an email notification inviting them to start contributing to your project. To remove a user, simply click the three dots beside the user, then tap 'Delete' and they will be automatically removed.
The project README enables you to explain the details of your project in a short way. Using this feature, you can add visualizations such as images, GIFs, code snippets, and text to your project in order to bring your colleagues and project viewers up to speed with the important details of your project. In your README you might want to add things like:
What the project does
Why the project is useful
Motivations of the project
How to get started with the project
What sensors and target deployment devices you used
How you plan to improve your project
Where users can get help with your project
To create your first README, navigate to the "about this project" widget and click "add README"
For more README inspiration, check out the public Edge Impulse project tutorials below:
The project info widget shows the project's specifications such as the project ID, labeling method, and latency calculations for your target device.
The project ID is a unique numerical value that identifies your project. Whenever you have any issue with your project in the Studio, you can share your project ID on the forum for assistance from Edge Impulse staff.
On the labeling method dropdown, you need to specify the type of labeling your dataset and model expect. This can be either one label per data item or bounding boxes. Bounding boxes only work for object detection tasks in the studio. Note that if you interchange the labeling methods, learning blocks will appear to be hidden when building your impulse.
One of the amazing Edge Impulse superpowers is the latency calculation component. This is the approximate time in milliseconds that the trained model and DSP operations will take during inference on the selected target device. This hardware-in-the-loop approach ensures that the target deployment device's compute resources are neither under- nor over-utilized. It also saves developers the time associated with numerous inference iterations back and forth in the Studio in search of optimum models.
In the Block Output section, you can download the results of the DSP and ML operations of your impulse.
The downloadable assets include the extracted features, the TensorFlow SavedModel, and both quantized and unquantized TensorFlow Lite models. This is particularly helpful when you want to perform other operations on the output blocks outside the Edge Impulse Studio. For example, if you need a TensorFlow.js model, you just need to download the TensorFlow SavedModel from the dashboard and convert it to the TensorFlow.js model format to be served in a browser.
Changing Performance Settings is only available for enterprise customers
Organizational features are only available for enterprise customers. View our pricing for more information.
This section consists of editable parameters that directly affect the performance of the studio when building your impulse. Depending on the selected or available settings, your jobs can either be fast or slow.
The use of GPUs for training and parallel DSP jobs is currently an internal experimental feature that will soon be released.
To bring even more flexibility to projects, the administrative zone gives developers the power to enable additional features that are not found in Edge Impulse projects by default. Most of these features are advanced features intended for organizations, or sometimes experimental features.
To activate these features, check the boxes next to the specific features you want to use and click Save experiments.
The danger zone widget consists of irrevocable actions that let you:
Perform train/test split. This action re-balances your dataset by splitting all your data automatically between the training and testing set and resets the categories for all data.
Launch the getting started wizard. This will remove all data, and clear out your impulse.
Transfer ownership. This action is available for users who have one or more organizations linked with their accounts. With it, you can start working on a project with your user profile and then transfer the ownership to your organization.
Delete your project. This action removes all devices, data, and impulses from your project.
Delete all data in this project.
You can upload an existing dataset to your project directly through the Edge Impulse Studio. The data should be in the data acquisition format (CBOR, JSON, CSV), or as WAV, JPG or PNG files.
To upload data using the uploader, go to the Data acquisition page and click on the uploader button as shown in the image below:
When uploading your data, you can choose the category you want your data to fall into, i.e. the training set, the testing set, or an automatic split between training and testing. You can also choose whether to infer the labels from the file names or to enter a single label that all uploaded files should receive.
There is a wide variety of devices that you can connect to your Edge Impulse project. These devices can help you collect datasets for your project, test your trained ML model and even deploy your ML model directly to your development board with a pre-built binary application (for fully supported development platforms).
On the Devices tab, you'll find a list of all your connected devices and a guide on how to connect new devices that are currently supported by Edge Impulse.
To connect a new device, click on the Connect a new device button on the top right of your screen.
You will get a pop-up with multiple options of devices you can connect to your Edge Impulse project. Available options include:
Welcome to Edge Impulse! We enable developers to create the next generation of intelligent device solutions with embedded machine learning. In the documentation you'll find user guides, tutorials and API documentation. For support, visit the forum.
If you're new to the idea of embedded machine learning, or machine learning in general, you may enjoy our quick guide:
Follow these three steps to build your first embedded Machine Learning model - no worries, you can use almost any device to get started.
You'll need some data:
If you have an existing development board or device, you can collect data with a few lines of code using the data forwarder or the Edge Impulse for Linux SDK.
If you want to collect live data from a supported development kit, select your board from the list of fully supported development boards and follow the instructions to connect your board to Edge Impulse.
If you already have a dataset, you can upload it via the uploader.
If you have a mobile phone, you can use it as a sensor to collect data.
Try the tutorials; these will let you build machine learning models that detect things in your home or office.
After training your model you can run your model on your device:
If you want to integrate the model with your own firmware or project, you can export your complete model (including all signal processing code and machine learning models) to a C++ or Arduino library with no external dependencies (open source and royalty-free).
If you have a fully supported development board (or your mobile phone) you can build new firmware - which includes your model - directly from the UI. It doesn't get easier than that!
If you have a gateway, a computer, or a web browser where you want to run your model, you can export to WebAssembly and run it anywhere you can run JavaScript.
We have some great tutorials, but you have full freedom in the models that you design in Edge Impulse. You can plug in new signal processing blocks and completely new neural networks.
All collected data for each project can be viewed on the Data acquisition tab. You can see how your data has been split for train/test set as well as the data distribution for each class in your dataset. You can also send new sensor data to your project either by file upload, WebUSB, Edge Impulse API, or Edge Impulse CLI.
The panel on the right allows you to collect data directly from any fully supported platform:
When using the Edge Impulse for Linux CLI, run edge-impulse-linux --clean and it will add your platform to the device list of your project. You will then be able to interact with it from the Record new data panel.
The train/test split is a technique for training and evaluating the performance of a machine learning algorithm. It indicates how your data is split between training and testing samples. For example, an 80/20 split indicates that 80% of the dataset is used for model training while 20% is used for model testing.
This section also shows how your data samples in each class are distributed to prevent imbalanced datasets which might introduce bias during model training.
Manually navigating to some categories of data can be time consuming, especially when dealing with a large dataset. The data acquisition filter enables the user to filter data samples based on some criteria of choice. This can be based on:
Label - the class that a sample represents.
Sample name - unique ID representing a sample.
Signature validity
Enabled and disabled samples
Length of sample - duration of a sample.
The filtered samples can then be manipulated by editing labels, deleting samples, or moving them from the training set to the test set and vice versa, as shown in the image above.
The data manipulations above can also be applied at the data sample level by simply navigating to the individual data sample, clicking ⋮, and selecting the type of action you want to perform on that specific sample. This might be renaming it, editing its label, disabling, cropping, splitting, downloading, or even deleting the sample when desired.
To crop a data sample, go to the sample you want to crop and click ⋮, then select Crop sample. You can specify a length, or drag the handles to resize the window, then move the window around to make your selection.
Made a wrong crop? No problem, just click Crop sample again and you can move your selection around. To undo the crop, just set the sample length to a high number, and the whole sample will be selected again.
Besides cropping you can also split data automatically. Here you can perform one motion repeatedly, or say a keyword over and over again, and the events are detected and can be stored as individual samples. This makes it easy to very quickly build a high-quality dataset of discrete events. To do so head to Data acquisition, record some new data, click ⋮, and select Split sample. You can set the window length, and all events are automatically detected. If you're splitting audio data you can also listen to events by clicking on the window; the audio player is automatically populated with that specific split.
Samples are automatically centered in the window, which might lead to problems on some models (the neural network could learn a shortcut where data in the middle of the window is always associated with a certain label), so you can select "Shift samples" to automatically move the data a little bit around.
Splitting data is - like cropping data - non-destructive. If you're not happy with a split just click Crop sample and you can move the selection around easily.
The labeling queue will only appear on your data acquisition page if you are dealing with an object detection task. The labeling queue shows a list of images that have been staged for annotation for your project.
If you are not dealing with an object detection task, you can simply disable the labeling queue bar by going to Dashboard > Project info > Labeling method, clicking the dropdown, and selecting "one label per data item" as shown in the image below.
The CSV Wizard allows users with larger or more complex datasets to easily upload their data without having to worry about converting it to the data acquisition format.
To access the CSV Wizard, navigate to the Data Acquisition tab of your Edge Impulse project and click on the CSV Wizard button:
We can take a look at some sample data from a Heart Rate Monitor (Polar H10). We can see there is a lot of extra information we don’t need:
Choose a CSV file to upload and select "Upload File". The file will be automatically analyzed and the results will be displayed in the next step. Here I have selected an export from an HR monitor. You can try it out yourself by downloading this file:
When processing your data, we will check for the following:
Does this data contain a label?
Is this data time series data?
Is this data raw sensor data or processed features?
Is this data separated by a standard delimiter?
Is this data separated by a non-standard delimiter?
If there are settings that need to be adjusted (for the start of your data you can select Skip first x lines or No header, and adjust the delimiter), you can do so before selecting Looks good, next.
Here you can select the timestamp column or row, and the frequency of the timestamps. If you do not have a timestamp column, you can select No timestamp column and add a timestamp later. If you do have a timestamp column, you can select the timestamp format (e.g. full timestamp) and the frequency of the timestamps; overriding is also possible via Override timestamp difference. For example, selecting 20000 will give you a detected frequency of 0.05 Hz.
Here you can select the label column or row. If you do not have a label column, you can select No (no worries, you can provide this when you upload data) and add a label later. If you do have a label column, select Yes, it's "Value". You can also select the columns that contain your values.
How long do you want your samples to be?
In this section, you can set a length limit to your sample size. For example, if your CSV contains 30 seconds of data, when setting a limit of 3000ms, it will create 10 distinct data samples of 3 seconds.
Congratulations! 🚀 You have successfully created a CSV transform with the CSV Wizard. You can now save this transform and use it to process your data.
Any CSV files that you upload into your project - whether it's through the uploader, the CLI, the API or through data sources - will now be processed according to the rules you set up with the CSV Wizard!
You can access any feature in the Edge Impulse Studio through the Edge Impulse API. We also have the ingestion service if you want to send data directly, and we have an open remote management protocol to control devices from the Studio.
For startups and enterprises looking to scale edge ML algorithm development from prototype to production, we offer an enterprise version of Edge Impulse. This includes all of the tools needed to go from data collection to model deployment, such as a robust dataset builder to future-proof your data, integrations with all major cloud vendors, dedicated technical support, custom DSP and ML capabilities, and full access to the Edge Impulse APIs to automate your algorithm development.
To get more information, please contact us.
The WebUSB and Edge Impulse daemon options work with any fully supported device by flashing the pre-built Edge Impulse firmware to your board. See the list of fully supported development boards.
For more information about the labeling queue and how to perform data annotation using AI-assisted labeling in Edge Impulse, have a look at our documentation.
Congratulations, you've trained your first embedded machine learning model! This page lists next steps you can take to make your devices smarter.
You've run your model in the browser, but you can also run it on a wide variety of devices. Head to the development boards section for a full overview. If you have a device that is not supported, no problem: you can export your model as a C++ library that runs on any embedded device. See Running your impulse locally for more information.
Making a machine learning model that responds to your voice is cool, but you can do a lot more with Edge Impulse. Here are a number of tutorials to get you started:
Your model was trained on +/- 20 seconds of data, which is a very small amount of data. To make your model more robust you can add more data.
If your model does not respond well enough on your keyword (e.g. if you have someone saying the word in a different tone or pitch), record some more data of the keyword.
If the model is too sensitive (triggers when you say something else), then say some different words and label them with the 'unknown' class.
You can record new data from your computer, your phone, or a development board. Go to Data acquisition and click Show options for instructions. Then, to split your data into individual samples, click the three dots next to a sample, and select Split sample (more info).
Think your model is awesome, and want to share it with the world? Go to Dashboard and click Make this project public. This will make your whole project - including all data, machine learning models and visualizations - available, and can be viewed and cloned by anyone with the URL.
Do you have any other questions or want to share your awesome ideas? Head to the forum!
In object detection ML projects, labeling is the process of defining regions of interest in the frame.
Manually labeling images can become tedious and time-consuming, especially when dealing with huge datasets. This is why Edge Impulse studio provides an AI-assisted labeling tool to help you in your labeling workflows.
To use the labeling queue, you will need to set your Edge Impulse project as an "object detection" project. The labeling queue will only display the images that have not been labeled.
Currently, it only works for defining bounding boxes (the ingestion format used to train both MobileNetV2 SSD and FOMO models).
Can't see the labeling queue?
Go to Dashboard, and under 'Project info > Labeling method' select 'Bounding boxes (object detection)'.
There are three ways to perform AI-assisted labeling in the Edge Impulse Studio:
Using YOLOv5
Using your own model
Using object tracking
Already have a labeled dataset?
If you already have a labeled dataset containing bounding boxes, you can use the uploader to import your data.
By utilizing an existing library of pre-trained object detection models from YOLOv5 (trained with the COCO dataset), common objects in your images can quickly be identified and labeled in seconds without needing to write any code!
To label your objects with YOLOv5 classification, click the Label suggestions dropdown and select “Classify using YOLOv5.” If your object is more specific than what is auto-labeled by YOLOv5, e.g. “coffee” instead of the generic “cup” class, you can modify the auto-labels to the left of your image. These modifications will automatically apply to future images in your labeling queue.
Click Save labels to move on to your next raw image, and see your fully labeled dataset ready for training in minutes!
You can also use your own trained model to predict and label your new images. From an existing (trained) Edge Impulse object detection project, upload new unlabeled images from the Data Acquisition tab.
Currently, this only works with models trained with MobileNet SSD transfer learning.
From the “Labeling queue”, click the Label suggestions dropdown and select “Classify using ”:
You can also upload a few samples to a new object detection project, train a model, then upload more samples to the Data Acquisition tab and use the AI-Assisted Labeling feature for the rest of your dataset. Classifying using your own trained model is especially useful for objects that are not in YOLOv5, such as industrial objects, etc.
Click Save labels to move on to your next raw image, and see your fully labeled dataset ready for training in minutes using your own pre-trained model!
If you have objects that are a similar size or common between images, you can also track your objects between frames within the Edge Impulse Labeling Queue, reducing the amount of time needed to re-label and re-draw bounding boxes over your entire dataset.
Draw your bounding boxes and label your images, then, after clicking Save labels, the objects will be tracked from frame to frame:
Now that your object detection project contains a fully labeled dataset, learn how to train and deploy your model to your edge device: check out our tutorial!
We are excited to see what you build with the AI-Assisted Labeling feature in Edge Impulse, please post your project on our forum or tag us on social media, @Edge Impulse!
The Raw Data block generates windows from data samples without any specific signal processing. It is great for signals that have already been pre-processed, or when you just need to feed your data directly into the Neural Network block.
GitHub repository containing all DSP block code: edgeimpulse/processing-blocks.
Scaling
Scale axes: Multiplies each axis by this number. This can be used to normalize your data between 0 and 1.
The Raw Data block retrieves raw samples and applies the Scaling parameter.
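For intuition, the block's behaviour can be reproduced in a couple of lines of NumPy. A minimal sketch, assuming a small accelerometer window and an illustrative scale factor:

```python
import numpy as np

window = np.array([[-9.8, 0.2, 1.1],
                   [-9.7, 0.3, 0.9]])       # raw samples, one row per reading
scale_axes = 0.05                            # illustrative "Scale axes" value

features = (window * scale_axes).flatten()   # scale each axis, then flatten the window
print(features)
```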
You can add arbitrary metadata to data items. You can use this for example to track on which site data was collected, where data was imported from, or where the machine that generated the data was placed. Some key use cases for metadata are:
Prevent leaking data between your train and validation set. See the section on preventing data leakage below.
Synchronization actions in data pipelines, for example to remove data in a project if the source data was deleted in the cloud.
Get a better understanding of real-world accuracy by seeing how well your model performs when grouped by a metadata key. E.g. whether data on site A performs better than site B.
Metadata is shown on Data acquisition when you click on a data item. From here you can add, edit and remove metadata keys.
It's pretty impractical to manually add metadata to each data item, so the easiest way is to add metadata when you upload data. You can do this either by:
Setting the x-metadata header to a JSON string when calling the ingestion service:
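For example, a minimal sketch using Python's requests library against the ingestion service; the file name, label and metadata values are placeholders, and the full list of supported headers is in the ingestion service API reference:

```python
import json
import requests

API_KEY = "ei_xxxxxxxx"  # placeholder project API key

with open("idle.01.cbor", "rb") as f:
    res = requests.post(
        "https://ingestion.edgeimpulse.com/api/training/files",
        headers={
            "x-api-key": API_KEY,
            "x-label": "idle",
            # Arbitrary metadata, passed as a JSON string
            "x-metadata": json.dumps({"site": "factory-A", "machine": "pump-3"}),
        },
        files={"data": ("idle.01.cbor", f, "application/cbor")},
    )
print(res.status_code, res.text)
```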
When training an ML model we split your data into a train and a validation set. This is done so that during training you can evaluate whether your model works on data that it has seen before (train set) and on data that it has never seen before (validation set) - ideally your model performs similarly well on both data sets: a sign that your model will perform well in the field on completely novel data.
However, this can give a false sense of security if data that is very similar ends up in both your train and validation set ("data leakage"). For example:
You split a video into individual frames. These images don't differ much from frame to frame, and you don't want some frames in the train set and some in the validation set.
You're building a sleep staging algorithm and look at 30-second windows. From window to window the data for one person will look similar, so you don't want one window in the train set and another in the validation set for the same person on the same night.
By default we split your training data randomly into a train and validation set (80/20 split), which does not prevent data leakage, but if you tag your data items with metadata you can avoid this. To do so:
Tag all your data items with metadata.
Go to any ML block and under Advanced training settings set 'Split train/validation set on metadata key' to a metadata key (e.g. video_file).
Now every data item with the same metadata value for video_file will always be grouped together in either the train or the validation set, so no more data leakage.
The data sources page is about much more than just adding data from external sources. It lets you create complete automated data pipelines so you can work on your active learning strategies.
From there, you can import datasets from existing cloud storage buckets, automate and schedule the imports, and trigger actions such as exploring and labeling your new data, retraining your model, automatically building a new deployment, and more.
Click on + Add new data source and select where your data lives:
You can either use:
AWS S3 buckets
Google Cloud Storage
Any S3-compatible bucket
Don't import data (if you just need to create a pipeline)
Click on Next, provide credentials:
Click on Verify credentials:
Here, you have several options to automatically label your data:
In the example above, the structure of the folder is the following:
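A representative layout (label and file names are placeholders):

```
dataset/
├── cars/
│   ├── cars.01741.jpg
│   └── cars.01742.jpg
├── unknown/
│   └── unknown.00001.jpg
└── unlabeled/
    └── img.00034.jpg
```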
The labels will be picked from the folder names and the data will be split between your training and testing sets using an 80/20 ratio.
The samples present in an unlabeled/ folder will be kept unlabeled in Edge Impulse Studio.
Alternatively, you can also organize your folder using the following structure to automatically split your dataset between training and testing sets:
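A representative layout for this option (again, the names are placeholders):

```
dataset/
├── training/
│   ├── cars.01741.jpg
│   └── unknown.00001.jpg
└── testing/
    ├── cars.01802.jpg
    └── unknown.00045.jpg
```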
When using this option, only the file name is taken into account. The part before the first . will be used to set the label, e.g. cars.01741.jpg will set the label to cars.
All the data samples will be unlabeled; you will need to label them manually before using them.
Finally, click on Next, post-sync actions.
From this view, you can automate several actions:
Recreate data explorer
Retrain model
If needed, retrains your model with the same impulse. If you enable this you'll also get an email with the new validation and test set accuracy.
Note: You will need to have trained your project at least once.
Create new version
Store all data, configuration, intermediate results and final models.
Create new deployment
Builds a new library or binary with your updated model. Requires 'Retrain model' to also be enabled.
Once your pipeline is set, you can run it directly from the UI, from external sources or by scheduling the task.
To run your pipeline from Edge Impulse Studio, click on the ⋮ button and select Run pipeline now.
To run your pipeline from code, click on the ⋮ button and select Run pipeline from code. This will display an overlay with curl, Node.js and Python code samples.
You will need to create an API key to run the pipeline from code.
By default, your pipeline will run every day. To schedule your pipeline jobs, click on the ⋮ button and select Edit pipeline.
Free users can only run the pipeline every 4 hours. If you are an enterprise customer, you can run this pipeline up to every minute.
Once the pipeline has successfully finished, you will receive an email like the following:
Another useful feature is to create a webhook to call a URL when the pipeline has run. It will send a POST request containing the following information:
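The exact payload fields depend on your pipeline configuration, so inspect what your pipeline actually sends. A minimal sketch of a receiver, using Flask with an assumed /ei-webhook path, that simply logs whatever the Studio posts:

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/ei-webhook", methods=["POST"])  # assumed path - use whatever URL you configured
def pipeline_webhook():
    payload = request.get_json(silent=True) or {}
    # Log the pipeline result; inspect the payload to see the exact fields sent
    print("Pipeline finished:", payload)
    return "", 200

if __name__ == "__main__":
    app.run(port=8080)
```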
As of today, if you want to update your pipeline, you need to edit the configuration JSON available under ⋮ -> Run pipeline from code.
Here is an example of what you can get if all the actions have been selected:
Free projects only have access to the above builtinTransformationBlock.
Select Copy as pipeline step and paste it into the configuration JSON file.
The Image block is dedicated to computer vision applications. It normalizes image data, and optionally reduces the color depth.
GitHub repository containing all DSP block code: edgeimpulse/processing-blocks.
Color depth: Color depth to use (RGB or grayscale)
The Image block performs normalization, converting each pixel's channel to a float value between 0 and 1. If Grayscale is selected, each pixel is converted to a single value using the ITU-R BT.601 conversion (Y' component only).
The Spectrogram processing block extracts time and frequency features from a signal. It performs well on audio data for non-voice recognition use cases, or on any sensor data with continuous frequencies.
GitHub repository containing all DSP block code: edgeimpulse/processing-blocks.
Picking the right parameters for DSP algorithms can be difficult. It often requires a lot of experience and experimenting. The autotuning function makes this process easier by looking at the entire dataset and recommending a set of parameters that is tuned for your dataset.
Spectrogram
Frame length: The length of each frame in seconds
Frame stride: The step between successive frames in seconds
FFT size: The size of the FFT for each frame. Will zero pad or clip if frame length in samples does not equal FFT size.
Normalization
Noise floor (dB): signal lower than this level will be dropped
It first divides the window into multiple overlapping frames. The size and number of frames can be adjusted with the parameters Frame length and Frame stride. For example, with a window of 1 second, a frame length of 0.02 s and a stride of 0.01 s, it will create 99 time frames.
An FFT is then calculated for each frame. The number of frequency features for each frame is equal to the FFT size parameter divided by 2 plus 1. We recommend keeping the FFT size a power of 2 for performance purposes. Finally, the Noise floor value is applied to the power spectrum.
The features generated by the Spectrogram block are equal to the number of generated time frames times the number of frequency features.
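The shape arithmetic from the example above, as a quick sketch (16 kHz sampling and an FFT size of 256 are assumed values):

```python
# Feature count for a 1 s window, 0.02 s frames, 0.01 s stride (assuming 16 kHz audio)
sampling_hz = 16000
window_samples = int(1.0 * sampling_hz)
frame_length = int(0.02 * sampling_hz)       # 320 samples per frame
frame_stride = int(0.01 * sampling_hz)       # 160 samples between frames
fft_size = 256

num_frames = (window_samples - frame_length) // frame_stride + 1   # 99 time frames
num_freq_features = fft_size // 2 + 1                               # 129 per frame
print(num_frames * num_freq_features)                               # 12771 features total
```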
Frequency bands and frame length
There is a connection between the FFT size parameter and the frame length. The frame length will be cropped or padded to the FFT size value before applying the FFT. For example, with an 8 kHz sampling frequency and a time frame of 0.02 s, each time frame contains 160 samples (8k * 0.02). If your FFT size is set to 128, time frames will be cropped to 128 samples. If your FFT size is set to 256, time frames will be padded with zeros.
Providing an info.labels file when uploading data (this works both in the CLI and in the Studio).
You can read samples, including their metadata, via the API, and then use the API to update the metadata. For example, this is how you could add a metadata field to the first data sample in your project:
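A sketch using the raw REST API with Python's requests library; the project ID and API key are placeholders, and the metadata route shown is an assumption, so verify the exact path and body in the API reference:

```python
import requests

PROJECT_ID = 1            # placeholder
API_KEY = "ei_xxxxxxxx"   # placeholder
BASE = f"https://studio.edgeimpulse.com/v1/api/{PROJECT_ID}"
headers = {"x-api-key": API_KEY}

# List the first training sample via the raw data API
samples = requests.get(f"{BASE}/raw-data?category=training&limit=1",
                       headers=headers).json()["samples"]
sample_id = samples[0]["id"]

# Set a metadata field on that sample (assumed route - verify in the API reference)
res = requests.post(f"{BASE}/raw-data/{sample_id}/metadata",
                    headers=headers,
                    json={"metadata": {"site": "factory-A"}})
print(res.json())
```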
The data explorer gives you a one-look view of your dataset, letting you quickly label unknown data. If you enable this you'll also get an email with a screenshot of the data explorer whenever there's new data.
You can also define who can receive the email. The users have to be part of your project.
If you are part of an organization, you can use your custom transformation jobs in the pipeline. In your organization workspace, go to Custom blocks -> Transformation and select Run job on the job you want to add.
Extracting meaningful features from your data is crucial to building small and reliable machine learning models, and in Edge Impulse this is done through processing blocks. We ship a number of processing blocks for common sensor data (such as vibration and audio):
The source code of these blocks are available in the Edge Impulse processing blocks GitHub repository.
If you have a very specific sensor, want to apply custom filters, or are implementing the latest research in digital signal processing, follow our tutorial on Building custom processing blocks.
The IMU Syntiant block rescales raw data to 8-bit values to match the NDP101/120 chip input requirements.
Scaling
Scale 16 bits to 8 bits: Scales data to 8-bit values in the [-1, 1] range; raw data is divided by 2g (2 * 9.80665). When using the official Edge Impulse firmware, this parameter should be enabled, as raw data is not rescaled. If this parameter is disabled the data samples will not be rescaled; disable this parameter if your raw data samples are already normalized to the [-1, 1] range.
The IMU Syntiant block retrieves raw samples and applies the Scale 16 bits to 8 bits parameter.
Building custom processing blocks is available for everyone but has to be self-hosted. If you want to host it on Edge Impulse infrastructure, you can do that within your organization interface.
In this tutorial, you'll learn how to use the Edge Impulse CLI to push your custom DSP block to your organization and how to make this processing block available in the Studio for all users in the organization.
The custom processing block we are using for this tutorial can be found here: https://github.com/edgeimpulse/edge-detection-processing-block. It is written in Python. Please note that one of the beauties of custom blocks is that you can write them in any language, as we host them in a Docker container and are not tied to a specific runtime.
Only available for enterprise customers
Organizational features are only available for enterprise customers. View our pricing for more information.
You'll need:
The Edge Impulse CLI. If you receive any warnings that's fine. Run edge-impulse-blocks afterwards to verify that the CLI was installed correctly.
Docker desktop installed on your machine. Custom blocks use Docker containers, a virtualization technique which lets developers package up an application with all dependencies in a single package. If you want to test your blocks locally (this is not a requirement), you'll also need:
A Custom Processing block running with Docker.
Inside your Custom DSP block folder, run the following command:
The output will look like this:
Modify or update your custom code if needed and run the following command:
The output will look similar to this:
That's it, now your custom DSP block is hosted in your organization. To make sure it is up and running, go to Custom blocks -> DSP in your organization and you will see the following screen:
To use your DSP block, simply add it as a processing block in the Create impulse view:
Full instructions on how to build processing blocks: Building custom processing blocks
When running edge-impulse-blocks init for hosting a custom DSP block, ensure you log into an Edge Impulse account that is a member of an organization. If you are logged into a personal account, you will be presented with the following CLI output:
The Audio MFCC block extracts coefficients from an audio signal. Similarly to the Audio MFE block, it uses a non-linear scale called the Mel scale. It is the reference block for speech recognition and can also perform well on some non-human voice use cases.
GitHub repository containing all DSP block code: edgeimpulse/processing-blocks.
Picking the right parameters for DSP algorithms can be difficult. It often requires a lot of experience and experimenting. The autotuning function makes this process easier by looking at the entire dataset and recommending a set of parameters that is tuned for your dataset.
Mel Frequency Cepstral Coefficients
Number of coefficients: Number of cepstral coefficients to keep after applying Discrete Cosine Transform
Frame length: The length of each frame in seconds
Frame stride: The step between successive frames in seconds
Filter number: The number of triangular filters applied to the spectrogram
FFT length: The FFT size
Low frequency: Lowest band edge of Mel-scale filterbanks
High frequency: Highest band edge of Mel-scale filterbanks
Window size: The size of the sliding window for local cepstral mean normalization. The window size must be odd.
Pre-emphasis
Coefficient: The pre-emphasis coefficient to apply to the input signal (0 means no filtering)
Note: Shift has been removed and set to 1 for all future projects. Older & existing projects can still change this value or use an existing value.
The feature extraction adds one extra step to the MFE block, resulting in a compressed representation of the filterbanks. A Discrete Cosine Transform is applied to each filterbank to extract cepstral coefficients. 13 coefficients are usually retained; the rest are discarded as they represent fast changes that are not useful for speech recognition.
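For intuition only (this is not the implementation used by the Edge Impulse block), librosa can compute a comparable set of 13 cepstral coefficients; the file name and frame sizes are illustrative:

```python
import librosa

# Load a mono clip at 16 kHz (the file name is a placeholder)
signal, sr = librosa.load("keyword.wav", sr=16000, mono=True)

# 13 cepstral coefficients per frame; frame/hop sizes roughly correspond to a
# 0.02 s frame length and 0.01 s stride at 16 kHz. Parameter names are librosa's.
mfccs = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13,
                             n_fft=512, hop_length=160, win_length=320)
print(mfccs.shape)  # (13, number_of_frames)
```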
The Audio Syntiant processing block extracts time and frequency features from a signal. It is similar to the Audio MFE but performs additional processing specific to the Syntiant NDP101/120 chip. This block can be used only with Syntiant targets.
Log Mel-filterbank energy features
Frame length: The length of each frame in seconds
Frame stride: The step between successive frames in seconds
Filter number (fixed): The number of triangular filters applied to the spectrogram
FFT length (fixed): The FFT size
Low frequency (fixed): Lowest band edge of Mel-scale filterbanks
High frequency (fixed): Highest band edge of Mel-scale filterbanks
Preemphasis
Coefficient: Pre-emphasis coefficient
Chip
Features extractor: Syntiant method to generate features; choose according to your chip
The feature extraction is a proprietary algorithm from Syntiant; however, the parameters are very close to the Audio MFE. The pre-emphasis coefficient is applied first to amplify higher frequencies. The signal is then divided into overlapping frames, defined by the Frame length and Frame stride, to extract speech features.
Sampling frequency
The Audio Syntiant block only supports a 16 kHz frequency. You can adjust the sampling frequency in the "Create Impulse" section.
Similarly to the Spectrogram block, the Audio MFE processing block extracts time and frequency features from a signal. However, it uses a non-linear scale in the frequency domain, called the Mel scale. It performs well on audio data, mostly for non-voice recognition use cases where the sounds to be classified can be distinguished by the human ear.
GitHub repository containing all DSP block code: edgeimpulse/processing-blocks.
Picking the right parameters for DSP algorithms can be difficult. It often requires a lot of experience and experimenting. The autotuning function makes this process easier by looking at the entire dataset and recommending a set of parameters that is tuned for your dataset.
Mel-filterbank energy features
Frame length: The length of each frame in seconds
Frame stride: The step between successive frames in seconds
Filter number: The number of triangular filters applied to the spectrogram
FFT length: The FFT size
Low frequency: Lowest band edge of Mel-scale filterbanks
High frequency: Highest band edge of Mel-scale filterbanks
Normalization
Noise floor (dB): signal lower than this level will be dropped
The feature extraction is similar to the Spectrogram block (the Frame length, Frame stride, and FFT length parameters are the same), but it adds two extra steps.
After computing the spectrogram, triangular filters are applied on a Mel scale to extract frequency bands. They are configured with the parameters Filter number, Low frequency and High frequency to select the frequency band and the number of frequency features to be extracted. The Mel scale is a perceptual scale of pitches judged by listeners to be equal in distance from one another. The idea is to extract more features (more filter banks) in the lower frequencies and fewer in the high frequencies, so it performs well on sounds that can be distinguished by the human ear.
The last step is to perform a local mean normalization of the signal, applying the Noise floor value to the power spectrum.
Upload your own model directly into your Edge Impulse project (TensorFlow SavedModel, ONNX, or TensorFlow Lite)
Bring your own model or BYOM allows you to optimize and deploy your own pretrained model (TensorFlow SavedModel, ONNX, or TensorFlow Lite) to any edge device, directly from your Edge Impulse project.
First, create a new project in Edge Impulse.
Also make sure you have your own pretrained model available locally on your computer, in one of the following formats: TensorFlow SavedModel (saved_model.zip), ONNX model (.onnx) or TensorFlow Lite model (.tflite).
For this guide, we will be uploading a pretrained image classification TFLite model for plant disease classification, downloaded from the TensorFlow Dev Hub.
Then, from the Dashboard of your Edge Impulse project under "Getting started", select Upload your model:
Upload your trained model: Upload a TensorFlow SavedModel (saved_model.zip), ONNX model (.onnx) or TensorFlow Lite model (.tflite) to get started.
Model performance: Do you want performance characteristics (latency, RAM and ROM) for a specific device? Select "No" to show the performance for a range of device types, or "Yes" to run performance profiling for any of our available officially supported Edge Impulse development platforms.
After configuring the settings for uploading your model, select Upload your model and wait for your model to upload. You can check the upload status via the "Upload progress" section.
When selecting an ONNX model, you can also upload a .npy file to Upload representative features (optional). If you upload a set of representative features - for example, your validation set - as an .npy file, we can automatically quantize this model for better on-device performance.
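For example, if your validation set is already a NumPy array with the same shape the model expects, saving it in .npy format is a one-liner (the array below is placeholder data):

```python
import numpy as np

# X_val: validation inputs with the same shape the model expects,
# e.g. (num_samples, 300, 300, 3) for the image model in this guide
X_val = np.random.rand(100, 300, 300, 3).astype(np.float32)  # placeholder data

np.save("representative_features.npy", X_val)  # upload this .npy file alongside the model
```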
Depending on the model you have uploaded in Step 1, the configuration settings available for Step 2 will change.
For this guide, we have selected the following model configuration settings for optimal processing of an image classification model with input shape (300, 300, 3) in RGB format, Classification model output, and 16 output labels: Tomato Healthy, Tomato Septoria Leaf Spot, Tomato Bacterial Spot, Tomato Blight, Cabbage Healthy, Tomato Spider Mite, Tomato Leaf Mold, Tomato_Yellow Leaf Curl Virus, Soy_Frogeye_Leaf_Spot, Soy_Downy_Mildew, Maize_Ravi_Corn_Rust, Maize_Healthy, Maize_Grey_Leaf_Spot, Maize_Lethal_Necrosis, Soy_Healthy, Cabbage Black Rot
After configuring your model settings, select Save model to view your model's on-device performance information for both MCUs and microprocessors (if applicable, depending on your model's arena size).
Optionally upload test data to ensure correct model settings and proper model processing:
Transfer learning is the process of taking features learned from one problem and leveraging them on a new but related problem. Most of the time these features are learned from large-scale datasets with common objects, making it faster and more accurate to tune and adapt them to new tasks. With Edge Impulse's transfer learning block for audio keyword spotting, we take the same transfer learning technique classically used for image classification and apply it to audio data. This allows you to fine-tune a pre-trained keyword spotting model on your data and achieve even better performance than using a classification block, even with a relatively small keyword dataset.
Excited? Train your first keyword spotting model in under 5 minutes with the getting started wizard!
To choose transfer learning as your learning block, go to create impulse and click on Add a Learning Block, and select Transfer Learning (Keyword Spotting).
To choose your preferred pre-trained network, select the Transfer learning tab on the left side of your screen and click choose a different model. A pop up will appear on your screen with a list of models to choose from as shown in the image below.
Edge Impulse uses state-of-the-art MobileNetV1 & V2 architectures trained on the ImageNet dataset as its pre-trained networks for you to fine-tune for your specific application.
Before you start training your model, you need to set the following neural network configurations:
Number of training cycles: each complete pass through all of the training data (with back-propagation updating the model's parameters as it goes) is known as an epoch, or training cycle.
Learning rate: the learning rate controls how much the model's internal parameters are updated during each step of the training process. You can also think of it as how fast the neural network will learn. If the network overfits quickly, you can reduce the learning rate.
Validation set size: The percentage of your training set held apart for validation, a good default is 20%.
You might also need to enable auto-balance to prevent model bias, or enable data augmentation to increase the size and diversity of your dataset and prevent overfitting.
The preset configurations just don't work for your model? No worries, Expert Mode is for you! Expert Mode gives you full control of your model so that you can configure it however you want. To enable the expert mode, just click on the "⋮" button and toggle the expert mode.
You can use the expert mode to change your loss function, optimizer, print your model architecture and even set an early stopping callback to prevent overfitting your model.
After extracting meaningful features from the raw signal using signal processing, you can now train your model using a learning block. We provide a number of pre-defined learning blocks, including classification, regression, anomaly detection (K-means), transfer learning for images, transfer learning for keyword spotting, and object detection.
Missing an architecture? You can create your own custom learning block, with PyTorch, Keras or scikit-learn.
For most of the learning blocks (except K-means Anomaly Detection), you can use the Switch to expert mode button to access the full Keras API for custom architectures, custom loss functions and optimizers, and more.
Want to use a novel ML architecture, or load your own transfer learning models into Edge Impulse? Create a custom learning block! It's easy to bring in any training pipeline into the Studio, as long as you can output TFLite or ONNX files. We have end-to-end examples of doing this in Keras, PyTorch and scikit-learn.
If you just want to modify the neural network architecture or loss function, you can also use expert mode directly in the Studio, without having to bring your own model. Go to any ML block, select the three dots, and select Switch to Keras (expert) mode.
This page describes the input and output formats if you want to bring your own model, but a good way to start building a custom learning block is by modifying one of the following example repositories:
- wraps the Ultralytics YOLOv5 repository (trained with PyTorch) to train a custom transfer learning model.
- a Keras implementation of transfer learning with EfficientNet B0.
- a basic multi-layer perceptron in Keras and TensorFlow.
- a basic multi-layer perceptron in PyTorch.
- trains a logistic regression model using scikit-learn, then outputs a TFLite file for inferencing using jax.
Any built-in block in the Edge Impulse Studio (e.g. classifiers, regression models or FOMO blocks) can be edited locally, and then pushed back as a custom block. This is great if you want to make heavy modifications to these training pipelines, for example to do custom data augmentation. To download a block, go to any ML block in your project, click the three dots, select Edit block locally, and follow the instructions in the README.
Training pipelines in Edge Impulse are built on top of Docker containers, a virtualization technique which lets developers package up an application with all dependencies in a single package. To train your own model you'll need to wrap all the required packages, your scripts, and (if you use transfer learning) your pre-trained weights into this container. When running in Edge Impulse the container does not have network access, so make sure you don't download dependencies while running (fine when building the container).
A typical Dockerfile might look like the following (see the example repositories for more information):
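A minimal sketch of such a Dockerfile, assuming a Python/Keras training script; the base image, package versions and file names are illustrative, not the exact ones used in the example repositories:

```dockerfile
# Sketch only - base image, versions and file names are illustrative
FROM python:3.10

WORKDIR /app

# Install dependencies at build time; the container has no network access
# when it runs inside Edge Impulse.
COPY requirements.txt ./
RUN pip3 install --no-cache-dir -r requirements.txt

# Copy your training scripts (and pre-trained weights, if you use transfer learning)
COPY . ./

# The ENTRYPOINT is required: it tells Edge Impulse which file to run
ENTRYPOINT ["python3", "-u", "train.py"]
```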
Important: ENTRYPOINT
It's important to create an ENTRYPOINT at the end of the Dockerfile to specify which file to run.
GPU Support
If you want to have GPU support (only for enterprise customers), you'll need CUDA packages installed. If you export a learning block from the Studio it will already have the right base packages, so use that Dockerfile as a starting point.
The entrypoint (see above in the Dockerfile) will be called with these four parameters:
--data-directory - where you can find the data (see below for the input/output formats).
--epochs - number of epochs to train for (set by the user in the UI).
--learning-rate - learning rate to train with (set by the user in the UI).
--out-directory - where to write the TFLite or ONNX files (see below for the input/output formats).
The data directory contains your dataset, after running any DSP blocks, and already split in a train/validation set:
X_split_train.npy
Y_split_train.npy
X_split_test.npy
Y_split_test.npy
The X_*.npy files are float32 Numpy arrays, already in the right shape (e.g. if you're training on 96x96 RGB images this will be of shape (n, 96, 96, 3)). You can typically load these without any modification into your training pipeline (see the notes after this section for caveats).
The Y_*.npy files are either:
int32 Numpy arrays, with four columns (label_index, sample_id, sample_slice_start_ms, sample_slice_end_ms).
A JSON array in the form of:
[{ "sampleId": 234731, "boundingBoxes": [{ "label": 1, "x": 260, "y": 313, "w": 234, "h": 261 }] } ]
2) is sent if your dataset has bounding boxes, in all other cases 1) is sent.
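As an illustration, loading the training split could look like the sketch below (assuming the non-bounding-box label format; the data directory path is whatever was passed in via --data-directory):

```python
import os
import numpy as np

data_dir = 'data'  # value passed in via --data-directory

X_train = np.load(os.path.join(data_dir, 'X_split_train.npy'))
Y_train = np.load(os.path.join(data_dir, 'Y_split_train.npy'))

print('X_train shape:', X_train.shape)   # e.g. (n, 96, 96, 3) for 96x96 RGB images
print('Y_train shape:', Y_train.shape)   # (n, 4): label_index, sample_id, slice start/end (ms)

labels = Y_train[:, 0]                    # the first column holds the label index
```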
This regenerates features (if necessary) and then downloads the updated dataset.
The input features for vision models are a 3D vector of shape (WIDTH, HEIGHT, CHANNELS), where the channel data is in RGB format and each pixel is scaled 0..1.
If the input to your model is different (e.g. BGR, or scaled 0..255) you'll need to transform the input. This needs to happen as part of your neural network, as the input will always be as stated above. Here's how you can do that:
If you have a model that requires the input to be scaled 0..255 (e.g. EfficientNet) you can inject a Mul layer that multiplies the input by 255 before passing it to the first hidden layer of your network.
If you have a model that requires BGR input, rather than RGB input (e.g. Resnet50), you'll need to transpose the first and last channels.
The training pipeline can output either TFLite or ONNX files:
If you output TFLite files:
model.tflite - a TFLite file with float32 inputs and outputs.
model_quantized_int8_io.tflite - a quantized TFLite file with int8 inputs and outputs.
saved_model.zip - a TensorFlow saved model (optional).
At least one of the TFLite files is required.
If you output ONNX files:
model.onnx - an ONNX file with float16 or float32 inputs and outputs.
We automatically convert this file to both unquantized and quantized TFLite files after training.
I'm using scikit-learn, I don't have TFLite or ONNX files...
To edit the block, go to:
Enterprise: go to your organization, Custom blocks > Machine learning.
Developers: click on your photo on the top right corner, select Custom blocks > Machine learning.
The block is now available from inside any of your Edge Impulse projects. Depending on the data your block operates on, you can add it via:
Object Detection: Create impulse > Add learning block > Object Detection (Images), then select the block via 'Choose a different model' on the 'Object detection' page.
Image classification: Create impulse > Add learning block > Transfer learning (Images), then select the block via 'Choose a different model' on the 'Transfer learning' page.
Audio classification: Create impulse > Add learning block > Transfer Learning (Keyword Spotting), then select the block via 'Choose a different model' on the 'Transfer learning' page.
Other (classification): Create impulse > Add learning block > Custom classification, then select the block via 'Choose a different model' on the 'Machine learning' page.
Other (regression): Create impulse > Add learning block > Custom regression, then select the block via 'Choose a different model' on the 'Regression' page.
Unfortunately object detection models typically don't have a standard way to go from neural network output layer to bounding boxes. Currently we support the following types of output layers:
MobileNet SSD
Edge Impulse FOMO
YOLOv5 (compatible with Ultralytics YOLOv5 v6)
YOLOv5 for Renesas DRP-AI
YOLOX
If you have an object detection model with a different output layer then please contact your user success engineer (enterprise) or let us know on the forums (free users) with an example on how to interpret the output, and we can add it.
The profiling API expects:
A TFLite file.
A reference model (which model is closest to your architecture) - you can choose between gestures-large-f32, gestures-large-i8, image-32-32-mobilenet-f32, image-32-32-mobilenet-i8, image-96-96-mobilenet-f32, image-96-96-mobilenet-i8, image-320-320-mobilenet-ssd-f32, keywords-2d-f32 and keywords-2d-i8. Make sure to use i8 models if you have quantized your model.
Here's how you invoke the API from Python:
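The snippet below is only a sketch of what such a call could look like using the requests library. The endpoint path and JSON field names are assumptions, not the authoritative schema, so check the Edge Impulse API reference for the exact request format.

```python
# Sketch only: the endpoint path and JSON field names below are assumptions,
# not the authoritative API schema - see the Edge Impulse API reference.
import base64
import requests

API_KEY = 'ei_...'   # your project API key
PROJECT_ID = 1       # your project ID

with open('model_quantized_int8_io.tflite', 'rb') as f:
    model_b64 = base64.b64encode(f.read()).decode('utf-8')

resp = requests.post(
    f'https://studio.edgeimpulse.com/v1/api/{PROJECT_ID}/jobs/profile-tflite',  # hypothetical path
    headers={'x-api-key': API_KEY},
    json={
        'tfliteFileBase64': model_b64,       # hypothetical field name
        'referenceModel': 'keywords-2d-i8',  # use an i8 reference model for quantized models
        'device': 'cortex-m4f-80mhz',        # hypothetical device identifier
    },
)
print(resp.json())
```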
Extracting meaningful features from your data is crucial to building small and reliable machine learning models, and in Edge Impulse this is done through processing blocks. We ship a number of processing blocks for common sensor data (such as vibration and audio), but they might not be suitable for all applications. Perhaps you have a very specific sensor, want to apply custom filters, or are implementing the latest research in digital signal processing. In this tutorial you'll learn how to support these use cases by adding custom processing blocks to the studio.
There is also a complete video covering how to implement your custom DSP block:
Development flow
This creates a copy of the example project locally. Then, you can run the example either through Docker or locally via:
Docker
Locally
Install the ngrok binary for your platform.
Get a URL to access the processing block from the outside world via:
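For example (assuming the example block listens on port 4446; adjust the port to whatever your block actually uses):

```bash
# Expose the local processing block through an ngrok tunnel
ngrok http 4446   # replace 4446 with the port your block listens on
```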
This yields a public URL for your block under Forwarding. Note down the address that includes https://.
Now that the custom processing block was created, and you've made it accessible to the outside world, you can add this block to Edge Impulse. In a project, go to Create Impulse, click Add a processing block, choose Add custom block (in the bottom left corner of the modal), and paste in the public URL of the block:
After you click Add block the block will show like any other processing block.
Add a learning block, then click Save impulse to store the impulse.
Processing blocks have configuration options which are rendered on the block parameter page. These could be filter configurations, scaling options, or control which visualizations are loaded. These options are defined in the parameters.json file. Let's add an option to smooth raw data. Open example-custom-processing-block-python/parameters.json and add a new section under parameters:
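A sketch of what the new entry could look like; the field names follow the conventions of the entries already in the example repository, so treat them as illustrative rather than authoritative:

```json
{
    "name": "Smooth",
    "value": false,
    "type": "boolean",
    "help": "Whether to smooth the raw data",
    "param": "smooth"
}
```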
Then, open example-custom-processing-block-python/dsp.py and replace its contents with:
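A minimal sketch of what dsp.py could look like at this stage; the generate_features signature and return fields follow the example repository's conventions and are illustrative, not authoritative:

```python
# Sketch of dsp.py - parameter names and return fields are illustrative
import numpy as np

def generate_features(implementation_version, draw_graphs, raw_data, axes,
                      sampling_freq, scale_axes, smooth):
    # 'smooth' is the new option; we accept it here but don't act on it yet.
    features = np.array(raw_data) * scale_axes
    return {
        'features': features.tolist(),
        'graphs': [],
        'output_config': {'type': 'flat', 'shape': {'width': len(features)}}
    }
```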
Restart the Python script, and then click Custom block in the studio (in the navigation bar). You now have a new option 'Smooth'. Every time an option changes we'll re-run the block, but as we have not written any code to respond to these changes nothing will happen.
We support a number of different types for configuration fields. These are:
int - renders a numeric textbox that expects integers.
float - renders a numeric textbox that expects floating point numbers.
string - renders a textbox that expects a string.
boolean - renders a checkbox.
select - renders a dropdown box. This also requires the parameter valid, which should be an array of valid values. E.g. this renders a dropdown box with options 'low', 'high' and 'none':
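A sketch of such an entry (field values are illustrative):

```json
{
    "name": "Filter",
    "value": "low",
    "type": "select",
    "valid": ["low", "high", "none"],
    "help": "Type of filter to apply",
    "param": "filter_type"
}
```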
To show the user what is happening we can also draw visuals in the processing block. Right now we support graphs (linear and logarithmic) and arbitrary images. By showing a graph of the smoothed sample we can quickly identify what effect the smooth option has on the raw signal. Open dsp.py and replace the content with the following script. It contains a very basic smoothing algorithm and draws a graph:
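A sketch extending the earlier dsp.py: a simple moving-average smoother plus a graph entry. The graph dictionary keys shown here are illustrative; see the API documentation referenced further below for the exact schema the studio expects.

```python
import numpy as np

def smooth_signal(data, window_size=5):
    # Simple moving-average smoothing
    kernel = np.ones(window_size) / window_size
    return np.convolve(data, kernel, mode='same')

def generate_features(implementation_version, draw_graphs, raw_data, axes,
                      sampling_freq, scale_axes, smooth):
    data = np.array(raw_data, dtype=np.float32) * scale_axes
    if smooth:
        data = smooth_signal(data)

    graphs = []
    if draw_graphs:
        # Key names below are illustrative - check the return-type documentation
        graphs.append({
            'name': 'Smoothed signal',
            'X': {'signal': data.tolist()},
            'y': list(range(len(data)))
        })

    return {
        'features': data.tolist(),
        'graphs': graphs,
        'output_config': {'type': 'flat', 'shape': {'width': len(data)}}
    }
```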
Restart the script, and click the Smooth toggle to observe the difference. Congratulations! You have just created your first custom processing block.
If you return a set number of features extracted from the signal, like the mean, you can also label these features. These labels will be used in the feature explorer. To do so, add a labels array that contains strings mapping back to the features you return (labels and features should have the same length).
In the previous step we drew a linear graph, but you can also draw logarithmic graphs or even full images. This is done through the type parameter:
This draws a graph with a logarithmic scale:
To show an image you should return the base64 encoded image and its MIME type. Here's how you draw a small PNG image:
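A sketch of how this could look. The base64 encoding itself is standard Python; the exact dictionary keys the studio expects for image graphs are assumptions here, so verify them against the API documentation mentioned below.

```python
import base64
import io
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt

def create_image_graph(data):
    # Render a small plot to an in-memory PNG and base64-encode it
    fig, ax = plt.subplots(figsize=(2, 2))
    ax.plot(data)
    buf = io.BytesIO()
    fig.savefig(buf, format='png')
    plt.close(fig)

    return {
        'name': 'My image',
        'image': base64.b64encode(buf.getvalue()).decode('utf-8'),  # key name assumed
        'imageMimeType': 'image/png',                                # key name assumed
        'type': 'image'                                              # key name assumed
    }
```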
If you output high-dimensional data (like a spectrogram or an image) you can enable dimensionality reduction for the feature explorer. This will run UMAP over the data to compress the features into three dimensions. To do so, set a flag on the info object in parameters.json, as sketched below.
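A sketch of what this could look like; the exact key name is an assumption, so check the parameters.json documentation for your SDK version:

```json
{
    "info": {
        "title": "My custom processing block",
        "visualization": "dimensionalityReduction"
    }
}
```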
Your custom block behaves exactly the same as any of the built-in blocks. You can process all your data, train neural networks or anomaly blocks, and validate that your model works.
However, we cannot automatically generate optimized native code for the block as we do for built-in processing blocks, so we try to help you write this code as much as possible.
In your custom DSP code, open the parameters.json file; you should have something similar to the following:
The cppType field is used to generate a function that you implement in the custom C++ library that you get from the deployment page.
When you export your project to a C++ library we generate structures for all the configuration options in the model-parameters/dsp_blocks.h header file. You only need to implement the feature extraction function: your {cppType} is used to generate an extract_{cppType}_features function declaration.
For example, with the above cppType parameter:
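A sketch of what the generated function could look like for a hypothetical cppType of my_preprocessing. The names mirror the pattern used by built-in blocks; check your generated model-parameters/dsp_blocks.h for the exact signature and config struct name.

```cpp
// Sketch only: the exact types live in the Edge Impulse C++ SDK you export.
// config_ptr points at the generated configuration structure for this block.
int extract_my_preprocessing_features(ei::signal_t *signal,
                                      ei::matrix_t *output_matrix,
                                      void *config_ptr,
                                      const float sampling_frequency) {
    // 1. Cast config_ptr to the generated config struct to read your options.
    // 2. Read raw samples from 'signal' (signal->get_data(...)).
    // 3. Compute your features and write them into output_matrix->buffer.
    return 0; // 0 == EIDSP_OK
}
```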
Implement your function in the main.cpp file (or somewhere else, just make sure it is referenced).
Also, please have a look at the video at the top of this page (around minute 25) where Jan explains how to implement your custom DSP block with your C++ library.
With good feature extraction you can make your machine learning models smaller and more reliable, which are both very important when you want to deploy your model on embedded devices. With custom processing blocks you can now develop new feature extraction pipelines straight from Edge Impulse, whether you're following the latest research, implementing proprietary algorithms, or just exploring data.
We realise that not every ML model requires setting epochs and a learning rate, and we also realise that you might want to add extra options to the UI. Longer term we'll implement a parameter system similar to what custom processing blocks use.
To get new data for your project, just run (requires v1.16 or higher):
In Keras you do this by adding a Rescaling layer after training your model.
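A minimal sketch of wrapping an already-trained Keras model that expects 0..255 input; the model path and input shape are illustrative:

```python
import tensorflow as tf

# Trained model that expects pixels in the 0..255 range (e.g. EfficientNet)
trained = tf.keras.models.load_model('trained_model')   # illustrative path

inputs = tf.keras.Input(shape=(96, 96, 3))               # Edge Impulse feeds 0..1 RGB
x = tf.keras.layers.Rescaling(scale=255.0)(inputs)       # scale 0..1 -> 0..255
outputs = trained(x)

wrapped = tf.keras.Model(inputs, outputs)
wrapped.save('saved_model')                               # export the wrapped model
```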
For PyTorch you do this by first converting the trained model to ONNX, then injecting a Mul operator into the trained ONNX file.
To transpose RGB to BGR input in Keras, you add a lambda layer.
For PyTorch you do this by first converting the trained model to ONNX, then transposing the channels in the ONNX graph.
If you have a model that requires input to be scaled differently (e.g. Resnet50) you can typically add a matrix subtract or matrix multiplication layer.
An end-to-end example showing how to move and verify normalization code from a Python function to a neural network graph (using Resnet50 in Keras) is also available.
Internally, Edge Impulse vision models require the input shape to be (n, Height, Width, Channels) (NHWC). PyTorch uses (n, Channels, Height, Width) (NCHW) internally, and thus this needs to be converted when you train a model. We do this automatically when you output an ONNX file in NCHW format, but this is done by injecting a ton of Transpose layers (which lowers performance). If your training pipeline natively supports outputting TFLite / SavedModel files in NHWC format then please do that (e.g. Ultralytics YOLOv5 does this in their export script).
If you have a training pipeline that cannot output TFLite files by default (e.g. scikit-learn), you can use jax to implement the inference function, and compile that to TFLite. See our example repository. If there are any TFLite ops in your final model that are not supported by the EON Compiler (so you cannot run on device), then please let us know on the forums.
Host your block directly within Edge Impulse with the Edge Impulse CLI:
When training locally you can use the profiling API to get latency, RAM and ROM estimates. This is very useful as you can immediately see whether your model will fit on device. Additionally, you can use this API as part of your experiment tracking (e.g. in Weights & Biases or MLflow) to weed out models that won't fit your latency or memory constraints.
A reference device (for latency calculation) - you can get a list of all devices via the API, in the latencyDevices object.
Make sure you followed the tutorial, and have a trained impulse.
This tutorial shows you the development flow of building custom processing blocks, and requires you to run the processing block on your own machine or server. Enterprise customers can share processing blocks within their organization, and run these on our infrastructure; see the organizational features documentation for more details.
Processing blocks take data and configuration parameters in, and return features and visualizations like graphs or images. To communicate with custom processing blocks, Edge Impulse studio will make HTTP calls to the block, and then use the response in the UI, while generating features, and when training a machine learning model. Thus, to load a custom processing block we'll need to run a small server that responds to these HTTP calls. You can write this in any language, but we have created an example in Python. To load this example, open a terminal and run:
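If you are using the Python example, the clone step looks like this (assuming the example repository name used elsewhere on this page):

```bash
# Clone the example processing block referenced in this tutorial
git clone https://github.com/edgeimpulse/example-custom-processing-block-python
cd example-custom-processing-block-python
```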
Then open the block's local URL in your browser and you should be shown some information about the block.
As this block is running locally the studio cannot reach the block. To resolve this we can use ngrok, which can make a local port accessible from a public URL. After you've finished development you can move the processing block to a server with a publicly accessible address (or run it on our infrastructure through your enterprise account). To set up a tunnel:
Sign up for ngrok.
For all options that you can return in a graph, see the return types in the API documentation.
An example of this function for the spectral analysis block is listed in the C++ inferencing SDK.
For inspiration we have published all our own processing blocks on GitHub. If you've made an interesting block that you think is valuable for the community, please let us know on the forums or by opening a pull request. We'd be happy to help write efficient native code for the block, and then publish it as a standard block!
This is the specification for the deployment-metadata.json file from Building deployment blocks.
Edge Impulse FOMO (Faster Objects, More Objects) is a novel machine learning algorithm that brings object detection to highly constrained devices. It lets you count objects, find the location of objects in an image, and track multiple objects in real-time using up to 30x less processing power and memory than MobileNet SSD or YOLOv5.
Tutorials
Want to see FOMO in action? Check out our Detect objects with centroids (FOMO) tutorial.
For example, FOMO lets you do 60 fps object detection on a Raspberry Pi 4:
And here's FOMO doing 30 fps object detection on an Arduino Nicla Vision (Cortex-M7 MCU), using 245K RAM.
You can find the complete Edge Impulse project with the beers vs. cans model, including all data and configuration here: https://studio.edgeimpulse.com/public/89078/latest.
So how does that work? First, a small primer. Let's say you want to detect whether you see a face in front of your sensor. You can approach this in two ways. You can train a simple binary classifier, which says either "face" or "no face", or you can train a complex object detection model which tells you "I see a face at this x, y point and of this size". Object detection is thus great when you need to know the exact location of something, or if you want to count multiple things (the simple classifier cannot do that) - but it's computationally much more intensive, and you typically need much more data for it.
The design goal for FOMO was to get the best of both worlds: the computational power required for simple image classification, but with the additional information on location and object count that object detection gives us.
The first thing to realize is that while the output of the image classifier is "face" / "no face" (and thus no locality is preserved in the outcome) the underlying neural network architecture consists of a number of convolutional layers. A way to think about these layers is that every layer creates a diffused lower-resolution image of the previous layer. E.g. if you have a 16x16 image the width/height of the layers may be:
16x16
4x4
1x1
Each 'pixel' in the second layer maps roughly to a 4x4 block of pixels in the input layer, and the interesting part is that locality is somewhat preserved. The 'pixel' in layer 2 at (0, 0) will roughly map back to the top left corner of the input image. The deeper you go in a normal image classification network, the less of this locality (or "receptive field") is preserved until you finally have just 1 outcome.
FOMO uses the same architecture, but cuts off the last layers of a standard image classification model and replaces this layer with a per-region class probability map (e.g. a 4x4 map in the example above). It then has a custom loss function which forces the network to fully preserve the locality in the final layer. This essentially gives you a heatmap of where the objects are.
The resolution of the heat map is determined by where you cut off the layers of the network. For the FOMO model trained above (on the beer bottles) we do this when the size of the heat map is 8x smaller than the input image (input image of 160x160 will yield a 20x20 heat map), but this is configurable. When you set this to 1:1 this actually gives you pixel-level segmentation and the ability to count a lot of small objects.
A difference between FOMO and other object detection algorithms is that it does not output bounding boxes, but it's easy to go from heat map to bounding boxes. Just draw a box around a highlighted area.
However, when working with early customers we realized that bounding boxes are merely an implementation detail of other object detection networks, and are not a typical requirement. Very often the size of objects is not important as cameras are in fixed locations (and objects thus fixed size), but rather you just want the location and the count of objects.
Thus, we now train on the centroids of objects. This makes it much easier to count objects that are close (every activation in the heat map is an object), and the convolutional nature of the neural network ensures we look around the centroid for the object anyway.
A downside of the heat map is that each cell acts as its own classifier. E.g. if your classes are "lamp", "plant" and "background" each cell will be either lamp, plant, or background. It's thus not possible to detect objects with overlapping centroids. You can see this in the Raspberry Pi 4 video above at 00:18 where the beer bottles are too close together. This can be solved by using a higher resolution heat map.
A really cool benefit of FOMO is that it's fully convolutional. If you set an image:heat map factor of 8 you can throw in a 96x96 image (outputs 12x12 heat map), a 320x320 image (outputs 40x40 heat map), or even a 1024x1024 image (outputs 128x128 heat map). This makes FOMO incredibly flexible, and useful even if you have very large images that need to be analyzed (e.g. in fault detection where the faults might be very, very small). You can even train on smaller patches, and then scale up during inference.
Additionally FOMO is compatible with any MobileNetV2 model. Depending on where the model needs to run you can pick a model with a higher or lower alpha, and transfer learning also works (although you need to train your base models specifically with FOMO in mind). This makes it easy for end customers to use their existing models and fine-tune them with FOMO to also add locality (e.g. we have customers with large transfer learning models for wildlife detection).
Together this gives FOMO the capabilities to scale from the smallest microcontrollers all the way to full gateways or GPUs. Just some numbers:
The video on the top classifies 60 times / second on a stock Raspberry Pi 4 (160x160 grayscale input, MobileNetV2 0.1 alpha). This is 20x faster than MobileNet SSD which does ~3 frames/second.
The second video on the top classifies 30 times / second on an Arduino Nicla Vision board (Cortex-M7 MCU running at 480MHz) in ~240K of RAM (96x96 grayscale input, MobileNetV2 0.35 alpha).
During Edge Impulse Imagine we demonstrated a FOMO model running on a Himax WE-I Plus doing 14 frames per second on a DSP (video). This model ran in under 150KB of RAM (96x96 grayscale input, MobileNetV2 0.1 alpha). [1]
The smallest version of FOMO (96x96 grayscale input, MobileNetV2 0.05 alpha) runs in <100KB RAM at ~10 fps on a Cortex-M4F at 80MHz. [1]
[1] Models compiled using EON Compiler.
To build your first FOMO models:
Create a new project in Edge Impulse.
Make sure to set your labeling method to 'Bounding boxes (object detection)'.
Collect and prepare your dataset as in Object detection
Add an 'Object Detection (Images)' block to your impulse.
Under Images, select 'Grayscale'
Under Object detection, select 'Choose a different model' and select one of the FOMO models.
Make sure to lower the learning rate to 0.001 to start.
FOMO is currently compatible with all fully-supported development boards that have a camera, and with Edge Impulse for Linux (any client). Of course, you can export your model as a C++ Library and integrate it as usual on any device or development board, the output format of models is compatible with normal object detection models; and our SDK runs on almost anything under the sun (see Running your impulse locally for an overview) from RTOS's to bare-metal to special accelerators and GPUs.
Additional configuration for FOMO can be accessed via expert mode.
FOMO is sensitive to the ratio of objects to background cells in the labelled data. By default the configuration is to weight object output cells x100 in the loss function, object_weight=100
, as a way of balancing what is usually a majority of background. This value was chosen as a sweet spot for a number of example use cases. In scenarios where the objects to detect are relatively rare this value can be increased, e.g. to 1000, to have the model focus even more on object detection (at the expense of potentially more false detections).
FOMO uses MobileNetV2 as a base model for its trunk and by default does a spatial reduction of 1/8th from input to output (e.g. a 96x96 input results in a 12x12 output). This is implemented by cutting MobileNet off at the intermediate layer block_6_expand_relu.
Choosing a different cut_point results in a different spatial reduction; e.g. if we cut higher at block_3_expand_relu FOMO will instead only do a spatial reduction of 1/4 (i.e. a 96x96 input results in a 24x24 output).
Note though: this means taking much less of the MobileNet backbone, and results in a model with only 1/2 the params. Switching to a higher alpha may counteract this parameter reduction. Later FOMO releases will counter this parameter reduction with a UNet-style architecture.
FOMO can be thought of logically as the first section of MobileNetV2 followed by a standard classifier where the classifier is applied in a fully convolutional fashion.
In the default configuration this FOMO classifier is equivalent to a single dense layer with 32 nodes followed by a classifier with num_classes outputs.
For a three way classifier, using the default cut point, the result is a classifier head with ~3200 parameters.
We have the option of increasing the capacity of this classifier head by either 1) increasing the number of filters in the Conv2D layer, 2) adding additional layers, or 3) doing both.
For example we might change the number of filters from 32 to 16, as well as adding another convolutional layer, as follows.
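A sketch of what that change could look like in expert mode. This is illustrative Keras code for the classifier head only; the variable and function names are not the exact ones in the generated expert-mode code.

```python
import tensorflow as tf

def build_fomo_head(backbone_output, num_classes):
    # Two 1x1 convolution layers with 16 filters each, instead of the default
    # single 32-filter layer, followed by the per-cell classifier.
    x = tf.keras.layers.Conv2D(16, kernel_size=1, activation='relu')(backbone_output)
    x = tf.keras.layers.Conv2D(16, kernel_size=1, activation='relu')(x)
    logits = tf.keras.layers.Conv2D(num_classes, kernel_size=1)(x)
    return logits
```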
For some problems an additional layer can improve performance, and in this case it actually uses fewer parameters. It can though potentially take longer to train and require more data. In future releases the tuning of this aspect of FOMO can be handled by the EON Tuner.
Just like the rest of our Neural Network-based learning blocks, FOMO is delivered as a set of basic math routines free of runtime dependencies. This means that there are virtually no limitations to running FOMO, other than:
Making sure the model itself can fit into the target's memory (flash/RAM), and
making sure the target also has enough memory to hold the image buffer (flash/RAM) in addition to your application logic.
In all, we have seen buffer, model and app logic (including wireless stack) fit in as little as 200KB for 64x64 pixel images. But we would definitely recommend a target with at least 512KB so that you can take advantage of larger image sizes and a wider range of model optimizations.
With regards to latency, the speed of the target will determine the maximum number of frames that can be processed in a given interval (fps). This will of course be influenced by any other tasks the CPU may need to complete, but we have consistently seen MCUs running @ 80MHz complete a full pass on a 64x64 pixel image in under one second, which should translate to just under 1fps once you add the rest of your app logic. Keep in mind that frame throughput can increase dramatically at higher speeds or when tensor acceleration is available. We have measured 40-60 fps consistently on a Raspberry Pi 4 and ~15 fps on unaccelerated 480MHz targets. The table below summarizes this trade-off:
After training and validating your model, you can now deploy it to any device. This makes the model run without an internet connection, minimizes latency, and runs with minimal power consumption.
The Deployment page consists of a variety of deploy options to choose from depending on your target device. Regardless of whether you are using a fully supported development board or not, Edge Impulse provides a C++ library deploy option which you can use to deploy your model on any target (as long as the target has enough compute to handle the task).
The following are the 5 main categories of deploy options currently supported by Edge Impulse:
Deploy as a customizable library
Deploy as a pre-built firmware - for fully supported development boards
Run directly on your phone or computer
Use Edge Impulse for Linux for Linux targets
Create a custom deployment block (Enterprise feature)
This deploy option lets you turn your impulse into a fully optimized source code that can be further customized and integrated with your application. This option supports the following libraries:
You can run your impulse locally as an Arduino library. This packages all of your signal processing blocks, configuration and learning blocks up into a single package.
To deploy as an Arduino library, select Arduino library on the Deployment page and click Build to create the library. Download the .ZIP file and import it as a sketch in your Arduino IDE then run your application.
For a full tutorial on how to run your impulse locally as an Arduino library, have a look at Running your impulse locally - Arduino.
You can run your Impulse as a C++ library. This packages all of your signal processing blocks, configuration and learning blocks up into a single package that can be easily ported to your custom applications.
Visit Running your impulse locally for a deep dive on how to deploy your impulse as a C++ library.
If you want to deploy your impulse to an STM32 MCU, you can use the Cube.MX CMSIS-PACK. This packages all your signal processing blocks, configuration and learning blocks up into a single package. You can include this package in any STM32 project with a single function call.
Have a look at Running your impulse locally - using CubeAI for a deep dive on how to deploy your impulse on STM32 based targets using the Cube.MX CMSIS-PACK.
When you want to deploy your impulse to a web app you can use the WebAssembly library. This packages all your signal processing blocks, configuration and learning blocks up into a single package that can run without any compilation.
Have a look at Running your impulse locally - through WebAssembly (Browser) for a deep dive on how you can run your impulse to classify sensor data in your Node.js application.
For this option, you can use a ready-to-go binary for your development board that bundles signal processing blocks, configuration and learning blocks up into a single package. This option is currently only available for fully supported development boards as shown in the image below:
To deploy your model using ready to go binaries, select your target device and click "build". Flash the downloaded firmware to your device then run the following command:
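The command referred to here is the Edge Impulse impulse runner from the CLI:

```bash
# Runs the impulse baked into the pre-built firmware and prints live results
edge-impulse-run-impulse
```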
The impulse runner shows the results of your impulse running on your development board. This only applies to ready-to-go binaries built from the studio.
If you are developing for Linux based devices, you can use Edge Impulse for Linux for deployment. It contains tools which let you collect data from any microphone or camera, can be used with the Node.js, Python, Go and C++ SDKs to collect new data from any sensor, and can run impulses with full hardware acceleration - with easy integration points to write your own applications.
For a deep dive on how to deploy your impulse to linux targets using Edge Impulse for linux, you can visit the Edge Impulse for Linux tutorial.
You can run your impulse directly on your computer or mobile phone without the need for an additional app. To run on your computer, simply select 'Computer' and click 'Switch to classification mode'. To run on your mobile phone, select 'Mobile phone', scan the QR code, and click 'Switch to classification mode'.
When building your impulse for deployment, Edge Impulse gives you the option of adding another layer of optimization to your impulse using the EON compiler. The EON Compiler lets you run neural networks in 25-55% less RAM, and up to 35% less flash, while retaining the same accuracy, compared to TensorFlow Lite for Microcontrollers.
To activate the EON Compiler, select your preferred deployment option, enable the Enable EON™ Compiler option, and click 'Build' to build your impulse for deployment.
To have a peek of how your impulse would utilize compute resources of your target device, Edge Impulse also gives an estimate of latency, flash, RAM to be consumed by your target device even before deploying your impulse locally. This can really save you a lot of engineering time costs incurred by recurring iterations and experiments.
You can also select whether to run the unquantized float32 or the quantized int8 models as shown in the image below.
The above confusion matrix is only based on the test data to help you know how your model performs on unseen real world data. It can also help you know whether your model has learned to overfit on your training data which is a common occurrence.
Training and deploying high-performing ML models is usually considered a continuous process rather than a one-time exercise. When you are validating your model and discover overfitting, you might consider adding some more diverse data and then retraining your model while maintaining the initially set DSP and Neural Network block configurations.
Also, during inference, if you find that the data distribution has drifted significantly from the initial training distribution, it is usually good practice to retrain your model on the newer data distribution to maintain high model performance.
The Retrain model feature in the Edge Impulse Studio is useful when adding new data to your project. It uses already known parameters from your selected DSP and ML blocks then uses them to automatically regenerate new features and retrain the Neural Network model in one single step. You can consider this a shortcut for retraining your model since you don’t need to go through all the blocks in your impulse one by one again.
To retrain your model after adding some data, navigate to the Retrain model tab and click Train model.
Building data pipelines is a very useful feature where you can stack several transformation blocks. They can be used in a standalone mode (just execute several transformation jobs in a pipeline), to feed a dataset or to feed a project.
Only available for enterprise customers
Organizational features are only available for enterprise customers. Contact us for more information.
The examples in the screenshots below show how to create and use a pipeline to create the 'AMS Activity 2022' dataset.
To create a new pipeline, click on '+ Add a new pipeline':
In your organization workspace, go to Custom blocks -> Transformation and select Run job on the job you want to add.
Select Copy as pipeline step and paste it to the configuration json file.
You can then paste the copied step directly to the respected field.
Below, you have an option to feed the data to either an organization dataset or an Edge Impulse project.
By default, your pipeline will run every day. To schedule your pipeline jobs, click on the ⋮ button and select Edit pipeline.
Once the pipeline has successfully finished, it can send an email notification to the users you select.
Once your pipeline is set, you can run it directly from the UI, from external sources or by scheduling the task.
To run your pipeline from Edge Impulse studio, click on the ⋮ button and select Run pipeline now.
To run your pipeline from code, click on the ⋮ button and select Run pipeline from code. This will display an overlay with curl, Node.js and Python code samples.
You will need to create an API key to run the pipeline from code.
Another useful feature is to create a webhook to call a URL when the pipeline has finished running. It will send a POST request containing the following information:
The Ensemble series of fusion processors from Alif Semiconductor utilizes ARM's low power Cortex-M55 CPUs with dedicated Ethos-U55 microNPUs to run embedded ML workloads quickly and efficiently. The devices feature both 'High Power' cores designed for large model architectures, as well as 'High Efficiency' cores designed for low power continuous monitoring. The Ensemble E7 Development Kit and AppKit are both fully supported by Edge Impulse. The Ensemble E7 kits feature multiple core types, dual MEMS microphones, accelerometers, and a MIPI camera interface.
To get started with the Alif Ensemble E7 and Edge Impulse you'll need the following hardware: an Alif Ensemble E7 Development Kit OR an Alif Ensemble E7 AppKit.
To set this device up in Edge Impulse, you will need to install the following software:
The latest Alif Security Toolkit:
Extract the archive, and read through the included Security Toolkit Quick Start Guide to finalize the installation.
Once you have installed the software, it's time to connect the development board to Edge Impulse.
To interface the Alif Ensemble E7 AppKit or Development Kit, you'll need to make sure your hardware is properly configured and connected to your computer. Follow the steps below to prepare your specific kit for connection to Edge Impulse.
After configuring the hardware, the next step is to flash the default Edge Impulse Firmware. This will allow us to collect data directly from your Ensemble device. To update the firmware:
Navigate to the directory where you installed the Alif Security Toolkit.
Copy the .bin files from the Edge Impulse firmware directory into the build/images directory of the Alif Security Toolkit.
Copy all .json files from the Edge Impulse firmware directory into the build/config directory of the Alif Security Toolkit.
From a command prompt or terminal, run the following commands:
Now, the Ensemble device can connect to the Edge Impulse CLI installed earlier. To test the CLI for the first time, either create a new project or clone an existing Edge Impulse public project (see below).
Then, from a command prompt or terminal on your computer, run:
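The command in question is the Edge Impulse daemon, installed as part of the Edge Impulse CLI:

```bash
# Connects the board to your Edge Impulse project over serial
edge-impulse-daemon
```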
Device choice
If you have a choice of serial ports and are not sure which one to use, pick /dev/tty.FTDI_USBtoUART or /dev/cu.usbserial-*. You may see two FTDI serial ports enumerated for AppKit devices. If so, select the second entry in the list, which generally is the serial data connection to the Ensemble device.
If you see failures connecting to one serial port, make sure to test other serial connections just in case.
This will start a wizard which will ask you to log in and choose an Edge Impulse project. You should see your new or cloned project listed on the command line. Use the arrow keys and hit Enter to select your project.
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
With everything set up you can now build your first machine learning model with these tutorials. This will walk you through the process of collecting data and training a new ML model:
Alternatively, you can test on-device inference with a demo model included in the base firmware binary. To do this, you may run the following command from your terminal:
Then, once you've tested out training and deployment with the Edge Impulse Firmware, learn how to integrate impulses with your own custom Ensemble based application:
This is a list of development boards that are fully supported by Edge Impulse. These boards come with a special firmware which enables data collection from all their sensors, allows you to build new ready-to-go binaries that include your trained impulse, and come with examples on integrating your impulse with your custom firmware. These boards are the perfect way to start building machine learning solutions on real embedded hardware.
Different development board or custom PCB? No problem! You can upload data to Edge Impulse in a variety of ways, such as using the data forwarder, the Edge Impulse for Linux SDK, or by uploading files directly (e.g. CSV, JPG, WAV).
From there, your trained model can be deployed as a C++ library. It requires some effort, but most build systems (for computers, smartphones, and microcontrollers) will work with our C++ library. This, of course, requires that your build system has a C++ compiler and that there is enough flash/RAM on your device to run the library/model. Also, if you feel like porting the official Edge Impulse firmware to your own board, use this porting guide.
Just want to experience Edge Impulse? You can also use your mobile phone or computer!
In this section, we will show how to synchronize research data with a bucket in your organizational dataset. The goal of this step is to gather data from different sources and sort them to obtain a sorted dataset (that we will then validate in the next section).
Only available for enterprise customers
Organizational features are only available for enterprise customers. Contact us for more information.
The reference design described in the previous pages consists of 10 subjects performing 1.5 - 2 hours of activities in a research lab. Participants have a study ID (e.g. AMS_001) that is used to refer to the participant. For each participant we have 4 CSV files:
accelerometer.csv - data from the wearable end device.
ppg.csv - data from the wearable end device.
polar_h10.csv - reference data from a commercial reference device (Polar H10).
labels.csv - labels of the activity, as recorded by the research lab.
We've mimicked a proper research study, and have split the data up into two locations.
accelerometer.csv / ppg.csv - live in the company data lake in S3. The data lake uses an internal structure with non-human readable IDs for each participant (e.g. 2E93ZX for anonymized data):
polar_h10.csv / labels.csv - are uploaded by the research partner to an upload portal. The files are prefixed with the study ID:
To create the mapping between the study ID and the internal data lake ID we use a study master sheet. It contains information about all participants, ID mapping, and metadata. E.g.:
Notes: This master sheet was made using a Google Sheet but can be anything. All data (data lake, portal, output) are hosted in an Edge Impulse S3 bucket but can be stored anywhere (see below).
With the storage bucket in place you can create your first dataset. Datasets in Edge Impulse have three layers:
The dataset, a larger set of data items, grouped together.
Data item, an item with metadata and files attached.
Data file, the actual files.
No required format for data files
There is no required format for data files. You can upload data in any format, whether it's CSV, Parquet, or a proprietary data format.
There are three ways of uploading data into your organization. You can either:
Upload data directly to the storage bucket (recommended method). In this case use Add data... > Add dataset from bucket and the data will be discovered automatically.
Creating a new structure in S3 like this:
Syncing the S3 folder with a research dataset in your Edge Impulse organization (like AMS Activity Study 2022).
Updating the metadata with the metadata from the master sheet (Age, BMI, etc...).
With the data sorted we then:
Combine the data into a single Parquet file. This is essentially the contract we have for our dataset. By settling on a standard format (strong typed, same column names everywhere) this data is now ready to be used for ML, new algorithm development, etc. Because we also add metadata for each file here we're very quickly building up a valuable R&D datastore.
Only available for enterprise customers
Organizational features are only available for enterprise customers. Contact us for more information.
You can optionally show a check mark in the list of data items, and show a check list for data items. This can be used to quickly view which data items are complete (if you need to capture data from multiple sources) or whether items are in the right format.
Checklists look trivial, but are actually very powerful as they give quick insights in dataset issues. Missing these issues until after the study is done can be super expensive.
Checklists are written to ei-metadata.json and are automatically picked up by the UI.
Checklists are driven by the metadata for a data item. Set the ei_check metadata item to either 0 or 1 to show a check mark in the list. Set an ei_check_KEYNAME metadata item to 0 or 1 to show the item in the check list.
To query for items with or without a check mark, use a filter in the form of:
For the reference design described and used in the previous pages, the combiner takes in a data item, and writes out:
A checklist, e.g.:
✔ - PPG file present
✔ - Accelerometer file present
✘ - Correlation between Polar/PPG HR is at least 0.5
If the checklist is OK, a combined.parquet file.
A hr.png file with the correlation between HR found from PPG, and HR from the reference device. This is useful for two reasons:
If the correlation is too low we're looking at the wrong file, or data is missing.
Verify if the PPG => HR algorithm actually works.
Development Boards | Officially Supported Sensors | Memory** | Storage*** | Architecture |
---|
Different development board or different sensors? No problem, you can always collect data using the data forwarder or the SDK, and deploy your model back to the device with the Running your impulse locally tutorials. Also, if you feel like porting your board, use this porting guide.
Navigate to the Alif Semiconductor website (you will need to register to create an account with Alif, or log in to your existing Alif account) and download the latest Alif Security Toolkit (tested with version 0.56.0) for Windows or Linux.
(Optional): If you are using macOS, we recommend installing a virtual machine in order to use the Alif Security Toolkit for programming.
Full setup instructions may be found in Alif's documentation. A summary of the hardware setup instructions is shown below.
Note: If you have Revision B or earlier kits, board modifications are required to connect the camera interface. Alif provides a guide for configuring the baseboard for MIPI camera support in their documentation. If you are using a revision C or later development kit, you do not need to make these modifications.
Note: If you have Revision B or earlier kits, board modifications are required to connect the LCD interface. Alif provides a guide for configuring the baseboard for LCD support in their documentation. If you are using a revision C or later development kit, you do not need to make these modifications.
Download the latest Edge Impulse firmware for the Ensemble E7, and unzip the file.
Create a new project from the Edge Impulse studio.
Clone an existing Edge Impulse public project, like this example. Click the link and then press Clone at the top right of the public project.
Data is stored in storage buckets, which can either be hosted by Edge Impulse, or in your own infrastructure. If you choose to host the data yourself, your infrastructure should be available through the S3 API, and you are responsible for setting up proper backups. To configure a new storage bucket, head to your organization, choose Data > Buckets, click Add new bucket, and fill in your access credentials. Our solution engineers are also here to help you set up the buckets for you.
Upload data through the Edge Impulse API.
Upload the files through an upload portal.
The sorter is the first step of the data pipeline. Its job is to fetch the data from all locations (here: internal data lake, portal, metadata from study master sheet) and create a research dataset in Edge Impulse. It does this by following the steps listed earlier: creating a new structure in S3, syncing the S3 folder with a research dataset in your Edge Impulse organization, and updating the metadata from the master sheet.
We then need to verify that the data is correct (see the section on validation).
All these steps can be run through different transformation blocks and executed one after the other using data pipelines.
To make it easy to create these lists on the fly you can set these metadata items directly from a transformation block.
Requirement | Minimum | Recommended
---|---|---
Memory footprint (RAM) | 256 KB (64x64 pixels, B&W, buffer included) | ≥ 512 KB (96x96 pixels, B&W, buffer included)
Latency (100% load) | 80 MHz: < 1 fps | > 80 MHz + acceleration: ~15 fps @ 480MHz, 40-60 fps on RPi4
| 4MB | 4MB | Cortex-M55 400MHz + U55-256MACC |
| 256KB | 1MB | Cortex-M4F 64MHz |
| 64KB | 512KB | Cortex-M4 64MHz |
| 128KB | 1MB | Cortex-M7 480MHz |
| 512KB | 2MB | Cortex-M7 480MHz |
| 4MB | 4MB | ESP32 240MHz |
| 2MB | 2MB | ARC DSP 400MHz |
| 1MB | 2MB | Cortex-M4 150MHz + Cortex-M0+ 100MHz |
| 256KB | 1MB | Cortex-M4F 64MHz |
| 512KB | 1MB | Cortex-M33 128MHz |
| 256KB | 1MB | Cortex-M33 64MHz |
| 512KB | 1MB | Cortex-M33 128MHz |
| 256KB | 1MB | Cortex-M33 64MHz |
| 32MB SDRAM / 1MB SRAM | 32MB external / 2MB internal | Cortex-M7 480MHz |
| 512KB | 2MB | Cortex-M33 200MHz |
| 2MB | 2MB | ARC DSP 400MHz |
| 2MB | 2MB | ARC DSP 400MHz |
| 256KB | 1MB | Cortex-M4F 40MHz |
| 256KB | 1.5MB | Cortex-M33 78MHz |
| 1.5MB | 8MB | Cortex-M4F 156MHz |
| 128KB | 1MB | Cortex-M4F 80MHz |
|
| 32KB | 256KB | SAMD21 Cortex-M0+ |
| 80KB | 352KB | Cortex-M4F 48MHz |
| 256KB | 2MB | Cortex-M0+ 133MHz |
The Nicla Vision is a ready-to-use, standalone camera for analyzing and processing images on the Edge. Thanks to its 2MP color camera, smart 6-axis motion sensor, integrated microphone, and distance sensor, it is suitable for asset tracking, object recognition, and predictive maintenance. Some of its key features include:
Powerful microcontroller equipped with a 2MP color camera
Tiny form factor of 22.86 x 22.86 mm
Integrated microphone, distance sensor, and intelligent 6-axis motion sensor
Onboard Wi-Fi and Bluetooth® Low Energy connectivity
Standalone when battery-powered
Expand existing projects with sensing capabilities
Enable fast Machine Vision prototyping
Compatible with Nicla, Portenta, and MKR products
Its exceptional capabilities are supported by a powerful STMicroelectronics STM32H747AII6 Dual ARM® Cortex® processor, combining an M7 core up to 480 MHz and an M4 core up to 240 MHz. Despite its industrial strength, it keeps energy consumption low for battery-powered standalone applications.
The Arduino Nicla Vision is available for around 95 EUR from the Arduino Store.
To set this device up in Edge Impulse, you will need to install the following software:
Here's an instruction video for Windows.
The Arduino website has instructions for macOS and Linux.
On Linux:
GNU Screen: install for example via sudo apt install screen.
Problems installing the CLI?
See the Installation and troubleshooting guide.
There are two ways to connect the Nicla Vision to Edge Impulse:
Using the official Edge Impulse firmware - it supports all onboard sensors, including camera.
Using an ingestion script. This supports analog, IMU, proximity sensors and microphone (limited to 8 kHz), but not the camera. It is only recommended if you want to modify the ingestion flow for third-party sensors.
Use a micro-USB cable to connect the development board to your computer. Under normal circumstances, the flash process should work without entering the bootloader manually. However, if you run into difficulties flashing the board, you can enter the bootloader by pressing RESET twice. The onboard LED should start pulsating to indicate this.
The development board does not come with the right firmware yet. To update the firmware:
Download the latest Edge Impulse firmware, and unzip the file.
Open the flash script for your operating system (flash_windows.bat, flash_mac.command or flash_linux.sh) to flash the firmware.
Wait until flashing is complete, and press the RESET button once to launch the new firmware.
From a command prompt or terminal, run:
This will start a wizard which will ask you to log in and choose an Edge Impulse project. If you want to switch projects run the command with --clean.
Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI. See this blog post for more information.
Use a micro-USB cable to connect the development board to your computer.
The development board does not come with the right firmware yet. To update the firmware:
Download the latest Edge Impulse ingestion sketches and unzip the file.
Open the nicla_vision_ingestion.ino (for IMU/proximity sensor) or nicla_vision_ingestion_mic.ino (for microphone) sketch in a text editor or the Arduino IDE.
For IMU/proximity sensor data ingestion into your Edge Impulse project, at the top of the file, select 1 or multiple sensors by un-commenting the defines and select the desired sample frequency (in Hz). For example, for the accelerometer sensor:
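A sketch of what this could look like; the macro names below are hypothetical, so use the defines that actually appear at the top of the sketch:

```cpp
// Hypothetical macro names - check the top of nicla_vision_ingestion.ino
// for the real defines, and uncomment the sensor(s) you need.
#define SAMPLE_ACCELEROMETER      // enable accelerometer sampling
//#define SAMPLE_PROXIMITY        // leave commented to skip the proximity sensor
#define FREQUENCY_HZ        100   // desired sample frequency in Hz
```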
For microphone data ingestion, you do not need to change the default parameters in the nicla_vision_ingestion_mic.ino sketch.
Then, from your sketch's directory, run the Arduino CLI to compile:
Then flash to your Nicla Vision using the Arduino CLI:
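A sketch of the two commands. The FQBN shown for the Nicla Vision is an assumption (check arduino-cli board listall for the exact value), and the serial port should be replaced with your own:

```bash
# Compile the ingestion sketch (FQBN assumed for the Nicla Vision)
arduino-cli compile --fqbn arduino:mbed_nicla:nicla_vision nicla_vision_ingestion.ino

# Flash the compiled sketch (replace /dev/ttyACM0 with your serial port)
arduino-cli upload -p /dev/ttyACM0 --fqbn arduino:mbed_nicla:nicla_vision nicla_vision_ingestion.ino
```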
Alternatively if you open the sketch in the Arduino IDE, you can compile and upload the sketch from there.
Wait until flashing is complete, and press the RESET button once to launch the new firmware.
From a command prompt or terminal, run:
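Since the ingestion sketch streams sensor values over serial, the command here is the Edge Impulse data forwarder from the CLI:

```bash
# Forwards serial sensor data from the sketch into your Edge Impulse project
edge-impulse-data-forwarder
```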
This will start a wizard which will ask you to log in and choose an Edge Impulse project. You will also name your sensor's axes (depending on which sensor you selected in your compiled nicla_vision_ingestion.ino sketch). If you want to switch projects/sensors run the command with --clean. Please refer to the table below for the names used for each axis corresponding to the type of sensor:
Note: These exact axis names are required for the Edge Impulse Arduino library deployment example applications for the Nicla Vision.
From a command prompt or terminal, run:
This will start a wizard which will ask you to log in and choose an Edge Impulse project. You will also name your sensor axes - in the case of the microphone, you need to enter audio. If you want to switch projects/sensors run the command with --clean.
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
The above screenshots are for Edge Impulse Ingestion scripts and Data forwarder. If you use the official Edge Impulse firmware for the Nicla Vision, the content will be slightly different.
With everything set up you can now build your first machine learning model with these tutorials:
Looking to connect different sensors? Use the nicla_vision_ingestion.ino sketch and the Edge Impulse data forwarder to easily send data from any sensor on the Nicla Vision into your Edge Impulse project.
With the impulse designed, trained and verified you can deploy this model back to your Arduino Nicla Vision. This makes the model run without an internet connection, minimizes latency, and runs with minimum power consumption. Edge Impulse can package the complete impulse - including the signal processing code, neural network weights, and classification code - up into a single library that you can run on your development board.
Use the Running your impulse locally: On your Arduino tutorial and select one of the Nicla Vision examples.
The Arduino Nicla Voice is a development board with a high-performance microphone and IMU, a Cortex-M4 Nordic nRF52832 MCU and the Syntiant NDP120 Neural Decision Processor™. The NDP120 supports multiple Neural Network architectures and is ideal for always-on low-power speech recognition applications. You'll be able to sample raw data, build models, and deploy trained embedded machine learning models directly from the Edge Impulse studio to create the next generation of low-power, high-performance audio interfaces.
The Edge Impulse firmware for this development board is open source and hosted on GitHub.
To set this device up in Edge Impulse, you will need to install the following software:
Download the Nicla Voice firmware for audio or IMU below and connect the USB cable to your computer:
The archive contains different scripts to flash the firmware on your OS, e.g. for macOS:
install_lib_mac.command: installs the Arduino Core for the Nicla board and the pyserial package required to update the NDP120 chip. You only need to run this script once.
flash_mac.command: flashes both the MCU and the NDP120 chip. You should use this script on a brand-new board.
The additional scripts below can be used for specific actions:
flash_mac_mcu.command: flashes only the Nordic MCU, e.g. if you recompiled the firmware and don't need to update the NDP120 model.
flash_mac_model.command: flashes only the NDP120 model.
format_mac_ext_flash.command: formats the external flash that contains the NDP120 model.
After flashing the MCU and NDP chips, connect the Nicla Voice directly to your computer's USB port. Linux, Mac OS, and Windows platforms are supported. From a command prompt or terminal, run:
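As elsewhere in this guide, this is assumed to be the Edge Impulse serial daemon:

```
edge-impulse-daemon
```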
This will start a wizard which will ask you to log in and choose an Edge Impulse project. If you want to switch projects, run the command with --clean.
Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI. See this blog post for more information.
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
Use Syntiant-compatible pre-processing blocks
The Arduino Nicla Voice is based on the Syntiant NDP120 Neural Decision Processor™ and needs to use dedicated Syntiant DSP blocks.
With everything set up you can now build your first machine learning model and evaluate it using the Arduino Nicla Voice Board with this tutorial:
How to use the Arduino CLI with a macOS M1 chip? You will need to install Rosetta 2 to run the Arduino CLI. See details on the Apple website.
How to label my classes? The NDP chip expects one and only one negative class, and it should be the last in the list. For instance, if your original dataset looks like yes, no, unknown, noise and you only want to detect the keywords 'yes' and 'no', merge the 'unknown' and 'noise' labels into a single class such as z_openset (we prefix it with 'z' so that this class comes last in the list).
The Arduino Nano 33 BLE Sense is a tiny development board with a Cortex-M4 microcontroller, motion sensors, a microphone and BLE - and it's fully supported by Edge Impulse. You'll be able to sample raw data, build models, and deploy trained machine learning models directly from the studio. It's available for around 30 USD from Arduino and a wide range of distributors.
You can also use the Arduino Tiny Machine Learning Kit to run image classification models on the edge with the Arduino Nano and attached OV7675 camera module (or connect the hardware together via jumper wire and a breadboard if purchased separately).
The Edge Impulse firmware for this development board is open source and hosted on GitHub: edgeimpulse/firmware-arduino-nano-33-ble-sense.
Arduino Nano 33 BLE Sense Rev2?
Arduino recently released a new version of the Arduino Nano 33 BLE Sense, the Rev2, which has different sensors than the original version. We are working on adding a dedicated "official firmware" so you can easily flash this board version. In the meantime, to ingest data, please have a look at the Data Ingestion (API), Data Forwarder (CLI), or Data Uploader (CLI and Studio).
To set this device up in Edge Impulse, you will need to install the following software:
Here's an instruction video for Windows.
The Arduino website has instructions for macOS and Linux.
On Linux:
GNU Screen: install for example via sudo apt install screen
.
Problems installing the CLI?
See the Installation and troubleshooting guide.
With all the software in place it's time to connect the development board to Edge Impulse.
Use a micro-USB cable to connect the development board to your computer. Then press RESET twice to launch into the bootloader. The on-board LED should start pulsating to indicate this.
The development board does not come with the right firmware yet. To update the firmware:
Download the latest Edge Impulse firmware, and unzip the file.
Open the flash script for your operating system (flash_windows.bat, flash_mac.command, or flash_linux.sh) to flash the firmware.
Wait until flashing is complete, and press the RESET button once to launch the new firmware.
From a command prompt or terminal, run:
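Assuming the Edge Impulse CLI serial daemon, with the --clean variant mentioned below:

```
edge-impulse-daemon
# to switch projects later:
edge-impulse-daemon --clean
```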
This will start a wizard which will ask you to log in and choose an Edge Impulse project. If you want to switch projects, run the command with --clean.
Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI. See this blog post for more information.
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
With everything set up you can now build your first machine learning model with these tutorials:
Looking to connect different sensors? The Data forwarder lets you easily send data from any sensor into Edge Impulse.
You will need the following hardware:
Arduino Nano 33 BLE Sense board with headers.
OV7675 camera module.
Micro-USB cable.
Solderless breadboard and female-to-male jumper wires.
First, slot the Arduino Nano 33 BLE Sense board into a solderless breadboard:
With female-to-male jumper wire, use the following wiring diagram, pinout diagrams, and connection table to link the OV7675 camera module to the microcontroller board via the solderless breadboard:
Download the full pinout diagram of the Arduino Nano 33 BLE Sense here.
Finally, use a micro-USB cable to connect the Arduino Nano 33 BLE Sense development board to your computer.
Now build & train your own image classification model and deploy to the Arduino Nano 33 BLE Sense with Edge Impulse!
The Nicla Sense ME is a tiny, low-power tool that sets a new standard for intelligent sensing solutions. With the simplicity of integration and scalability of the Arduino ecosystem, the board combines four state-of-the-art sensors from Bosch Sensortec:
BHI260AP motion sensor system with integrated AI.
BMM150 magnetometer.
BMP390 pressure sensor.
BME688 4-in-1 gas sensor with AI and integrated high-linearity, as well as high-accuracy pressure, humidity and temperature sensors.
Designed to easily analyze motion and the surrounding environment (hence the "M" and "E" in the name), it measures rotation, acceleration, pressure, humidity, temperature, air quality, and CO2 levels by introducing completely new Bosch Sensortec sensors to the market.
Its tiny size and robust design make it suitable for projects that need to combine sensor fusion and AI capabilities on the edge, thanks to a combination of strong computational power and low power consumption that can even enable standalone applications when battery operated.
The Arduino Nicla Sense ME is available for around 55 USD from the Arduino Store.
To set this device up in Edge Impulse, you will need to install the following software:
Here's an instruction video for Windows.
The Arduino website has instructions for macOS and Linux.
On Linux:
GNU Screen: install for example via sudo apt install screen
.
Problems installing the CLI?
See the Installation and troubleshooting guide.
With all the software in place it's time to connect the development board to Edge Impulse.
Use a micro-USB cable to connect the development board to your computer.
The development board does not come with the right firmware yet. To update the firmware:
Open the nicla_sense_ingestion.ino sketch in a text editor or the Arduino IDE.
For data ingestion into your Edge Impulse project, at the top of the file, select one or more sensors by un-commenting their defines, and set the desired sample frequency (in Hz). For example, for the environmental sensors:
Then, from your sketch's directory, run the Arduino CLI to compile:
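For example; the FQBN arduino:mbed_nicla:nicla_sense is an assumption and may differ for your core version:

```
arduino-cli compile --fqbn arduino:mbed_nicla:nicla_sense .
```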
Then flash to your Nicla Sense using the Arduino CLI:
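For example; replace /dev/ttyACM0 with your board's serial port:

```
arduino-cli upload --fqbn arduino:mbed_nicla:nicla_sense -p /dev/ttyACM0 .
```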
Wait until flashing is complete, and press the RESET button once to launch the new firmware.
From a command prompt or terminal, run:
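As with the Nicla Vision ingestion sketch, this is assumed to be the Edge Impulse data forwarder:

```
edge-impulse-data-forwarder
```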
This will start a wizard which will ask you to log in and choose an Edge Impulse project. You will also name your sensor's axes (depending on which sensor you selected in your compiled nicla_sense_ingestion.ino sketch). If you want to switch projects or sensors, run the command with --clean. The axis names used for each sensor are listed below:
#define SAMPLE_ACCELEROMETER: accX, accY, accZ
#define SAMPLE_GYROSCOPE: gyrX, gyrY, gyrZ
#define SAMPLE_ORIENTATION: heading, pitch, roll
#define SAMPLE_ENVIRONMENTAL: temperature, barometer, humidity, gas
#define SAMPLE_ROTATION_VECTOR: rotX, rotY, rotZ, rotW
Note: These exact axis names are required to run the Edge Impulse Arduino library deployment example applications for the Nicla Sense without any changes.
Otherwise, when deploying the model, you will see an error like the following:
If your axis names are different, you can modify the eiSensors nicla_sensors[] array (near line 70) in the example sketch of the generated Arduino library to add your custom names, e.g.:
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
With everything set up you can now build your first machine learning model with the Edge Impulse continuous motion recognition tutorial.
Looking to connect different sensors? Use the nicla_sense_ingestion
sketch and the Edge Impulse Data forwarder to easily send data from any sensor on the Nicla Sense into your Edge Impulse project.
With the impulse designed, trained, and verified, you can deploy this model back to your Arduino Nicla Sense ME. This makes the model run without an internet connection, minimizes latency, and runs with minimal power consumption. Edge Impulse can package up the complete impulse, including the signal processing code, neural network weights, and classification code, into a single library that you can run on your development board.
Use the Running your impulse locally: On your Arduino tutorial and select one of the Nicla Sense examples.
The Portenta H7 is a powerful development board from Arduino with both a Cortex-M7 microcontroller and a Cortex-M4 microcontroller, a BLE/WiFi radio, and an extension slot to connect the Portenta vision shield - which adds a camera and dual microphones. At the moment the Portenta H7 is partially supported by Edge Impulse, letting you collect data from the camera, build computer vision models, and deploy trained machine learning models back to the development board. The Portenta H7 and the vision shield are available directly from Arduino for ~$150 in total.
There are two versions of the vision shield: one that has an Ethernet connection and one with a LoRa radio. Both of these can be used with Edge Impulse.
The Edge Impulse firmware for this development board is open source and hosted on GitHub: edgeimpulse/firmware-arduino-portenta-h7.
To set this device up in Edge Impulse, you will need to install the following software:
Here's an instruction video for Windows.
The Arduino website has instructions for macOS and Linux.
On Linux:
GNU Screen: install for example via sudo apt install screen
.
Problems installing the CLI?
See the Installation and troubleshooting guide.
With all the software in place it's time to connect the development board to Edge Impulse.
Connect the vision shield to the Portenta H7 using the two edge connectors on the back of the board.
Use a USB-C cable to connect the development board to your computer. Then, double-tap the RESET button to put the device into bootloader mode. You should see the green LED on the front pulsating.
The development board does not come with the right firmware yet. To update the firmware:
Download the latest Edge Impulse firmware, and unzip the file.
Double press on the RESET button on your board to put it in the bootloader mode.
Open the flash script for your operating system (flash_windows.bat, flash_mac.command, or flash_linux.sh) to flash the firmware.
Wait until flashing is complete, and press the RESET button once to launch the new firmware.
From a command prompt or terminal, run:
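Presumably the Edge Impulse serial daemon, as for the other fully supported boards:

```
edge-impulse-daemon
```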
This will start a wizard which will ask you to log in and choose an Edge Impulse project. If you want to switch projects, run the command with --clean.
Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI. See this blog post for more information.
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
With everything set up you can now build your first machine learning model with these tutorials:
Looking to connect different sensors? The Data forwarder lets you easily send data from any sensor into Edge Impulse.
Download your custom firmware from the Deployment tab in the Studio, install it with the same method as in the "Update the firmware" section, and run the edge-impulse-run-impulse command:
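For reference, the command named above:

```
edge-impulse-run-impulse
```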
Note that it may take up to 10 minutes to compile the firmware for the Arduino Portenta H7.
Use the Running your impulse locally: On your Arduino tutorial and select one of the portenta examples:
For an end-to-end example that classifies data and then sends the result over LoRaWAN, please see the example-portenta-lorawan example.
If you come across this issue:
You probably forgot to double press the RESET button before running the flash script.
The Nordic Semiconductor nRF9160 DK is a development board with an nRF9160 SiP incorporating a Cortex-M33 for your application, a full LTE-M/NB-IoT modem with GPS, 1 MB of flash, and 256 KB of RAM. It also includes an nRF52840 board controller with Bluetooth Low Energy connectivity. The Development Kit is fully supported by Edge Impulse. You'll be able to sample raw data, build models, and deploy trained machine learning models directly from the studio. As the nRF9160 DK does not have any built-in sensors, we recommend pairing this development board with the X-NUCLEO-IKS02A1 shield (with a MEMS accelerometer and a MEMS microphone). The nRF9160 DK is available for around 150 USD from a variety of distributors including Digikey.
If you don't have the X-NUCLEO-IKS02A1 shield you can use the Data forwarder to capture data from any other sensor, and then follow the Running your impulse locally: On your Zephyr-based Nordic Semiconductor development board tutorial to run your impulse. Or, you can modify the example firmware (based on nRF Connect) to interact with other accelerometers or PDM microphones that are supported by Zephyr.
The Edge Impulse firmware for this development board is open source and hosted on GitHub: edgeimpulse/firmware-nrf-91.
To set this device up in Edge Impulse, you will need to install the following software:
On Linux:
GNU Screen: install for example via sudo apt install screen
.
Problems installing the CLI?
See the Installation and troubleshooting guide.
With all the software in place it's time to connect the development board to Edge Impulse.
Remove the pin header protectors on the nRF9160 DK and plug the X-NUCLEO-IKS02A1 shield into the development board.
Note: Make sure that the shield does not touch any of the pins in the middle of the development board. This might cause issues when flashing the board or running applications. You can also remove the shield before flashing the board.
Use a micro-USB cable to connect the development board to your computer. There are two USB ports on the development board, use the one on the short side of the board. Then, set the power switch to 'on'.
The development board does not come with the right firmware yet. To update the firmware:
The development board is mounted as a USB mass-storage device (like a USB flash drive), with the name JLINK. Make sure you can see this drive.
Install the nRF Command Line Tools.
Flash the board controller (you only need to do this once). Go to step 4 if you've performed this step before.
Ensure that the PROG/DEBUG switch is in the nRF52 position.
Copy board-controller.bin to the JLINK mass storage device.
Flash the application:
Ensure that the PROG/DEBUG switch is in the nRF91 position.
Run the flash script for your Operating System.
Wait 20 seconds and press the BOOT/RESET button.
From a command prompt or terminal, run:
This starts a wizard which asks you to log in and choose an Edge Impulse project. If you want to switch projects, run the command with --clean.
The nRF9160 DK exposes multiple UARTs. If prompted, choose the top one:
Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI. See this blog post for more information.
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
With everything set up you can now build your first machine learning model with these tutorials:
Looking to connect different sensors? The Data forwarder lets you easily send data from any sensor into Edge Impulse.
Espressif ESP-EYE (ESP32) is a compact development board based on Espressif's ESP32 chip, equipped with a 2-Megapixel camera and a microphone. ESP-EYE also offers plenty of storage, with 8 MB PSRAM and 4 MB SPI flash - and it's fully supported by Edge Impulse. You'll be able to sample raw data, build models, and deploy trained machine learning models directly from the studio. It's available for around 22 USD from Mouser and a wide range of distributors.
There are plenty of other boards built with the ESP32 chip, as well as custom designs using an ESP32 SoM. The Edge Impulse firmware has been tested with the ESP-EYE and ESP32 FireBeetle boards, but it can be modified for use with other ESP32 designs. Read more on that in the Using with other boards section of this documentation.
The Edge Impulse firmware for this development board is open source and hosted on GitHub: edgeimpulse/firmware-espressif-esp32.
To set this device up in Edge Impulse, you will need to install the following software:
Python 3.
The ESP documentation website has instructions for macOS and Linux.
On Linux:
GNU Screen: install for example via sudo apt install screen
.
Problems installing the CLI?
See the Installation and troubleshooting guide.
With all the software in place it's time to connect the development board to Edge Impulse.
Use a micro-USB cable to connect the development board to your computer.
The development board does not come with the right firmware yet. To update the firmware:
Download the latest Edge Impulse firmware, and unzip the file.
Open the flash script for your operating system (flash_windows.bat, flash_mac.command, or flash_linux.sh) to flash the firmware.
Wait until flashing is complete.
From a command prompt or terminal, run:
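Assuming the Edge Impulse CLI serial daemon:

```
edge-impulse-daemon
```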
This will start a wizard which will ask you to log in and choose an Edge Impulse project. If you want to switch projects, run the command with --clean.
Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI. See this blog post for more information.
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
With everything set up you can now build your first machine learning model with these tutorials:
Looking to connect different sensors? The Data forwarder lets you easily send data from any sensor into Edge Impulse.
The standard firmware supports the following sensors:
Camera: OV2640, OV3660, OV5640 modules from Omnivision
Microphone: I2S microphone on ESP-EYE (MIC8-4X3-1P0)
LIS3DHTR module connected to I2C (SCL pin 22, SDA pin 21)
Any analog sensor, connected to A0
The analog sensor and LIS3DHTR module were tested with the ESP32 FireBeetle board and the Grove LIS3DHTR module.
The ESP32 is a very popular chip both in community projects and in industry, due to its high performance, low price, and the large amount of documentation and support available. There are other camera-enabled development boards based on the ESP32 that can use the Edge Impulse firmware after applying certain changes, e.g.:
AI-Thinker ESP-CAM
M5STACK ESP32 PSRAM Timer Camera X (OV3660)
M5STACK ESP32 Camera Module Development Board (OV2640)
The pins used for camera connection on different development boards are not the same, therefore you will need to change the #define here to fit your development board, compile and flash the firmware. Specifically for AI-Thinker ESP-CAM, since this board needs an external USB to TTL Serial Cable to upload the code/communicate with the board, the data transfer baud rate must be changed to 115200 here.
The analog sensor and LIS3DH accelerometer can be used on any other development board without changes, as long as the interface pins are not changed. If the I2C/ADC pins that the accelerometer/analog sensor are connected to differ from those described in the Sensors available section, you will need to change the values in the LIS3DHTR component for ESP32, then compile and flash it to your board.
Additionally, since the Edge Impulse firmware is open source and publicly available, if you have made modifications or added new sensor capabilities, we encourage you to open a PR in the firmware repository!
To deploy your impulse on your ESP32 board, please see:
Generate an Edge Impulse firmware (ESP-EYE only)
Download a C++ library (using ESP-IDF)
Download an Arduino library
The Nordic Thingy:53™ is an easy-to-use prototyping platform that makes it possible to create prototypes and proofs-of-concept without building custom hardware. The Thingy:53 is built around the nRF5340 SoC. The capacity of its dual Arm Cortex-M33 processors enables it to do embedded machine learning (ML), both collecting data and running trained ML models on the device. The Bluetooth Low Energy radio allows it to connect to smartphones, tablets, laptops, and similar devices without a wired connection. Other protocols like Thread, Zigbee, and proprietary 2.4 GHz protocols are also supported by the radio. It also includes a wealth of integrated sensors, an NFC antenna, and two buttons and one RGB LED that simplify input and output.
Nordic's Thingy:53 is fully supported by Edge Impulse and every Thingy:53 is shipped with Edge Impulse firmware already flashed. You'll be able to sample raw data, build models, and deploy trained machine learning models directly out-of-the-box via the Edge Impulse Studio or the Nordic nRF Edge Impulse iPhone and Android apps over BLE connection. The Thingy:53 is available for around 120 USD from a variety of distributors.
The Edge Impulse firmware for this development board is open source and hosted on GitHub: edgeimpulse/firmware-nordic-thingy53.
To set this device up in Edge Impulse via USB serial or external debug probe, you will need to install the following software:
nRF Connect for Desktop v3.11.1 (only needed to update device firmware through USB or external debug probe).
On Linux:
GNU Screen: install for example via sudo apt install screen
.
Problems installing the CLI?
See the Installation and troubleshooting guide.
Brand new Thingy:53 devices will work out-of-the-box with the Edge Impulse Studio and the Nordic nRF Edge Impulse iPhone and Android apps. However, if your device has been flashed with some other firmware, then follow the steps below to update your device to the latest Edge Impulse firmware.
Use a USB cable to connect the development board to your computer. Then, set the power switch to 'on'.
Download the latest Edge Impulse firmware:
Edge Impulse firmware: nordic-thingy53-full.zip. The *-full.zip archive contains HEX files to upgrade the device through the external probe.
Edge Impulse firmware: nordic-thingy53-dfu.zip. The *-dfu.zip archive contains a dfu_application.zip package to upgrade an already flashed device through the serial/USB bootloader.
Follow Nordic's instructions to update the firmware on the Thingy:53 through your choice of debugging connection:
See the section below on Connecting to the nRF Edge Impulse mobile application.
With all the software in place it's time to connect the development board to Edge Impulse. From a command prompt or terminal, run:
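A sketch of the command, assuming the Edge Impulse serial daemon:

```
edge-impulse-daemon
```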
This starts a wizard which asks you to log in and choose an Edge Impulse project. If you want to switch projects, run the command with --clean.
If prompted to select a device, choose ZEPHYR:
Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI. See this blog post for more information.
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
With everything set up you can now build your first machine learning model with this tutorial:
Looking to connect different sensors? The Data forwarder lets you easily send data from any sensor into Edge Impulse.
Now that you have created an Edge Impulse account and trained your first Edge Impulse machine learning model, you can use the Nordic nRF Edge Impulse app to deploy your impulse to your Nordic Thingy:53 and acquire and upload new sensor data into your Edge Impulse projects.
Open the app and log in with your edgeimpulse.com credentials:
Select your Thingy:53 project from the drop-down menu at the top:
Select the Devices tab to connect your Thingy:53 device to your mobile phone:
To remove your connected Thingy:53 from your project, select the connected device name and scroll to the bottom of the device page to remove it.
To view existing data samples in your Edge Impulse project, select the Data Acquisition tab. To record and upload a new data sample into your project, click on the "+" button at the top right of the app. Select your sensor, type in the sample label, and choose a sample length and frequency, then select Start Sampling.
Build and deploy your Edge Impulse model to your Thingy:53 via the Deployment tab. Select your project from the top drop-down, select your connected Thingy:53 device, and click Build:
The app will start building your project and uploading the firmware to the connected Thingy:53:
If you encounter connection errors during deployment, please see Troubleshooting.
Every Thingy:53 is shipped with a default Edge Impulse model. This model is created from the Tutorial: Continuous motion recognition and its corresponding Edge Impulse project.
Select the Inferencing tab to view the inferencing results of the model flashed to the connected Thingy:53:
Select the Settings tab to view your logged-in account information, BLE scanner settings, and application version. Click on your account name to view your Edge Impulse projects and logout of your account.
Lost BLE connection to device
Reconnect your device by selecting your device name on the Devices tab and clicking "Reconnect".
Make sure power cables are plugged in properly.
Do not use iPhone/Android app multitasking during data acquisition, firmware deployment, or inferencing tasks, as the BLE streaming connection will be closed.
The Nordic Semiconductor Thingy:91 is an easy-to-use battery-operated prototyping platform for cellular IoT using LTE-M, NB-IoT and GPS. It is ideal for creating Proof-of-Concept (PoC), demos and initial prototypes in your cIoT development phase. Thingy:91 is built around the nRF9160 SiP and is certified for a broad range of LTE bands globally, meaning the Nordic Thingy:91 can be used just about anywhere in the world. There is an nRF52840 multiprotocol SoC on the Thingy:91. This offers the option of adding Bluetooth Low Energy connectivity to your project.
Nordic's Thingy:91 is fully supported by Edge Impulse. You'll be able to sample raw data, build models, and deploy trained machine learning models directly from the studio. Thingy:91 is available for around 120 USD from a variety of distributors.
The Edge Impulse firmware for this development board is open source and hosted on GitHub: edgeimpulse/firmware-nordic-thingy91.
To set this device up in Edge Impulse, you will need to install the following software:
nRF Connect for Desktop v3.7.1 - install exactly version 3.7.1; please follow the instructions below to downgrade or freshly install v3.7.1:
On Linux:
GNU Screen: install for example via sudo apt install screen
.
Problems installing the CLI?
See the Installation and troubleshooting guide.
Before you start a new project, you need to update the Thingy:91 firmware to our latest build.
Use a micro-USB cable to connect the development board to your computer. Then, set the power switch to 'on'.
Download the latest Edge Impulse firmware. The extracted archive contains the following files:
firmware.hex: the Edge Impulse firmware image for the nRF9160 SoC, and
connectivity-bridge.hex: a connectivity application for the nRF52840 that you only need on older boards (hardware version < 1.4).
Open nRF Connect for Desktop and launch the Programmer application.
Scroll down in the menu on the right and make sure Enable MCUboot is selected.
Switch off the Nordic Thingy:91.
Press the multi-function button (SW3) while switching SW1 to the ON position.
In the Programmer navigation bar, click Select device.
In the menu on the right, click Add HEX file > Browse, and select the firmware.hex file from the firmware previously downloaded.
Scroll down in the menu on the right to Device and click Write:
In the MCUboot DFU window, click Write. When the update is complete, a Completed successfully message appears.
You can now disconnect the board.
Thingy:91 hardware version < 1.4.0
Updating the firmware with older hardware versions may fail. Moreover, even if the update works, the device may later fail to connect to Edge Impulse Studio:
In these cases, you will also need to flash the connectivity-bridge.hex onto the nRF52840 in the Thingy:91. Follow the steps here to update the nRF52840 SoC application with the connectivity-bridge.hex file through USB. If this method doesn't work, you will need to flash both hex files using an external probe.
With all the software in place it's time to connect the development board to Edge Impulse. From a command prompt or terminal, run:
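Again, assumed to be the Edge Impulse serial daemon:

```
edge-impulse-daemon
```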
This starts a wizard which asks you to log in and choose an Edge Impulse project. If you want to switch projects, run the command with --clean.
The Thingy:91 exposes multiple UARTs. If prompted, choose the first one:
Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI. See this blog post for more information.
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
With everything set up you can now build your first machine learning model with this tutorial:
Looking to connect different sensors? The Data forwarder lets you easily send data from any sensor into Edge Impulse.
The OpenMV Cam is a small, low-power development board with a Cortex-M7 microcontroller supporting MicroPython, a μSD card socket, and a camera module capable of taking 5MP images - and it's fully supported by Edge Impulse. You'll be able to sample raw data, build models, and deploy trained machine learning models through the studio and the OpenMV IDE. It is available for 80 USD directly from OpenMV.
To set this device up in Edge Impulse, you will need to install the following software:
Problems installing the CLI?
See the Installation and troubleshooting guide.
With all the software in place it's time to connect the development board to Edge Impulse. To make this easy we've put some tutorials together which take you through all the steps to acquire data, train a model, and deploy this model back to your device.
The SenseCAP A1101 - LoRaWAN Vision AI Sensor is an image recognition AI sensor designed for developers. It combines TinyML AI technology and LoRaWAN long-range transmission to enable a low-power, high-performance AI device solution for both indoor and outdoor use.
This sensor features Himax high-performance, low-power AI vision solution which supports the Google TensorFlow Lite framework and multiple TinyML AI platforms.
It is fully supported by Edge Impulse, which means you will be able to sample raw data from the camera, build models, and deploy trained machine learning models to the module directly from the studio without any programming required. The SenseCAP A1101 - Vision AI Module is available for purchase directly from Seeed Studio.
To set A1101 up in Edge Impulse, you will need to install the following software:
On Linux:
GNU Screen: install for example via sudo apt install screen.
Problems installing the Edge Impulse CLI?
See the Installation and troubleshooting guide.
With all the software in place, it's time to connect the A1101 to Edge Impulse.
BL702 is the USB-UART chip which enables the communication between the PC and the Himax chip. You need to update this firmware in order for the Edge Impulse firmware to work properly.
Download the latest BL702 firmware (tinyuf2-sensecap_vision_ai_xxx.bin).
Connect the A1101 to the PC via a USB Type-C cable while holding down the Boot button on the A1101.
Open the previously installed Bouffalo Lab Dev Cube software, select BL702/704/706, and then click Finish.
Go to the MCU tab. Under Image file, click Browse and select the firmware you just downloaded.
Click Refresh, choose the Port related to the connected A1101, set Chip Erase to True, click Open UART, then click Create & Download and wait for the process to complete.
You will see the output All Success if it went well.
If the flashing throws an error, click Create & Download multiple times until you see the All Success message.
A1101 does not come with the right Edge Impulse firmware yet. To update the firmware:
Download the latest Edge Impulse firmware and extract it to obtain the firmware.uf2 file.
Connect the A1101 to the PC again via a USB Type-C cable and double-click the Boot button on the A1101 to enter mass storage mode.
After this you will see a new storage drive in your file explorer named SENSECAP. Drag and drop the firmware.uf2 file onto the SENSECAP drive.
Once the copying is finished, the SENSECAP drive will disappear; this is how you can check whether the copying was successful.
From a command prompt or terminal, run:
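Assuming the standard Edge Impulse serial daemon here as well:

```
edge-impulse-daemon
```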
This will start a wizard which will ask you to log in and choose an Edge Impulse project. If you want to switch projects, run the command with --clean.
Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your A1101, without the need for the Edge Impulse CLI. See this blog post for more information.
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
Device connected to Edge Impulse correctly!
With everything set up, you can now build and run your first machine learning model with these tutorials:
Frames from the onboard camera can be directly captured from the studio:
Finally, once a model is trained, it can be easily deployed to the A1101 – Vision AI Module to start inferencing!
After building the machine learning model and downloading the Edge Impulse firmware from Edge Impulse Studio, deploy the model uf2 to the SenseCAP - Vision AI by following steps 1 and 2 of the firmware update section above: drag and drop the firmware.uf2 file from Edge Impulse onto the SENSECAP drive.
If you want to compile the Edge Impulse firmware from the source code, you can follow the instructions included in the README of the firmware repository.
When you run this on your local interface, it will ask you to click a URL, and you will then see a live preview of the camera on your device.
Since our focus here is on describing the model training process, we won't go into the details of the cloud platform data display. But if you're interested, you can always visit the SenseCAP cloud platform to try adding devices and viewing data. It's a great way to get a better understanding of the platform's capabilities!
In addition to connecting directly to a computer to view real-time detection data, you can also transmit these data through LoRaWAN® and upload them to the SenseCAP cloud platform or a third-party cloud platform. On the SenseCAP cloud platform, you can view the data periodically and display it graphically on your mobile phone or computer. The SenseCAP cloud platform and the SenseCAP Mate App use the same account system.
LoRaWAN® network coverage is required when using the sensors; there are two options.
Seeed provides options for both the Helium network and the standard LoRaWAN® network.
If you are interested, Seeed provides more details on these options.
Open SenseCAP Mate and log in.
On the Config screen, select Vision AI Sensor.
Press and hold the configuration button on the SenseCAP A1101 for 3 seconds to enter Bluetooth pairing mode.
Click Setup and it will start scanning for nearby SenseCAP A1101 devices.
Go to Settings and make sure Object Detection and User Defined 1 are selected. If not, select them and click Send.
Go to General and click Detect; you'll see the actual data here.
Click the Connect button. Then you will see a pop-up in the browser. Select SenseCAP Vision AI - Paired and click Connect.
View real-time inference results using the preview window!
The cats are detected with bounding boxes around them. Here "0" corresponds to each detection of the same class. If you have multiple classes, they will be named 0, 1, 2, 3, 4, and so on. The confidence score for each detected object (0.72 in the demo above) is also displayed!
The Renesas CK-RA6M5, Cloud Kit for RA6M5 MCU Group, enables users to experience the cloud connectivity options available from Renesas and Renesas Partners. A broad array of sensors on the CK-RA6M5 provide multiple options for observing user interaction with the Cloud Kit. By selecting from a choice of add-on devices, multiple cloud connectivity options are available.
The Edge Impulse firmware for this development board is open source and hosted on GitHub.
An earlier prototype version of the Renesas CK-RA6M5 Cloud Kit is also supported, and the layout of this earlier prototype version is available as well. The earlier prototype required a USB to serial interface; this is no longer the case.
To set this device up in Edge Impulse, you will need to install the following software:
On Linux:
GNU Screen: install for example via sudo apt install screen
.
Problems installing the CLI?
See the Installation and troubleshooting guide.
Edge Impulse Studio can collect data directly from your CK-RA6M5 Cloud Kit and also help you trigger in-system inferences to debug your model, but in order to allow Edge Impulse Studio to interact with your CK-RA6M5 Cloud Kit you first need to flash it with our base firmware image.
Check that:
J22 is set to link pins 2-3
J21 link is closed
J16 Link is open
Connect J14 and J20 on the CK-RA6M5 board to USB ports on the host PC using the 2 micro USB cables supplied.
Power LED (LED6) on the CK-RA6M5 board lights up white, indicating that the CK-RA6M5 board is powered on.
If the CK-RA6M5 board is not powered through the Debug port (J14) the current available to the board may be limited to 100 mA.
Download the latest Edge Impulse firmware and unzip the file, then locate the flash-script folder included, which we will be using in the following steps.
Open the flash script for your operating system (flash_windows.bat, flash_mac.command, or flash_linux.sh) to flash the firmware.
From a command prompt or terminal, run:
This will start a wizard which will ask you to log in and choose an Edge Impulse project. If you want to switch projects, run the command with --clean.
Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI. See this blog post for more information.
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices on the left sidebar. The device will be listed there.
With everything set up you can now build your first machine learning model with these tutorials:
The Grove - Vision AI Module is a thumb-sized board based on the Himax HX6537-A processor, equipped with a 2-Megapixel OV2640 camera, a microphone, a 3-axis accelerometer, and a 3-axis gyroscope. It offers 32 MB of SPI flash storage, comes pre-installed with ML algorithms for face recognition and people detection, and supports customized models as well. It is compatible with the XIAO ecosystem and Arduino, all of which makes it perfect for getting started with AI-powered camera projects!
It is fully supported by Edge Impulse, which means you will be able to sample raw data from the camera, build models, and deploy trained machine learning models to the module directly from the studio without any programming required. The Grove - Vision AI Module is available for purchase directly from Seeed Studio.
Quick links access:
To set this board up in Edge Impulse, you will need to install the following software:
On Linux:
GNU Screen: install for example via sudo apt install screen
.
Problems installing the Edge Impulse CLI?
See the Installation and troubleshooting guide.
With all the software in place it's time to connect the board to Edge Impulse.
BL702 is the USB-UART chip which enables the communication between the PC and the Himax chip. You need to update this firmware in order for the Edge Impulse firmware to work properly.
Download the latest BL702 firmware and extract it to obtain the tinyuf2-grove_vision_ai.bin file.
Connect the board to the PC via a USB Type-C cable while holding down the Boot button on the board.
Open the previously installed Bouffalo Lab Dev Cube software, select BL702/704/706, and then click Finish.
Go to the MCU tab. Under Image file, click Browse and select the firmware you just downloaded.
Click Refresh, choose the Port related to the connected board, set Chip Erase to True, click Open UART, then click Create & Download and wait for the process to complete.
You will see the output All Success if it went well.
Note: If the flashing throws an error, try clicking Create & Download multiple times until you see the All Success message.
The board does not come with the right Edge Impulse firmware yet. To update the firmware:
Download the latest Edge Impulse firmware and extract it to obtain the firmware.uf2 file.
Connect the board to the PC again via a USB Type-C cable and double-click the Boot button on the board to enter mass storage mode.
After this you will see a new storage drive in your file explorer named GROVEAI. Drag and drop the firmware.uf2 file onto the GROVEAI drive.
Once the copying is finished, the GROVEAI drive will disappear; this is how you can check whether the copying was successful.
From a command prompt or terminal, run:
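Assumed to be the Edge Impulse serial daemon:

```
edge-impulse-daemon
```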
This will start a wizard which will ask you to log in and choose an Edge Impulse project. If you want to switch projects, run the command with --clean.
Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your board, without the need for the Edge Impulse CLI. See this blog post for more information.
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
With everything set up you can now build and run your first machine learning model with these tutorials:
After building the machine learning model and downloading the Edge Impulse firmware from Edge Impulse Studio, deploy the model uf2 to the Grove - Vision AI by following steps 1 and 2 of the firmware update section above.
If you want to compile the Edge Impulse firmware from the source code, you can follow the instructions included in the README of the firmware repository.
The Silicon Labs Thunderboard Sense 2 is a complete development board with a Cortex-M4 microcontroller, a wide variety of sensors, a microphone, Bluetooth Low Energy, and a battery holder - and it's fully supported by Edge Impulse. You'll be able to sample raw data, build models, and deploy trained machine learning models directly from the studio - and even stream your machine learning results over BLE to a phone. It's available for around 20 USD.
The Edge Impulse firmware for this development board is open source and hosted on GitHub.
To set this device up in Edge Impulse, you will need to install the following software:
On Linux:
GNU Screen: install for example via sudo apt install screen
.
Problems installing the CLI?
See the Installation and troubleshooting guide.
With all the software in place it's time to connect the development board to Edge Impulse.
Use a micro-USB cable to connect the development board to your computer. The development board should mount as a USB mass-storage device (like a USB flash drive), with the name TB004. Make sure you can see this drive.
The development board does not come with the right firmware yet. To update the firmware:
Drag the silabs-thunderboard-sense2.bin file to the TB004 drive.
Wait 30 seconds.
From a command prompt or terminal, run:
This will start a wizard which will ask you to log in and choose an Edge Impulse project. If you want to switch projects, run the command with --clean.
Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI. See this blog post for more information.
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
With everything set up you can now build your first machine learning model with these tutorials:
Did you know? You can also stream the results of your impulse over BLE to a nearby phone or gateway.
When dragging and dropping an Edge Impulse pre-built .bin firmware file, the binary may seem to flash, but when the device reconnects a FAIL.TXT file appears with the contents "Error while connecting to CPU".
To fix this error, install the Simplicity Studio 5 IDE and flash the binary through the IDE's built-in "Upload application..." menu under "Debug Adapters", selecting your Edge Impulse firmware to flash:
Your Edge Impulse inferencing application should then run successfully with edge-impulse-run-impulse.
The Nordic Semiconductor nRF52840 DK is a development board with a Cortex-M4 microcontroller, QSPI flash, and an integrated BLE radio - and it's fully supported by Edge Impulse. You'll be able to sample raw data, build models, and deploy trained machine learning models directly from the studio. As the nRF52840 DK does not have any built-in sensors, we recommend pairing this development board with the X-NUCLEO-IKS02A1 shield (with a MEMS accelerometer and a MEMS microphone). The nRF52840 DK is available for around 50 USD from a variety of distributors.
If you don't have the X-NUCLEO-IKS02A1 shield you can use the Data forwarder to capture data from any other sensor, and then follow the Running your impulse locally: On your Zephyr-based Nordic Semiconductor development board tutorial to run your impulse. Or, you can modify the example firmware (based on nRF Connect) to interact with other accelerometers or PDM microphones that are supported by Zephyr.
The Edge Impulse firmware for this development board is open source and hosted on GitHub.
To set this device up in Edge Impulse, you will need to install the following software:
On Linux:
GNU Screen: install for example via sudo apt install screen
.
Problems installing the CLI?
See the Installation and troubleshooting guide.
With all the software in place it's time to connect the development board to Edge Impulse.
Remove the pin header protectors on the nRF52840 DK and plug the X-NUCLEO-IKS02A1 shield into the development board.
Note: Make sure that the shield does not touch any of the pins in the middle of the development board. This might cause issues when flashing the board or running applications.
Use a micro-USB cable to connect the development board to your computer. There are two USB ports on the development board, use the one on the short side of the board. Then, set the power switch to 'on'.
The development board does not come with the right firmware yet. To update the firmware:
The development board is mounted as a USB mass-storage device (like a USB flash drive), with the name JLINK. Make sure you can see this drive. If this is not the case, see the troubleshooting section at the bottom of this page.
Drag the nrf52840-dk.bin file to the JLINK drive.
Wait 20 seconds and press the BOOT/RESET button.
From a command prompt or terminal, run:
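Presumably the Edge Impulse serial daemon:

```
edge-impulse-daemon
```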
This will start a wizard which will ask you to log in and choose an Edge Impulse project. If you want to switch projects, run the command with --clean.
Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI. See this blog post for more information.
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
With everything set up you can now build your first machine learning model with these tutorials:
If you don't see the JLINK drive show up when you connect your nRF52840 DK, you'll have to update the interface firmware.
Set the power switch to 'off'.
Hold BOOT/RESET while you set the power switch to 'on'.
Your development board should be mounted as BOOTLOADER.
Download the latest interface firmware and drag the .bin file onto the BOOTLOADER drive.
After 20 seconds disconnect the USB cable, and plug the cable back in.
The development board should now be mounted as JLINK.
If your board fails to flash new firmware (a FAIL.txt file might appear on the JLINK drive) you can also flash using nrfjprog.
Flash new firmware via:
The Synaptics Katana KA10000 board is a low-power AI evaluation kit from Synaptics with the KA10000 AI Neural Network processor onboard. The evaluation kit is provided with a separate Himax HM01B0 QVGA monochrome camera module and 2 onboard zero-power Vesper microphones. The board has an embedded ST LIS2DW12 accelerometer and an optional TI OPT3001 ambient light sensor. Connectivity is provided by an IEEE 802.11n ultra-low-power WiFi module integrated with Bluetooth 5.x, in addition to 4 Peripheral Module (PMOD) connectors that provide I2C, UART, GPIO, and I2S/SPI interfaces.
The package contains several accessories:
The Himax image sensor.
The PMOD-I2C USB firmware configuration board.
The PMOD-UART USB adapter.
2 AAA batteries
Enclosure.
The Edge Impulse firmware for this board is open source and hosted on GitHub: edgeimpulse/firmware-synaptics-ka10000.
To set this device up in Edge Impulse, you will need to install the following software:
On Linux:
GNU Screen: install for example via sudo apt install screen
.
In order to update the firmware, you need to use the PMOD-I2C USB firmware configuration board. The PMOD-I2C board connects to the Katana board on the north-right PMOD-I2C interface (as shown in the image at the top of this page); then use a USB-C cable to connect the firmware configuration board to the host PC.
In addition to the PMOD-I2C configuration board, you need to connect the PMOD-UART extension to the Katana board on the connector located on the left side of the board. Then use a micro-USB cable to connect the board to your computer.
The board originally ships with sound detection firmware by default. You can upload new firmware to the flash memory by following these instructions:
Download the latest Edge Impulse firmware, and unzip the file.
Verify that you have correctly connected the firmware configuration board.
Run the flash script for your operating system (flash_windows.bat, flash_mac.command, or flash_linux.sh) to flash the firmware.
Wait until flashing is complete.
From a command prompt or terminal, run:
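Assuming the Edge Impulse serial daemon:

```
edge-impulse-daemon
```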
This will start a wizard which will ask you to log in and choose an Edge Impulse project. If you want to switch projects, run the command with --clean.
Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI. See this blog post for more information.
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
With everything set up you can now build your first machine learning model with these tutorials, and board-specific public projects:
Tutorial: Adding sight to your sensors (Synaptics KA10000): https://studio.edgeimpulse.com/public/114204/latest
Looking to connect different sensors? The Data forwarder lets you easily send data from any sensor into Edge Impulse.
The Texas Instruments CC1352P Launchpad is a development board equipped with the multiprotocol wireless CC1352P microcontroller. The Launchpad, when paired with the BOOSTXL-SENSORS and CC3200AUDBOOST booster packs, is fully supported by Edge Impulse, and is able to sample accelerometer & microphone data, build models, and deploy directly to the device without any programming required. The CC1352P Launchpad, BOOSTXL-SENSORS, and CC3200AUDBOOST boards are available for purchase directly from Texas Instruments.
If you don't have either booster pack or are using different sensing hardware, you can use the Data forwarder to capture data from any other sensor type, and then follow the Running your impulse locally tutorial to run your impulse. Or, you can clone and modify the open source firmware-ti-launchxl project on GitHub.
The Edge Impulse firmware for this development board is open source and hosted on GitHub: edgeimpulse/firmware-ti-launchxl.
To set this device up in Edge Impulse, you will need to install the following software:
Install the desktop version for your operating system here
Add the installation directory to your PATH
See Troubleshooting for more details
On Linux:
GNU Screen: install for example via sudo apt install screen
.
Problems installing the Edge Impulse CLI?
See the Installation and troubleshooting guide.
With all the software in place it's time to connect the development board to Edge Impulse.
To interface the Launchpad with sensor hardware, you will need to either connect the BOOSTXL-SENSORS to collect accelerometer data, or the CC3200AUDBOOST to collect audio data. Follow the guides below based on what data you want to collect.
Before you start
The Launchpad jumper connections should be in their original configuration out of the box. If you have already modified the jumper connections, see the Launchpad's User Guide for the original configuration.
2. Connect the development board to your computer
Use a micro-USB cable to connect the development board to your computer.
3. Update the firmware
The development board does not come with the right firmware yet. To update the firmware:
Download the latest Edge Impulse firmware, and unzip the file.
Open the flash script for your operating system (flash_windows.bat, flash_mac.command or flash_linux.sh) to flash the firmware.
Wait until flashing is complete, and press the RESET button once to launch the new firmware.
Problems flashing firmware onto the Launchpad?
See the Troubleshooting section for more information.
4. Setting keys
From a command prompt or terminal, run:
This will start a wizard which will ask you to log in, and choose an Edge Impulse project. If you want to switch projects run the command with --clean.
Which device do you want to connect to?
The Launchpad enumerates two serial ports. The first is the Application/User UART, which the edge-impulse firmware communicates through. The other is an Auxiliary Data Port, which is unused.
When running the edge-impulse-daemon you will be prompted on which serial port to connect to. On Mac & Linux, this will appear as:
Generally, select the lower numbered serial port. This usually corresponds with the Application/User UART. On Windows, the serial port may also be verified in the Device Manager.
If the selected serial port fails to connect, test the other port before checking the troubleshooting section for other common issues.
Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI. See this blog post for more information.
5. Verifying that the device is connected
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
With everything set up you can now build and run your first machine learning model with these tutorials:
Looking to connect different sensors? The Data forwarder lets you easily send data from any sensor into Edge Impulse, and you can run your impulse locally with custom firmware or sensor data.
Failed to flash
If the UniFlash CLI is not added to your PATH, the install scripts will fail. To fix this, add the installation directory of UniFlash (for example /Applications/ti/uniflash_6.4.0 on macOS) to your PATH.
If during flashing you encounter further issues, ensure:
The device is properly connected and/or the cable is not damaged.
You have the proper permissions to access the USB device and run scripts. On macOS you can manually approve blocked scripts via System Preferences->Security Settings->Unlock Icon
On Linux, you may want to try copying tools/71-ti-permissions.rules to /etc/udev/rules.d/, then re-attach the USB cable and try again.
Alternatively, the gcc/build/edge-impulse-standalone.out binary file may be flashed to the Launchpad using the UniFlash GUI or web-app. See the Texas Instruments Quick Start Guide for more info.
Sony's Spresense is a small but powerful development board with a 6-core Cortex-M4F microcontroller and integrated GPS, and a wide variety of add-on modules, including an extension board with a headphone jack, SD card slot and microphone pins, a camera board, a sensor board with accelerometer, pressure, and geomagnetism sensors, and a Wi-Fi board - and it's fully supported by Edge Impulse. You'll be able to sample raw data, build models, and deploy trained machine learning models directly from the studio.
To get started with the Sony Spresense and Edge Impulse you'll need:
The Spresense main development board - available for around 55 USD from a wide range of distributors.
The Spresense extension board - to connect external sensors.
A micro-SD card to store samples.
In addition you'll want some sensors; these ones are fully supported (note that you can collect data from any sensor on the Spresense with the data forwarder):
For image models: the Spresense CXD5602PWBCAM1 camera add-on.
For accelerometer models: the Spresense Sensor EVK-70 add-on.
For audio models: an electret microphone and a 2.2K Ohm resistor, wired to the extension board's audio channel A, following this schematic (picture here).
Note: for audio models you must also have a FAT formatted SD card for the extension board, with the Spresense's DSP files included in a BIN folder on the card, see instructions here and a screenshot of the SD card directory here.
The Edge Impulse firmware for this development board is open source and hosted on GitHub: edgeimpulse/firmware-sony-spresense.
To set this device up in Edge Impulse, you will need to install the following software:
On Linux:
GNU Screen: install for example via sudo apt install screen.
Problems installing the CLI?
See the Installation and troubleshooting guide.
With all the software in place it's time to connect the development board to Edge Impulse.
Make sure the SD card is formatted as FAT before inserting it into the Spresense.
Use a micro-USB cable to connect the main development board (not the extension board) to your computer.
The development board does not come with the right firmware yet. To update the firmware:
Install Python 3.7 or higher.
Download the latest Edge Impulse firmware, and unzip the file.
Open the flash script for your operating system (flash_windows.bat, flash_mac.command or flash_linux.sh) to flash the firmware.
Wait until flashing is complete. The on-board LEDs should stop blinking to indicate that the new firmware is running.
From a command prompt or terminal, run:
Mac: Device choice
If you have a choice of serial ports and are not sure which one to use, pick /dev/tty.SLAB_USBtoUART or /dev/cu.usbserial-*
This will start a wizard which will ask you to log in and choose an Edge Impulse project. If you want to switch projects run the command with --clean.
Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI. See this blog post for more information.
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
With everything set up you can now build your first machine learning model with these tutorials:
Looking to connect different sensors? The Data forwarder lets you easily send data from any sensor into Edge Impulse.
If you see:
Upgrade pyserial:
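The exact command is not shown in this extract; a typical way to upgrade pyserial, assuming pip3 is available on your system, is:

    pip3 install --upgrade pyserial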
If the edge-impulse-daemon or edge-impulse-run-impulse commands do not start it might be because of an error interacting with the SD card or because your board has an old version of the bootloader. To see the debug logs, run:
And press the RESET button on the board. If you see Welcome to nash you'll need to update the bootloader. To do so:
Install and launch the Arduino IDE.
Go to Preferences and under 'Additional Boards Manager URLs' add https://github.com/sonydevworld/spresense-arduino-compatible/releases/download/generic/package_spresense_index.json (if there's already text in this text box, add a , before adding the new URL).
Then go to Tools > Boards > Board manager, search for 'Spresense' and click Install.
Select the right board via: Tools > Boards > Spresense boards > Spresense.
Select your serial port via: Tools > Port and selecting the serial port for the Spresense board.
Select the Spresense programmer via: Tools > Programmer > Spresense firmware updater.
Update the bootloader via Tools > Burn bootloader.
Then update the firmware again (from step 3: Update the bootloader and the firmware).
You can use your Linux x86_64 device or computer as a fully-supported development environment for Edge Impulse for Linux. This lets you sample raw data, build models, and deploy trained machine learning models directly from the Studio. If you have a webcam and microphone plugged into your system, they are automatically detected and can be used to build models.
Instruction set architectures
If you are not sure about your instruction set architecture, use:
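The command itself is not included here; a standard way to check is:

    uname -m
    # x86_64 indicates a 64-bit Intel/AMD machine; aarch64 indicates 64-bit Arm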
To set this device up in Edge Impulse, run the following commands:
Ubuntu/Debian:
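The original command listing is not included in this extract. As a rough sketch only (the package list and the --unsafe-perm flag are assumptions based on typical Edge Impulse for Linux prerequisites; check the official installation instructions for the exact commands):

    sudo apt update
    sudo apt install -y gcc g++ make build-essential nodejs npm sox gstreamer1.0-tools gstreamer1.0-plugins-good gstreamer1.0-plugins-base
    sudo npm install edge-impulse-linux -g --unsafe-perm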
With all software set up, connect your camera or microphone to your operating system (see 'Next steps' further on this page if you want to connect a different sensor), and run:
This will start a wizard which will ask you to log in, and choose an Edge Impulse project. If you want to switch projects run the command with --clean.
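The command referenced above is the Edge Impulse for Linux client; a typical invocation looks like:

    edge-impulse-linux
    # add --clean to switch to a different project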
That's all! Your machine is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
With everything set up you can now build your first machine learning model with these tutorials:
Counting objects using FOMO
Looking to connect different sensors? Our Linux SDK lets you easily send data from any sensor and any programming language (with examples in Node.js, Python, Go and C++) into Edge Impulse.
To run your impulse locally, run the following on your Linux platform:
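The command itself is not shown in this extract; the Linux runner referenced throughout this page is invoked as:

    edge-impulse-linux-runner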
This will automatically compile your model with full hardware acceleration, download the model to your local machine, and then start classifying. Our Linux SDK has examples on how to integrate the model with your favourite programming language.
If you have an image model then you can get a peek of what your device sees by being on the same network as your device, and finding the 'Want to see a feed of the camera and live classification in your browser' message in the console. Open the URL in a browser and both the camera feed and the classification are shown:
You can use your Intel or M1-based Mac computer as a fully-supported development environment for Edge Impulse for Linux. This lets you sample raw data, build models, and deploy trained machine learning models directly from the Studio. If you have a MacBook, the webcam and microphone of your system are automatically detected and can be used to build models.
To connect your Mac to Edge Impulse:
Last, install the Edge Impulse CLI:
Problems installing the CLI?
See the Installation and troubleshooting guide.
With the software installed, open a terminal window and run:
This will start a wizard which will ask you to log in, and choose an Edge Impulse project. If you want to switch projects run the command with --clean.
That's all! Your Mac is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
With everything set up you can now build your first machine learning model with these tutorials:
Looking to connect different sensors? Our Linux SDK lets you easily send data from any sensor and any programming language (with examples in Node.js, Python, Go and C++) into Edge Impulse.
To run your impulse locally, just open a terminal and run:
This will automatically compile your model with full hardware acceleration, download the model to your Mac, and then start classifying. Our Linux SDK has examples on how to integrate the model with your favourite programming language.
If you have an image model then you can get a peek of what your device sees by being on the same network as your device, and finding the 'Want to see a feed of the camera and live classification in your browser' message in the console. Open the URL in a browser and both the camera feed and the classification are shown:
The Raspberry Pi RP2040 is the debut microcontroller from Raspberry Pi - and it's fully supported by Edge Impulse. You'll be able to sample raw data, build models, and deploy trained machine learning models directly from the studio. It's available for around $4 from the Raspberry Pi Foundation and a wide range of distributors.
To get started with the Raspberry Pi RP2040 and Edge Impulse you'll need:
A Raspberry Pi RP2040 microcontroller. The pre-built firmware and Edge Impulse Studio exported binary are tailored for the Raspberry Pi Pico, but with a few simple steps you can collect the data and run your models with other RP2040-based boards, such as the Arduino Nano RP2040 Connect. For more details, check out "Using with other RP2040 boards".
(Optional) If you are using the Raspberry Pi Pico, the Grove Shield for Pi Pico makes it easier to connect external sensors for data collection/inference.
The Edge Impulse firmware for this development board is open source and hosted on GitHub: edgeimpulse/firmware-pi-rp2040.
To set this device up in Edge Impulse, you will need to install the following software:
If you'd like to interact with the board using a set of pre-defined AT commands (not necessary for the standard ML workflow), you will also need to install a serial communication program, for example minicom, picocom, or the Serial Monitor from the Arduino IDE (if installed).
On Linux:
GNU Screen: install for example via sudo apt install screen.
Problems installing the CLI?
See the Installation and troubleshooting guide.
With all the software in place, it's time to connect the development board to Edge Impulse.
Use a micro-USB cable to connect the development board to your computer while holding down the BOOTSEL button, forcing the Raspberry Pi Pico into USB Mass Storage Mode.
The development board does not come with the right firmware yet. To update the firmware:
Download the latest Edge Impulse firmware, and unzip the file.
Drag the ei_rp2040_firmware.uf2 file from the folder to the USB Mass Storage device.
Wait until flashing is complete, then unplug and replug your board to launch the new firmware.
From a command prompt or terminal, run:
This will start a wizard which will ask you to log in, and choose an Edge Impulse project. If you want to switch projects run the command with --clean.
Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI. See this blog post for more information.
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
With everything set up you can now build your first machine learning model. Since the Raspberry Pi Pico does not have any built-in sensors, the following ones are supported out of the box with the pre-built firmware:
Grove Ultrasonic Ranger (GP16; pin D16 on Grove Shield for Pi Pico).
DHT11 Temperature & Humidity sensor (GP18; pin D18 on Grove Shield for Pi Pico).
Analog signal sensor (pin A0).
There is a vast variety of analog signal sensors that can take advantage of the RP2040's 12-bit ADC (Analog to Digital Converter), from common ones such as light sensors and sound level sensors to more specialized ones, e.g. a carbon dioxide sensor, a natural gas sensor, or even an EMG detector.
Once you have the compatible sensors, you can then follow these tutorials:
Support for the Arduino Nano RP2040 Connect was added to the official RP2040 firmware for Edge Impulse. That includes data acquisition and model inference support for:
onboard MP34DT05 microphone
onboard ST LSM6DSOX 6-axis IMU
the sensors described above can still be connected
Looking to connect different sensors? The Data forwarder lets you easily send data from any sensor into Edge Impulse.
While the RP2040 is a relatively new microcontroller, it has already been used in several boards:
The official Raspberry Pi Pico RP2040
Arducam Pico4ML (Camera, screen and microphone)
Seeed Studio XIAO RP2040 (extremely small footprint)
Black Adafruit Feather RP2040 (built-in LiPoly charger)
And others. While the pre-built Edge Impulse firmware is mainly tested with the Pico board, it is compatible with other boards, with the exception of I2C sensors and the microphone - different boards use different pins for peripherals, so if you'd like to use LSM6DS3/LSM6DSOX accelerometer & gyroscope modules or a microphone, you will need to change the pin values in the Edge Impulse RP2040 firmware source code, recompile it and upload it to the board.
The Jetson Nano is an embedded Linux dev kit featuring a GPU accelerated processor (NVIDIA Tegra) targeted at edge AI applications. You can easily add a USB external microphone or camera - and it's fully supported by Edge Impulse. You'll be able to sample raw data, build models, and deploy trained machine learning models directly from the Studio. The Jetson Nano is available from 59 USD from a wide range of distributors, including SparkFun and Seeed Studio.
In addition to the Jetson Nano we recommend that you also add a camera and / or a microphone. Most popular USB webcams work fine on the development board out of the box.
Powering your Jetson
Although powering your Jetson via USB is technically supported, some users report on forums that they have issues using USB power. If you have any issues such as the board resetting or becoming unresponsive, consider powering via a 5V, 4A power supply on the DC barrel connector. Don't forget to change the jumper! Here is an example power supply for sale.
As an added bonus of powering via the DC barrel plug, you can carry out your first boot without an external monitor or keyboard.
Depending on your hardware, follow NVIDIA's setup instructions (NVIDIA Jetson Nano Developer Kit or NVIDIA Jetson Nano 2GB Developer Kit) for both "Write Image to SD Card" and "Setup and First Boot." Do not use the latest SD card image, but rather, download the 4.5.1 version for your respective board from this page. When finished, you should have a bash prompt via the USB serial port, or using an external monitor and keyboard attached to the Jetson. You will also need to connect your Jetson to the internet via the Ethernet port (there is no WiFi on the Jetson). (After setting up the Jetson the first time via keyboard or the USB serial port, you can SSH in.)
Issue the following command to check:
The result should look similar to this:
To set this device up in Edge Impulse, run the following commands (from any folder). When prompted, enter the password you created for the user on your Jetson in step 1. The entire script takes a few minutes to run (using a fast microSD card).
With all software set up, connect your camera or microphone to your Jetson (see 'Next steps' further on this page if you want to connect a different sensor), and run:
This will start a wizard which will ask you to log in, and choose an Edge Impulse project. If you want to switch projects run the command with --clean.
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
With everything set up you can now build your first machine learning model with these tutorials:
Looking to connect different sensors? Our Linux SDK lets you easily send data from any sensor and any programming language (with examples in Node.js, Python, Go and C++) into Edge Impulse.
To run your impulse locally, just connect to your Jetson again, and run:
This will automatically compile your model with full hardware acceleration, download the model to your Jetson, and then start classifying. Our Linux SDK has examples on how to integrate the model with your favourite programming language.
If you have an image model then you can get a peek of what your device sees by being on the same network as your device, and finding the 'Want to see a feed of the camera and live classification in your browser' message in the console. Open the URL in a browser and both the camera feed and the classification are shown:
Due to some incompatibilities, we don't run models on the GPU by default. You can enable this by following the TensorRT instructions in the C++ SDK.
This is probably caused by a missing dependency on libjpeg. If you run:
The end of the output should show support for file import/export with libjpeg, like so:
If you don't see jpeg support as "yes", rerun the setup script and take note of any errors.
If you encounter this error, ensure that your entire home directory is owned by you (especially the .config folder):
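The exact command is not included in this extract; a common way to fix ownership of your home directory (the specific paths here are only an example) is:

    # fix ownership of the .config folder, or the whole home directory if needed
    sudo chown -R $(whoami) $HOME/.config
    sudo chown -R $(whoami) $HOME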
By default, the Jetson Nano enables a number of aggressive power saving features to disable and slow down hardware that is detected to be not in use. Experience indicates that sometimes the GPU cannot power up fast enough, nor stay on long enough, to enjoy best performance. You can run a script to enable maximum performance on your Jetson Nano.
ONLY DO THIS IF YOU ARE POWERING YOUR JETSON NANO FROM A DEDICATED POWER SUPPLY. DO NOT RUN THIS SCRIPT WHILE POWERING YOUR JETSON NANO THROUGH USB.
To enable maximum performance, run:
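The script itself is not shown here; on JetPack-based images the stock maximum-performance helper is jetson_clocks (treat the exact path as an assumption for your image):

    sudo /usr/bin/jetson_clocks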
If you see an error similar to this when running Linux C++ SDK examples with GPU acceleration,
then please download and use the SD card image version 4.6.1 for your respective board from this page. The error is likely caused by an incompatible version of NVIDIA's GPU libraries - or the absence of these libraries. If you must use an older JetPack version (4.5.1 is the earliest supported), then you need to rename libei_debug7.a located in tflite/linux-jetson-nano/ to libei_debug.a and recompile your application code.
The ST IoT Discovery Kit (also known as the B-L475E-IOT01A) is a development board with a Cortex-M4 microcontroller, MEMS motion sensors, a microphone and WiFi - and it's fully supported by Edge Impulse. You'll be able to sample raw data, build models, and deploy trained machine learning models directly from the studio. It's available for around 50 USD from a variety of distributors including Digikey.
The Edge Impulse firmware for this development board is open source and hosted on GitHub: edgeimpulse/firmware-st-b-l475e-iot01a.
Two variants of this board
There are two variants of this board, the B-L475E-IOT01A1 (US region) and the B-L475E-IOT01A2 (EU region) - the only difference is the sub-GHz radio. Both are usable in Edge Impulse.
To set this device up in Edge Impulse, you will need to install the following software:
On Windows:
ST Link - drivers for the development board. Run dpinst_amd64 on 64-bit Windows, or dpinst_x86 on 32-bit Windows.
On Linux:
GNU Screen: install for example via sudo apt install screen.
Problems installing the CLI?
See the Installation and troubleshooting guide.
With all the software in place it's time to connect the development board to Edge Impulse.
Use a micro-USB cable to connect the development board to your computer. There are two USB ports on the development board, use the one the furthest from the buttons.
The development board does not come with the right firmware yet. To update the firmware:
The development board is mounted as a USB mass-storage device (like a USB flash drive), with the name DIS_L4IOT. Make sure you can see this drive.
Drag the DISCO-L475VG-IOT01A.bin file to the DIS_L4IOT drive.
Wait until the LED stops flashing red and green.
From a command prompt or terminal, run:
This will start a wizard which will ask you to log in, choose an Edge Impulse project, and set up your WiFi network. If you want to switch projects run the command with --clean.
Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI. See this blog post for more information.
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
With everything set up you can now build your first machine learning model with these tutorials:
Looking to connect different sensors? The Data forwarder lets you easily send data from any sensor into Edge Impulse.
If you experience the following error when attempting to connect to a WiFi network:
You have hit a known issue with the firmware for this development board's WiFi module that results in a timeout during network scanning if there are more than 20 WiFi access points detected. If you are experiencing this issue, you can work around it by attempting to reduce the number of access points within range of the device, or by skipping WiFi configuration.
If the LED does not flash red and green when you copy the .bin file to the device and instead is a solid red color, and you are unable to connect the device with Edge Impulse, there may be an issue with your device's native firmware.
To restore functionality, use the following tool from ST to update your board to the latest version:
You might need to set up udev rules on Linux before being able to talk to the device. Create a file named /etc/udev/rules.d/50-stlink.rules and add the following content:
Then unplug the development board and plug it back in.
The AKD1000-powered PCIe boards can be plugged into a developer’s existing linux system to unlock capabilities for a wide array of edge AI applications, including Smart City, Smart Health, Smart Home and Smart Transportation. Linux machines with the AKD1000 are supported by Edge Impulse so that you can sample raw data, build models, and deploy trained embedded machine learning models directly from the Edge Impulse studio to create the next generation of low-power, high-performance ML applications.
To learn more about BrainChip technology please visit BrainChip's website: https://brainchip.com/products/
To enable this device for Edge Impulse deployments you must install the following dependencies on your Linux target that has an Akida PCIe board attached.
Python 3.8: Python 3.8 is required for deployments via the Edge Impulse CLI or AKD1000 deployment blocks because the binary file that is generated is reliant on specific paths generated for the combination of Python 3.8 and Python Akida™ Library 2.2.2 installations. Alternatively, if you intend to write your own code with the Python Akida™ Library or the Edge Impulse SDK via the BrainChip MetaTF Deployment Block option you may use Python 3.7 - 3.10.
Python Akida™ Library 2.2.2: A python package for quick and easy model development, testing, simulation, and deployment for BrainChip devices
Akida™ PCIe drivers: This will build and install the driver on your system to communicate with the above AKD1000 reference PCIe board
Edge Impulse Linux: This will enable you to connect your development system directly to Edge Impulse Studio
With all software set up, connect your camera or microphone to your operating system and run:
This will start a wizard which will ask you to log in, and choose an Edge Impulse project. If you want to switch projects run the command with --clean.
That's all! Your machine is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
After adding data via Data acquisition and starting an Impulse Design, you can add a BrainChip Akida™ Learning Block. The types of Learning Blocks visible depend on the type of data collected. Using BrainChip Akida™ Learning Blocks will ensure that models generated for deployment are compatible with BrainChip Akida™ devices.
In the Learning Block of the Impulse Design one can compare between Float, Quantized, and Akida™ versions of a model. If you added a Processing Block to your Impulse Design you will need to generate features before you can train your model. If the project uses a transfer learning block you may be able to select a base model from BrainChip’s Model zoo to transfer learn from. More models will be available in the future, but if you have a specific request please let us know via the Edge Impulse forums.
In order to achieve full hardware acceleration models must be converted from their original format to run on an AKD1000. This can be done by selecting the BrainChip MetaTF Block from the Deployment Screen. This will generate a .zip file with models that can be used in your application for the AKD1000. The block uses the CNN2SNN toolkit to convert quantized models to SNN models compatible for the AKD1000. One can then develop an application using the Akida™ python package that will call the Akida™ formatted model found inside the .zip file.
Alternatively, you can use the AKD1000 Block to generate a pre-built binary that can be used by the Edge Impulse Linux CLI to run on your Linux installation with a AKD1000 Mini PCIe present.
The output from this Block is an .eim file that, once saved, can be run with the following command:
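The command is not carried over into this extract; assuming the runner's --model-file option, it looks like:

    edge-impulse-linux-runner --model-file path/to/model.eim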
We have multiple projects that are available to clone immediately to quickly train and deploy models for the AKD1000.
If you have an image model then you can get a peek of what your device sees by being on the same network as your device, and finding the 'Want to see a feed of the camera and live classification in your browser' message in the console. Open the URL in a browser and both the camera feed and the classification are shown:
This issue is mainly related to initialization of the Akida™ NSoC and model, and could be caused by a missing Akida Python library (akida). Please check whether you have the Akida™ Python library installed:
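Based on the pip show akida reference later in this section, the check is:

    pip show akida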
Example output:
If you don't have the library (WARNING: Package(s) not found: akida) then install it:
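A typical install, pinning the version mentioned earlier on this page:

    pip install akida==2.2.2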
If you have the library, then check if the EIM artifact is looking for the library in the correct place. First, download your EIM model using Edge Impulse Linux CLI tools:
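The command itself is not shown here; assuming the runner's --download option, downloading the .eim looks like:

    edge-impulse-linux-runner --download model.eim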
Then run the EIM model with the debug option:
Now check if your Location directory from the pip show akida command is listed in your sys.path output. If not (this usually happens if you are using Python virtual environments), then export PYTHONPATH:
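A sketch, using the Location path reported by pip show akida (the example path is a placeholder):

    export PYTHONPATH=$PYTHONPATH:/path/reported/by/pip/show/akida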
And try to run the model with edge-impulse-linux-runner once again.
If the previous step didn't help, try to get additional debug data. With your EIM model downloaded, open one terminal window and do:
Then in another terminal:
This should give you additional info in the first terminal about the possible root of your issue.
This error could mean that your camera is in use by another process. Check that you don't have any application open that is using the camera. This error can also occur when a previous attempt to run edge-impulse-linux-runner failed with an exception. In that case, check if you have a gst-launch-1.0 process running. For example:
In this case, the first number (here 5615) is a process ID. Kill the process:
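Using the example process ID from above:

    kill 5615
    # confirm it is gone:
    ps aux | grep gst-launch-1.0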
And try to run the model with edge-impulse-linux-runner once again.
The Raspberry Pi 4 is a versatile Linux development board with a quad-core processor running at 1.5GHz, a GPIO header to connect sensors, and the ability to easily add an external microphone or camera - and it's fully supported by Edge Impulse. You'll be able to sample raw data, build models, and deploy trained machine learning models directly from the Studio. The Raspberry Pi 4 is available from 35 USD from a wide range of distributors, including DigiKey.
In addition to the Raspberry Pi 4 we recommend that you also add a camera and / or a microphone. Most popular USB webcams and the Camera Module work fine on the development board out of the box.
You can set up your Raspberry Pi without a screen. To do so:
Raspberry Pi OS - Bullseye release
The latest release of Raspberry Pi OS requires Edge Impulse Linux CLI version >= 1.3.0.
Flash the Raspberry Pi OS image to an SD card.
After flashing the OS, find the boot mass-storage device on your computer, and create a new file called wpa_supplicant.conf in the boot drive. Add the following code:
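The file contents are not included in this extract; a typical wpa_supplicant.conf for a headless setup looks like this (the path, country code and credentials are placeholders):

    # adjust the path to wherever the boot drive is mounted on your machine
    cat <<EOF > /path/to/boot/wpa_supplicant.conf
    ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
    update_config=1
    country=<Two-letter country code, e.g. US>
    network={
        ssid="<Name of your WiFi network>"
        psk="<Password of your WiFi network>"
    }
    EOF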
(Replace the fields marked with <> with your WiFi credentials)
Next, create a new file called ssh in the boot drive. You can leave this file empty.
Plug the SD card into your Raspberry Pi 4, and let the device boot up.
Find the IP address of your Raspberry Pi. You can either do this through the DHCP logs in your router, or by scanning your network. E.g. on macOS and Linux via:
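The original command is not shown here; two common options (the MAC prefixes below are an assumption based on typical Raspberry Pi hardware) are:

    # if mDNS is available on your network:
    ping -c 1 raspberrypi.local
    # or scan the ARP table; dc:a6:32 and b8:27:eb are common Raspberry Pi MAC prefixes
    arp -na | grep -i "dc:a6:32\|b8:27:eb"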
Here 192.168.1.19 is your IP address.
Connect to the Raspberry Pi over SSH. Open a terminal window and run:
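Assuming the default pi user and the IP address found above:

    ssh pi@192.168.1.19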
Log in with password raspberry.
If you have a screen and a keyboard / mouse attached to your Raspberry Pi:
Flash the Raspberry Pi OS image to an SD card.
Plug the SD card into your Raspberry Pi 4, and let the device boot up.
Connect to your WiFi network.
Click the 'Terminal' icon in the top bar of the Raspberry Pi.
To set this device up in Edge Impulse, run the following commands:
If you have a Raspberry Pi Camera Module, you also need to activate it first. Run the following command:
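The command itself is not carried over into this extract; the standard Raspberry Pi configuration tool is:

    sudo raspi-config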
Use the cursor keys to select and open Interfacing Options, then select Camera and follow the prompt to enable the camera. Then reboot the Raspberry Pi.
If you want to install Edge Impulse on your Raspberry Pi using Docker you can run the following commands:
Once on the Docker container, run:
and
You should now be able to run the Edge Impulse CLI tools from the container running on your Raspberry Pi.
Note that this will only work with an external USB camera.
With all software set up, connect your camera or microphone to your Raspberry Pi (see 'Next steps' further on this page if you want to connect a different sensor), and run:
This will start a wizard which will ask you to log in, and choose an Edge Impulse project. If you want to switch projects run the command with --clean.
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
With everything set up you can now build your first machine learning model with these tutorials:
Looking to connect different sensors? Our Linux SDK lets you easily send data from any sensor and any programming language (with examples in Node.js, Python, Go and C++) into Edge Impulse.
To run your impulse locally, just connect to your Raspberry Pi again, and run:
This will automatically compile your model with full hardware acceleration, download the model to your Raspberry Pi, and then start classifying. Our Linux SDK has examples on how to integrate the model with your favourite programming language.
If you have an image model then you can get a peek of what your device sees by being on the same network as your device, and finding the 'Want to see a feed of the camera and live classification in your browser' message in the console. Open the URL in a browser and both the camera feed and the classification are shown:
The SK-TDA4VM is a Linux enabled development kit from Texas Instruments with a focus on smart cameras, robots, and ADAS that need multiple connectivity options and ML acceleration. The TDA4VM processor has 8 TOPS of hardware-accelerated AI combined with low power capabilities to make this device capable of many applications.
In order to take full advantage of the TDA4VM's AI hardware acceleration Edge Impulse has integrated TI Deep Learning Library and TDA4VM optimized EdgeAI models for low-to-no-code training and deployments from Edge Impulse Studio.
First, one needs to follow the TDA4VM Getting Started Guide to install the Linux distribution to the SD card of the device.
To set this device up in Edge Impulse, run the following commands on the SK-TDA4VM:
With all software set up, connect your camera or microphone to your operating system (see 'Next steps' further on this page if you want to connect a different sensor), and run:
This will start a wizard which will ask you to log in, and choose an Edge Impulse project. If you want to switch projects run the command with --clean.
That's all! Your machine is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
With everything set up you can now build your first machine learning model with these tutorials:
Counting objects using FOMO
Looking to connect different sensors? Our Linux SDK lets you easily send data from any sensor and any programming language (with examples in Node.js, Python, Go and C++) into Edge Impulse.
To run your impulse locally, run the following on your Linux platform:
This will automatically compile your model with full hardware acceleration, download the model to your local machine, and then start classifying. Our Linux SDK has examples on how to integrate the model with your favourite programming language.
If you have an image model then you can get a peek of what your device sees by being on the same network as your device, and finding the 'Want to see a feed of the camera and live classification in your browser' message in the console. Open the URL in a browser and both the camera feed and the classification are shown:
Texas Instruments provides a number of models that are optimized to run on the TDA4VM. Those that have Edge Impulse support are found in the links below. Each GitHub repository has instructions on installation to your Edge Impulse project. The original source of these optimized models is the Texas Instruments EdgeAI Model Zoo.
The Renesas RZ/V2L is the latest state-of-the-art general-purpose 64-bit Linux MPU with a dual-core ARM Cortex-A55 processor running at 1.2GHz and ARM Mali-G31 3D graphic engine.
The RZ/V2L EVK consists of a SMARC SOM module and an I/O carrier board that provides a USB serial interface, 2 channel Ethernet interfaces, a camera and an HDMI display interface, in addition to many other interfaces (PMOD, microphone, audio output, etc.). The RZ/V2L EVK can be acquired directly through the Renesas website. Since the RZ/V2L is intended for vision AI, the EVK already contains the Google Coral Camera Module.
The Renesas RZ/V2L board realizes hardware acceleration through the DRP-AI IP that consists of a Dynamically Configurable Processor (DRP), and Multiply and Accumulate unit (AI-MAC). The DRP-AI IP is designed to process the entire neural network plus the required pre- and post-processing steps. Additional optimization techniques reduce power consumption and increase processing performance. This leads to high power efficiency and allows using the MPU without a heat sink.
Note that, the DRP-AI is designed for feed-forward neural networks that are usually in vision-based architectures. For more information about the DRP-AI, please refer to the white paper published by the Renesas team.
The Renesas tool “DRP-AI translator” is used to translate machine learning models and optimize the processing for DRP-AI. The tool is fully supported by Edge Impulse. This means that machine learning models downloaded from the studio can be directly deployed to the RZ/V2L board.
For more technical information about RZ/V2L, please refer to the Renesas RZ/V2L documentation.
Renesas provides a Yocto build system to build all the necessary packages and create the Linux image. In this section, we will build the Linux image with the Edge Impulse CLI tools. Renesas recommends using the Ubuntu 20.04 Linux distribution to build the Linux image; therefore, we recommend building the image inside a Docker container if you are not using Ubuntu 20.04.
Install Docker Desktop for macOS and Windows. For Linux, please refer to the Ubuntu installation instructions.
This guide assumes that the user does not have any experience in Yocto. The objective is to provide the user with the necessary configurations to build the Linux image and Edge Impulse CLI. For further details about Yocto please refer to this page.
In order to build the Yocto image, please download the latest version of the RZ/V2L Verified Linux Package (VLP) v3.0.2 from the Renesas download section (it can be found here). You will need to create an account on Renesas' website to be able to download the package. Once the package is downloaded, copy it to the docker container using this command:
In addition to the Verified Linux Package (VLP) v3.0.2, you need to download the DRP-AI package from Renesas' website as well. Please consult this link.
Decompress the package using the unzip command. Inside the package, you will find a directory that contains several PDF files. Please refer to the file that ends with rz-v2l-linux.pdf for the build instructions. You also need to do a similar thing for the DRP-AI package. The idea is to extract the Yocto layers for the Linux image and the DRP-AI. Please follow the Renesas documentation to see how to compile these two layers together.
Note that it is recommended to add the Mali GPU support layer and the codec layer to take advantage of GPU hardware acceleration. Installation instructions for the GPU and codec layers can be found on Renesas' website.
Note: The Renesas documentation might refer to adding additional layers such as the ISP layer. Please do not add this layer for now (current version VLP 3.0.2), as the software setup isn't compatible yet. This is going to change with the next release (expected beginning of 2023).
Please build the Weston image instead of building the minimal image when going through the instructions.
If you are a root user inside a docker container, you will need to disable the security check in order to allow bitbake to start the build process. This can be done by commenting out the sanity check in poky/meta/conf/sanity.conf as follows:
Yocto configurations without Firefox
Once you finish the build instructions, we need to add the Edge Impulse CLI packages to the Yocto build. The Edge Impulse CLI requires the nodejs and npm packages to be installed, in addition to upgrading the glibc version from 2.28 to 2.31. To do this, add the following configurations at the end of the local.conf file located inside the build directory (build/conf/local.conf).
Yocto configurations with Firefox
This step is optional, but it adds support for the Firefox browser. First, you need to follow the above instructions on installing nodejs and upgrading glibc. Second, follow the instructions on Adding the HTML5 website from Renesas' website.
Once the image has been built you will see the images subdirectory inside the build/tmp/deploy directory. To flash the image to an SD card, Renesas has published a guide on how to do this on their renesas.info website. Please go to this page and see section 4.
If you are inside a docker container, you will need to copy the build directory from the container to the host. Use the following command to do so; you need to specify the path on the container and the path on the host:
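As a sketch (the container name and paths are placeholders), the standard docker cp form is:

    docker cp <container-name>:/path/in/container/build/tmp/deploy/images /path/on/host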
If you are not using the docker container then it should be straightforward as described above.
The easiest way is to connect through serial to the RZ/V2L board, using screen, via the USB mini-B port.
After connecting the board with a USB-C cable, please power the board with the red power button.
Please install screen on the host machine and then execute the following command from Linux to access the board:
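The exact command is not shown in this extract; a typical invocation (the device path and baud rate are assumptions, check ls /dev/ttyUSB* on your machine) is:

    sudo screen /dev/ttyUSB0 115200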
You will see the boot process, then you will be asked to log in:
Log in with username root. There is no password.
Note that it should be possible to use an Ethernet cable and log in via SSH if the SSH daemon is installed on the image. However, for simplicity, we do not cover that here.
Once you have logged in to the board, please run the following command to install the Edge Impulse Linux CLI:
With all software set up, connect your Google Coral camera to your Renesas board (see 'Next steps' further on this page if you want to connect a different sensor), and run:
This will start a wizard which will ask you to log in and choose an Edge Impulse project. If you want to switch projects run the command with --clean.
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
Currently, all Edge Impulse models can run on the RZ/V2L CPU which is a dedicated Cortex A55. In addition, you can bring your own model to Edge Impulse and use it on the device. However, if you would like to benefit from the DRP-AI hardware acceleration support including higher performance and power efficiency, please use one of the following models:
For object detection:
Yolov5 (v5)
FOMO (Faster objects More Objects)
For Image classification:
MobileNet v1, v2
Models built within the Studio using the available layers on the training page are also supported.
Note that on the training page you have to select the target before starting training, in order to tell the Studio that you are training the model for the RZ/V2L. This can be done at the top right of the training page.
If you would like to do object detection with Yolov5 (v5) you need to fix the image resolution in the impulse design to 320x320; otherwise, training might fail.
With everything set up you can now build your first machine learning model with these tutorials:
If you are interested in using the EON tuner to improve the accuracy of the model, this is currently possible only for image classification. EON tuner support for object detection is arriving soon.
If you use the EON tuner with image classification, you need to filter out the int8 models since they are not supported by the DRP-AI. You also need to filter out the grayscale models. Note that if you leave the EON tuner page, the filter will reset to the default settings, which means you need to re-apply these filters.
To run your impulse locally, just connect to your Renesas RZ/V2L and run:
This will automatically compile your model with full hardware acceleration and download the model to your Renesas board, and then start classifying.
Or you can select the RZ/V2L board from the deployment page; this will download an eim model that you can use with the above runner as follows:
Go to the deployment page and select:
Then run the following on the RZ/V2L:
You will see the model inferencing results in the terminal, and we also stream the results to the local network. This allows you to see the output of the model in real time in your web browser. Open the URL shown when you start the runner and you will see both the camera feed and the classification results.
Since the RZ/V2L benefits from hardware acceleration using the DRP-AI, we provide the drp-ai library, which uses our C++ Edge Impulse SDK and model headers that run on the hardware accelerator. If you would like to integrate the model source code into your applications and benefit from the DRP-AI, you need to select the drp-ai library.
We have an example showing how to use the drp-ai library that can be found in Deploy your model as a DRP-AI library.
These are a collection of versatile, fanless Edge AI Boxes with integrated NVIDIA® Jetson™ by Advantech.
Comprehensive product portfolio comprised of NVIDIA Jetson platforms
Flexible I/O and iDoor enable customers to adapt to different applications
Industrial design for wide temperature and vibration tolerance
BSP support
Supports remote management for large scale deployment
For detailed instructions on setting up your device with the Scailable AI Manager, see this tutorial:
For an end-to-end guide on integrating your Edge Impulse model with the Scailable Cloud Platform, and deploying your Edge Impulse model to your product device, see these tutorials:
The gateway adds local artificial intelligence (AI) and sends the optimized data via 4G/LTE to your existing (alarm or IoT) application platform. It includes these major functionalities:
Local Artificial Intelligence (EDGE AI). The AI data algorithm is processed in the Gateway. This prevents high data flows to the cloud. Flexible adjustment will be possible by updating new data in the AI client running in the gateway.
Mobile Router Features. This is not just a gateway. It is a high-end industrial router with routing, security, VPN and mobile LTE (dual SIM) functionalities.
Easy Configuration and Deployment. (Remote) web server configuration of router and AI functionalities.
Remote Management. Firmware and AI client updates can easily be performed 'over the air'. Network diagnostics and other useful tools help you with the rollout.
Protocols in & out. Protocols in (to camera, such as .jpg, .mjpeg, rtsp, H264, H265) and protocols out (to application platform, such as REST, MQTT, JSON) are provided and configurable.
For detailed instructions on setting up your device with the Scailable AI Manager, see these tutorials:
For an end-to-end guide on integrating your Edge Impulse model with the Scailable Cloud Platform, and deploying your Edge Impulse model to your product device, see these tutorials:
Thanks to work done by Edge Impulse partner Scailable, the Advantech MIC AI Series is seamlessly integrated for vision-based model deployments from the Edge Impulse Studio via the Scailable AI Manager. The Scailable AI Manager can be installed on any Advantech NVIDIA device.
Thanks to work done by Edge Impulse partner Scailable, the MCS AI Gateway 4434S with ICR-V3 (arm v7) or V4 is seamlessly integrated for vision-based model deployments from the Edge Impulse Studio via the Scailable AI Manager.
Solving regression problems is one of the most common applications for machine learning models, especially in supervised machine learning. Models are trained to understand the relationship between independent variables and an outcome or dependent variable. The model can then be leveraged to predict the outcome of new and unseen input data, or to fill a gap in missing data.
To build a regression model you collect data as usual, but rather than setting the label to a text value, you set it to a numeric value.
You can use any of the built-in signal processing blocks to pre-process your vibration, audio or image data, or use custom processing blocks to extract novel features from other types of sensor data.
You have full freedom in modifying your neural network architecture - whether visually or through writing Keras code.
Number of training cycles: Each time the training algorithm makes one complete pass through all of the training data with back-propagation and updates the model's parameters as it goes, it is known as an epoch or training cycle.
Learning rate: The learning rate controls how much the model's internal parameters are updated during each step of the training process. You can also think of it as how fast the neural network will learn. If the network overfits quickly, you can reduce the learning rate.
Validation set size: The percentage of your training set held apart for validation; a good default is 20%.
Auto-balance dataset: Mixes in more copies of data from classes that are uncommon. This might help make the model more robust against overfitting if you have little data for some classes.
If you want to see the accuracy of your model across your test dataset, go to Model testing. You can adjust the Maximum error percentage by clicking on "⋮" button.
Neural networks are great, but they have one big flaw. They're terrible at dealing with data they have never seen before (like a new gesture). Neural networks cannot judge this, as they are only aware of the training data. If you give it something unlike anything it has seen before, it'll still classify it as one of the four classes.
Tutorial
Want to see the Anomaly Detection in action? Check out our Continuous Motion Recognition tutorial.
K-means clustering
This method looks at the data points in a dataset and groups those that are similar into a predefined number K of clusters. A threshold value can be added to detect anomalies: if the distance between a data point and its nearest centroid is greater than the threshold value, then it is an anomaly.
The main difficulty resides in choosing K, since data in a time series is always changing and different values of K might be ideal at different times. Besides, in more complex scenarios where there are both local and global outliers, many outliers might pass under the radar and be assigned to a cluster.
In most of your DSP blocks, you have an option to calculate the feature importance. Edge Impulse Studio will then output a Feature Importance graphic that will help you determine which axes and values generated from your DSP block are most significant to analyze when you want to do anomaly detection.
This process of generating features and determining the most important features of your data will further reduce the amount of signal analysis needed on the device with new and unseen data.
In your anomaly detection block, you can click on the Select suggested axes button to harness the value of the feature importance output.
Here is the process in the background:
Create X number of clusters and group all the data.
For each of these clusters we store the center and the size of the cluster.
During inference we calculate the closest cluster for a new data point, and show the distance from the edge of the cluster. If it's within a cluster (no anomaly), you thus get a value below 0.
In the above picture, known clusters are in blue, and newly classified data is in orange. It's clearly outside of any known clusters and can thus be tagged as an anomaly.
Tutorial: Continuous Motion Recognition
The data explorer is a visual tool to explore your dataset, find outliers or mislabeled data, and to help label unlabeled data. The data explorer first tries to extract meaningful features from your data (through signal processing and neural network embeddings) and then uses a dimensionality reduction algorithm to map these features to a 2D space. This gives you a one-look overview of your complete dataset.
To access the data explorer head to Data acquisition, click Data explorer, then select a way to generate the data explorer. Depending on your data you'll see three options:
Using a pre-trained model - here we use a large neural network trained on a varied dataset to generate the embeddings. This works very well if you don't have any labeled data yet, or want to look at new clusters of data. This option is available for keywords and for images.
Using your trained impulse - here we use the neural network block in your impulse to generate the embeddings. This typically creates even better visualizations, but will fail if you have completely new clusters of data as the neural network hasn't learned anything about them. This option is only available if you have a trained impulse.
Using the preprocessing blocks in your impulse - here we skip the embeddings, and just use your selected signal processing blocks to create the data explorer. This creates a similar visualization as the feature explorer but in a 2D space and with extra labeling tools. This is very useful if you don't have any labeled data yet, or if you have new clusters of data that your neural network hasn't learned yet.
Then click Generate data explorer to create the data explorer. If you want to make a different choice after creating the data explorer click ⋮ in the top right corner and select Clear data explorer.
Want to see examples of the same dataset visualized in different ways? Scroll down!
To view an item in your dataset just click on any of the dots (some basic information appears on hover). Information about the sample, and a preview of the data item appears at the bottom of the data explorer. You can click Set label (or l on your keyboard) to set a new label for the data item, or press Delete item (or d on your keyboard) to remove the data item. These changes are queued until you click Save labels (at the top of the data explorer).
The data explorer marks unlabeled data in gray (with an 'Unlabeled' label). To label this data, click on any gray dot, set a label by clicking the Set label button (or by pressing l on your keyboard), and enter a label. Other unlabeled data in the vicinity of this item will automatically be labeled as well. This way you can quickly label clustered data.
To upload unlabeled data you can either:
Use the upload UI and select the 'Leave data unlabeled' option.
Select the items in your dataset under Data acquisition, select all relevant items, click Edit labels and set the label to an empty string.
When uploading data through the ingestion API, set the x-no-label header to 1, and the x-label header to an empty string (see the sketch after this list).
Or, if you want to start from scratch, click the three dots on top of the data explorer, and select Clear all labels.
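As a sketch of the third option above - the endpoint path, form field name, and full header set are assumptions drawn from the ingestion service API reference, so double-check them there before use:

    curl -X POST \
      -H "x-api-key: ei_..." \
      -H "x-label: " \
      -H "x-no-label: 1" \
      -F "data=@sample.wav" \
      https://ingestion.edgeimpulse.com/api/training/files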
The data explorer uses a three-stage process:
It runs your data through an input and a DSP block - like any impulse.
It passes the result of 1) through part of a neural network. This forces the neural network to compress the DSP output even further, but to features that are highly specialized to distinguish the exact type of data in your dataset (called 'embeddings').
The embeddings are passed through t-SNE, a dimensionality reduction algorithm.
So what are these embeddings actually? Let's imagine you have the model from the Continuous motion recognition tutorial. Here we slice data up in 2-second windows and run a signal processing step to extract features. Then we use a neural network to classify between motions. This network consists of:
33 input features (from the signal processing step)
A layer with 20 neurons
A layer with 10 neurons
A layer with 4 neurons (the number of different classes)
While training the neural network we try to find the mathematical formula that best maps the input to the output. We do this by tweaking each neuron (each neuron is a parameter in our formula). The interesting part is that each layer of the neural network will start acting like a feature extracting step - just like our signal processing step - but highly tuned for your specific data. For example, in the first layer, it'll learn what features are correlated, in the second it derives new features, and in the final layer, it learns how to distinguish between classes of motions.
In the data explorer we now cut off the final layer of the neural network, and thus we get the derived features back - these are called "embeddings". Contrary to features we extract using signal processing we don't really know what these features are - they're specific to your data. In essence, they provide a peek into the brain of the neural network. Thus, if you see data in the data explorer that you can't easily separate, the neural network probably can't either - and that's a great way to spot outliers - or if there's unlabeled data close to a labeled cluster they're probably very similar - great for labeling unknown data!
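To make this concrete, here is a minimal sketch of the idea using Keras and scikit-learn. This is not the Studio's internal implementation; the network is a stand-in with the 33-20-10-4 shape described above, and the DSP features are random placeholders.

```python
import numpy as np
from tensorflow import keras
from sklearn.manifold import TSNE

# Placeholder DSP output: 500 windows x 33 spectral features
dsp_features = np.random.rand(500, 33).astype("float32")

# Stand-in for the trained impulse's classifier (33 -> 20 -> 10 -> 4)
model = keras.Sequential([
    keras.layers.Dense(20, activation="relu", input_shape=(33,)),
    keras.layers.Dense(10, activation="relu"),
    keras.layers.Dense(4, activation="softmax"),
])

# Cut off the final classification layer: the 10-neuron layer's output
# is what the text above calls the "embeddings"
embedding_model = keras.Model(inputs=model.inputs,
                              outputs=model.layers[-2].output)
embeddings = embedding_model.predict(dsp_features)   # shape (500, 10)

# Reduce the embeddings to 2D with t-SNE so they can be plotted
coords = TSNE(n_components=2).fit_transform(embeddings)
print(coords.shape)  # (500, 2)
```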
Here's an example of using the data explorer to visualize a very complex computer vision dataset (distinguishing between the four cats of one of our infrastructure engineers).
For less complex datasets, or lower-dimensional data you'll typically see more separation, even without custom models.
If you have any questions about the data explorer or embeddings, we'd be happy to help on the forums or reach out to your solutions engineer. Excited? Talk to us to get access to the data explorer, and finally be able to label all that sensor data you've collected!
After collecting data for your project, you can now create your Impulse. A complete Impulse consists of 3 main building blocks: an input block, a processing block, and a learning block.
This view is one of the most important, here you will build your own machine learning pipeline.
Impulse example for movement classification using accelerometer data
Impulse example for object detection using images
The input block indicates the type of input data you are training your model with. This can be time series (audio, vibration, movements) or images.
The input axes field lists all the axes referenced in your training dataset
The window size is the size of the raw data window used for training
The window increase is used to artificially create more windows (and feed the learning block with more information)
The frequency is automatically calculated based on your training samples. You can modify this value, but you currently cannot use values lower than 0.000016 Hz (roughly one sample every 17 hours).
Zero-pad data: Adds zero values when raw features are missing
Below is a sketch summarizing the role of each parameter:
Axes: Images
Image width & height: Most of our pre-trained models work with square images.
Resize mode: You have three options, Squash, Fit to the shortest axis, Fit to the longest axis
A processing block is basically a feature extractor. It consists of DSP (Digital Signal Processing) operations that are used to extract features that our model learns on. These operations vary depending on the type of data used in your project.
You don't have much experience with DSP? No problem, Edge Impulse usually uses a star to indicate the most recommended processing block based on your input data as shown in the image below.
In the case where the available processing blocks aren't suitable for your application, you can build your own custom processing blocks and import them into your project.
After adding your processing block, it is now time to add a learning block to make your impulse complete. A learning block is simply a neural network that is trained on your data.
Learning blocks vary depending on what you want your model to do and the type of data in your training dataset. Algorithms include: classification, regression, anomaly detection, image transfer learning, keyword spotting or object detection. You can also create your own custom learning block (enterprise feature).
The Spectral features block extracts frequency, power and other characteristics of a signal. Low-pass and high-pass filters can also be applied to filter out unwanted frequencies. It is great for analyzing repetitive patterns in a signal, such as movements or vibrations from an accelerometer. It is also great for complex signals that have transients or irregular waveform, such as ECG and PPG signals.
GitHub repository containing all DSP block code: edgeimpulse/processing-blocks.
Picking the right parameters for DSP algorithms can be difficult. It often requires a lot of experience and experimenting. The autotuning function makes this process easier by looking at the entire dataset and recommending a set of parameters that is tuned for your dataset.
Prior to calculating the Fast Fourier Transform (FFT), the time-series data inside the window of your sample can be filtered, which often helps to smooth out the signal or drop unwanted artifacts. In the image above, a "window" is shown inside the white box; only the readings inside that box will be used for filtering and calculating the FFT.
Edge Impulse will slide the window over your sample, as given by the time series input block parameters during Impulse creation in order to generate several training/test samples from your longer time series sample.
Scale axes - Multiply all raw input values by this number.
Input decimation ratio - Decimating (downsampling) the signal reduces the number of features and improves frequency resolution in relevant bands without increasing resource usage.
Type - The type of filter to apply to the raw data (low-pass, high-pass, or none).
Cut-off frequency - Cut-off frequency of the filter in hertz. Also, this will remove unwanted frequency bins from the generated features.
Order - Order of the Butterworth filter. Must be an even number. A higher order has a sharper cutoff at the expense of latency. You can also set it to zero, in which case the signal won't be filtered, but unwanted frequency bins will still be removed from the output.
Removing frequency bins beyond the cut-off reduces model size, which saves resources, and also leads to models that train well with less data.
After filtering via a Butterworth IIR filter (if enabled), the mean is subtracted from the signal. Several statistical features (RMS, skewness, kurtosis) are calculated from the filtered signal after the mean has been removed. This filtered signal is passed to the Spectral power section, which computes the FFT in order to compute the spectral features.
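As an illustration of this step, here is a hedged SciPy sketch; the sampling rate, window length, filter order and cut-off are placeholder values, and this is not the DSP block's actual source.

```python
import numpy as np
from scipy import signal, stats

fs = 62.5                       # assumed sampling frequency in Hz
raw = np.random.randn(125)      # one axis of a 2-second window (placeholder)

# Butterworth low-pass filter: order 6, 3 Hz cut-off (example values)
sos = signal.butter(6, 3, btype="lowpass", fs=fs, output="sos")
filtered = signal.sosfilt(sos, raw)

# Subtract the mean, then compute the statistical features
filtered = filtered - filtered.mean()
rms = np.sqrt(np.mean(filtered ** 2))
print(rms, stats.skew(filtered), stats.kurtosis(filtered))
```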
Analysis type - There are two types of analysis you can choose from.
FFT-based analysis is best at analyzing repetitive patterns in a signal.
Wavelet analysis works better for complex signals that have transients or irregular waveforms.
If you are unsure which one to choose, using the autotuning function will give you a good starting point. After selecting an analysis type, relevant parameters will appear for the selected type.
FFT based analysis
This section controls how the FFT is applied to each filtered window from your sample. If the window from your sample is larger than the FFT size, then the window will be broken into frames (or "sub-windows"), and the FFT is calculated from each frame.
FFT length - The FFT size. This determines the number of FFT bins as well as the resolution of frequency peaks that you can separate. A lower number means more signals will average together in the same FFT bin, but also reduces the number of features and model size. A higher number will separate more signals into separate bins, but generates a larger model.
Take log of spectrum? - When selected, log (base 10) will be applied to each FFT bin. This gives more range to (i.e. captures more information about) low intensity signals at the expense of range for higher intensity signals. It is enabled by default and is generally a good choice, but it ultimately depends on the kind of signal sampled.
Overlap FFT frames? - Successive frames (sub-windows) overlap by 1/2 within the larger window (given by the white box in the image) if this is checked. If unchecked, frames will not overlap. This "sliding frame" method can prevent transient events from being missed if they happen to appear on a frame boundary. Enabled by default. Disabling improves latency. No impact on model size or RAM usage.
Note that several FFTs will be computed, depending on the settings. For example, if you have 100 readings for a single axis in your window and set the FFT length to 16 with no overlap, then 6 FFTs will be computed (for that single axis), as 6 full frames (each with 16 points) fit within those 100 readings.
For each FFT bin (i.e. range of frequencies), the maximum value from all of the frames is kept as the feature. Continuing with the example above, we throw away half of every FFT (as it's simply a mirror image of the other half). We also throw away the bin at 0 Hz (as we filter out the DC bias anyway when we subtract the mean), but we keep the Nyquist bin. As a result, we end up with 8 usable bins from each of our 16-point FFTs. For each bin, we find the maximum value from the 6 FFTs we computed (in that particular bin). So, the number of features would be 8.
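A hedged NumPy sketch of this exact example (illustrative only, not the processing block's code):

```python
import numpy as np

window = np.random.randn(100)   # one axis of a single window (placeholder)
fft_length = 16

# Slice the window into full, non-overlapping frames: 6 frames of 16 points
n_frames = len(window) // fft_length
frames = window[:n_frames * fft_length].reshape(n_frames, fft_length)

# Real FFT of each frame -> 9 bins (0 Hz .. Nyquist); use the power spectrum
spectra = np.abs(np.fft.rfft(frames, n=fft_length)) ** 2   # shape (6, 9)

# Drop the 0 Hz (DC) bin, keep the Nyquist bin -> 8 usable bins per frame
spectra = spectra[:, 1:]                                    # shape (6, 8)

# Keep the maximum value per bin across all frames -> 8 spectral features
features = spectra.max(axis=0)
print(features.shape)  # (8,)
```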
Note that you may see fewer spectral features if you enable filtering, as we throw away any frequency bins higher than the cutoff frequency (for the low-pass filter) or lower than the cutoff frequency (for the high-pass filter).
See this video to learn more about the FFT.
Wavelet based analysis
This section controls how the wavelet based analysis is applied to your signal. We use the Discrete Wavelet Transform (DWT) to decompose a signal into multiple levels of approximations and details and then extract multiple features at each level.
Wavelet decomposition level - The level at which you wish to decompose the signal. A higher level reveals more information about the signal at the cost of more computation, and may introduce noise due to numerical precision limitations.
Wavelet - The wavelet kernel. There are many types of wavelets to choose from; the best choice is often the one that mimics the pattern of interest in the signal.
If you are unsure which one to choose, using the autotuning function will give you a good starting point.
See this video to learn more about the DWT.
Filter response - If filtering is enabled, and order is non-zero, then the frequency response of the filter is shown. This shows how much attenuation there will be across the frequency spectrum.
After filter - Shows the current window after filtering is applied (in the time domain).
Spectral power - Shows power vs. frequency as computed by the chosen FFT size. Power is either linear or log based on settings. This is shown if the selected analysis type is FFT.
Wavelet function - Shows the wavelet kernel function. This is shown if the selected analysis type is Wavelet.
Wavelet approximation - Shows the approximation of the signal at the highest decomposition level. This is shown if the selected analysis type is Wavelet.
Using FFTs:
The spectral analysis block generates 2 types of features per axis/channel:
Statistical features
RMS
Skewness
Kurtosis
Spectral features
Maximum value from FFT frames for each bin that was not filtered out
Note that the standard deviation is not calculated because when the mean is subtracted from a signal, the RMS equals the standard deviation.
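A quick numerical check of this claim (illustrative only):

```python
import numpy as np

x = np.random.randn(1000) + 3.0   # arbitrary signal with a non-zero mean
x = x - x.mean()                  # subtract the mean, as the block does
rms = np.sqrt(np.mean(x ** 2))
print(np.isclose(rms, x.std()))   # True: RMS of a zero-mean signal equals its std
```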
The total number of features will change, depending on how you set the filter and FFT parameters.
For example, let's consider an input signal sampled at 62.5 Hz with 3 axes and the following parameters:
Low-pass filter
Filter cutoff set to 3 Hz
The number of generated features per axis is:
3 values for statistics (RMS, Skewness, Kurtosis)
1 value for the FFT bin capturing 1.95 to 5.86 Hz
With 3 axes/channels, that gives us 12 features in total for the input signal.
Using Wavelets:
The Wavelet block implements the discrete wavelet decomposition plus feature extraction and dimensionality reduction. After decomposition, 14 features are calculated at each level:
Entropy
Zero cross
Mean cross
5 percentile
25 percentile
75 percentile
95 percentile
Median
Mean
Stdev
Variance
RMS
Skewness
Kurtosis
For example, a 4-level decomposition produces 5 components (1 approximation plus 4 details); with 14 features per component, it generates 70 features in total.
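As a sketch of where this count comes from, the following uses PyWavelets and SciPy on placeholder data. The exact feature definitions (for example the entropy estimate) are assumptions, not the block's actual implementation.

```python
import numpy as np
import pywt
from scipy import stats

signal = np.random.randn(500)                              # placeholder signal
components = pywt.wavedec(signal, wavelet="db4", level=4)  # 1 approximation + 4 details

def features_for(c):
    hist, _ = np.histogram(c, bins=100, density=True)
    hist = hist[hist > 0]
    return [
        -np.sum(hist * np.log(hist)),                 # entropy (one possible definition)
        np.sum(np.diff(np.sign(c)) != 0),             # zero crossings
        np.sum(np.diff(np.sign(c - c.mean())) != 0),  # mean crossings
        *np.percentile(c, [5, 25, 75, 95]),           # 5/25/75/95 percentiles
        np.median(c), c.mean(), c.std(), c.var(),
        np.sqrt(np.mean(c ** 2)),                     # RMS
        stats.skew(c), stats.kurtosis(c),
    ]

all_features = [f for c in components for f in features_for(c)]
print(len(components), len(all_features))  # 5 components -> 5 * 14 = 70 features
```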
The Flatten block performs statistical analysis on the signal. It is useful for slow-moving averages like temperature data, in combination with other blocks.
GitHub repository containing all DSP block code: edgeimpulse/processing-blocks.
Scaling
Scale axes: Multiplies axes by this number
Method
Average: Calculates the average value for the window
Minimum: Calculates the minimum value in the window
Maximum: Calculates the maximum value in the window
Root-mean square: Calculates the RMS value of the window
Standard deviation: Calculates the standard deviation of the window
Skewness: Calculates the skewness of the window
Kurtosis: Calculates the kurtosis of the window
The Flatten block first rescales the axes of the signal if the scale value is different from 1. Then statistical analysis is performed on each window, computing between 1 and 7 features for each axis, depending on the number of selected methods.
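A minimal sketch of this logic (not the actual block source), assuming a 3-axis window with a scale of 1 and all seven methods selected:

```python
import numpy as np
from scipy import stats

window = np.random.randn(125, 3)   # placeholder 3-axis window
scale = 1.0

scaled = window * scale
features = []
for axis in scaled.T:
    features += [
        axis.mean(),                  # Average
        axis.min(),                   # Minimum
        axis.max(),                   # Maximum
        np.sqrt(np.mean(axis ** 2)),  # Root-mean square
        axis.std(),                   # Standard deviation
        stats.skew(axis),             # Skewness
        stats.kurtosis(axis),         # Kurtosis
    ]
print(len(features))  # 7 methods x 3 axes = 21 features
```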
When creating an impulse to solve an image classification problem, you will most likely want to use transfer learning. This is particularly true when working with a relatively small dataset.
Transfer learning is the process of taking features learned on one problem and leveraging them on a new but related problem. Most of the time these features are learned from large-scale datasets of common objects, which makes the model faster and more accurate to tune and adapt to new tasks.
To choose transfer learning as your learning block, go to create impulse and click on Add a Learning Block, and select Transfer Learning.
To choose your preferred pre-trained network, go to Transfer learning on the left side of your screen and click choose a different model. A pop up will appear on your screen with a list of models to choose from as shown in the image below.
Edge Impulse uses state-of-the-art MobileNetV1 & V2 architectures trained on the ImageNet dataset as its pre-trained networks for you to fine-tune for your specific application. The pre-trained networks come with varying input sizes ranging from 96x96 to 320x320, in both RGB and grayscale, for you to choose from depending on your application and target deployment hardware.
Before you start training your model, you need to set the following neural network configurations:
Number of training cycles: Each time the training algorithm makes one complete pass through all of the training data with back-propagation and updates the model's parameters as it goes, it is known as an epoch or training cycle.
Learning rate: The learning rate controls how much the model's internal parameters are updated during each step of the training process. You can also see it as how fast the neural network will learn. If the network overfits quickly, you can reduce the learning rate.
Validation set size: The percentage of your training set held apart for validation, a good default is 20%.
You might also need to enable auto-balance to prevent model bias, or enable data augmentation to increase the size and diversity of your dataset and prevent overfitting.
The preset configurations just don't work for your model? No worries, Expert Mode is for you! Expert Mode gives you full control of your model so that you can configure it however you want. To enable the expert mode, just click on the "⋮" button and toggle the expert mode.
You can use the expert mode to change your loss function, optimizer, print your model architecture, and even set an early stopping callback to prevent your model from overfitting.
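For illustration, this is the kind of Keras code you might write in Expert Mode. It is a hedged sketch, not the template the Studio generates; the layer sizes are arbitrary and train_dataset / validation_dataset are placeholders you would get from the generated code.

```python
import tensorflow as tf
from tensorflow.keras import layers, callbacks

model = tf.keras.Sequential([
    layers.Dense(20, activation="relu", input_shape=(33,)),
    layers.Dense(10, activation="relu"),
    layers.Dense(4, activation="softmax"),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.0005),  # custom optimizer / learning rate
    loss="categorical_crossentropy",                           # custom loss function
    metrics=["accuracy"],
)
model.summary()  # print the model architecture

# Early stopping callback to stop training before the model overfits
early_stop = callbacks.EarlyStopping(monitor="val_loss", patience=5,
                                     restore_best_weights=True)
# model.fit(train_dataset, validation_data=validation_dataset,
#           epochs=100, callbacks=[early_stop])
```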
It's very hard to build a computer vision model from scratch, as you need a wide variety of input data to make the model generalize well, and training such models can take days on a GPU. To make building your model easier and faster we are using transfer learning. This lets you piggyback on a well-trained model, only re-training the upper layers of a neural network, leading to much more reliable models that train in a fraction of the time and work with substantially smaller datasets.
Tutorial
Want to see MobileNetV2 SSD FPN-Lite models in action? Check out our tutorial.
To build your first object detection model using MobileNetV2 SSD FPN-Lite:
Create a new project in Edge Impulse.
Make sure to set your labelling method to 'Bounding boxes (object detection)'.
Collect and prepare your dataset as in
Resize your image to fit 320x320px
Add an 'Object Detection (Images)' block to your impulse.
Under Images, choose RGB.
Under Object detection, select 'Choose a different model' and select 'MobileNetV2 SSD FPN-Lite 320x320'
You can start your training with a learning rate of '0.15'
Click on 'Start training'
Here, we are using the MobileNetV2 SSD FPN-Lite 320x320 pre-trained model. The model has been trained on the COCO 2017 dataset with images scaled to 320x320 resolution.
In the MobileNetV2 SSD FPN-Lite, we have a base network (MobileNetV2), a detection network (Single Shot Detector or SSD) and a feature extractor (FPN-Lite).
Base network:
MobileNet, like VGG-Net, LeNet, and AlexNet, is a neural network architecture. The base network provides high-level features for classification or detection. If you add a fully connected layer and a softmax layer at the end of one of these networks, you have a classifier.
But you can remove the fully connected and softmax layers and replace them with a detection network, like SSD or Faster R-CNN, to perform object detection.
Detection network:
The most common detection networks are SSD (Single Shot Detector) and RPN (Region Proposal Network).
When using SSD, we only need to take one single shot to detect multiple objects within the image. On the other hand, region proposal network (RPN) based approaches, such as the R-CNN series, need two shots: one for generating region proposals and one for detecting the object in each proposal.
As a consequence, SSD is much faster than RPN-based approaches, but often trades accuracy for real-time processing speed. SSD models also tend to have issues detecting objects that are too close together or too small.
Feature Pyramid Network:
Detecting objects at different scales is challenging, in particular for small objects. A Feature Pyramid Network (FPN) is a feature extractor designed around the feature pyramid concept to improve accuracy and speed.
MobileNetV2 SSD FPN-Lite 320x320 is available with
Performance calibration allows you to test, fine-tune, and simulate running event detection models using continuous real-world or synthetically generated streams of data. It is designed to provide an immediate understanding of how your model is expected to perform in the field.
Currently only available for Audio data projects
Performance calibration is currently only available for projects that contain audio data. It's designed for use with projects that are detecting specific events (such as spoken keywords), as opposed to classifying ambient conditions. Please stay tuned for future information on support for other types of sensor data!
Performance Calibration is a tool for testing and configuring embedded machine learning pipelines for event detection. It provides insight into how your pipeline will perform on streaming data, which is what your application will encounter in the real world. It works within Studio, and does not require you to deploy to a physical device.
After testing is complete, you can use Performance Calibration to configure a post-processing algorithm that will interpret the output of your ML pipeline, transforming it into a stream of actionable events. The results of testing are used to help guide selection of the optimal post-processing algorithm for your use case.
For example, a developer working on a keyword spotting application could use Performance Calibration to understand how well their ML pipeline detects keywords in a sample of real world audio, and to select the post-processing algorithm that provides the best quality output.
Performance Calibration gives you an accurate prediction of how your ML pipeline will perform when it is deployed in the real world. Analyzing real world performance before deployment in the field allows you to iterate on your pipeline much more quickly, helping you identify and solve common performance issues much earlier in the process.
Interpreting the output of an ML pipeline on streaming data requires a post-processing algorithm, which edge ML developers have traditionally had to write and tune by hand, balancing the trade-off between false positives and false negatives to fit their particular use case. By quantifying and automating this process, Performance Calibration gives developers precise control over the trade-offs they select for their application.
Performance can be measured using either recordings of real-world data, or with realistic synthetic recordings generated using samples from your test dataset. This allows you to easily test your model’s performance under various scenarios, such as varying levels of background noise, or with different environmental sounds that might occur in your deployment environment.
When Performance Calibration runs, your ML pipeline is run across the input data with the same latency as is predicted for the target selected on the Dashboard page of your project. This results in a set of raw predictions which must be filtered by a post-processing algorithm to produce a signal every time a particular event class is detected.
The post-processing algorithm has configurable parameters that determine the overall performance of the pipeline. These parameters can be adjusted to control the trade-off between false acceptance rate (how often an event is mistakenly detected) and false rejection rate (how often an event is mistakenly ignored). This allows you to determine how sensitive your application is to inputs.
False positives and false negatives
No ML model is perfect, so developers using ML for event detection always need to pick a trade-off between false positives and false negatives. The appropriate trade-off depends on the application. For example, if you're attempting to detect a dangerous situation in an industrial facility, it may be important to minimize false negatives. On the other hand, if you're concerned about annoying users with unintentional activations of a smart home device, you may wish to minimize false positives.
The following page walks through the process of using Performance Calibration with an example project. Check out our blog post for more information!
First, make sure you have an audio project in your Edge Impulse account. No projects yet? Follow one of our tutorials to get started:
Or, clone the "Bird sound classifier" project that is used in this documentation to your Edge Impulse account: https://studio.edgeimpulse.com/public/16060/latest
Once you've trained your impulse, select the Performance calibration tab and set your testing configuration settings:
Select noise labels. Which label is used to represent generic background noise or "silence"?
Select any other labels that should be ignored by your application, i.e. other classes that are equivalent to background noise or "silence".
Choose an audio sample type: simulated real world audio or upload your own in a zip file.
Then, click Run test.
Simulated real world audio is a synthetically generated audio stream consisting of samples taken from your testing dataset and layered on top of artificial background noise. For free Edge Impulse projects, you can choose to generate either 10 minutes or 30 minutes of simulated real world audio.
Already have a long, real-world recording of background noise which includes your target model's classes? Upload your own audio sample (.wav) in a zip file, along with its Label Tracks in Audacity format (.txt).
Your impulse can be configured with a post-processing algorithm that will minimize either false activations or false rejections. The chart shows a range of suggested configs. If you save one, it will be used when your impulse is deployed.
Selecting from the various "Suggested config" icons on the FRR/FAR chart will update the Selected config information. Click on Save selected config to use the selected FAR and FRR trade off when your impulse is deployed. This config information is also accessible in the deployed Edge Impulse library.
Mean FAR: The mean False Acceptance Rate. Measures how often labels are mistakenly detected. Does not include statistics for noise labels.
Mean FRR: The mean False Rejection Rate. Measures how often events are mistakenly missed. Does not include statistics for noise labels.
Averaging window duration (ms): The raw inference results are averaged across this length of time.
Detection threshold: A class is considered a positive match when its averaged score exceeds this threshold.
Suppression period (ms): Matches are ignored for this length of time following a positive result.
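To illustrate how the three parameters above interact, here is a hedged sketch of a generic event-detection post-processing loop. It is not the algorithm shipped in the Edge Impulse library, and the default values are placeholders.

```python
from collections import deque

def detect_events(scores, timestamps_ms, window_ms=500,
                  threshold=0.8, suppression_ms=1000):
    """scores: per-inference probability for one class; timestamps_ms: when each inference ran."""
    history = deque()
    last_detection_ms = None
    events = []
    for t, score in zip(timestamps_ms, scores):
        # Averaging window: keep only results from the last window_ms milliseconds
        history.append((t, score))
        while history and history[0][0] < t - window_ms:
            history.popleft()
        avg = sum(s for _, s in history) / len(history)

        # Suppression period: ignore matches right after a positive result
        suppressed = last_detection_ms is not None and t - last_detection_ms < suppression_ms

        # Detection threshold: report an event when the averaged score exceeds it
        if avg > threshold and not suppressed:
            events.append(t)
            last_detection_ms = t
    return events

print(detect_events([0.1, 0.9, 0.95, 0.97], [0, 250, 500, 750]))  # [750]
```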
Shows the performance statistics for each label.
FAR: False Acceptance Rate. Measures how often a label is mistakenly detected.
FRR: False Rejection Rate. Measures how often a label is mistakenly missed.
True Positives: The number of times each label was correctly triggered.
False Positives: The number of times each label was incorrectly triggered.
True Negatives: The number of times each label was correctly not triggered.
False Negatives: The number of times each label was incorrectly not triggered.
False acceptance rate and false rejection rate
FAR is also sometimes known as the False Positive Rate, and FRR as the False Negative Rate. These industry-standard metrics are calculated as follows:
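The formulas themselves are rendered in the Studio; as a hedged restatement, these are the conventional definitions in terms of the true/false positive/negative counts listed above:

```python
def far(false_positives, true_negatives):
    """False Acceptance Rate: fraction of negatives that were mistakenly accepted."""
    return false_positives / (false_positives + true_negatives)

def frr(false_negatives, true_positives):
    """False Rejection Rate: fraction of positives that were mistakenly rejected."""
    return false_negatives / (false_negatives + true_positives)

print(far(2, 98), frr(5, 45))  # 0.02 0.1
```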
Shows any errors your impulse makes on a sample of data, with a table of results.
Error: False positives are displayed in red while false negatives are displayed in blue.
Type: Spurious match, incorrect match, duplicate match, or blank.
Label: The data label the model predicted in the audio stream.
Start time: The timestamp starting location of the selected error in the audio data stream.
Play button: Preview the audio stream at the error's start time.
What we refer to as "Ground Truth" in this context is the sound/label association that the synthetically generated audio contains at a given time.
Incorrect match: A detection matches the wrong ground truth
Spurious match: This match detection has not been associated with any ground truth.
Duplicate match: The same ground truth was detected more than once. The first correct detection is considered a true positive but subsequent detections are considered false positives.
The two most common image processing problems are image classification and object detection.
Image classification takes an image as an input and outputs what type of object is in the image. This technique works great, even on microcontrollers, as long as we only need to detect a single object in the image.
On the other hand, object detection takes an image and outputs information about the class, number, position (and, possibly, size) of the objects in the image.
Edge Impulse provides two different methods to perform object detection:
Using MobileNetV2 SSD FPN
Using FOMO
Specifications | MobileNetV2 SSD FPN | FOMO |
---|---|---|
Labelling method | Bounding boxes | Bounding boxes |
Input size | 320x320 | Square (any size) |
Image format | RGB | Greyscale & RGB |
Output | Bounding boxes | Centroids |
MCU | ❌ | ✅ |
CPU/GPU | ✅ | ✅ |
Limitations | - Works best with big objects - Models use high compute resources (in the edge computing world) - Image size is fixed | - Works best when objects have similar sizes & shapes - The size of the objects is not available - Objects should not be too close to each other |
Live classification lets you validate your model with data captured directly from any device or supported development board. This gives you a picture of how your model will perform with real world data. To achieve this, go to Live classification and connect the device or development board you want to capture data from.
All of your connected devices and sensors will appear under Devices as shown below. The devices can be connected through the Edge Impulse CLI or WebUSB:
To perform live classification using your phone, go to Devices and click Connect a new device then select "Use your mobile phone". Scan the QR code using your phone then click Switch to classification mode and start sampling.
To perform live classification using your computer, go to Devices and click Connect a new device then select "Use your computer". Give permissions on your computer then click Switch to classification mode and start sampling.
If you have selected the Classification learning block in the Create impulse page, a NN Classifier page will show up in the menu on the left. This page becomes available after you've extracted your features from your DSP block.
Tutorials
Want to see the Classification block in action? Check out our tutorials:
The basic idea is that a neural network classifier will take some input data, and output a probability score that indicates how likely it is that the input data belongs to a particular class.
So how does a neural network know what to predict? The neural network consists of a number of layers, each of which is made up of a number of neurons. The neurons in the first layer are connected to the neurons in the second layer, and so on. The weight of a connection between two neurons is randomly initialized at the beginning of the training process. The neural network is then given a set of training data: examples for which the correct answer is known. The network's output is compared to the correct answer and, based on the results, the weights of the connections between the neurons are adjusted. This process is repeated many times, until the network has learned to predict the correct answer for the training data.
A particular arrangement of layers is referred to as an architecture, and different architectures are useful for different tasks. This way, after a lot of iterations, the neural network learns; and will eventually become much better at predicting new data.
On this page, you can configure the model and the training process, and get an overview of your model's performance.
Number of training cycles: Each time the training algorithm makes one complete pass through all of the training data with back-propagation and updates the model's parameters as it goes, it is known as an epoch or training cycle.
Learning rate: The learning rate controls how much the model's internal parameters are updated during each step of the training process. You can also see it as how fast the neural network will learn. If the network overfits quickly, you can reduce the learning rate.
Validation set size: The percentage of your training set held apart for validation, a good default is 20%
Auto-balance dataset: Mixes in more copies of data from classes that are uncommon. This might help make the model more robust against overfitting if you have little data for some classes.
Depending on your project type, we may offer to choose between different architecture presets to help you get started.
The neural network architecture takes your extracted features as inputs and passes them through each layer of your architecture. In the classification case, the last layer is a softmax layer; it is this layer that gives the probability of belonging to each of the classes.
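As a tiny illustration of what that final softmax layer does (the logit values here are arbitrary placeholders):

```python
import numpy as np

logits = np.array([2.1, 0.3, -1.0, 0.5])        # raw outputs, one per class
probs = np.exp(logits) / np.exp(logits).sum()    # softmax
print(probs, probs.sum())                        # per-class probabilities, summing to 1.0
```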
From the visual (simple) mode, you can add the following layers:
If you have advanced knowledge in machine learning and Keras, you can switch to Expert Mode and access the full Keras API to use custom architectures:
This panel displays the output logs during the training. The previous training logs can also be retrieved from the Jobs tab in the Dashboard page (enterprise feature).
This section gives an overview of your model's performance and helps you evaluate your model. It can help you determine whether the model is capable of meeting your needs or whether you need to test other hyperparameters and architectures.
From the Last training performances you can retrieve your validation accuracy and loss.
The Confusion matrix is one of the most useful tools to evaluate a model. It tabulates all of the correct and incorrect responses a model produces for a given set of data. The labels on the side correspond to the actual labels in each sample, and the labels on the top correspond to the predicted labels from the model.
The feature explorer, as in the processing block views, indicates the spatial distribution of your input features. On this page, you can visualize which ones have been correctly classified and which ones have not.
On-device performance: Based on the target you chose in the Dashboard page, we will output estimations for the inferencing time, peak RAM usage and flash usage. This will help you validate that your model will be able to run on your device based on its constraints.
The EON Tuner helps you find and select the best embedded machine learning model for your application within the constraints of your target device. The EON Tuner analyzes your input data, potential signal processing blocks, and neural network architectures - and gives you an overview of possible model architectures that will fit your chosen device's latency and memory requirements.
EON Tuner Search Space
For many projects, you will need to constrain the EON Tuner to use steps that are defined by your hardware, by your customers or by your internal knowledge.
For example, you can be constrained to use a grayscale camera, your engineers have already worked on a dedicated digital signal processing method to pre-process your sensor data or you just have the feeling that a particular neural network architecture will be more suited for a project.
In those cases, you can use the EON Tuner Search Space to define the scope of your project.
First, make sure you have an audio, motion, or image classification project in your Edge Impulse account to run the EON Tuner with. No projects yet? Follow one of our tutorials to get started:
Log in to the Edge Impulse Studio and open a project.
Select the EON Tuner tab.
Click the Configure target button to select your model’s dataset category, target device, and time per inference (in ms).
Click on the Dataset category dropdown and select the use case unique to your motion, audio, or image classification project.
Click Save and then select Start EON Tuner
Wait for the EON Tuner to finish running, then click the Select button next to your preferred DSP/neural network model architecture to save as your project’s primary blocks:
Now you’re ready to deploy your automatically configured Edge Impulse model to your target edge device!
The EON Tuner performs end-to-end optimizations, from the digital signal processing (DSP) algorithm to the machine learning model, helping you find the ideal trade-off between these two blocks to achieve optimal performance on your target hardware. The unique features and options available in the EON Tuner are described below.
The Tuner can directly analyze the performance on any device fully supported by Edge Impulse. If you are targeting a different device, select a similar class of processor or leave the target as the default. You'll have the opportunity to further refine the EON tuner results to fit your specific target and application later.
The EON Tuner currently supports three different types of sensor data: motion, images, and audio. From these, the tuner can optimize for different types of common applications or dataset categories.
The EON Tuner evaluates different configurations for creating samples from your dataset. For time series data, the tuner tests different sample window sizes and increment amounts. For image data, the tuner compares different image resolutions.
Depending on the selected dataset category, the EON Tuner considers a variety of Processing blocks when evaluating model architectures. The EON Tuner will test different parameters and configurations of these processing blocks.
Different model architectures, hyper-parameters, and even data augmentation techniques are evaluated by the EON Tuner. The tuner combines these different neural networks with the processing and input options described above, and then compares the end-to-end performance
During operation, the tuner first generates many variations of input, processing, and learning blocks. It then schedules training and testing of each variation. The top level progress bar shows tests started (blue stripes) as well as completed tests (solid blue), relative to the total number of generated variations.
Detailed logs of the run are also available. To view them, click on the button next to Target shown below.
As results become available, they will appear in the tuner window. Each result shows the on-device performance and accuracy, as well as details on the input, processing, and learning blocks used. Clicking Select sets a result as your project's primary impulse, and from there you can view or modify the design in the Impulse Design tabs.
While the EON Tuner is running, you can filter results by job status, processing block, and learning block categories.
View options control what information is shown in the tuner results. You can choose which dataset is used when displaying model accuracy, as well as whether to show the performance of the unoptimized float32 or the quantized int8 version of the neural network.
Sorting options are available to find the parameters best suited to a given application or hardware target. For constrained devices, sort by RAM to show options with the smallest memory footprint, or sort by latency to find models with the lowest number of operations per inference. It's also possible to sort by label, finding the best model for identifying a specific class.
The selected sorting criteria will be shown in the top left corner of each result.
Custom blocks are cloud jobs that can be hosted and used on Edge Impulse. They serve a dedicated task, are extremely flexible, let you customize your experience, and shorten your time-to-market.
Transformation blocks - to fetch, sort, validate, combine and transform existing data into robust datasets that can be imported into your projects.
Deployment blocks - to create custom deployment targets for your products.
Custom processing (DSP) blocks - to create and host your custom signal processing techniques and use them directly in your projects.
Custom learning (ML) blocks - to use your custom models and load pre-trained weights with PyTorch, Keras or scikit-learn.
One of the most powerful features in Edge Impulse are the built-in deployment targets (under Deployment in the Studio), which let you create ready-to-go binaries for development boards, or custom libraries for a wide variety of targets that incorporate your trained impulse. You can also create custom deployment blocks for your organization. This lets developers quickly iterate on products without getting your embedded engineers involved, lets your customers build personalized firmware using their own data, or lets you create custom libraries.
In this tutorial you'll learn how to use custom deployment blocks to create a new deployment target, and how to make this target available in the Studio for all users in the organization.
Only available for enterprise customers
Organizational features are only available for enterprise customers. View our pricing for more information.
You'll need:
The Edge Impulse CLI.
If you receive any warnings that's fine. Run edge-impulse-blocks afterwards to verify that the CLI was installed correctly.
Deployment blocks use Docker containers, a virtualization technique which lets developers package up an application with all dependencies in a single package. If you want to test your blocks locally, you'll also need Docker installed on your machine (this is not a requirement).
Then, create a new folder on your computer named custom-deploy-block.
When a user deploys with a custom deployment block two things happen:
A package is created that contains information about the deployment (like the sensors used, frequency of the data, etc.), any trained neural network in .tflite and SavedModel formats, the Edge Impulse SDK, and all DSP and ML blocks as C++ code.
This package is then consumed by the custom deployment block, which can incorporate it with a base firmware, or repackage it into a new library.
To obtain this package go to your project's Dashboard, look for Administrative zone, enable Custom deploys, and click Save.
If you now go to the Deployment page, a new option appears under 'Create library':
Once you click Build you'll receive a ZIP file containing five items:
trained.tflite - if you have a neural network in the project, this contains the neural network in .tflite format. This network is already fully quantized if you choose the int8 optimization, otherwise this is the float32 model.
trained.savedmodel.zip - if you have a neural network in the project, this contains the full TensorFlow SavedModel. Note that we might update the TensorFlow version used to train these networks at any time, so rely on the compiled model or the TFLite file where possible.
model-parameters - impulse and block configuration in C++ format. Can be used by the SDK to quickly run your impulse.
tflite-model - neural network as source code in a way that can be used by the SDK to quickly run your impulse.
Store the unzipped file under custom-deploy-block/input.
With the basic information in place we can create a new deployment block. Here we'll build a standalone application that runs our impulse on Linux, very useful when running your impulse on a gateway or desktop computer. First, open a command prompt or terminal window, navigate to the custom-deploy-block folder (that you created under 1.), and run:
This will prompt you to log in, and enter the details for your block.
Unzip under custom-deploy-block/app.
To build this application we need to combine the application with the edge-impulse-sdk, model-parameters and tflite-model folders, and invoke the (already included) Makefile.
To build the application we use Docker, a virtualization technique which lets developers package up an application with all dependencies in a single package. In this container we'll place the build tools required for this application, and scripts to combine the trained impulse with the base application.
First, let's create a small build script. As a parameter you'll receive --metadata, which points to the deployment information. In here you'll also get information on the input and output folders where you need to read from and write to.
Create a new file called custom-deploy-block/build.py and add:
build.py
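The original snippet is not reproduced here, but a hedged, illustrative sketch of such a build script could look like the following. The metadata keys (folders.input / folders.output) and the /app path are assumptions for this sketch, not a documented contract; check the deployment metadata you downloaded for the actual structure.

```python
import argparse, json, os, shutil, subprocess

parser = argparse.ArgumentParser(description='Custom deployment block')
parser.add_argument('--metadata', type=str, required=True)
args = parser.parse_args()

with open(args.metadata) as f:
    metadata = json.load(f)

# Assumption: the metadata lists the input/output folders to read from and write to
input_dir = metadata['folders']['input']
output_dir = metadata['folders']['output']

# Copy the SDK, model parameters and generated model sources next to the base app
app_dir = '/app'   # assumption: the Dockerfile copied the base application here
for d in ['edge-impulse-sdk', 'model-parameters', 'tflite-model']:
    shutil.copytree(os.path.join(input_dir, d), os.path.join(app_dir, d), dirs_exist_ok=True)

# Build the standalone application with the included Makefile, then package it up
subprocess.run(['make', '-j'], cwd=app_dir, check=True)
shutil.make_archive(os.path.join(output_dir, 'deploy'), 'zip', app_dir)
```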
Next, we need to create a Dockerfile, which contains all dependencies for the build. These include GNU Make, a compiler, and both the build script and the base application.
Create a new file called custom-deploy-block/Dockerfile and add:
Dockerfile
To test the build script we first build the container, then invoke it with the files from the input directory. Open a command prompt or terminal, navigate to the custom-deploy-block folder and:
Build the container:
Invoke the build script - this mounts the current directory in the container under /home, and then passes the downloaded metadata script to the container:
Or if you run Windows or macOS, you can use Docker to run this application:
With the deployment block ready you can make it available in Edge Impulse. Open a command prompt or terminal window, navigate to the folder you created earlier, and run:
This packages up your folder, sends it to Edge Impulse where it'll be built, and finally adds it to your organization. The deployment block is now available in Edge Impulse under Deployment blocks. You can go here to set the logo, update the description, and set extra command line parameters.
The deployment block is automatically available for all organizational projects. Go to the Deployment page on a project, and you'll find a new section 'Custom targets'. Select your new deployment target and click Build.
And now you'll have a freshly built binary from your own deployment block!
Custom deployment blocks are a powerful tool for your organization. They let you build binaries for unreleased products, package up impulses as custom libraries, or let your customers deploy to private targets (if you add an external collaborator to a project they'll have access to the blocks as well). Because the deployment blocks are integrated with your project and hosted by Edge Impulse, everyone, from FAE to R&D developer, can now iterate on on-device models without getting your embedded engineers involved.
For many projects, you will need to constrain the EON Tuner to use steps that are defined by your hardware, your customers, or your expertise.
For example:
Your project requires a grayscale camera because you have already purchased the hardware.
Your engineers have already spent hours working on a dedicated digital signal processing method that has been proven to work with your sensor data.
You have the feeling that a particular neural network architecture will be more suited for your project.
This is why we developed an extension of the EON Tuner: the EON Tuner Search Space.
Please first read the EON Tuner documentation to configure your Target, Dataset category and desired Time per inference.
The Search Space works with templates. A template can be considered a config file where you define your constraints. Although templates may seem hard to use at first, once you understand the core concept, this tool is extremely powerful!
A blank template looks like the following:
To understand the core concepts, we recommend having a look at the available templates. We provide templates for different dataset categories as well as one for your current impulse if it has already been trained.
Elements inside an array are considered as parameters. This means you can stack several combinations of inputBlocks|dspBlocks|learnBlocks in your templates, and each block can contain several elements:
You can easily add pre-defined blocks using the + Add block section.
Example of a template where we constrained the search space to use 96x96 grayscale images to compare a neural network architecture with a transfer learning architecture using MobileNetv1 and v2:
Example of a template where we want to compare, on the one side, MFCC vs MFE pre-processing with a custom NN architecture and on the other side, keyword spotting transfer learning architecture:
Only available for enterprise customers
Support for custom DSP & ML blocks: the EON Tuner can now use custom organization DSP & ML blocks by adding them to the custom search space. This feature will only be available for enterprises.
The parameters set in the custom DSP block are automatically retrieved.
Example using a custom ToF (Time of Flight) pre-processing block:
Example using EfficientNet (available through a custom ML block) on a dataset containing images of 4 cats:
Upload portals are a secure way to let external parties upload data to your datasets. Through an upload portal they get an easy user interface to add data, but they have no access to the content of the dataset, nor can they delete any files. Data that is uploaded through the portal can be stored on-premise or in your own cloud infrastructure.
In this tutorial we'll set up an upload portal, show you how to add new data, and how to show this data in Edge Impulse for further processing.
Only available for enterprise customers
Organizational features are only available for enterprise customers. View our pricing for more information.
Data is stored in storage buckets, which can either be hosted by Edge Impulse, or in your own infrastructure.
With your storage bucket configured you're ready to set up your first upload portal. In your organization go to Data > Upload portals and choose Create new upload portal. Here, select a name, a description, the storage bucket, and a path in the storage bucket.
Note: You'll need to enable CORS headers on the bucket. If these are not configured you'll get prompted with instructions. Talk to your user success engineer (when your data is hosted by Edge Impulse), or your system administrator to configure this.
After your portal is created a link is shown. This link contains an authentication token, and can be shared directly with the third party.
Click the link to open the portal. If you ever forget the link: no worries. Click the ⋮
next to your portal, and choose View portal.
To upload data you can now drag & drop files or folders to the drop zone on the right, or use Create new folder to first create a folder structure. There's no limit to the number of files you can upload here, and all files are hashed, so if you upload a file that's already present it will be skipped.
Note: Files with the same name but with a different hash are overwritten.
Mount the portal directly into a transformation block via Custom blocks > Transformation blocks > Edit block, and select the portal under mount points.
Here's a Python script which uploads, lists and downloads data to a portal. To upload data you'll need to authenticate with a JWT token, see below this script for more info.
And here's a script to generate JWT tokens:
Click on the DSP and Neural Network tabs within your Edge Impulse project to see the parameters the EON Tuner has generated and selected for your dataset, use case, and target device hardware
deployment-metadata.json - this contains all information about the deployment, like the names of all classes, the frequency of the data, full impulse configuration, and quantization parameters. A specification can be found here.
edge-impulse-sdk - a copy of the latest Edge Impulse SDK.
Next, we'll add the application. The base application can be found at .
Voila. You now have an output folder which contains a ZIP file. Unzip output/deploy.zip and you now have a standalone application which runs your impulse. If you run Linux you can invoke this application directly (grab some data from 'Live classification' for the features, see ):
Deployment blocks do not have access to the internet by default. If you need this, or if you need to pull additional information from the project (e.g. access to DSP blocks), you can set the 'privileged' flag on a deployment block. This will enable outside internet access, and will pass in the project.apiKey parameter in the metadata (if a development API key is set) that you can use to authenticate with the Edge Impulse API.
You can also use custom deployment blocks with the other organizational features, and can use this to set up powerful pipelines automating , , training new impulses and then deploying back to your device - either through the UI, or via the API. If you're interested in deployment blocks or any of the other enterprise features,
Organizational features are only available for enterprise customers. View our pricing for more information.
If you want to process data in a portal as part of a you can either:
Mount the bucket that the portal is in, as a transformation block. This will also give you access to all other data in the bucket, very useful if you need to sync other data (see ).
If the data in your portal is already in the right format you can also directly import the uploaded data to your project. In your project view, go to , select 'Upload portal' and follow the steps of the wizard:
If you need a secure way for external parties to contribute data to your datasets then upload portals are the way to go. They offer a friendly user interface, upload data directly into your storage buckets, and give you an easy way to use the data directly in Edge Impulse.
Any questions, or interested in the enterprise version of Edge Impulse? for more information.
Your Edge Impulse organization helps your team with the full lifecycle of your TinyML deployment. It contains tools to collect and maintain large datasets, allows your data scientists to quickly access relevant data through their familiar tools, adds versioning and traceability to your machine learning models, and lets you quickly create new Edge Impulse projects for on-device deployment.
Only available for enterprise customers
Organizational features are only available for enterprise customers. View our pricing for more information.
To get started, follow these tutorials:
User management - to add collaborators with different access rights.
Upload portals - to allow external parties to securely contribute data to your datasets.
Custom blocks - to match any specific use cases using dedicated cloud jobs.
Research data - to explain how to deal with such complex data infrastructure.
Since the creation of Edge Impulse, we have been helping customers to deal with complex data pipelines, complex data transformation methods and complex clinical validation studies.
In most cases, before even thinking about machine learning algorithms, researchers need to build quality datasets from real-world data. This data comes from various devices (prototype devices being developed vs clinical/industrial-grade reference devices), has different formats (Excel sheets, images, CSV, JSON, etc.), and is stored in various places (a researcher's computer, Dropbox folders, Google Drive, S3 buckets, etc.).
Dealing with such complex data infrastructure is time-consuming and expensive to develop and maintain. With this Research data section, we want to help you understand how to create a full research data pipeline.
We have built a health reference design that describes an end-to-end ML workflow for building a wearable health product using Edge Impulse. It covers an activity study in a research lab, where data is recorded from the wearable end device (PPG + accelerometer), a reference device (Polar H10 HR monitor), plus labels (e.g. sitting, running, biking). The data is collected and validated, then written to a research dataset in an Edge Impulse organization, and finally imported into an Edge Impulse project where we train a classifier.
It handles data coming from multiple sources, data alignment, and a multi-stage pipeline before the data is imported into an Edge Impulse project. We won't cover all the code snippets in detail; our solutions engineers can help you set up this end-to-end ML workflow.
Within an organization you can work on one or more projects with multiple people. These can be colleagues, outside researchers, or even members of the community. They will only get access to the specific data in the project, and not to any of the raw data in your organizational datasets.
Only available for enterprise customers
Organizational features are only available for enterprise customers. View our pricing for more information.
To invite a user to an organization, click on the "Add user" button, enter the email address and select the role:
Each one of the users can have different roles:
Admins have full rights on the organization
Members have full access to the datasets and custom blocks, but cannot join a project without being invited
Guests have only limited access to the selected datasets
To give someone access, go to your project's dashboard, and find the "Collaborators" widget. Click the '+' icon, and type the username or e-mail address of the other user. This user needs to have an Edge Impulse account already.
Organizational datasets contain a powerful query system which lets you explore and slice data. You control the query system through the 'Filter' text box, and you use a language which is very similar to SQL (documentation).
Only available for enterprise customers
Organizational features are only available for enterprise customers. View our pricing for more information.
For example, here are some queries that you can make:
dataset like '%AMS Activity Study%' - returns all items and files from the study.
bucket_name = 'edge-impulse-health-reference-design' AND --labels sitting,walking - returns data labeled 'sitting' or 'walking' that is stored in the 'edge-impulse-health-reference-design' bucket.
metadata->ei_check = 0 - returns data that has a metadata field 'ei_check' which is '0'.
created > DATE('2022-08-01') - returns all data that was created after Aug 1, 2022.
After you've created a filter, you can select one or more data items, and select Actions...>Download selected to create a ZIP file with the data files. The file count reflects the number of files returned by the filter.
The previous queries all returned all files for a data item. But you can also query files through the same filter. In that case the data item will be returned, but only with the files selected. For example:
file_name LIKE '%.png' - returns all files that end with .png.
If you have an interesting query that you'd like to share with your colleagues, you can just share the URL. The query is already added to it automatically.
These are all the available fields in the query interface:
dataset - Dataset.
bucket_id - Bucket ID.
bucket_name - Bucket name.
bucket_path - Path of the data item within the bucket.
id - Data item ID.
name - Data item name.
total_file_count - Number of files for the data item.
total_file_size - Total size of all files for the data item.
created - When the data item was created.
metadata->key - Any item listed under 'metadata'.
file_name - Name of a file.
file_names - All filenames in the data item, which you can use in conjunction with CONTAINS. E.g. find all items with file X, but not file Y: file_names CONTAINS 'x' AND not file_names CONTAINS 'y'.
Transformation blocks take raw data from your organizational datasets and convert it into a different dataset or into files that can be loaded into an Edge Impulse project. You can use transformation blocks to include only certain parts of individual data files, calculate long-running features like a running mean or derivatives, or efficiently generate features with different window lengths. Transformation blocks can be written in any language, and run on the Edge Impulse infrastructure.
In this tutorial we build a Python-based transformation block that loads Parquet files, calculates features from them, and then writes a new file back to your dataset. If you haven't done so already, go through the prerequisite tutorial first.
Only available for enterprise customers
Organizational features are only available for enterprise customers. View our pricing for more information.
You'll need:
The Edge Impulse CLI. If you receive any warnings during installation that's fine; run edge-impulse-blocks afterwards to verify that the CLI was installed correctly.
The gestures.parquet file, which you can use to test the transformation block. This contains some data from the dataset in Parquet format.
Transformation blocks use Docker containers, a virtualization technique which lets developers package up an application with all its dependencies in a single package. If you want to test your blocks locally (this is not a requirement), you'll also need Docker desktop installed on your machine.
1.1 - Parquet schema
This is the Parquet schema for the gestures.parquet file which we'll transform:
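The column names used in the sketches later in this tutorial (label, timestamp, accX, accY, accZ) are assumptions rather than values taken from the actual file; an illustrative schema along those lines would look like this:

```
message gestures {
  required binary label (UTF8);
  required int64 timestamp;
  required double accX;
  required double accY;
  required double accZ;
}
```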
To build a transformation block, open a command prompt or terminal window, create a new folder, and run:
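With the Edge Impulse CLI installed, the blocks tool is used to scaffold a new block; this is typically:

```
edge-impulse-blocks init
```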
This will prompt you to log in, and enter the details for your block. E.g.:
Then, create the following files in this directory:
2.1 - Dockerfile
We're building a Python based transformation block. The Dockerfile describes our base image (Python 3.7.5), our dependencies (in requirements.txt) and which script to run (transform.py).
Note: Do not use a WORKDIR under /home! The /home path will be mounted in by Edge Impulse, making your files inaccessible.
ENTRYPOINT vs RUN / CMD
If you use a different programming language, make sure to use ENTRYPOINT to specify the application to execute, rather than RUN or CMD.
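Putting the notes above together, a Dockerfile along these lines should work (a sketch; the exact base image tag and working directory are choices, not requirements):

```dockerfile
FROM python:3.7.5

# Keep the working directory outside /home (see the note above)
WORKDIR /app

# Install the Python dependencies first, so Docker can cache this layer
COPY requirements.txt ./
RUN pip3 install --no-cache-dir -r requirements.txt

# Copy the rest of the block's source code
COPY . ./

# Use ENTRYPOINT (not RUN or CMD) so the command line arguments
# passed in by Edge Impulse reach the script
ENTRYPOINT [ "python3", "transform.py" ]
```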
2.2 - requirements.txt
This file describes the dependencies for the block. We'll be using pandas and pyarrow to parse the Parquet file, and numpy to do some calculations.
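A requirements.txt matching that description can be as simple as the following (versions are left unpinned here; pin them for reproducible builds):

```
pandas
pyarrow
numpy
```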
2.3 - transform.py
This file includes the actual application. Transformation blocks are invoked with the following parameters (as command line arguments):
--in-file or --in-directory - A file (if the block operates on a file), or a directory (if the block operates on a data item) from the organizational dataset. In this case the gestures.parquet file.
--out-directory - Directory to write files to.
--hmac-key - You can use this HMAC key to sign the output files. This is not used in this tutorial.
--metadata - Key/value pairs containing the metadata for the data item, plus additional metadata about the data item in the dataItemInfo key. E.g.:
{ "subject": "AAA001", "ei_check": "1", "dataItemInfo": { "id": 101, "dataset": "Human Activity 2022", "bucketName": "edge-impulse-tutorial", "bucketPath": "janjongboom/human_activity/AAA001/", "created": "2022-03-07T09:20:59.772Z", "totalFileCount": 14, "totalFileSize": 6347421 } }
Add the following content. This takes in the Parquet file, groups data by their label, and then calculates the RMS over the X, Y and Z axes of the accelerometer.
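A sketch of such a transform.py is shown below. The column names (label, accX, accY, accZ) and the output file name are assumptions; adjust them to the schema of your gestures.parquet file.

```python
# transform.py - sketch: RMS of the accelerometer axes, grouped by label
import argparse
import os

import numpy as np
import pandas as pd

parser = argparse.ArgumentParser(description='RMS per label from a Parquet file')
parser.add_argument('--in-file', type=str, required=True)
parser.add_argument('--out-directory', type=str, required=True)
parser.add_argument('--hmac-key', type=str, required=False)
parser.add_argument('--metadata', type=str, required=False)
args, _ = parser.parse_known_args()

# Load the Parquet file (pandas uses pyarrow under the hood)
df = pd.read_parquet(args.in_file)

# RMS = sqrt(mean(x^2)) per label, for each accelerometer axis
# (column names are assumptions - check your own schema)
rms = df.groupby('label')[['accX', 'accY', 'accZ']].apply(
    lambda g: np.sqrt((g ** 2).mean()))

os.makedirs(args.out_directory, exist_ok=True)
out_file = os.path.join(args.out_directory, 'gestures-rms.parquet')
rms.reset_index().to_parquet(out_file)
print('Written', out_file)
```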
2.4 - Building and testing the container
On your local machine
To test the transformation block locally, if you have Python and all dependencies installed, just run:
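With the gestures.parquet file in the same folder, that would be (out/ is an arbitrary output directory):

```
python3 transform.py --in-file gestures.parquet --out-directory out/
```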
Docker
You can also build the container locally via Docker, and test the block. The added benefit is that you don't need any dependencies installed on your local computer, and can thus test that you've included everything that's needed for the block. This requires Docker desktop to be installed.
To build the container and test the block, open a command prompt or terminal window and navigate to the source directory. First, build the container:
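For example (the image tag is arbitrary):

```
docker build -t transform-parquet-rms .
```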
Then, run the container (make sure gestures.parquet is in the same directory):
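For example, mounting the current directory so the container can read the input file and write its output (the /data mount path is an arbitrary choice):

```
docker run --rm -v "$PWD":/data transform-parquet-rms \
  --in-file /data/gestures.parquet --out-directory /data/out/
```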
Seeing the output
This process has generated a new Parquet file in the out/ directory containing the RMS of the X, Y and Z axes. If you inspect the content of the file (e.g. using parquet-tools) you'll see the output:
Success!
With the block ready we can push it to your organization. Open a command prompt or terminal window, navigate to the folder you created earlier, and run:
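As with the init step earlier, this uses the blocks tool from the Edge Impulse CLI:

```
edge-impulse-blocks push
```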
This packages up your folder, sends it to Edge Impulse where it'll be built, and finally adds it to your organization.
The transformation block is now available in Edge Impulse under Data transformation > Transformation blocks.
If you make any changes to the block, just re-run edge-impulse-blocks push and the block will be updated.
Next, upload the gestures.parquet file by going to Data > Add data... > Add data item, setting the name to 'Gestures' and the dataset to 'Transform tutorial', and selecting the Parquet file.
This makes the gestures.parquet file available from the Data page.
With the Parquet file in Edge Impulse and the transformation block configured you can now create a new job. Go to Data, and select the Parquet file by setting the filter to dataset = 'Transform tutorial'.
Click the checkbox next to the data item, and select Transform selected (1 file). On the 'Create transformation job' page select 'Import data into Dataset'. Under 'output dataset', select 'Same dataset as source', and under 'Transformation block' select the new transformation block.
Click Start transformation job to start the job. This pulls the data in, starts a transformation job and finally uploads the data back to your dataset. If you have multiple files selected the transformations will also run in parallel.
You can now find the transformed file back in your dataset:
Updating metadata from a transformation block
You can update the metadata of data items directly from a transformation block by creating an ei-metadata.json file in the output directory. The metadata is then applied to the new data item automatically when the transform job finishes. The ei-metadata.json file has the following structure:
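A sketch of what this file can look like (the version field and the example metadata keys are illustrative; the action values are explained in the notes below):

```json
{
    "version": 1,
    "action": "add",
    "metadata": {
        "some-key": "some-value"
    }
}
```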
Some notes:
If action is set to add, the metadata keys are added to the data item. If action is set to replace, all existing metadata keys are removed.
Environmental variables
Transformation blocks get access to the following environmental variables, which let you authenticate with the Edge Impulse API. This way you don't have to inject these credentials into the block. The variables are:
EI_API_KEY - an API key with 'member' privileges for the organization.
EI_ORGANIZATION_ID - the organization ID that the block runs in.
EI_API_ENDPOINT - the API endpoint (default: https://studio.edgeimpulse.com/v1).
Transformation blocks are a powerful feature which lets you set up a data pipeline to turn raw data into actionable machine learning features. They also give you a reproducible way of transforming many files at once, and are programmable through the API so you can automatically convert new incoming data. If you're interested in transformation blocks or any of the other enterprise features, please get in touch.
When collecting data, we split the dataset into training and testing sets. The model is trained with only the training set, and the testing set is used to validate how well the model performs on unseen data. This ensures that the model has not simply overfit the training data, which is a common occurrence.
To test your model, go to Model testing, and click Test all. The model will classify all of the test set samples and give you an overall accuracy of how your model performed.
This is also accompanied by a confusion matrix to show you how your model performs for each class.
To see a classification in detail, go to the individual sample you want to evaluate, click the three dots next to it, then select Show classification. This opens a new window that displays the expected outcome and the predicted output of your model with its accuracy. This detailed view can also give you a hint as to why an item has been misclassified.
Every learning block has a threshold. This can be the minimum confidence that a neural network needs to have, or the maximum anomaly score before a sample is tagged as an anomaly. You can configure these thresholds to tweak the sensitivity of these learning blocks. This affects both live classification and model testing.
The Himax WE-I Plus is a tiny development board with a camera, a microphone, an accelerometer and a very fast DSP - and it's fully supported by Edge Impulse. You'll be able to sample raw data, build models, and deploy trained machine learning models directly from the studio. It's available for around 65 USD from Sparkfun.
The Edge Impulse firmware for this development board is open source and hosted on GitHub: edgeimpulse/firmware-himax-we-i-plus.
To set this device up in Edge Impulse, you will need to install the following software:
The Edge Impulse CLI.
On Linux:
GNU Screen: install for example via sudo apt install screen.
Problems installing the CLI?
See the Installation and troubleshooting guide.
With all the software in place it's time to connect the development board to Edge Impulse.
Use a micro-USB cable to connect the development board to your computer.
The development board does not come with the right firmware yet. To update the firmware:
Download the latest Edge Impulse firmware, and unzip the file.
Open the flash script for your operating system (flash_windows.bat, flash_mac.command or flash_linux.sh) to flash the firmware.
Wait until flashing is complete, and press the RESET button once to launch the new firmware.
From a command prompt or terminal, run:
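That is, assuming the Edge Impulse CLI is installed:

```
edge-impulse-daemon
```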
This will start a wizard which will ask you to log in, and choose an Edge Impulse project. If you want to switch projects run the command with --clean.
Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI. See this blog post for more information.
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
With everything set up you can now build your first machine learning model with these tutorials:
Looking to connect different sensors? The Data forwarder lets you easily send data from any sensor into Edge Impulse.
If you export to the Himax WE-I Plus you could receive the error: "All licenses are in use by other developers.". Unfortunately we have a limited number of licenses for the MetaWare compiler and these are shared between all Studio users. Try again in a little bit, or export your project as a C++ Library, add it to the edgeimpulse/firmware-himax-we-i-plus project and compile locally.
If no device shows up in your OS (e.g. COMxx, /dev/tty.usbxx) after connecting the board and your USB cable supports data transfer, you may need to install the FTDI VCP driver.
CY8CKIT-062S2 Pioneer Kit and CY8CKIT-028-SENSE expansion kit required
This guide assumes you have the IoT sense expansion kit (CY8CKIT-028-SENSE) attached to a PSoC® 62S2 Wi-Fi® BLUETOOTH® Pioneer Kit
The Infineon Semiconductor PSoC® 62S2 Wi-Fi® BLUETOOTH® Pioneer Kit (Cypress CY8CKIT-062S2) enables the evaluation and development of applications using the PSoC 62 Series MCU. This low-cost hardware platform enables the design and debug of the PSoC 62 MCU and the Murata 1LV Module (CYW43012 Wi-Fi + Bluetooth Combo Chip). The PSoC 6 MCU is Infineon's latest, ultra-low-power PSoC specifically designed for wearables and IoT products. The board features a PSoC 6 MCU and a CYW43012 Wi-Fi/Bluetooth combo module. The Infineon CYW43012 is a 28nm, ultra-low-power device that supports single-stream, dual-band IEEE 802.11n-compliant Wi-Fi MAC/baseband/radio and Bluetooth 5.0 BR/EDR/LE. When paired with the IoT sense expansion kit, the PSoC® 62S2 Wi-Fi® BLUETOOTH® Pioneer Kit can be used to easily interface a variety of sensors with the PSoC™ 6 MCU platform, specifically targeted at audio and machine learning applications - all fully supported by Edge Impulse! You'll be able to sample raw data as well as build and deploy trained machine learning models to your PSoC® 62S2 Wi-Fi® BLUETOOTH® Pioneer Kit, directly from the Edge Impulse Studio.
The Edge Impulse firmware for this development board is open source and hosted on GitHub: edgeimpulse/firmware-infineon-cy8ckit-062s2.
To set this device up with Edge Impulse, you will need to install the following software:
Infineon CyProgrammer. A utility program we will use to flash firmware images onto the target.
The Edge Impulse CLI which will enable you to connect your CY8CKIT-062S2 Pioneer Kit directly to Edge Impulse Studio, so that you can collect raw data and trigger in-system inferences.
Problems installing the CLI?
See the Installation and troubleshooting guide.
Edge Impulse Studio can collect data directly from your CY8CKIT-062S2 Pioneer Kit and also help you trigger in-system inferences to debug your model, but in order to allow Edge Impulse Studio to interact with your CY8CKIT-062S2 Pioneer Kit you first need to flash it with our base firmware image.
Download the latest Edge Impulse firmware. Once downloaded, unzip it to obtain the firmware-infineon-cy8ckit-062s2.hex file, which we will be using in the following steps.
Use a micro-USB cable to connect the CY8CKIT-062S2 Pioneer Kit to your development computer (where you downloaded and installed Infineon CyProgrammer).
You can use Infineon CyProgrammer to flash your CY8CKIT-062S2 Pioneer Kit with our base firmware image. To do this, first select your board from the dropdown list on the top left corner. Make sure to select the item that starts with CY8CKIT-062S2-43012:
Then select the base firmware image file you downloaded in the first step above (i.e., the file named firmware-infineon-cy8ckit-062s2.hex). You can now press the Connect button to connect to the board, and finally the Program button to load the base firmware image onto the CY8CKIT-062S2 Pioneer Kit.
Keep Infineon CyProgrammer Handy
Infineon CyProgrammer will be needed to upload any other project built on Edge Impulse, but the base firmware image only has to be loaded once.
With all the software in place, it's time to connect the CY8CKIT-062S2 Pioneer Kit to Edge Impulse.
Use a micro-USB cable to connect the development board to your computer.
From a command prompt or terminal, run:
This will start a wizard which will ask you to log in, and choose an Edge Impulse project. If you want to switch projects run the command with --clean.
Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI. See this blog post for more information.
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices on the left sidebar. The device will be listed there:
With everything set up you can now build your first machine learning model with these tutorials:
Looking to connect different sensors? The Data forwarder lets you easily send data from any sensor into Edge Impulse.
The Nordic Semiconductor nRF5340 DK is a development board with dual Cortex-M33 microcontrollers, QSPI flash, and an integrated BLE radio - and it's fully supported by Edge Impulse. You'll be able to sample raw data, build models, and deploy trained machine learning models directly from the studio. As the nRF5340 DK does not have any built-in sensors we recommend you to pair this development board with the X-NUCLEO-IKS02A1 shield (with a MEMS accelerometer and a MEMS microphone). The nRF5340 DK is available for around 50 USD from a variety of distributors.
If you don't have the X-NUCLEO-IKS02A1 shield you can use the Data forwarder to capture data from any other sensor, and then follow the Running your impulse locally: On your Zephyr-based Nordic Semiconductor development board tutorial to run your impulse. Or, you can modify the example firmware (based on nRF Connect) to interact with other accelerometers or PDM microphones that are supported by Zephyr.
The Edge Impulse firmware for this development board is open source and hosted on GitHub: edgeimpulse/firmware-nrf52840-5340.
To set this device up in Edge Impulse, you will need to install the following software:
The Edge Impulse CLI.
On Linux:
GNU Screen: install for example via sudo apt install screen.
Problems installing the CLI?
See the Installation and troubleshooting guide.
With all the software in place it's time to connect the development board to Edge Impulse.
Remove the pin header protectors on the nRF5340 DK and plug the X-NUCLEO-IKS02A1 shield into the development board.
Note: Make sure that the shield does not touch any of the pins in the middle of the development board. This might cause issues when flashing the board or running applications.
Use a micro-USB cable to connect the development board to your computer. There are two USB ports on the development board, use the one on the short side of the board. Then, set the power switch to 'on'.
The development board does not come with the right firmware yet. To update the firmware:
The development board is mounted as a USB mass-storage device (like a USB flash drive), with the name JLINK. Make sure you can see this drive.
Drag the nrf5340-dk.bin file to the JLINK drive.
Wait 20 seconds and press the BOOT/RESET button.
From a command prompt or terminal, run:
This starts a wizard which asks you to log in and choose an Edge Impulse project. If you want to switch projects run the command with --clean.
The nRF5340 DK exposes multiple UARTs. If prompted, choose the bottom one:
Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI. See this blog post for more information.
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
With everything set up you can now build your first machine learning model with these tutorials:
Looking to connect different sensors? The Data forwarder lets you easily send data from any sensor into Edge Impulse.
If your board fails to flash new firmware (a FAIL.txt file might appear on the JLINK drive) you can also flash using nrfjprog.
Install the nRF Command Line Tools.
Flash new firmware via:
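A sketch of the command; the exact flags depend on your version of the nRF Command Line Tools, so check nrfjprog --help:

```
nrfjprog -f NRF53 --program nrf5340-dk.bin --chiperase --verify --reset
```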
The Silicon Labs xG24 Dev Kit (xG24-DK2601B) is a compact, feature-packed development platform built for the EFR32MG24 Cortex-M33 microcontroller. It provides the fastest path to develop and prototype wireless IoT products. This development platform supports up to +10 dBm output power and includes support for the 20-bit ADC as well as the xG24's AI/ML hardware accelerator. The platform also features a wide variety of sensors, a microphone, Bluetooth Low Energy and a battery holder - and it's fully supported by Edge Impulse! You'll be able to sample raw data as well as build and deploy trained machine learning models directly from the Edge Impulse Studio - and even stream your machine learning results over BLE to a phone.
The Edge Impulse firmware for this development board is open source and hosted on GitHub: edgeimpulse/firmware-silabs-xg24.
To set this device up with Edge Impulse, you will need to install the following software:
Simplicity Commander. A utility program we will use to flash firmware images onto the target.
The Edge Impulse CLI which will enable you to connect your xG24 Dev Kit directly to Edge Impulse Studio, so that you can collect raw data and trigger in-system inferences.
Problems installing the CLI?
See the Installation and troubleshooting guide.
Edge Impulse Studio can collect data directly from your xG24 Dev Kit and also help you trigger in-system inferences to debug your model, but in order to allow Edge Impulse Studio to interact with your xG24 Dev Kit you first need to flash it with our base firmware image.
Download the latest Edge Impulse firmware. Once downloaded, unzip it to obtain the firmware-xg24.hex file, which we will be using in the following steps.
Use a micro-USB cable to connect the xG24 Dev Kit to your development computer (where you downloaded and installed Simplicity Commander).
You can use Simplicity Commander to flash your xG24 Dev Kit with our base firmware image. To do this, first select your board from the dropdown list on the top left corner:
Then go to the "Flash" section on the left sidebar, and select the base firmware image file you downloaded in the first step above (i.e., the file named firmware-xg24.hex). You can now press the Flash button to load the base firmware image onto the xG24 Dev Kit.
Keep Simplicity Commander Handy
Simplicity Commander will be needed to upload any other project built on Edge Impulse, but the base firmware image only has to be loaded once.
With all the software in place, it's time to connect the xG24 Dev Kit to Edge Impulse.
Use a micro-USB cable to connect the development board to your computer.
From a command prompt or terminal, run:
This will start a wizard which will ask you to log in, and choose an Edge Impulse project. If you want to switch projects run the command with --clean.
Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI. See this blog post for more information.
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices on the left sidebar. The device will be listed there:
With everything set up you can now build your first machine learning model with these tutorials:
Looking to connect different sensors? The Data forwarder lets you easily send data from any sensor into Edge Impulse.
The Syntiant TinyML Board is a tiny development board with a microphone and accelerometer, a USB host microcontroller, and an always-on Neural Decision Processor™ featuring ultra-low-power consumption and a fully connected neural network architecture - and it's fully supported by Edge Impulse. You'll be able to sample raw data, build models, and deploy trained embedded machine learning models directly from the Edge Impulse Studio to create the next generation of low-power, high-performance audio interfaces.
The Edge Impulse firmware for this development board is open source and hosted on GitHub.
IMU data acquisition - SD Card
An SD Card is required to use IMU data acquisition as the internal RAM of the MCU is too small. You don't need the SD Card for inferencing only or for audio projects.
To set this device up in Edge Impulse, you will need to install the following software:
Select one of the 2 firmwares below for audio or IMU projects:
Insert SD Card if you need IMU data acquisition and connect the USB cable to your computer. Double-click on the script for your OS. The script will flash the Arduino firmware and a default model on the NDP101 chip.
Flashing issues
0x000000: read 0x04 != expected 0x01
Some flashing issues can occur on the Serial Flash. In this case, open a Serial Terminal on the TinyML board and send the command: :F. This will erase the Serial Flash and should fix the flashing issue.
Connect the Syntiant TinyML Board directly to your computer's USB port. Linux, Mac OS, and Windows 10 platforms are supported.
Audio - USB microphone (macOS/Linux only)
Check that the Syntiant TinyML Board enumerates as "TinyML" or "Arduino MKRZero". For example, on macOS you'll find it under System Preferences/Sound:
Audio acquisition - Windows OS
Using the Syntiant TinyML board as an external microphone for data collection doesn't currently work on Windows OS.
IMU
From a command prompt or terminal, run:
This will start a wizard which will ask you to log in and choose an Edge Impulse project. If you want to switch projects run the command with --clean.
Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI. See this blog post for more information.
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
With everything set up you can now build your first machine learning model and evaluate it using the Syntiant TinyML Board with this tutorial:
How to use Arduino-CLI with macOS M1 chip? You will need to install Rosetta 2 to run the Arduino-CLI. See details on Apple's website.
How to label my classes? The NDP101 chip expects one and only one negative class, and it should be the last in the list. For instance, if your original dataset looks like: yes, no, unknown, noise and you only want to detect the keywords 'yes' and 'no', merge the 'unknown' and 'noise' labels into a single class such as z_openset (we prefix it with 'z' so that this class comes last in the list).
The Advantech ICAM-500 series is a highly integrated industrial AI camera that significantly reduces installation and maintenance effort. It is equipped with programmable variable-focus lenses, LED illumination, a SONY industrial-grade image sensor, multi-core ARM processors, and an NVIDIA AI system-on-module.
Thanks to work done by Edge Impulse partner Scailable, the Advantech ICAM-500 is seamlessly integrated for vision-based model deployments from the Edge Impulse Studio via the Scailable Cloud Platform. The Scailable AI Manager can be installed on any Advantech NVIDIA device using Allxon.
For detailed instructions on setting up your device with the Scailable AI Manager, see these tutorials:
For an end-to-end guide on integrating your Edge Impulse model with the Scailable Cloud Platform, and deploying your Edge Impulse model to your product device, see these tutorials: