The API references for the ingestion service, remote management service, and the Studio API, as well as SDK documentation for the data acquisition and inferencing libraries, can be found in the API references section.
Edge Impulse for Linux SDKs for Node.js, Python, Go and C++
The enterprise version of Edge Impulse offers team collaboration on projects: go to Dashboard, find the Collaborators section, and click the '+' icon. If you have an interesting research or community project, we can enable collaboration on the free version of Edge Impulse as well; email hello@edgeimpulse.com.
You can also create a public version of your Edge Impulse project. This makes your project available to the whole world - including your data, your impulse design, your models, and all intermediate information - and can easily be cloned by anyone in the community. To do so, go to Dashboard, and click Make this project public.
The minimum hardware requirements for the embedded device depend on the use case: anything from a Cortex-M0+ for vibration analysis, to a Cortex-M4F for audio, a Cortex-M7 for image classification, or a Cortex-A for object detection in video. View our inference performance metrics for more details.
We use a wide variety of tools, depending on the machine learning model. For neural networks we typically use TensorFlow and Keras, for object detection models we use TensorFlow with Google's Object Detection API, and for 'classic' non-neural network machine learning algorithms we mainly use sklearn. For neural networks you can see (and modify) the Keras code by clicking ⋮ and selecting Switch to expert mode.
Another big part of Edge Impulse are the processing blocks, as they clean up the data, and already extract important features from your data before passing it to a machine learning model. The source code for these processing blocks can be found on GitHub: edgeimpulse/processing-blocks (and you can build your own processing blocks as well).
The EON Compiler compiles your neural networks to C++ source code, which then gets compiled into your application. This is great if you need the lowest RAM and ROM possible (EON typically uses 30-50% less memory than TensorFlow Lite) but you also lose some flexibility to update your neural networks in the field - as it is now part of your firmware.
By disabling EON we place the full neural network (architecture and weights) into ROM, and load it on demand. This increases memory usage, but you could just update this section of the ROM (or place the neural network in external flash, or on an SD card) to make it easier to update.
You cannot import a pre-trained model, but you can import your model architecture and then retrain. Add a neural network block to your impulse, go to the block, click ⋮, and select Switch to expert mode. You then have access to the full Keras API.
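As a rough illustration, a model defined in expert mode is plain Keras code along these lines. The input size and layer widths below mirror the small network described in the Continuous motion recognition example later in this document; treat them as placeholders, not a template shipped by Edge Impulse:

```python
# Minimal sketch of a Keras model as you might define it in expert mode.
# Input size and layer widths are illustrative placeholders.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(input_length=33, num_classes=4):
    model = models.Sequential([
        layers.Input(shape=(input_length,)),            # features from the DSP block
        layers.Dense(20, activation="relu"),
        layers.Dense(10, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```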
Edge Impulse uses UMAP (a dimensionality reduction algorithm) to project high dimensionality input data into a 3 dimensional space. This even works for extremely high dimensionality data such as images.
Yes. The enterprise version of Edge Impulse can integrate directly with your cloud service to access and transform data.
Simple answer: To get an indication of time per inference we show performance metrics in every DSP and ML block in the Studio. Multiply this by active power consumption of your MCU to get an indication of power cost per inference.
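For example, a rough back-of-the-envelope calculation might look like this (every number below is a made-up placeholder; substitute the latencies shown in your DSP and ML blocks and the figures from your MCU's datasheet):

```python
# Illustrative energy-per-inference estimate; all numbers are placeholders.
dsp_time_ms = 12.0        # DSP block latency reported in the Studio
nn_time_ms = 4.0          # ML block latency reported in the Studio
active_power_mw = 30.0    # active power draw of your MCU (from its datasheet)

inference_time_s = (dsp_time_ms + nn_time_ms) / 1000.0
energy_per_inference_mj = active_power_mw * inference_time_s   # millijoules

print(f"~{energy_per_inference_mj:.2f} mJ per inference")
print(f"~{energy_per_inference_mj * 60:.1f} mJ per hour at 1 inference per minute")
```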
More complicated answer: It depends. Normal techniques to conserve power still apply to ML, so try to do as little as possible (do you need to classify every second, or can you do it once a minute?), be smart about when to run inference (can there be an external trigger like a motion sensor before you run inference on a camera?), and collect data in a lower power mode (don't run at full speed when sampling low resolution data, and see if your sensor can use an interrupt to wake your MCU - rather than polling).
See .eim models? on the Edge Impulse for Linux pages.
Using the Edge Impulse Studio data acquisition tools (like the serial daemon or data forwarder), you can collect data samples manually with a pre-defined label. If you have a dataset that was collected outside of Edge Impulse, you can upload your dataset using the Edge Impulse CLI, data ingestion API, web uploader, enterprise data storage bucket tools or enterprise upload portals. You can then utilize the Edge Impulse Studio to split up your data into labeled chunks, crop your data samples, and more to create high quality machine learning datasets.
Yes! A "supported board" simply means that there is an official or community-supported firmware that has been developed specifically for that board that helps you collect data and run impulses. Edge Impulse is designed to be extensible to computers, smartphones, and a nearly endless array of microcontroller build systems.
You can collect data and upload it to Edge Impulse in a variety of ways. For example:
Transmitting data to the Data forwarder
Using the Edge Impulse for Linux SDK
By uploading files directly (e.g. CBOR, JSON, CSV, WAV, JPG, PNG)
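As a sketch, uploading a labeled WAV file through the ingestion service from Python could look like the snippet below. The x-label header is documented for ingestion; the exact endpoint path and multipart field name are assumptions here, so double-check them against the ingestion API reference before relying on this:

```python
# Sketch only: the endpoint path and form field name are assumptions, verify them
# in the ingestion API reference. Replace the API key, label and file with your own.
import requests

API_KEY = "ei_..."  # your project API key (placeholder)
URL = "https://ingestion.edgeimpulse.com/api/training/files"  # assumed endpoint

with open("sample.wav", "rb") as f:
    res = requests.post(
        URL,
        headers={"x-api-key": API_KEY, "x-label": "keyword"},
        files={"data": ("sample.wav", f, "audio/wav")},
    )
res.raise_for_status()
print(res.status_code, res.text)
```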
Your trained model can be deployed as part of a C++ library. It requires some effort, but most build systems will work with our C++ library, as long as that build system has a C++ compiler and there is enough flash/RAM on your device to run the library (which includes the DSP block and model).
Congratulations, you've trained your first embedded Machine Learning model! This page lists next steps you can take to make your devices smarter.
You've run your model in the browser, but you can also run it on a wide variety of devices. Head to the development boards section for a full overview. If you have a device that is not supported, no problem: you can export your model as a C++ library that runs on any embedded device. See Running your impulse locally for more information.
Making a machine learning model that responds to your voice is cool, but you can do a lot more with Edge Impulse. Here are a number of tutorials to get you started:
Your model was trained on +/- 20 seconds of data, which is a very small amount of data. To make your model more robust you can add more data.
If your model does not respond well enough on your keyword (e.g. if you have someone saying the word in a different tone or pitch), record some more data of the keyword.
If the model is too sensitive (triggers when you say something else), then say some different words and label them with the 'unknown' class.
You can record new data from your computer, your phone, or a development board. Go to Data acquisition and click Show options for instructions. Then, to split your data into individual samples, click the three dots next to a sample, and select Split sample (more info).
Think your model is awesome, and want to share it with the world? Go to Dashboard and click Make this project public. This will make your whole project - including all data, machine learning models and visualizations - available, and can be viewed and cloned by anyone with the URL.
Do you have any other questions or want to share your awesome ideas? Head to the forum!
A gentle introduction to the exciting field of embedded machine learning.
Machine learning (ML) is a way of writing computer programs. Specifically, it’s a way of writing programs that process raw data and turn it into information that is meaningful at an application level.
For example, one ML program might be designed to determine when an industrial machine has broken down based on readings from its various sensors, so that it can alert the operator. Another ML program might take raw audio data from a microphone and determine if a word has been spoken, so it can activate a smart home device.
Unlike normal computer programs, the rules of ML programs are not determined by a developer. Instead, ML uses specialized algorithms to learn rules from data, in a process known as training.
In a traditional piece of software, an engineer designs an algorithm that takes an input, applies various rules, and returns an output. The algorithm’s internal operations are planned out by the engineer and implemented explicitly through lines of code. To predict breakdowns in an industrial machine, the engineer would need to understand which measurements in the data indicate a problem and write code that deliberately checks for them.
This approach works fine for many problems. For example, we know that water boils at 100°C at sea level, so it’s easy to write a program that can predict whether water is boiling based on its current temperature and altitude. But in many cases, it can be difficult to know the exact combination of factors that predicts a given state. To continue with our industrial machine example, there might be various different combinations of production rate, temperature, and vibration level that might indicate a problem but are not immediately obvious from looking at the data.
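For instance, the boiling-water case fits comfortably in a couple of hard-coded lines (using a rough rule of thumb of about 1 °C lower boiling point per 300 m of altitude, purely for illustration):

```python
# Traditional, rule-based software: the engineer encodes the rule explicitly.
def is_boiling(temperature_c: float, altitude_m: float) -> bool:
    # Rough rule of thumb: the boiling point drops about 1 degree C per ~300 m of altitude.
    boiling_point_c = 100.0 - altitude_m / 300.0
    return temperature_c >= boiling_point_c

print(is_boiling(95.0, altitude_m=2000.0))  # True: water boils at roughly 93 C at 2000 m
```

No such simple formula exists for the industrial machine example, which is exactly where the training approach described next comes in.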
To create an ML program, an engineer first collects a substantial set of training data. They then feed this data into a special kind of algorithm, and let the algorithm discover the rules. This means that as ML engineers, we can create programs that make predictions based on complex data without having to understand all of the complexity ourselves.
Through the training process, the ML algorithm builds a model of the system based on the data we provide. We run data through this model to make predictions, in a process called inference.
There are many different types of machine learning algorithms, each with their own unique benefits and drawbacks. Edge Impulse helps engineers select the right algorithm for a given task.
Machine learning is an excellent tool for solving problems that involve pattern recognition, especially patterns that are complex and might be difficult for a human observer to identify. ML algorithms excel at turning messy, high-bandwidth raw data into usable signals, especially combined with conventional signal processing.
For example, the average person might struggle to recognize the signs of a machine failure given ten different streams of dense, noisy sensor data. However, a machine learning algorithm can often learn to spot the difference.
But ML is not always the best tool for the job. If the rules of a system are well defined and can be easily expressed with hard-coded logic, it’s usually more efficient to work that way.
Limitations of machine learning
Machine learning algorithms are powerful tools, but they can have the following drawbacks:
They output estimates and approximations, not exact answers
ML models can be computationally expensive to run
Training data can be time consuming and expensive to obtain
It can be tempting to try and apply ML everywhere—but if you can solve a problem without ML, it is usually better to do so.
Recent advances in microprocessor architecture and algorithm design have made it possible to run sophisticated machine learning workloads on even the smallest of microcontrollers. Embedded machine learning, also known as TinyML, is the field of machine learning when applied to embedded systems such as these.
Bandwidth—ML algorithms on edge devices can extract meaningful information from data that would otherwise be inaccessible due to bandwidth constraints.
Latency—On-device ML models can respond in real-time to inputs, enabling applications such as autonomous vehicles, which would not be viable if dependent on network latency.
Economics—By processing data on-device, embedded ML systems avoid the costs of transmitting data over a network and processing it in the cloud.
Reliability—Systems controlled by on-device models are inherently more reliable than those which depend on a connection to the cloud.
Privacy—When data is processed on an embedded system and is never transmitted to the cloud, user privacy is protected and there is less chance of abuse.
Welcome to Edge Impulse! We enable developers to create the next generation of intelligent device solutions with embedded machine learning. In the documentation you'll find user guides, tutorials and API documentation. For support, visit the forum.
If you're new to the idea of embedded machine learning, or machine learning in general, you may enjoy our quick guide:
Follow these three steps to build your first embedded Machine Learning model - no worries, you can use almost any device to get started.
You'll need some data:
If you have an existing development board or device, you can collect data with a few lines of code using the data forwarder or the Edge Impulse for Linux SDK.
If you want to collect live data from a supported development kit, select your board from the list of fully supported development boards and follow the instructions to connect your board to Edge Impulse.
If you already have a dataset, you can upload it via the uploader.
If you have a mobile phone, you can use it as a sensor to collect data; see the documentation on using your mobile phone.
Try the tutorials on continuous motion recognition, responding to your voice, recognizing sounds from audio, or adding sight to your sensors. These will let you build machine learning models that detect things in your home or office.
After training your model you can run your model on your device:
If you want to integrate the model with your own firmware or project, you can export your complete impulse (including all signal processing code and machine learning models) to a C++ or Arduino library with no external dependencies (open source and royalty-free); see Running your impulse locally.
If you have a fully supported development board (or your mobile phone) you can build new firmware - which includes your model - directly from the UI. It doesn't get easier than that!
If you have a gateway, a computer or a web browser where you want to run your model, you can export to WebAssembly and run it anywhere you can run JavaScript.
We have some great tutorials, but you have full freedom in the models that you design in Edge Impulse. You can plug in new signal processing blocks and completely new neural networks. See the documentation on custom processing blocks and the neural network expert mode.
You can upload an existing dataset to your project directly through the Edge Impulse Studio. The data should be in the data acquisition format (CBOR, JSON, CSV), or provided as WAV, JPG or PNG files.
To upload data using the uploader, go to the Data acquisition page and click on the uploader button as shown in the image below:
When uploading your data, you can choose the category your data should fall into, i.e. the training set or the testing set, or automatically split the dataset between training and testing sets. You can also choose whether to infer labels from the file names or to enter a single label that will be applied to all uploaded files.
There are some major advantages to deploying ML on embedded devices. The key advantages are neatly expressed in the unfortunate acronym BLERP. They are:
The best way to learn about embedded machine learning is to see it for yourself. To train your own model and deploy it to any device, including your mobile phone, follow our getting started guide.
You can access any feature in the Edge Impulse Studio through the Edge Impulse API. We also have the ingestion service if you want to send data directly, and we have an open remote management protocol to control devices from the Studio.
For startups and enterprises looking to scale edge ML algorithm development from prototype to production, we offer an enterprise version of Edge Impulse. This includes all of the tools needed to go from data collection to model deployment, such as a robust dataset builder to future-proof your data, integrations with all major cloud vendors, dedicated technical support, custom DSP and ML capabilities, and full access to the Edge Impulse APIs to automate your algorithm development.
To get more information, please contact us.
All collected data for each project can be viewed on the Data acquisition tab. You can see how your data has been split for train/test set as well as the data distribution for each class in your dataset. You can also send new sensor data to your project either by file upload, WebUSB, Edge Impulse API, or Edge Impulse CLI.
The panel on the right allows you to collect data directly from any fully supported platform:
Through WebUSB.
Using the Edge Impulse CLI daemon.
From the Edge Impulse for Linux CLI.
The WebUSB and the Edge Impulse daemon work with any fully supported device by flashing the pre-built Edge Impulse firmware to your board. See the list of fully supported boards.
When using the Edge Impulse for Linux CLI, run edge-impulse-linux --clean and it will add your platform to the device list of your project. You will then be able to interact with it from the Record new data panel.
Import from S3 buckets (Enterprise feature).
Upload portals (Enterprise feature).
The train/test split is a technique for training and evaluating the performance of a machine learning algorithm. It indicates how your data is split between training and testing samples. For example, an 80/20 split indicates that 80% of the dataset is used for model training while 20% is used for model testing.
This section also shows how your data samples in each class are distributed to prevent imbalanced datasets which might introduce bias during model training.
Manually navigating to some categories of data can be time consuming, especially when dealing with a large dataset. The data acquisition filter enables the user to filter data samples based on some criteria of choice. This can be based on:
Label - class to which a sample represents.
Sample name - unique ID representing a sample.
Signature validity
Enabled and disabled samples
Length of sample - duration of a sample.
The filtered samples can then be manipulated by editing labels, deleting, or moving samples from the training set to the test set and vice versa, as shown in the image above.
These data manipulations can also be applied at the level of an individual data sample: navigate to the sample, click ⋮, and select the action you want to perform on that specific sample. This might be renaming the sample, editing its label, disabling, cropping, splitting, downloading, or even deleting the sample.
To crop a data sample, go to the sample you want to crop and click ⋮, then select Crop sample. You can specify a length, or drag the handles to resize the window, then move the window around to make your selection.
Made a wrong crop? No problem, just click Crop sample again and you can move your selection around. To undo the crop, just set the sample length to a high number, and the whole sample will be selected again.
Besides cropping, you can also split data automatically. Here you can perform one motion repeatedly, or say a keyword over and over again, and the events are detected and can be stored as individual samples. This makes it easy to very quickly build a high-quality dataset of discrete events. To do so, head to Data acquisition, record some new data, click ⋮, and select Split sample. You can set the window length, and all events are automatically detected. If you're splitting audio data, you can also listen to events by clicking on the window; the audio player is automatically populated with that specific split.
Samples are automatically centered in the window, which might lead to problems with some models (the neural network could learn a shortcut where data in the middle of the window is always associated with a certain label), so you can select "Shift samples" to automatically shift the data slightly within the window.
Splitting data is - like cropping data - non-destructive. If you're not happy with a split just click Crop sample and you can move the selection around easily.
The labeling queue will only appear on your data acquisition page if you are dealing with an object detection task. The labeling queue shows a list of images that have been staged for annotation in your project.
If you are not dealing with an object detection task, you can simply disable the labelling queue bar by going to Dashboard > Project info > Labeling method and clicking the dropdown and selecting "one label per data item" as shown in the image below.
For more information about the labelling queue and how to perform data annotation using AI assisted labelling on Edge Impulse, you can have a look at our documentation here.
If you are working on an object detection project, you will most likely see the "Labeling queue" bar on your data acquisition page. The labeling queue shows you all the data that has not yet been labeled in your dataset.
Can't see the labeling queue? Go to Dashboard, and under 'Project info > Labeling method' select 'Bounding boxes (object detection)'.
In object detection, labeling is the process of adding a bounding box around specific objects in an image so that your machine learning model can learn and infer from it. The Edge Impulse Studio has a built-in data annotation tool with AI-assisted labeling to assist you in your labeling workflows, as we will see.
In the Edge Impulse Studio, labeling your data is as easy as dragging a box around the object, then entering a label and saving, as shown below.
However, as simple as the manual labeling process might look, it can become tedious and time-consuming, especially when dealing with huge datasets. To make your life easier, the Edge Impulse Studio has a built-in AI-assisted labeling feature to automatically assist you in your labeling workflows.
There are three ways to perform AI-assisted labeling in the Edge Impulse Studio:
Using YOLOv5
Using your own model
Using object tracking
By utilizing an existing library of pre-trained object detection models from YOLOv5 (trained with the COCO dataset), common objects in your images can quickly be identified and labeled in seconds without needing to write any code!
To label your objects with YOLOv5 classification, click the Label suggestions dropdown and select “Classify using YOLOv5.” If your object is more specific than what is auto-labeled by YOLOv5, e.g. “coffee” instead of the generic “cup” class, you can modify the auto-labels to the left of your image. These modifications will automatically apply to future images in your labeling queue.
Click Save labels to move on to your next raw image, and see your fully labeled dataset ready for training in minutes!
You can also use your own trained model to predict and label your new images. From an existing (trained) Edge Impulse object detection project, upload new unlabeled images from the Data Acquisition tab. Then, from the "Labeling queue", click the Label suggestions dropdown and select the "Classify using" option that carries your project's name:
You can also upload a few samples to a new object detection project, train a model, then upload more samples to the Data Acquisition tab and use the AI-Assisted Labeling feature for the rest of your dataset. Classifying using your own trained model is especially useful for objects that are not in YOLOv5, such as industrial objects, etc.
Click Save labels to move on to your next raw image, and see your fully labeled dataset ready for training in minutes using your own pre-trained model!
If you have objects that are a similar size or common between images, you can also track your objects between frames within the Edge Impulse Labeling Queue, reducing the amount of time needed to re-label and re-draw bounding boxes over your entire dataset.
Draw your bounding boxes and label your images, then, after clicking Save labels, the objects will be tracked from frame to frame:
Now that your object detection project contains a fully labeled dataset, learn how to train and deploy your model to your edge device: check out our tutorial!
We are excited to see what you build with the AI-Assisted Labeling feature in Edge Impulse, please post your project on our forum or tag us on social media, @Edge Impulse!
There is a wide variety of devices that you can connect to your Edge Impulse project. These devices can help you collect datasets for your project, test your trained ML model and even deploy your ML model directly to your development board with a pre-built binary application (for fully supported development platforms).
On the Devices tab, you'll find a list of all your connected devices and a guide on how to connect new devices that are currently supported by Edge Impulse.
To connect a new device, click on the Connect a new device button on the top right of your screen.
You will get a pop-up with multiple options of devices you can connect to your Edge Impulse project. Available options include:
After creating your Edge Impulse Studio project, you will be directed to the project's dashboard. The dashboard gives a quick overview of your project such as your project ID, the number of devices connected, the amount of data collected, the preferred labeling method, among other editable properties. You can also enable some additional capabilities to your project such as collaboration, making your project public, and showcasing your public projects using Markdown READMEs as we will see.
The figure below shows the various sections and widgets of the dashboard that we will cover here.
The project README enables you to explain the details of your project in a short way. Using this feature, you can add visualizations such as images, GIFs, code snippets, and text to your project in order to bring your colleagues and project viewers up to speed with the important details of your project. In your README you might want to add things like:
What the project does
Why the project is useful
Motivations of the project
How to get started with the project
What sensors and target deployment devices you used
How you plan to improve your project
Where users can get help with your project
To create your first README, navigate to the "about this project" widget and click "add README"
For more README inspiration, check out the public Edge Impulse project tutorials below:
To share your private project with the world, go to your project's Dashboard and click Make this project public.
By doing this, all of your data, block configurations, intermediate results, and final models will be shared with the world. Your project will be publicly accessible and can be cloned with a single click with the provided URL:
To add a collaborator, go to your project's dashboard and find the "Collaborators" widget. Click the '+' icon and type the username or e-mail address of the other user. The user will be invited to create an Edge Impulse account if it doesn't exist.
The user will be automatically added to the project and will get an email notification inviting them to start contributing to your project. To remove a user, simply click the three dots beside the user, then click 'Delete', and they will be automatically removed.
The project info widget shows the project's specifications such as the project ID, labeling method, and latency calculations for your target device.
On the labeling method dropdown, you need to specify the type of labeling your dataset and model expect. This can be either one label per data item or bounding boxes. Bounding boxes only work for object detection tasks in the studio. Note that if you interchange the labeling methods, learning blocks will appear to be hidden when building your impulse.
One of the amazing Edge Impulse superpowers is the latency calculation component. This is the approximate time in milliseconds that the trained model and DSP operations will take during inference on the selected target device. This hardware-in-the-loop approach ensures that the compute resources of the target deployment device are neither under-utilized nor over-utilized. It also saves developers the time associated with numerous inference iterations back and forth in the Studio in search of optimum models.
In the Block Output section, you can download the results of the DSP and ML operations of your impulse.
The downloadable assets include the extracted features, the TensorFlow SavedModel, and both quantized and unquantized TensorFlow Lite models. This is particularly helpful when you want to perform other operations on the block outputs outside the Edge Impulse Studio. For example, if you need a TensorFlow.js model, you can download the TensorFlow SavedModel from the dashboard and convert it to the TensorFlow.js model format to be served in a browser.
Changing Performance Settings is only available for enterprise customers
This section consists of editable parameters that directly affect the performance of the studio when building your impulse. Depending on the selected or available settings, your jobs can either be fast or slow.
The use of GPUs for training and parallel DSP jobs is currently an internal experimental feature that will be released soon.
To bring even more flexibility to projects, the administrative zone gives developers the power to enable additional features that are not found in Edge Impulse projects by default. Most of these are advanced features intended for organizations, or sometimes experimental features.
To activate these features you just need to check the boxes against the specific features you want to use and click save experiments.
The danger zone widget consists of irrevocable actions that let you:
Delete your project. This action removes all devices, data, and impulses from your project.
Delete all data in this project.
Perform train/test split. This action re-balances your dataset by splitting all your data automatically between the training and testing set and resets the categories for all data
Launch the getting started wizard. This will remove all data, and clear out your impulse.
After collecting data for your project, you can now create your impulse. A complete impulse consists of three main building blocks: an input block, a processing block, and a learning block.
This view is one of the most important: here you will build your own machine learning pipeline.
Impulse example for movement classification using accelerometer data
Impulse example for object detection using images
The input block indicates the type of input data you are training your model with. This can be time series (audio, vibration, movements) or images.
The input axes field lists all the axes referenced in your training dataset.
The window size is the size of the raw features used for training.
The window increase is used to artificially create more features (and feed the learning block with more information).
The frequency is automatically calculated based on your training samples. You can modify this value but you currently cannot use values lower than 0.000016 (less than 1 sample every 60s).
Zero-pad data: Adds zero values when raw feature is missing
Below is a sketch summarizing the role of each parameter:
Axes: Images
Image width & height: Most of our pre-trained models work with square images.
Resize mode: You have three options, Squash, Fit to the shortest axis, Fit to the longest axis
You don't have much experience with DSP? No problem, Edge Impulse usually uses a star to indicate the most recommended processing block based on your input data as shown in the image below.
Extracting meaningful features from your data is crucial to building small and reliable machine learning models, and in Edge Impulse this is done through processing blocks. We ship a number of processing blocks for common sensor data (such as vibration and audio):
The source code of these blocks is available in the edgeimpulse/processing-blocks GitHub repository.
If you have a very specific sensor, want to apply custom filters, or are implementing the latest research in digital signal processing, follow our tutorial on building custom processing blocks.
The Raw Data block generates windows from data samples without any specific signal processing. It is great for signals that have already been pre-processed and if you just need to feed your data into the Neural Network block.
GitHub repository containing all DSP block code: edgeimpulse/processing-blocks.
Scaling
Scale axes: Multiplies each axis by this number. This can be used to normalize your data between 0 and 1.
The Raw Data block retrieves raw samples and applies the Scaling parameter.
The Flatten block performs statistical analysis on the signal. It is useful for slow-moving averages like temperature data, in combination with other blocks.
GitHub repository containing all DSP block code: edgeimpulse/processing-blocks.
Scaling
Scale axes: Multiplies axes by this number
Method
Average: Calculates the average value for the window
Minimum: Calculates the minimum value in the window
Maximum: Calculates the maximum value in the window
Root-mean square: Calculates the RMS value of the window
Standard deviation: Calculates the standard deviation of the window
Skewness: Calculates the skewness of the window
Kurtosis: Calculates the kurtosis of the window
The Flatten block first rescales the axes of the signal if the scaling value is different from 1. Statistical analysis is then performed on each window, computing between 1 and 7 features for each axis, depending on the number of selected methods.
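As an illustration, the statistics above can be reproduced with a few lines of NumPy/SciPy. This is a sketch of the idea, not the exact Edge Impulse implementation (see the processing-blocks repository for that):

```python
# Sketch of the statistics the Flatten block computes per axis.
import numpy as np
from scipy.stats import skew, kurtosis

def flatten_features(window: np.ndarray, scale: float = 1.0) -> list:
    x = window * scale                      # 'Scale axes' parameter
    return [
        np.mean(x),                         # Average
        np.min(x),                          # Minimum
        np.max(x),                          # Maximum
        np.sqrt(np.mean(x ** 2)),           # Root-mean square
        np.std(x),                          # Standard deviation
        skew(x),                            # Skewness
        kurtosis(x),                        # Kurtosis
    ]

# One axis of a window, e.g. slow-moving temperature readings:
print(flatten_features(np.array([20.1, 20.3, 20.2, 20.6, 20.4])))
```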
You can invite up to three collaborators to join and contribute to your project. To have unlimited collaborators, your project needs to be part of an enterprise plan.
The project ID is a unique numerical value that identifies your project. Whenever you have an issue with your project in the Studio, you can share your project ID on the forum for assistance from Edge Impulse staff.
Organizational features are only available for enterprise customers. Contact us for more information.
A processing block is basically a feature extractor. It consists of DSP (Digital Signal Processing) operations that are used to extract the features that our model learns on. These operations vary depending on the type of data used in your project.
In the case where the available processing blocks aren't suitable for your application, you can build custom processing blocks and import them into your project.
After adding your processing block, it is now time to add a learning block to make your impulse complete. A learning block is simply a neural network that is trained to learn on your data.
Learning blocks vary depending on what you want your model to do and the type of data in your training dataset. Algorithms include classification, regression, anomaly detection (K-means), object detection, and transfer learning. You can also create your own learning block (enterprise feature).
The Image block is dedicated to computer vision applications. It normalizes image data, and optionally reduces the color depth.
GitHub repository containing all DSP block code: edgeimpulse/processing-blocks.
Color depth: Color depth to use (RGB or grayscale)
The Image block performs normalization, converting each channel of each pixel to a float value between 0 and 1. If Grayscale is selected, each pixel is converted to a single value following the ITU-R BT.601 conversion (Y' component only).
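A minimal sketch of these two steps, assuming an 8-bit RGB input image (illustrative only, not the exact Edge Impulse implementation):

```python
# Normalization and optional grayscale conversion (ITU-R BT.601 luma weights).
import numpy as np

def normalize_image(img_uint8: np.ndarray, grayscale: bool = False) -> np.ndarray:
    # img_uint8: HxWx3 RGB image with 8-bit channels
    img = img_uint8.astype(np.float32) / 255.0           # each channel in [0, 1]
    if grayscale:
        r, g, b = img[..., 0], img[..., 1], img[..., 2]
        img = 0.299 * r + 0.587 * g + 0.114 * b          # Y' component only
    return img
```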
The IMU Syntiant block rescales raw data to 8 bits values to match the NDP101 chip input requirements.
Scaling
Scale 16 bits to 8 bits: Scales data to 8-bit values in the [-1, 1] range; the raw data is divided by 2G (2 * 9.80665). When using the official Edge Impulse firmware this parameter should be enabled, as the raw data is not rescaled. If this parameter is disabled the data samples will not be rescaled; disable it if your raw data samples are already normalized to the [-1, 1] range.
The IMU Syntiant block retrieves raw samples and applies the Scale 16 bits to 8 bits parameter.
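A trivial sketch of this scaling step (illustrative only; see the processing block source for the real implementation):

```python
# 'Scale 16 bits to 8 bits': raw accelerometer values are divided by 2G so
# that they land in the [-1, 1] range expected by the NDP101.
import numpy as np

G = 9.80665  # standard gravity in m/s^2

def scale_imu(raw: np.ndarray, enabled: bool = True) -> np.ndarray:
    if not enabled:      # disable if your data is already normalized to [-1, 1]
        return raw
    return raw / (2 * G)
```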
The data sources page offers much more than just adding data from external sources. It lets you create complete automated data pipelines so you can work on your active learning strategies.
From there, you can import datasets from existing cloud storage buckets, automate and schedule the imports, and trigger actions such as exploring and labeling your new data, retraining your model, automatically building a new deployment task, and more.
Click on + Add new data source and select where your data lives:
You can either use:
AWS S3 buckets
Google Cloud Storage
Any S3-compatible bucket
Upload portals (enterprise feature)
Transformation blocks (enterprise feature)
Don't import data (if you just need to create a pipeline)
Click on Next, provide credentials:
Click on Verify credentials:
Here, you have several options to automatically label your data:
In the example above, the structure of the folder is the following:
The labels will be picked from the folder names, and the samples will be split between your training and testing sets using an 80/20 ratio.
The samples present in an unlabeled/ folder will be kept unlabeled in Edge Impulse Studio.
Alternatively, you can also organize your folder using the following structure to automatically split your dataset between training and testing sets:
When using this option, only the file name is taken into account. The part before the first '.' is used to set the label. E.g. cars.01741.jpg will set the label to cars.
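In other words, a trivial sketch of the rule described above:

```python
# The label is whatever comes before the first '.' in the file name.
def label_from_filename(filename: str) -> str:
    return filename.split(".")[0]

print(label_from_filename("cars.01741.jpg"))  # -> "cars"
```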
All the data samples will be unlabeled; you will need to label them manually before using them.
Finally, click on Next, post-sync actions.
From this view, you can automate several actions:
Recreate data explorer
The data explorer gives you a one-look view of your dataset, letting you quickly label unknown data. If you enable this you'll also get an email with a screenshot of the data explorer whenever there's new data.
Retrain model
If needed, this will retrain your model with the same impulse. If you enable this you'll also get an email with the new validation and test set accuracy.
Note: You will need to have trained your project at least once.
Create new version
Store all data, configuration, intermediate results and final models.
Create new deployment
Builds a new library or binary with your updated model. Requires 'Retrain model' to also be enabled.
Once your pipeline is set, you can run it directly from the UI, from external sources or by scheduling the task.
To run your pipeline from Edge Impulse Studio, click on the ⋮ button and select Run pipeline now.
To run your pipeline from code, click on the ⋮ button and select Run pipeline from code. This will display an overlay with curl, Node.js and Python code samples.
You will need to create an API key to run the pipeline from code.
By default, your pipeline will run every day. To schedule your pipeline jobs, click on the ⋮ button and select Edit pipeline.
Free users can only run the pipeline every 4 hours. If you are an enterprise customer, you can run this pipeline up to every minute.
Once the pipeline has successfully finished, you will receive an email like the following:
You can also define who can receive the email. The users have to be part of your project. See: Dashboard -> Collaboration.
Another useful feature is to create a webhook to call a URL when the pipeline has run. It will send a POST request containing the following information:
As of today, if you want to update your pipeline, you need to edit the configuration JSON available in ⋮ -> Run pipeline from code.
Here is an example of what you can get if all the actions have been selected:
Free projects only have access to the builtinTransformationBlock shown above.
If you are part of an organization, you can use your custom transformation jobs in the pipeline. In your organization workspace, go to Custom blocks -> Transformation and select Run job on the job you want to add.
Select Copy as pipeline step and paste it to the configuration json file.
The Audio MFCC block extracts coefficients from an audio signal. Similarly to the Audio MFE block, it uses a non-linear scale called the Mel scale. It is the reference block for speech recognition and can also perform well on some non-human-voice use cases.
GitHub repository containing all DSP block code: edgeimpulse/processing-blocks.
Mel Frequency Cepstral Coefficients
Number of coefficients: Number of cepstral coefficients to keep after applying Discrete Cosine Transform
Frame length: The length of each frame in seconds
Frame stride: The step between successive frames in seconds
Filter number: The number of triangular filters applied to the spectrogram
FFT length: The FFT size
Low frequency: Lowest band edge of Mel-scale filterbanks
High frequency: Highest band edge of Mel-scale filterbanks
Window size: The size of the sliding window for local cepstral mean normalization. The window size must be odd.
Pre-emphasis
Coefficient: The pre-emphasizing coefficient to apply to the input signal (0 equals to no filtering)
Note: Shift has been removed and set to 1 for all future projects. Older & existing projects can still change this value or use an existing value.
The feature extraction adds one extra step to the MFE pipeline, resulting in a compressed representation of the filterbanks. A Discrete Cosine Transform is applied to each filterbank to extract the cepstral coefficients. 13 coefficients are usually retained; the rest are discarded, as they represent fast changes that are not useful for speech recognition.
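As a sketch of that final step (assuming the log Mel filterbank energies from the MFE stage are already computed; this is illustrative, not the exact Edge Impulse implementation):

```python
# Final MFCC step: a DCT over log Mel filterbank energies, keeping the first
# `num_coefficients` cepstral coefficients per frame.
import numpy as np
from scipy.fftpack import dct

def mfcc_from_filterbanks(log_mel_energies: np.ndarray, num_coefficients: int = 13):
    # log_mel_energies: shape (num_frames, num_filters)
    cepstra = dct(log_mel_energies, type=2, axis=1, norm="ortho")
    return cepstra[:, :num_coefficients]

# Placeholder filterbank energies: 50 frames x 32 filters
fb = np.random.rand(50, 32)
print(mfcc_from_filterbanks(np.log(fb + 1e-6)).shape)  # (50, 13)
```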
The data explorer is a visual tool to explore your dataset, find outliers or mislabeled data, and to help label unlabeled data. The data explorer first tries to extract meaningful features from your data (through signal processing and neural network embeddings) and then uses a dimensionality reduction algorithm to map these features to a 2D space. This gives you a one-look overview of your complete dataset.
To access the data explorer head to Data acquisition, click Data explorer, then select a way to generate the data explorer. Depending on your data you'll see three options:
Using a pre-trained model - here we use a large neural network trained on a varied dataset to generate the embeddings. This works very well if you don't have any labeled data yet, or want to look at new clusters of data. This option is available for keywords and for images.
Using your trained impulse - here we use the neural network block in your impulse to generate the embeddings. This typically creates even better visualizations, but will fail if you have completely new clusters of data as the neural network hasn't learned anything about them. This option is only available if you have a trained impulse.
Using the preprocessing blocks in your impulse - here we skip the embeddings, and just use your selected signal processing blocks to create the data explorer. This creates a similar visualization as the feature explorer but in a 2D space and with extra labeling tools. This is very useful if you don't have any labeled data yet, or if you have new clusters of data that your neural network hasn't learned yet.
Then click Generate data explorer to create the data explorer. If you want to make a different choice after creating the data explorer click ⋮ in the top right corner and select Clear data explorer.
Want to see examples of the same dataset visualized in different ways? Scroll down!
To view an item in your dataset just click on any of the dots (some basic information appears on hover). Information about the sample, and a preview of the data item appears at the bottom of the data explorer. You can click Set label (or l on your keyboard) to set a new label for the data item, or press Delete item (or d on your keyboard) to remove the data item. These changes are queued until you click Save labels (at the top of the data explorer).
The data explorer marks unlabeled data in gray (with an 'Unlabeled' label). To label this data, click on any gray dot, set a label by clicking the Set label button (or by pressing l on your keyboard), and enter a label. Other unlabeled data in the vicinity of this item will automatically be labeled as well. This way you can quickly label clustered data.
To upload unlabeled data you can either:
Use the upload UI and select the 'Leave data unlabeled' option.
Select the items in your dataset under Data acquisition, select all relevant items, click Edit labels and set the label to an empty string.
When uploading data through the ingestion API, set the x-no-label header to 1, and the x-label header to an empty string.
Or, if you want to start from scratch, click the three dots on top of the data explorer, and select Clear all labels.
The data explorer uses a three-stage process:
It runs your data through an input and a DSP block - like any impulse.
It passes the result of 1) through part of a neural network. This forces the neural network to compress the DSP output even further, but to features that are highly specialized to distinguish the exact type of data in your dataset (called 'embeddings').
The embeddings are passed through t-SNE, a dimensionality reduction algorithm.
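A rough sketch of stages 2 and 3 in Python (all data and layer sizes are placeholders; the Studio does this for you, this is only to make the idea concrete):

```python
# Take the DSP block output, run it through a neural network with the final
# classification layer cut off to get embeddings, then reduce to 2D with t-SNE.
import numpy as np
import tensorflow as tf
from sklearn.manifold import TSNE

# Placeholder "DSP output": 300 windows x 33 features (as in the example below).
dsp_features = np.random.randn(300, 33).astype(np.float32)

# Placeholder trained network (33 -> 20 -> 10 -> 4); in the Studio this is your impulse's NN.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(33,)),
    tf.keras.layers.Dense(20, activation="relu"),
    tf.keras.layers.Dense(10, activation="relu"),
    tf.keras.layers.Dense(4, activation="softmax"),
])

# Cut off the final layer: the output of the previous layer is the embedding.
embedding_model = tf.keras.Model(inputs=model.input, outputs=model.layers[-2].output)
embeddings = embedding_model.predict(dsp_features)            # shape (300, 10)

coords_2d = TSNE(n_components=2).fit_transform(embeddings)    # one (x, y) per sample
print(coords_2d.shape)                                        # (300, 2)
```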
So what are these embeddings actually? Let's imagine you have the model from the Continuous motion recognition tutorial. Here we slice data up in 2-second windows and run a signal processing step to extract features. Then we use a neural network to classify between motions. This network consists of:
33 input features (from the signal processing step)
A layer with 20 neurons
A layer with 10 neurons
A layer with 4 neurons (the number of different classes)
While training the neural network we try to find the mathematical formula that best maps the input to the output. We do this by tweaking each neuron (each neuron is a parameter in our formula). The interesting part is that each layer of the neural network will start acting like a feature extracting step - just like our signal processing step - but highly tuned for your specific data. For example, in the first layer, it'll learn what features are correlated, in the second it derives new features, and in the final layer, it learns how to distinguish between classes of motions.
In the data explorer we now cut off the final layer of the neural network, and thus we get the derived features back - these are called "embeddings". Contrary to features we extract using signal processing we don't really know what these features are - they're specific to your data. In essence, they provide a peek into the brain of the neural network. Thus, if you see data in the data explorer that you can't easily separate, the neural network probably can't either - and that's a great way to spot outliers - or if there's unlabeled data close to a labeled cluster they're probably very similar - great for labeling unknown data!
Here's an example of using the data explorer to visualize a very complex computer vision dataset (distinguishing between the four cats of one of our infrastructure engineers).
For less complex datasets, or lower-dimensional data you'll typically see more separation, even without custom models.
If you have any questions about the data explorer or embeddings, we'd be happy to help on the forums or reach out to your solutions engineer. Excited? Talk to us to get access to the data explorer, and finally be able to label all that sensor data you've collected!
Similarly to the Spectrogram block, the Audio MFE processing block extracts time and frequency features from a signal. However, it uses a non-linear scale in the frequency domain, called the Mel scale. It performs well on audio data, mostly for non-voice recognition use cases where the sounds to be classified can be distinguished by the human ear.
GitHub repository containing all DSP block code: edgeimpulse/processing-blocks.
Mel-filterbank energy features
Frame length: The length of each frame in seconds
Frame stride: The step between successive frames in seconds
Filter number: The number of triangular filters applied to the spectrogram
FFT length: The FFT size
Low frequency: Lowest band edge of Mel-scale filterbanks
High frequency: Highest band edge of Mel-scale filterbanks
Normalization
Noise floor (dB): signal lower than this level will be dropped
The feature extraction is similar to the Spectrogram block (the Frame length, Frame stride, and FFT length parameters are the same), but it adds 2 extra steps.
After computing the spectrogram, triangular filters are applied on a Mel scale to extract frequency bands. They are configured with the Filter number, Low frequency and High frequency parameters to select the frequency band and the number of frequency features to be extracted. The Mel scale is a perceptual scale of pitches judged by listeners to be equal in distance from one another. The idea is to extract more features (more filter banks) in the lower frequencies and fewer in the higher frequencies; thus it performs well on sounds that can be distinguished by the human ear.
The last step is to perform a local mean normalization of the signal, applying the Noise floor value to the power spectrum.
The Spectral features block extracts frequency and power characteristics of a signal. Low-pass and high-pass filters can also be applied to filter out unwanted frequencies. It is great for analyzing repetitive patterns in a signal, such as movements or vibrations from an accelerometer.
GitHub repository containing all DSP block code: edgeimpulse/processing-blocks.
Prior to calculating the Fast Fourier Transform (FFT), the time-series data inside the window of your sample can be filtered, which often helps to smooth out the signal or drop unwanted artifacts. In the image above, a "window" is shown inside the white box; only the readings inside that box will be used for filtering and calculating the FFT.
Edge Impulse will slide the window over your sample, as given by the time series input block parameters during Impulse creation in order to generate several training/test samples from your longer time series sample.
Scale - Multiply all raw input values by this number
Type - The type of filter to apply to the raw data (low-pass, high-pass, or none)
Cut-off frequency - Cut-off frequency of the filter in hertz. Also, this will remove unwanted frequency bins from the generated features.
Order - Order of the Butterworth filter. Must be an even number. A higher order has a sharper cutoff at the expense of latency. You can also set to zero, in which case, the signal won't be filtered, but unwanted frequency bins will still be removed from the output.
Removing frequency bins beyond the cutoff reduces model size, which saves resources, and also leads to models that train well with less data.
After filtering via a Butterworth IIR filter (if enabled), the mean is subtracted from the signal. Several statistical features (RMS, skewness, kurtosis) are calculated from the filtered signal after the mean has been removed. This filtered signal is passed to the Spectral power section, which computes the FFT in order to compute the spectral features.
This section controls how the FFT is applied to each filtered window from your sample. If the window from your sample is larger than the FFT size, then the window will be broken into frames (or "sub-windows"), and the FFT is calculated from each frame.
FFT length - The FFT size. This determines the number of FFT bins as well as the resolution of frequency peaks that you can separate. A lower number means more signals will average together in the same FFT bin, but also reduces the number of features and model size. A higher number will separate more signals into separate bins, but generates a larger model.
Take log of spectrum? - When selected, log (base 10) will be applied to each FFT bin. This gives more range to (i.e., captures more information about) low-intensity signals at the expense of range for higher-intensity signals. It is enabled by default and is generally a good choice, but it ultimately depends on the kind of signal sampled.
Overlap FFT frames? - Successive frames (sub-windows) overlap by 1/2 within the larger window (given by the white box in the image) if this is checked. If unchecked, frames will not overlap. This "sliding frame" method can prevent transient events from being missed if they happen to appear on a frame boundary. Enabled by default. Disabling improves latency. No impact on model size or RAM usage.
Note that a number of FFTs will be computed, depending on the settings. For example, if you have 100 readings for a single axis in your window and set the FFT length to 16 with no overlap, then 6 FFTs will be computed (for that single axis), as we have 6 full frames (each with 16 points) that will fully cover those 100 readings/points.
For each FFT bin (i.e. range of frequencies), the maximum value from all of the frames is kept as the feature. Continuing with the example above, we throw away 1/2 of every FFT (as it's simply a mirror image of the other half). We also throw away the bin at 0 Hz (as we filter out the DC bias anyway when we subtracted the mean), but we keep the Nyquist bin. As a result, we end up with 8 usable bins from each of our 16-point FFTs. For each bin, we find the maximum value from our 6 FFTs that we computed (in that particular bin). So, the number of features would be 8.
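The example above can be made concrete with a short NumPy sketch (placeholder data; the actual block works on power or log-power spectra, so treat this only as an illustration of the framing and max-per-bin steps):

```python
# 100 readings of one axis, FFT length 16, no overlap: 6 frames, 8 usable bins.
import numpy as np

window = np.random.randn(100)            # one axis of a window (placeholder data)
window -= window.mean()                  # subtract the mean (removes DC bias)

fft_len = 16
num_frames = len(window) // fft_len      # 6 full, non-overlapping frames
frames = window[: num_frames * fft_len].reshape(num_frames, fft_len)

spectra = np.abs(np.fft.rfft(frames, n=fft_len, axis=1))  # 9 bins: 0 Hz .. Nyquist
spectra = spectra[:, 1:]                 # drop the 0 Hz bin, keep Nyquist -> 8 bins
features = spectra.max(axis=0)           # max per bin across the 6 frames
print(features.shape)                    # (8,) spectral features for this axis
```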
Note that you may see fewer spectral features if you enable filtering, as we throw away any frequency bins higher than the cutoff frequency (for the low-pass filter) or lower than the cutoff frequency (for the high-pass filter).
See this video to learn more about the FFT.
Filter response - If filtering is enabled, and order is non-zero, then the frequency response of the filter is shown. This shows how much attenuation there will be across the frequency spectrum.
After filter - Shows the current window after filtering is applied (in the time domain).
Spectral power - Shows power vs. frequency as computed by the chosen FFT size. Power is either linear or log based on settings.
The spectral analysis block generates 2 types of features per axis/channel:
Statistical features
RMS
Skewness
Kurtosis
Spectral features
Maximum value from FFT frames for each bin that was not filtered out
Note that the standard deviation is not calculated because when the mean is subtracted from a signal, the RMS equals the standard deviation.
The total number of features will change, depending on how you set the filter and FFT parameters.
Let's consider an input signal sampled at 62.5 Hz with 3 axis and the following parameters:
Low-pass filter
Filter cutoff set to 3 Hz
The number of generated features per axis is:
3 values for statistics (RMS, Skewness, Kurtosis)
1 value for the FFT bin capturing 1.95 to 5.86 Hz
With 3 axes/channels, a total of 12 features are generated for the input signal.
Solving regression problems is one of the most common applications for machine learning models, especially in supervised machine learning. Models are trained to understand the relationship between independent variables and an outcome or dependent variable. The model can then be leveraged to predict the outcome of new and unseen input data, or to fill a gap in missing data.
To build a regression model you collect data as usual, but rather than setting the label to a text value, you set it to a numeric value.
You can use any of the built-in signal processing blocks to pre-process your vibration, audio or image data, or use custom processing blocks to extract novel features from other types of sensor data.
You have full freedom in modifying your neural network architecture - whether visually or through writing Keras code.
Number of training cycles: Each time the training algorithm makes one complete pass through all of the training data with back-propagation and updates the model's parameters as it goes, it is known as an epoch or training cycle.
Learning rate: The learning rate controls how much the model's internal parameters are updated during each step of the training process. You can also think of it as how fast the neural network learns. If the network overfits quickly, you can reduce the learning rate.
Validation set size: The percentage of your training set held apart for validation; a good default is 20%.
Auto-balance dataset: Mixes in more copies of data from classes that are uncommon. This might help make the model more robust against overfitting if you have little data for some classes.
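In expert mode these settings correspond to ordinary Keras training arguments. The sketch below shows the mapping with placeholder data and an arbitrary small network; the values are illustrative, not project defaults:

```python
# How the training settings map onto a Keras training call (illustrative values).
import numpy as np
import tensorflow as tf

# Placeholder data and model, purely for illustration.
X = np.random.randn(200, 33).astype(np.float32)
Y = tf.keras.utils.to_categorical(np.random.randint(0, 4, size=200), 4)
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(33,)),
    tf.keras.layers.Dense(20, activation="relu"),
    tf.keras.layers.Dense(4, activation="softmax"),
])

learning_rate = 0.0005       # 'Learning rate'
training_cycles = 30         # 'Number of training cycles' (one epoch per cycle)
validation_set_size = 0.20   # 'Validation set size' (20% held out)

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=learning_rate),
              loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(X, Y, epochs=training_cycles, validation_split=validation_set_size)
```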
If you want to see the accuracy of your model across your test dataset, go to Model testing. You can adjust the Maximum error percentage by clicking on "⋮" button.
Neural networks are great, but they have one big flaw: they're terrible at dealing with data they have never seen before (like a new gesture). Neural networks cannot judge this, as they are only aware of the training data. If you give it something unlike anything it has seen before, it'll still classify it as one of the four classes.
Tutorial
Want to see the Anomaly Detection in action? Check out our tutorial.
K-means clustering
This method looks at the data points in a dataset and groups those that are similar into a predefined number K of clusters. A threshold value can be added to detect anomalies: if the distance between a data point and its nearest centroid is greater than the threshold value, then it is an anomaly.
The main difficulty resides in choosing K, since data in a time series is always changing and different values of K might be ideal at different times. Besides, in more complex scenarios where there are both local and global outliers, many outliers might pass under the radar and be assigned to a cluster.
In most of your DSP blocks, you have an option to calculate the feature importance. Edge Impulse Studio will then output a Feature Importance graphic that will help you determine which axes and values generated from your DSP block are most significant to analyze when you want to do anomaly detection.
This process of generating features and determining the most important features of your data will further reduce the amount of signal analysis needed on the device with new and unseen data.
In your anomaly detection block, you can click on the Select suggested axes button to harness the value of the feature importance output.
Here is the process in the background:
Create X number of clusters and group all the data.
For each of these clusters we store the center and the size of the cluster.
During inference we calculate the closest cluster for a new data point, and show the distance from the edge of that cluster. If the point falls within a cluster (no anomaly), you get a value below 0.
In the picture above, known clusters are in blue and newly classified data is in orange. It's clearly outside of any known cluster and can thus be tagged as an anomaly.
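A minimal sketch of this idea using scikit-learn follows. Here the cluster "size" is taken as the maximum distance of the training points assigned to each cluster; this is an illustration of the concept described above, not Edge Impulse's exact implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

# Train: group the DSP features of known-good data into K clusters.
train_features = np.random.rand(500, 3)
kmeans = KMeans(n_clusters=32, n_init=10, random_state=0).fit(train_features)

# Store the center and the "size" (radius) of each cluster.
distances = np.linalg.norm(train_features - kmeans.cluster_centers_[kmeans.labels_], axis=1)
radius = np.array([distances[kmeans.labels_ == k].max() for k in range(kmeans.n_clusters)])

def anomaly_score(x):
    """Distance from the edge of the closest cluster; a value below 0 means inside a cluster."""
    d = np.linalg.norm(kmeans.cluster_centers_ - x, axis=1)
    nearest = np.argmin(d)
    return d[nearest] - radius[nearest]

print(anomaly_score(np.array([0.5, 0.5, 0.5])))   # likely negative: inside a known cluster
print(anomaly_score(np.array([5.0, 5.0, 5.0])))   # positive: anomaly
```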
is available for everyone but has to be self-hosted. If you want to host it on Edge Impulse infrastructures, you can do that within your organization interface.
In this tutorial, you'll learn how to use the Edge Impulse CLI to push your custom DSP block to your organization, and how to make this processing block available in the Studio for all users in the organization.
The custom processing block we are using for this tutorial can be found here: . It is written in Python. Please note that one of the beauties of custom blocks is that you can write them in any language, as we host them in a Docker container and are not tied to a specific runtime.
Only available for enterprise customers
Organizational features are only available for enterprise customers. View our pricing for more information.
You'll need:
The Edge Impulse CLI. If you receive any warnings that's fine. Run edge-impulse-blocks afterwards to verify that the CLI was installed correctly.
Docker desktop installed on your machine. Custom blocks use Docker containers, a virtualization technique which lets developers package up an application with all dependencies in a single package. If you want to test your blocks locally you'll also need (this is not a requirement):
A running with Docker.
Inside your Custom DSP block folder, run the following command:
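This is the block initialization step. Based on the CLI blocks tooling referenced elsewhere on this page, the command is:

```
edge-impulse-blocks init
```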
The output will look like this:
Modify or update your custom code if needed and run the following command:
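This is the step that uploads the block to your organization; with the same CLI tooling the command is likely:

```
edge-impulse-blocks push
```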
The output will look similar to this:
That's it, now your custom DSP block is hosted on your organization. To make sure it is up and running, go to Custom blocks->DSP in your organization and you will see the following screen:
To use your DSP block, simply add it as a processing block in the Create impulse view:
When creating an impulse to solve an image classification problem, you will most likely want to use transfer learning. This is particularly true when working with a relatively small dataset.
Transfer learning is the process of taking features learned from one problem and leveraging them on a new but related problem. Most of the time these features are learned from large-scale datasets with common objects, making it faster and more accurate to tune and adapt them to new tasks.
To choose transfer learning as your learning block, go to create impulse and click on Add a Learning Block, and select Transfer Learning.
To choose your preferred pre-trained network, go to Transfer learning on the left side of your screen and click choose a different model. A pop up will appear on your screen with a list of models to choose from as shown in the image below.
Edge Impulse uses state-of-the-art MobileNetV1 & V2 architectures trained on the ImageNet dataset as its pre-trained networks for you to fine-tune for your specific application. The pre-trained networks come with varying input sizes ranging from 96x96 to 320x320, and both RGB & grayscale images, for you to choose from depending on your application & target deployment hardware.
Before you start training your model, you need to set the following neural network configurations:
Number of training cycles: Each time the training algorithm makes one complete pass through all of the training data with back-propagation and updates the model's parameters as it goes, it is known as an epoch or training cycle.
Learning rate: The learning rate controls how much the model's internal parameters are updated during each step of the training process. You can also think of it as how fast the neural network will learn. If the network overfits quickly, you can reduce the learning rate.
Validation set size: The percentage of your training set held apart for validation; a good default is 20%.
You might also need to enable auto-balance to prevent model bias, or enable data augmentation to increase the size of your dataset and make it more diverse in order to prevent overfitting.
The preset configurations just don't work for your model? No worries, Expert Mode is for you! Expert Mode gives you full control of your model so that you can configure it however you want. To enable the expert mode, just click on the "⋮" button and toggle the expert mode.
You can use the expert mode to change your loss function, optimizer, print your model architecture and even set an early stopping callback to prevent overfitting your model.
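As an example of what expert mode allows, here is a hedged sketch of adding an early-stopping callback in Keras. The model architecture and dummy data are illustrative stand-ins, not the code expert mode shows for your project; adapt the callback to that code.

```python
import numpy as np
import tensorflow as tf

# Dummy data standing in for the features generated by your DSP block.
X = np.random.rand(100, 33).astype('float32')
y = tf.keras.utils.to_categorical(np.random.randint(0, 3, 100), 3)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(20, activation='relu'),
    tf.keras.layers.Dense(10, activation='relu'),
    tf.keras.layers.Dense(3, activation='softmax'),
])

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.0005),
              loss='categorical_crossentropy', metrics=['accuracy'])

# Stop training once the validation loss stops improving for 5 epochs in a row,
# and keep the best weights seen so far.
early_stopping = tf.keras.callbacks.EarlyStopping(
    monitor='val_loss', patience=5, restore_best_weights=True)

model.fit(X, y, epochs=100, validation_split=0.2, callbacks=[early_stopping])
```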
Transfer learning is the process of taking features learned from one problem and leveraging them on a new but related problem. Most of the time these features are learned from large-scale datasets with common objects, making it faster and more accurate to tune and adapt them to new tasks. With Edge Impulse's transfer learning block for audio keyword spotting, we take the same transfer learning technique classically used for image classification and apply it to audio data. This allows you to fine-tune a pre-trained keyword spotting model on your data and achieve even better performance than using a , even with a relatively small keyword dataset.
Excited? Train your first keyword spotting model in under 5 minutes with the !
To choose transfer learning as your learning block, go to create impulse and click on Add a Learning Block, and select Transfer Learning (Keyword Spotting).
To choose your preferred pre-trained network, select the Transfer learning tab on the left side of your screen and click choose a different model. A pop up will appear on your screen with a list of models to choose from as shown in the image below.
Edge Impulse uses state-of-the-art MobileNetV1 & V2 architectures trained on the ImageNet dataset as its pre-trained networks for you to fine-tune for your specific application.
Before you start training your model, you need to set the following neural network configurations:
Number of training cycles: Each time the training algorithm makes one complete pass through all of the training data with back-propagation and updates the model's parameters as it goes, it is known as an epoch or training cycle.
Learning rate: The learning rate controls how much the model's internal parameters are updated during each step of the training process. You can also think of it as how fast the neural network will learn. If the network overfits quickly, you can reduce the learning rate.
Validation set size: The percentage of your training set held apart for validation; a good default is 20%.
You might also need to enable auto-balance to prevent model bias, or enable data augmentation to increase the size of your dataset and make it more diverse in order to prevent overfitting.
The preset configurations just don't work for your model? No worries, Expert Mode is for you! Expert Mode gives you full control of your model so that you can configure it however you want. To enable the expert mode, just click on the "⋮" button and toggle the expert mode.
You can use the expert mode to change your loss function, optimizer, print your model architecture and even set an early stopping callback to prevent overfitting your model.
The two most common image processing problems are image classification and object detection.
Image classification takes an image as an input and outputs what type of object is in the image. This technique works great, even on microcontrollers, as long as we only need to detect a single object in the image.
On the other hand, object detection takes an image and outputs information about the class, number, and position (and, potentially, size) of objects in the image.
Edge Impulse provides two different methods to perform object detection:
Using MobileNetV2 SSD FPN-Lite
Using FOMO (Faster Objects, More Objects)
Specifications | MobileNetV2 SSD FPN | FOMO
---|---|---
Extracting meaningful features from your data is crucial to building small and reliable machine learning models, and in Edge Impulse this is done through processing blocks. We ship a number of processing blocks for common sensor data (such as vibration and audio), but they might not be suitable for all applications. Perhaps you have a very specific sensor, want to apply custom filters, or are implementing the latest research in digital signal processing. In this tutorial you'll learn how to support these use cases by adding custom processing blocks to the studio.
There is also a complete video covering how to implement your custom DSP block:
Development flow
This creates a copy of the example project locally. Then, you can run the example either through Docker or locally via:
Docker
Locally
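A hedged sketch of the two options. The image name, port, and file names here are assumptions; check the example repository's README for the exact commands.

```
# Docker
docker build -t custom-dsp-block .
docker run --rm -p 4446:4446 custom-dsp-block

# Locally
pip3 install -r requirements.txt
python3 dsp-server.py
```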
Install the ngrok binary for your platform.
Get a URL to access the processing block from the outside world via:
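With ngrok installed and authenticated, this is typically a single command pointing at the port the block listens on (the port number here is an assumption):

```
ngrok http 4446
```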
This yields a public URL for your block under Forwarding
. Note down the address that includes https://
.
Now that the custom processing block was created, and you've made it accessible to the outside world, you can add this block to Edge Impulse. In a project, go to Create Impulse, click Add a processing block, choose Add custom block (in the bottom left corner of the modal), and paste in the public URL of the block:
After you click Add block, the block will show up like any other processing block.
Add a learning block, then click Save impulse to store the impulse.
Processing blocks have configuration options which are rendered on the block parameter page. These could be filter configurations, scaling options, or control which visualizations are loaded. These options are defined in the parameters.json file. Let's add an option to smooth raw data. Open example-custom-processing-block-python/parameters.json and add a new section under parameters:
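A sketch of what this new entry could look like. The field names follow the parameter schema used by the example block; treat them as an assumption and compare with the existing entries in the file.

```json
{
    "group": "Filter",
    "items": [
        {
            "name": "Smooth",
            "value": false,
            "type": "boolean",
            "help": "Whether to smooth the raw data",
            "param": "smooth"
        }
    ]
}
```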
Then, open example-custom-processing-block-python/dsp.py and replace its contents with:
Restart the Python script, and then click Custom block in the studio (in the navigation bar). You now have a new option 'Smooth'. Every time an option changes we'll re-run the block, but as we have not written any code to respond to these changes nothing will happen.
We support a number of different types for configuration fields. These are:
int - renders a numeric textbox that expects integers.
float - renders a numeric textbox that expects floating point numbers.
string - renders a textbox that expects a string.
boolean - renders a checkbox.
select - renders a dropdown box. This also requires the parameter valid, which should be an array of valid values. E.g. this renders a dropdown box with options 'low', 'high' and 'none':
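For instance, a sketch of such an entry (again assuming the example block's parameter schema; check the existing entries in parameters.json for the exact fields):

```json
{
    "name": "Filter type",
    "value": "low",
    "type": "select",
    "valid": ["low", "high", "none"],
    "help": "Type of filter to apply",
    "param": "filter_type"
}
```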
To show the user what is happening we can also draw visuals in the processing block. Right now we support graphs (linear and logarithmic) and arbitrary images. By showing a graph of the smoothed sample we can quickly identify what effect the smooth option has on the raw signal. Open dsp.py and replace the content with the following script. It contains a very basic smoothing algorithm and draws a graph:
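A hedged sketch of what such a script could look like. The exact function signature and the fields of the graph dictionaries should be taken from the example block and the return types in the API documentation (referenced below); this only illustrates the idea of smoothing the signal and returning features plus a graph.

```python
import numpy as np

def generate_features(draw_graphs, raw_data, axes, sampling_freq, smooth):
    """Illustrative sketch: optionally smooth the signal and return features + a graph."""
    # Assumption: raw_data is interleaved per axis
    raw_data = np.array(raw_data, dtype=float).reshape(-1, len(axes))

    features = []
    graphs = []

    for ax in range(len(axes)):
        signal = raw_data[:, ax]
        if smooth:
            # Very basic smoothing: moving average over 5 samples
            kernel = np.ones(5) / 5
            signal = np.convolve(signal, kernel, mode='same')
        features.extend(signal.tolist())

        if draw_graphs:
            graphs.append({
                'name': 'Smoothed signal (%s)' % axes[ax],
                'X': {axes[ax]: signal.tolist()},
                'y': None,
            })

    return {'features': features, 'graphs': graphs}
```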
Restart the script, and click the Smooth toggle to observe the difference. Congratulations! You have just created your first custom processing block.
If you extract set features from the signal, such as the mean, you can also label these features. These labels will be used in the feature explorer. To do so, add a labels array that contains strings that map back to the features you return (labels and features should have the same length).
In the previous step we drew a linear graph, but you can also draw logarithmic graphs or even full images. This is done through the type parameter:
This draws a graph with a logarithmic scale:
To show an image you should return the base64 encoded image and its MIME type. Here's how you draw a small PNG image:
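A sketch of the idea follows. The graph fields used for images (image, imageMimeType, type) are assumptions here; check the return types in the API documentation for the exact names.

```python
import base64
import io

import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt

# Render a small PNG in memory and base64-encode it
fig, ax = plt.subplots(figsize=(2, 2))
ax.plot([0, 1, 2], [0, 1, 0])
buf = io.BytesIO()
fig.savefig(buf, format='png')
buf.seek(0)

graphs = [{
    'name': 'Image example',
    'image': base64.b64encode(buf.getvalue()).decode('ascii'),
    'imageMimeType': 'image/png',
    'type': 'image',
}]
```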
If you output high-dimensional data (like a spectrogram or an image) you can enable dimensionality reduction for the feature explorer. This will run UMAP over the data to compress the features into three dimensions. To do so, set the corresponding option on the info object in parameters.json:
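A sketch of what this could look like; the exact key and value names are an assumption, so check the parameter schema documentation:

```json
{
    "info": {
        "title": "My custom processing block",
        "visualization": "dimensionalityReduction"
    }
}
```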
Your custom block behaves exactly the same as any of the built-in blocks. You can process all your data, train neural networks or anomaly blocks, and validate that your model works.
We cannot automatically generate optimized native code for the block like we do for built-in processing blocks, but we try to help you write this code as much as possible.
In your custom DSP code, open the parameters.json file; you should have something similar to the following:
The cppType field is used to generate a function that you can implement in the custom C++ library that you get from the deployment page.
When you export your project to a C++ library we generate structures for all the configuration options in the model-parameters/dsp_blocks.h header file. You only need to implement the extract_custom_block_features function: the export takes your {cppType} value and generates the corresponding extract_{cppType}_features function.
For example, with the above cppType parameter:
Implement your function in the main.cpp file (or somewhere else, just make sure it is referenced).
Also, please have a look at the video at the top of this page (around minute 25), where Jan explains how to implement your custom DSP block with your C++ library.
With good feature extraction you can make your machine learning models smaller and more reliable, which are both very important when you want to deploy your model on embedded devices. With custom processing blocks you can now develop new feature extraction pipelines straight from Edge Impulse - whether you're following the latest research, implementing proprietary algorithms, or just exploring data.
It's very hard to build a computer vision model from scratch, as you need a wide variety of input data to make the model generalize well, and training such models can take days on a GPU. To make building your model easier and faster we are using transfer learning. This lets you piggyback on a well-trained model, only re-training the upper layers of a neural network, leading to much more reliable models that train in a fraction of the time and work with substantially smaller datasets.
Tutorial
Want to see MobileNetV2 SSD FPN-Lite models in action? Check out our tutorial.
To build your first object detection models using MobileNetV2 SSD FPN-Lite:
Create a new project in Edge Impulse.
Make sure to set your labelling method to 'Bounding boxes (object detection)'.
Collect and prepare your dataset as in
Resize your image to fit 320x320px
Add an 'Object Detection (Images)' block to your impulse.
Under Images, choose RGB.
Under Object detection, select 'Choose a different model' and select 'MobileNetV2 SSD FPN-Lite 320x320'
You can start your training with a learning rate of '0.15'
Click on 'Start training'
Here, we are using the MobileNetV2 SSD FPN-Lite 320x320 pre-trained model. The model has been trained on the COCO 2017 dataset with images scaled to 320x320 resolution.
In the MobileNetV2 SSD FPN-Lite, we have a base network (MobileNetV2), a detection network (Single Shot Detector or SSD) and a feature extractor (FPN-Lite).
Base network:
MobileNet, like VGG-Net, LeNet, and AlexNet, is a convolutional neural network architecture. The base network provides high-level features for classification or detection. If you add a fully connected layer and a softmax layer at the end of one of these networks, you have a classifier.
But you can also remove the fully connected and softmax layers and replace them with detection networks, like SSD, Faster R-CNN, and others, to perform object detection.
Detection network:
The most common detection networks are SSD (Single Shot Detector) and RPN (Region Proposal Network).
When using SSD, we only need to take one single shot to detect multiple objects within the image. On the other hand, regional proposal networks (RPN) based approaches, such as R-CNN series, need two shots, one for generating region proposals, one for detecting the object of each proposal.
As a consequence, SSD is much faster than RPN-based approaches, but it often trades accuracy for real-time processing speed. SSD-based models also tend to have issues detecting objects that are too close together or too small.
Feature Pyramid Network:
Detecting objects at different scales is challenging, in particular for small objects. A Feature Pyramid Network (FPN) is a feature extractor designed around the feature pyramid concept to improve accuracy and speed.
Tutorial:
Blog post:
Full instruction on how to build processing blocks:
Blog post:
When running edge-impulse-blocks init for hosting a custom DSP block, ensure you log into an Edge Impulse account that is a member of an organization. If you are logged into a personal account, you will be presented with the following CLI output:
Make sure you followed the tutorial, and have a trained impulse.
This tutorial shows you the development flow of building custom processing blocks, and requires you to run the processing block on your own machine or server. Enterprise customers can share processing blocks within their organization, and run these on our infrastructure. See for more details.
Processing blocks take data and configuration parameters in, and return features and visualizations like graphs or images. To communicate with custom processing blocks, Edge Impulse Studio makes HTTP calls to the block, and then uses the response in the UI, while generating features, and when training a machine learning model. Thus, to load a custom processing block we'll need to run a small server that responds to these HTTP calls. You can write this in any language, but we have created an example in Python. To load this example, open a terminal and run:
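The example referenced throughout this page lives in the example-custom-processing-block-python repository, so the command is presumably along these lines:

```
git clone https://github.com/edgeimpulse/example-custom-processing-block-python
```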
Then go to and you should be shown some information about the block.
As this block is running locally the studio cannot reach it. To resolve this we can use ngrok, which can make a local port accessible from a public URL. After you've finished development you can move the processing block to a server with a publicly accessible address (or run it on our infrastructure through your enterprise account). To set up a tunnel:
Sign up for ngrok.
For all options that you can return in a graph, see the return types in the API documentation.
An example of this function for the spectral analysis block is listed in the .
Blog post:
For inspiration we have published all our own blocks here: . If you've made an interesting block that you think is valuable for the community, please let us know on the or by opening a pull request. We'd be happy to help write efficient native code for the block, and then publish it as a standard block!
MobileNetV2 SSD FPN-Lite 320x320 is available with
Specifications | MobileNetV2 SSD FPN-Lite | FOMO
---|---|---
Labelling method | Bounding boxes | Bounding boxes
Input size | 320x320 | Square (any size)
Image format | RGB | Grayscale & RGB
Output | Bounding boxes | Centroids
MCU | ❌ | ✅
CPU/GPU | ✅ | ✅
Limitations | - Works best with big objects - Models use high compute resources (in the edge computing world) - Image size is fixed | - Works best when objects have similar sizes & shapes - The size of the objects is not available - Objects should not be too close to each other
Your Edge Impulse organization helps your team with the full lifecycle of your TinyML deployment. It contains tools to collect and maintain large datasets, allows your data scientists to quickly access relevant data through their familiar tools, adds versioning and traceability to your machine learning models, and lets you quickly create new Edge Impulse projects for on-device deployment.
Only available for enterprise customers
Organizational features are only available for enterprise customers. View our pricing for more information.
To get started, follow these tutorials:
User management - to add collaborators with different access rights.
Upload portals - to allow external parties to securely contribute data to your datasets.
Custom blocks - to match any specific use cases using dedicated cloud jobs.
Research data - to explain how to deal with such complex data infrastructure.
Performance calibration allows you to test, fine-tune, and simulate running event detection models using continuous real-world or synthetically generated streams of data. It is designed to provide an immediate understanding of how your model is expected to perform in the field.
Currently only available for Audio data projects
Performance calibration is currently only available for projects that contain audio data. It's designed for use with projects that are detecting specific events (such as spoken keywords), as opposed to classifying ambient conditions. Please stay tuned for future information on support for other types of sensor data!
Performance Calibration is a tool for testing and configuring embedded machine learning pipelines for event detection. It provides insight into how your pipeline will perform on streaming data, which is what your application will encounter in the real world. It works within Studio, and does not require you to deploy to a physical device.
After testing is complete, you can use Performance Calibration to configure a post-processing algorithm that will interpret the output of your ML pipeline, transforming it into a stream of actionable events. The results of testing are used to help guide selection of the optimal post-processing algorithm for your use case.
For example, a developer working on a keyword spotting application could use Performance Calibration to understand how well their ML pipeline detects keywords in a sample of real world audio, and to select the post-processing algorithm that provides the best quality output.
Performance Calibration gives you an accurate prediction of how your ML pipeline will perform when it is deployed in the real world. Analyzing real world performance before deployment in the field allows you to iterate on your pipeline much more quickly, helping you identify and solve common performance issues much earlier in the process.
Interpreting the output of an ML pipeline on streaming data requires a post-processing algorithm, which edge ML developers have traditionally had to write and tune by hand, balancing the trade-off between false positives and false negatives to fit their particular use case. By quantifying and automating this process, Performance Calibration gives developers precise control over the trade-offs they select for their application.
Performance can be measured using either recordings of real-world data, or with realistic synthetic recordings generated using samples from your test dataset. This allows you to easily test your model’s performance under various scenarios, such as varying levels of background noise, or with different environmental sounds that might occur in your deployment environment.
When Performance Calibration runs, your ML pipeline is run across the input data with the same latency as is predicted for the target selected on the Dashboard page of your project. This results in a set of raw predictions which must be filtered by a post-processing algorithm to produce a signal every time a particular event class is detected.
The post-processing algorithm has configurable parameters that determine the overall performance of the pipeline. These parameters can be adjusted to control the trade-off between false acceptance rate (how often an event is mistakenly detected) and false rejection rate (how often an event is mistakenly ignored). This allows you to determine how sensitive your application is to inputs.
False positives and false negatives
No ML model is perfect, so developers using ML for event detection always need to pick a trade-off between false positives and false negatives. The appropriate trade-off depends on the application. For example, if you're attempting to detect a dangerous situation in an industrial facility, it may be important to minimize false negatives. On the other hand, if you're concerned about annoying users with unintentional activations of a smart home device, you may wish to minimize false positives.
The following page walks through the process of using Performance Calibration with an example project. Check out our blog post for more information!
First, make sure you have an audio project in your Edge Impulse account. No projects yet? Follow one of our tutorials to get started:
Or, clone the "Bird sound classifier" project that is used in this documentation to your Edge Impulse account: https://studio.edgeimpulse.com/public/16060/latest
Once you've trained your impulse, select the Performance calibration tab and set your testing configuration settings:
Select noise labels. Which label is used to represent generic background noise or "silence"?
Select any other labels that should be ignored by your application, i.e. other classes that are equivalent to background noise or "silence".
Choose an audio sample type: simulated real world audio or upload your own in a zip file.
Then, click Run test.
Simulated real world audio is a synthetically generated audio stream consisting of samples taken from your testing dataset and layered on top of artificial background noise. For free Edge Impulse projects, you can choose to generate either 10 minutes or 30 minutes of simulated real world audio.
Already have a long, real-world recording of background noise which includes your target model's classes? Upload your own audio sample (.wav) in a zip file, along with its Label Tracks in Audacity format (.txt).
Your impulse can be configured with a post-processing algorithm that will minimize either false activations or false rejections. The chart shows a range of suggested configs. If you save one, it will be used when your impulse is deployed.
Selecting from the various "Suggested config" icons on the FRR/FAR chart will update the Selected config information. Click on Save selected config to use the selected FAR and FRR trade off when your impulse is deployed. This config information is also accessible in the deployed Edge Impulse library.
Mean FAR: The mean False Acceptance Rate. Measures how often labels are mistakenly detected. Does not include statistics for noise labels.
Mean FRR: The mean False Rejection Rate. Measures how often events are mistakenly missed. Does not include statistics for noise labels.
Averaging window duration (ms): The raw inference results are averaged across this length of time.
Detection threshold: A class is considered a positive match when its averaged score exceeds this threshold.
Suppression period (ms): Matches are ignored for this length of time following a positive result.
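To make the roles of these parameters concrete, here is a hedged sketch of a generic post-processing loop of this kind. It is an illustration of the concept (averaging window, detection threshold, suppression period), not Edge Impulse's exact algorithm.

```python
from collections import deque

def postprocess(scores, window_ms=300, interval_ms=100, threshold=0.8, suppression_ms=1000):
    """scores: per-inference confidence for one class, one value every `interval_ms`.
    Returns the times (ms) at which an event is reported."""
    window_len = max(1, window_ms // interval_ms)
    history = deque(maxlen=window_len)        # averaging window
    suppressed_until = -1
    events = []

    for i, score in enumerate(scores):
        t = i * interval_ms
        history.append(score)
        averaged = sum(history) / len(history)

        if t < suppressed_until:
            continue                          # suppression period: ignore matches
        if averaged >= threshold:             # detection threshold
            events.append(t)
            suppressed_until = t + suppression_ms

    return events

print(postprocess([0.1, 0.2, 0.9, 0.95, 0.9, 0.3, 0.1, 0.9, 0.95]))
```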
Shows the performance statistics for each label.
FAR: False Acceptance Rate. Measures how often a label is mistakenly detected.
FRR: False Rejection Rate. Measures how often a label is mistakenly missed.
True Positives: The number of times each label was correctly triggered.
False Positives: The number of times each label was incorrectly triggered.
True Negatives: The number of times each label was correctly not triggered.
False Negatives: The number of times each label was incorrectly not triggered.
False acceptance rate and false rejection rate
FAR is also sometimes known as the False Positive Rate, and FRR as the False Negative Rate. These industry-standard metrics are calculated as follows:
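Using the per-label true/false positive and negative counts above, the industry-standard definitions are:

$$\mathrm{FAR} = \frac{\mathrm{FP}}{\mathrm{FP} + \mathrm{TN}}, \qquad \mathrm{FRR} = \frac{\mathrm{FN}}{\mathrm{FN} + \mathrm{TP}}$$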
Shows any errors your impulse makes on a sample of data, with a table of results.
Error: False positives are displayed in red while false negatives are displayed in blue.
Type: Spurious match, incorrect match, duplicate match, or blank.
Label: The data label the model predicted in the audio stream.
Start time: The timestamp starting location of the selected error in the audio data stream.
Play button: Preview the audio stream at the error's start time.
What we refer to as "Ground Truth" in this context is the sound/label association that the synthetically generated audio contains at a given time.
Incorrect match: A detection matches the wrong ground truth
Spurious match: This match detection has not been associated with any ground truth.
Duplicate match: The same ground truth was detected more than once. The first correct detection is considered a true positive but subsequent detections are considered false positives.
One of the most powerful features in Edge Impulse are the built-in deployment targets (under Deployment in the Studio), which let you create ready-to-go binaries for development boards, or custom libraries for a wide variety of targets that incorporate your trained impulse. You can also create custom deployment blocks for your organization. This lets developers quickly iterate on products without getting your embedded engineers involved, lets your customers build personalized firmware using their own data, or lets you create custom libraries.
In this tutorial you'll learn how to use custom deployment blocks to create a new deployment target, and how to make this target available in the Studio for all users in the organization.
Only available for enterprise customers
Organizational features are only available for enterprise customers. View our pricing for more information.
You'll need:
The Edge Impulse CLI. If you receive any warnings that's fine. Run edge-impulse-blocks afterwards to verify that the CLI was installed correctly.
Deployment blocks use Docker containers, a virtualization technique which lets developers package up an application with all dependencies in a single package. If you want to test your blocks locally you'll also need (this is not a requirement):
Docker desktop installed on your machine.
Then, create a new folder on your computer named custom-deploy-block.
When a user deploys with a custom deployment block two things happen:
A package is created that contains information about the deployment (like the sensors used, frequency of the data, etc.), any trained neural network in .tflite and SavedModel formats, the Edge Impulse SDK, and all DSP and ML blocks as C++ code.
This package is then consumed by the custom deployment block, which can incorporate it with a base firmware, or repackage it into a new library.
To obtain this package go to your project's Dashboard, look for Administrative zone, enable Custom deploys, and click Save.
If you now go to the Deployment page, a new option appears under 'Create library':
Once you click Build you'll receive a ZIP file containing the following items:
deployment-metadata.json - this contains all information about the deployment, like the names of all classes, the frequency of the data, full impulse configuration, and quantization parameters. A specification can be found here: Deployment metadata spec.
trained.tflite - if you have a neural network in the project this contains the neural network in .tflite format. This network is already fully quantized if you choose the int8 optimization, otherwise this is the float32 model.
trained.savedmodel.zip - if you have a neural network in the project this contains the full TensorFlow SavedModel. Note that we might update the TensorFlow version used to train these networks at any time, so rely on the compiled model or the TFLite file where possible.
edge-impulse-sdk - a copy of the latest Inferencing SDK.
model-parameters - impulse and block configuration in C++ format. Can be used by the SDK to quickly run your impulse.
tflite-model - neural network as source code in a way that can be used by the SDK to quickly run your impulse.
Store the unzipped file under custom-deploy-block/input.
With the basic information in place we can create a new deployment block. Here we'll build a standalone application that runs our impulse on Linux, very useful when running your impulse on a gateway or desktop computer. First, open a command prompt or terminal window, navigate to the custom-deploy-block folder (that you created under 1.), and run:
This will prompt you to log in, and enter the details for your block.
Next, we'll add the application. The base application can be found at edgeimpulse/example-standalone-inferencing. Unzip it under custom-deploy-block/app.
To build this application we need to combine the application with the edge-impulse-sdk, model-parameters and tflite-model folders, and invoke the (already included) Makefile.
To build the application we use Docker, a virtualization technique which lets developers package up an application with all dependencies in a single package. In this container we'll place the build tools required for this application, and scripts to combine the trained impulse with the base application.
First, let's create a small build script. As a parameter you'll receive --metadata, which points to the deployment information. In here you'll also get information on the input and output folders where you need to read from and write to. Create a new file called custom-deploy-block/build.py and add:
build.py
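A hedged sketch of what such a build script could look like. The folder layout and the metadata keys (for example the 'folders' entry) are assumptions based on the description above; adapt them to the actual deployment-metadata.json you downloaded.

```python
import argparse, json, os, shutil, subprocess

parser = argparse.ArgumentParser(description='Custom deploy block')
parser.add_argument('--metadata', type=str, required=True)
args = parser.parse_args()

# The metadata file describes the deployment and (per the docs) the input/output folders.
with open(args.metadata, 'r') as f:
    metadata = json.load(f)

input_dir = metadata['folders']['input']     # assumption: folder info lives under 'folders'
output_dir = metadata['folders']['output']

print('Building standalone application for project:', metadata.get('project', {}).get('name'))

# Combine the base application with the SDK, model parameters and TFLite model...
app_dir = '/app'
for d in ['edge-impulse-sdk', 'model-parameters', 'tflite-model']:
    shutil.copytree(os.path.join(input_dir, d), os.path.join(app_dir, d), dirs_exist_ok=True)

# ...then invoke the Makefile that ships with the base application.
subprocess.check_call(['make', '-j'], cwd=app_dir)

# Zip the result into the output folder so the Studio can offer it as a download.
os.makedirs(output_dir, exist_ok=True)
shutil.make_archive(os.path.join(output_dir, 'deploy'), 'zip', app_dir)
```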
Next, we need to create a Dockerfile, which contains all dependencies for the build. These include GNU Make, a compiler, and both the build script and the base application.
Create a new file called custom-deploy-block/Dockerfile and add:
Dockerfile
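A hedged sketch of such a Dockerfile. The base image and package list are assumptions; the key idea is to install GNU Make and a compiler, copy in the base application and the build script, and run the script as the entry point.

```dockerfile
FROM ubuntu:20.04

ENV DEBIAN_FRONTEND=noninteractive

# Build tools: GNU Make, a C++ compiler, zip for packaging, and Python for the build script
RUN apt-get update && \
    apt-get install -y --no-install-recommends build-essential zip python3 && \
    rm -rf /var/lib/apt/lists/*

# The base application (unzipped under ./app) and the build script
COPY app /app
COPY build.py /build.py

ENTRYPOINT ["python3", "/build.py"]
```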
To test the build script we first build the container, then invoke it with the files from the input directory. Open a command prompt or terminal, navigate to the custom-deploy-block folder and:
Build the container:
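For example (the image tag is arbitrary):

```
docker build -t custom-deploy-block .
```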
Invoke the build script - this mounts the current directory in the container under /home, and then passes the downloaded metadata file to the container:
Voila. You now have an output folder which contains a ZIP file. Unzip output/deploy.zip and you have a standalone application which runs your impulse. If you run Linux you can invoke this application directly (grab some data from 'Live classification' for the features, see Running your impulse locally):
Or if you run Windows or macOS, you can use Docker to run this application:
With the deployment block ready you can make it available in Edge Impulse. Open a command prompt or terminal window, navigate to the folder you created earlier, and run:
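As with other custom blocks, this is done with the blocks tooling in the Edge Impulse CLI; the command is likely:

```
edge-impulse-blocks push
```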
This packages up your folder, sends it to Edge Impulse where it'll be built, and finally adds it to your organization. The deployment block is now available in Edge Impulse under Deployment blocks. You can go here to set the logo, update the description, and set extra command line parameters.
Deployment blocks do not have access to the internet by default. If you need this, or if you need to pull additional information from the project (e.g. access to DSP blocks), you can set the 'privileged' flag on a deployment block. This will enable outside internet access, and will pass in the project.apiKey parameter in the metadata (if a development API key is set) that you can use to authenticate with the Edge Impulse API.
The deployment block is automatically available for all organizational projects. Go to the Deployment page on a project, and you'll find a new section 'Custom targets'. Select your new deployment target and click Build.
And now you'll have a freshly built binary from your own deployment block!
Custom deployment blocks are a powerful tool for your organization. They let you build binaries for unreleased products, package up impulses as custom libraries, or let your customers deploy to private targets (if you add an external collaborator to a project they'll have access to the blocks as well). Because the deployment blocks are integrated with your project and hosted by Edge Impulse, everyone, from FAE to R&D developer, can now iterate on on-device models without getting your embedded engineers involved.
You can also use custom deployment blocks with the other organizational features, and can use this to set up powerful pipelines automating data ingestion from your cloud services, transforming raw data into ML-suitable data, training new impulses and then deploying back to your device - either through the UI, or via the API. If you're interested in deployment blocks or any of the other enterprise features, let us know!
Live classification lets you validate your model with data captured directly from any device or supported development board. This gives you a picture on how your model will perform with real world data. To achieve this, go to Live classification and connect the device or development board you want to capture data from.
All of your connected devices and sensors will appear under Devices as shown below. The devices can be connected through the Edge Impulse CLI or WebUSB:
To perform live classification using your phone, go to Devices and click Connect a new device then select "Use your mobile phone". Scan the QR code using your phone then click Switch to classification mode and start sampling.
To perform live classification using your computer, go to Devices and click Connect a new device then select "Use your computer". Give permissions on your computer then click Switch to classification mode and start sampling.
If you have selected the Classification learning block in the Create impulse page, a NN Classifier page will show up in the menu on the left. This page becomes available after you've extracted your features from your DSP block.
Tutorials
Want to see the Classification block in action? Check out our tutorials:
The basic idea is that a neural network classifier will take some input data, and output a probability score that indicates how likely it is that the input data belongs to a particular class.
So how does a neural network know what to predict? The neural network consists of a number of layers, each of which is made up of a number of neurons. The neurons in the first layer are connected to the neurons in the second layer, and so on. The weight of a connection between two neurons in a layer is randomly determined at the beginning of the training process. The neural network is then given a set of training data, which is a set of examples that it is supposed to predict. The network's output is compared to the correct answer and, based on the results, the weights of the connections between the neurons in the layer are adjusted. This process is repeated a number of times, until the network has learned to predict the correct answer for the training data.
A particular arrangement of layers is referred to as an architecture, and different architectures are useful for different tasks. This way, after a lot of iterations, the neural network learns; and will eventually become much better at predicting new data.
On this page, you can configure the model and the training process, and get an overview of your model's performance.
Number of training cycles: Each time the training algorithm makes one complete pass through all of the training data with back-propagation and updates the model's parameters as it goes, it is known as an epoch or training cycle.
Learning rate: The learning rate controls how much the model's internal parameters are updated during each step of the training process. You can also think of it as how fast the neural network will learn. If the network overfits quickly, you can reduce the learning rate.
Validation set size: The percentage of your training set held apart for validation; a good default is 20%.
Auto-balance dataset: Mixes in more copies of data from classes that are uncommon. This might help make the model more robust against overfitting if you have little data for some classes.
Depending on your project type, we may offer to choose between different architecture presets to help you get started.
The neural network architecture takes your extracted features as inputs and passes them through each layer of your architecture. In the classification case, the last layer used is a softmax layer. It is this last layer that gives the probability of belonging to each of the classes.
From the visual (simple) mode, you can add the following layers:
If you have advanced knowledge in machine learning and Keras, you can switch to the Expert Mode and access the full Keras API to use custom architectures:
This panel displays the output logs during the training. The previous training logs can also be retrieved from the Jobs tab in the Dashboard page (enterprise feature).
This section gives an overview of your model's performance and helps you evaluate your model. It can help you determine whether the model is capable of meeting your needs or whether you need to test other hyperparameters and architectures.
From the Last training performance panel you can retrieve your validation accuracy and loss.
The Confusion matrix is one of the most useful tools to evaluate a model. It tabulates all of the correct and incorrect responses a model produces given a set of data. The labels on the side correspond to the actual labels in each sample, and the labels on the top correspond to the predicted labels from the model.
The feature explorer, like in the processing block views, shows the distribution of your input features. On this page, you can visualize which samples have been correctly classified and which have not.
On-device performance: Based on the target you chose in the Dashboard page, we will output estimations for the inferencing time, peak RAM usage and flash usage. This will help you validate that your model will be able to run on your device based on its constraints.
Edge Impulse FOMO (Faster Objects, More Objects) is a novel machine learning algorithm that brings object detection to highly constrained devices. It lets you count objects, find the location of objects in an image, and track multiple objects in real-time using up to 30x less processing power and memory than MobileNet SSD or YOLOv5.
Tutorials
Want to see FOMO in action? Check out our Detect objects with centroids (FOMO) tutorial.
For example, FOMO lets you do 60 fps object detection on a Raspberry Pi 4:
And here's FOMO doing 30 fps object detection on an Arduino Nicla Vision (Cortex-M7 MCU), using 245K RAM.
You can find the complete Edge Impulse project with the beers vs. cans model, including all data and configuration here: https://studio.edgeimpulse.com/public/89078/latest.
So how does that work? First, a small primer. Let's say you want to detect whether you see a face in front of your sensor. You can approach this in two ways. You can train a simple binary classifier, which says either "face" or "no face", or you can train a complex object detection model which tells you "I see a face at this x, y point and of this size". Object detection is thus great when you need to know the exact location of something, or if you want to count multiple things (the simple classifier cannot do that) - but it's computationally much more intensive, and you typically need much more data for it.
The design goal for FOMO was to get the best of both worlds: the computational power required for simple image classification, but with the additional information on location and object count that object detection gives us.
The first thing to realize is that while the output of the image classifier is "face" / "no face" (and thus no locality is preserved in the outcome) the underlying neural network architecture consists of a number of convolutional layers. A way to think about these layers is that every layer creates a diffused lower-resolution image of the previous layer. E.g. if you have a 16x16 image the width/height of the layers may be:
16x16
4x4
1x1
Each 'pixel' in the second layer maps roughly to a 4x4 block of pixels in the input layer, and the interesting part is that locality is somewhat preserved. The 'pixel' in layer 2 at (0, 0) will roughly map back to the top left corner of the input image. The deeper you go in a normal image classification network, the less of this locality (or "receptive field") is preserved until you finally have just 1 outcome.
FOMO uses the same architecture, but cuts off the last layers of a standard image classification model and replaces this layer with a per-region class probability map (e.g. a 4x4 map in the example above). It then has a custom loss function which forces the network to fully preserve the locality in the final layer. This essentially gives you a heatmap of where the objects are.
The resolution of the heat map is determined by where you cut off the layers of the network. For the FOMO model trained above (on the beer bottles) we do this when the size of the heat map is 8x smaller than the input image (input image of 160x160 will yield a 20x20 heat map), but this is configurable. When you set this to 1:1 this actually gives you pixel-level segmentation and the ability to count a lot of small objects.
A difference between FOMO and other object detection algorithms is that it does not output bounding boxes, but it's easy to go from heat map to bounding boxes. Just draw a box around a highlighted area.
However, when working with early customers we realized that bounding boxes are merely an implementation detail of other object detection networks, and are not a typical requirement. Very often the size of objects is not important as cameras are in fixed locations (and objects thus fixed size), but rather you just want the location and the count of objects.
Thus, we now train on the centroids of objects. This makes it much easier to count objects that are close (every activation in the heat map is an object), and the convolutional nature of the neural network ensures we look around the centroid for the object anyway.
A downside of the heat map is that each cell acts as its own classifier. E.g. if your classes are "lamp", "plant" and "background" each cell will be either lamp, plant, or background. It's thus not possible to detect objects with overlapping centroids. You can see this in the Raspberry Pi 4 video above at 00:18 where the beer bottles are too close together. This can be solved by using a higher resolution heat map.
A really cool benefit of FOMO is that it's fully convolutional. If you set an image:heat map factor of 8 you can throw in a 96x96 image (outputs 12x12 heat map), a 320x320 image (outputs 40x40 heat map), or even a 1024x1024 image (outputs 128x128 heat map). This makes FOMO incredibly flexible, and useful even if you have very large images that need to be analyzed (e.g. in fault detection where the faults might be very, very small). You can even train on smaller patches, and then scale up during inference.
Additionally FOMO is compatible with any MobileNetV2 model. Depending on where the model needs to run you can pick a model with a higher or lower alpha, and transfer learning also works (although you need to train your base models specifically with FOMO in mind). This makes it easy for end customers to use their existing models and fine-tune them with FOMO to also add locality (e.g. we have customers with large transfer learning models for wildlife detection).
Together this gives FOMO the capabilities to scale from the smallest microcontrollers all the way to full gateways or GPUs. Just some numbers:
The video on the top classifies 60 times / second on a stock Raspberry Pi 4 (160x160 grayscale input, MobileNetV2 0.1 alpha). This is 20x faster than MobileNet SSD which does ~3 frames/second.
The second video on the top classifies 30 times / second on an Arduino Nicla Vision board (Cortex-M7 MCU running at 480MHz) in ~240K of RAM (96x96 grayscale input, MobileNetV2 0.35 alpha).
During Edge Impulse Imagine we demonstrated a FOMO model running on a Himax WE-I Plus doing 14 frames per second on a DSP (video). This model ran in under 150KB of RAM (96x96 grayscale input, MobileNetV2 0.1 alpha). [1]
The smallest version of FOMO (96x96 grayscale input, MobileNetV2 0.05 alpha) runs in under 100KB of RAM at ~10 fps on a Cortex-M4F at 80MHz. [1]
[1] Models compiled using EON Compiler.
To build your first FOMO models:
Create a new project in Edge Impulse.
Make sure to set your labeling method to 'Bounding boxes (object detection)'.
Collect and prepare your dataset as in Object detection
Add an 'Object Detection (Images)' block to your impulse.
Under Images, select 'Grayscale'
Under Object detection, select 'Choose a different model' and select one of the FOMO models.
Make sure to lower the learning rate to 0.001 to start.
FOMO is currently compatible with all fully-supported development boards that have a camera, and with Edge Impulse for Linux (any client). Of course, you can export your model as a C++ Library and integrate it as usual on any device or development board, the output format of models is compatible with normal object detection models; and our SDK runs on almost anything under the sun (see Running your impulse locally for an overview) from RTOS's to bare-metal to special accelerators and GPUs.
Additional configuration for FOMO can be accessed via expert mode.
FOMO is sensitive to the ratio of objects to background cells in the labelled data. By default the configuration is to weight object output cells x100 in the loss function (object_weight=100), as a way of balancing what is usually a majority of background. This value was chosen as a sweet spot for a number of example use cases. In scenarios where the objects to detect are relatively rare this value can be increased, e.g. to 1000, to have the model focus even more on object detection (at the expense of potentially more false detections).
FOMO uses MobileNetV2 as a base model for its trunk and by default does a spatial reduction of 1/8th from input to output (e.g. a 96x96 input results in a 12x12 output). This is implemented by cutting MobileNet off at the intermediate layer block_6_expand_relu. Choosing a different cut_point results in a different spatial reduction; e.g. if we cut higher, at block_3_expand_relu, FOMO will instead only do a spatial reduction of 1/4 (i.e. a 96x96 input results in a 24x24 output).
Note, though, that this means taking much less of the MobileNet backbone, which results in a model with only half the parameters. Switching to a higher alpha may counteract this parameter reduction. Later FOMO releases will counter this parameter reduction with a UNet-style architecture.
FOMO can be thought of logically as the first section of MobileNetV2 followed by a standard classifier where the classifier is applied in a fully convolutional fashion.
In the default configuration this FOMO classifier is equivalent to a single dense layer with 32 nodes, followed by a classifier with num_classes outputs. For a three-way classifier, using the default cut point, the result is a classifier head with ~3,200 parameters.
We have the option of increasing the capacity of this classifier head by either 1) increasing the number of filters in the Conv2D layer, 2) adding additional layers, or 3) doing both. For example, we might change the number of filters from 32 to 16, as well as adding another convolutional layer, as follows.
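A hedged Keras sketch of that idea: the head is applied fully convolutionally on the cut-point feature map, using 1x1 convolutions. The layer shapes and how this plugs into FOMO's expert-mode code are assumptions for illustration only.

```python
import tensorflow as tf

num_classes = 3
# Stand-in for the MobileNetV2 feature map at the cut point (e.g. 12x12 spatial resolution).
trunk_output = tf.keras.Input(shape=(12, 12, 96))

# Default-style head: a single 32-filter 1x1 convolution acting as a per-cell "dense" layer.
# Modified head: two 16-filter 1x1 convolutions instead.
x = tf.keras.layers.Conv2D(16, kernel_size=1, activation='relu')(trunk_output)
x = tf.keras.layers.Conv2D(16, kernel_size=1, activation='relu')(x)
logits = tf.keras.layers.Conv2D(num_classes, kernel_size=1)(x)  # per-cell class probability map

head = tf.keras.Model(trunk_output, logits)
head.summary()
```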
For some problems an additional layer can improve performance, and in this case actually uses less parameters. It can though potentially take longer to train and require more data. In future releases the tuning of this aspect of FOMO can be handled by the EON Tuner.
Just like the rest of our Neural Network-based learning blocks, FOMO is delivered as a set of basic math routines free of runtime dependencies. This means that there are virtually no limitations to running FOMO, other than:
Making sure the model itself can fit into the target's memory (flash/RAM), and
making sure the target also has enough memory to hold the image buffer (flash/RAM) in addition to your application logic.
In all, we have seen buffer, model and app logic (including wireless stack) fit in as little as 200KB for 64x64 pixel images. But we would definitely recommend a target with at least 512KB so that you can take advantage of larger image sizes and a wider range of model optimizations.
With regards to latency, the speed of the target will determine the maximum number of frames that can be processed in a given interval (fps). This will of course be influenced by any other tasks the CPU may need to complete, but we have consistently seen MCUs running @ 80MHz complete a full pass on a 64x64 pixel image in under one second, which should translate to just under 1fps once you add the rest of your app logic. Keep in mind that frame throughput can increase dramatically at higher speeds or when tensor acceleration is available. We have measured 40-60 fps consistently on a Raspberry Pi 4 and ~15 fps on unaccelerated 480MHz targets. The table below summarizes this trade-off:
Training and deploying high-performing ML models is usually considered a continuous process rather than a one-time exercise. When you are validating your model and discover an overfit, you might consider adding some more diverse data and then retraining your model while maintaining the initially set DSP and neural network block configurations.
Also, during inference, if you find that the data distribution has drifted significantly from the initial training distribution, it is usually good practice to retrain your model on the newer data distribution to maintain high model performance.
The Retrain model feature in the Edge Impulse Studio is useful when adding new data to your project. It uses already known parameters from your selected DSP and ML blocks then uses them to automatically regenerate new features and retrain the Neural Network model in one single step. You can consider this a shortcut for retraining your model since you don’t need to go through all the blocks in your impulse one by one again.
To retrain your model after adding some data, navigate to the Retrain model tab and click Train model.
The EON Tuner helps you find and select the best embedded machine learning model for your application within the constraints of your target device. The EON Tuner analyzes your input data, potential signal processing blocks, and neural network architectures - and gives you an overview of possible model architectures that will fit your chosen device's latency and memory requirements.
EON Tuner Search Space
For many projects, you will need to constrain the EON Tuner to use steps that are defined by your hardware, by your customers or by your internal knowledge.
For example, you can be constrained to use a grayscale camera, your engineers have already worked on a dedicated digital signal processing method to pre-process your sensor data or you just have the feeling that a particular neural network architecture will be more suited for a project.
In those cases, you can use the EON Tuner Search Space to define the scope of your project.
First, make sure you have an audio, motion, or image classification project in your Edge Impulse account to run the EON Tuner with. No projects yet? Follow one of our tutorials to get started:
Log in to the Edge Impulse Studio and open a project.
Select the EON Tuner tab.
Click the Configure target button to select your model’s dataset category, target device, and time per inference (in ms).
Click on the Dataset category dropdown and select the use case unique to your motion, audio, or image classification project.
Click Save and then select Start EON Tuner
Wait for the EON Tuner to finish running, then click the Select button next to your preferred DSP/neural network model architecture to save as your project’s primary blocks:
Now you’re ready to deploy your automatically configured Edge Impulse model to your target edge device!
The EON Tuner performs end-to-end optimizations, from the digital signal processing (DSP) algorithm to the machine learning model, helping you find the ideal trade-off between these two blocks to achieve optimal performance on your target hardware. The unique features and options available in the EON Tuner are described below.
The Tuner can directly analyze the performance on any device fully supported by Edge Impulse. If you are targeting a different device, select a similar class of processor or leave the target as the default. You'll have the opportunity to further refine the EON tuner results to fit your specific target and application later.
The EON Tuner currently supports three different types of sensor data: motion, images, and audio. From these, the tuner can optimize for different types of common applications or dataset categories.
The EON Tuner evaluates different configurations for creating samples from your dataset. For time series data, the tuner tests different sample window sizes and increment amounts. For image data, the tuner compares different image resolutions.
Depending on the selected dataset category, the EON Tuner considers a variety of Processing blocks when evaluating model architectures. The EON Tuner will test different parameters and configurations of these processing blocks.
Different model architectures, hyperparameters, and even data augmentation techniques are evaluated by the EON Tuner. The tuner combines these different neural networks with the processing and input options described above, and then compares the end-to-end performance.
During operation, the tuner first generates many variations of input, processing, and learning blocks. It then schedules training and testing of each variation. The top-level progress bar shows tests started (blue stripes) as well as completed tests (solid blue), relative to the total number of generated variations.
Detailed logs of the run are also available. To view them, click on the button next to Target shown below.
As results become available, they will appear in the tuner window. Each result shows the on-device performance and accuracy, as well as details on the input, processing, and learning blocks used. Clicking Select sets a result as your project's primary impulse, and from there you can view or modify the design in the Impulse Design tabs.
While the EON Tuner is running, you can filter results by job status, processing block, and learning block categories.
View options control what information is shown in the tuner results. You can choose which dataset is used when displaying model accuracy, as well as whether to show the performance of the unoptimized float32 or the quantized int8 version of the neural network.
Sorting options are available to find the parameters best suited to a given application or hardware target. For constrained devices, sort by RAM to show options with the smallest memory footprint, or sort by latency to find models with the lowest number of operations per inference. It's also possible to sort by label, finding the best model for identifying a specific class.
The selected sorting criteria will be shown in the top left corner of each result.
When collecting data, we split the dataset into training and testing sets. The model is trained with only the training set, and the testing set is used to validate how well the model performs on unseen data. This ensures that the model has not learned to overfit the training data, which is a common occurrence.
To test your model, go to Model testing, and click Test all. The model will classify all of the test set samples and give you an overall accuracy of how your model performed.
This is also accompanied by a confusion matrix to show you how your model performs for each class.
To see a classification in detail, go to the individual sample you want to evaluate, click the three dots next to it, then select Show classification. This opens a new window that displays the expected outcome and the predicted output of your model with its accuracy. This detailed view can also give you a hint on why an item has been misclassified.
Every learning block has a threshold. This can be the minimum confidence that a neural network needs to have, or the maximum anomaly score before a sample is tagged as an anomaly. You can configure these thresholds to tweak the sensitivity of these learning blocks. This affects both live classification and model testing.
Since the creation of Edge Impulse, we have been helping customers to deal with complex data pipelines, complex data transformation methods and complex clinical validation studies.
In most cases, before even thinking about machine learning algorithms, researchers need to build quality datasets from real-world data. These data come from various devices (prototype devices being developed vs. clinical/industrial-grade reference devices), have different formats (Excel sheets, images, CSV, JSON, etc.), and may be stored in various places (a researcher's computer, Dropbox folders, Google Drive, S3 buckets, etc.).
Dealing with such complex data infrastructure is time-consuming and expensive to develop and maintain. With this Research data section, we want to help you understand how to create a full research data pipeline by:
We have built a health reference design that describes an end-to-end ML workflow for building a wearable health product using Edge Impulse. It covers an activity study in a research lab, where data is recorded from the wearable end device (PPG + accelerometer), a reference device (Polar H10 HR monitor), plus labels (e.g. sitting, running, biking). The data is collected and validated, then written to a research dataset in an Edge Impulse organization, and finally imported into an Edge Impulse project where we train a classifier.
It handles data coming from multiple sources, data alignment, and a multi-stage pipeline before the data is imported into an Edge Impulse project. We won't cover all the code snippets in detail; our solution engineers can help you set up this end-to-end ML workflow.
Upload portals are a secure way to let external parties upload data to your datasets. Through an upload portal they get an easy user interface to add data, but they have no access to the content of the dataset, nor can they delete any files. Data that is uploaded through the portal can be stored on-premise or in your own cloud infrastructure.
In this tutorial we'll set up an upload portal, show you how to add new data, and how to show this data in Edge Impulse for further processing.
Only available for enterprise customers
Organizational features are only available for enterprise customers. for more information.
Data is stored in storage buckets, which can either be hosted by Edge Impulse, or in your own infrastructure.
With your storage bucket configured you're ready to set up your first upload portal. In your organization go to Data > Upload portals and choose Create new upload portal. Here, select a name, a description, the storage bucket, and a path in the storage bucket.
Note: You'll need to enable CORS headers on the bucket. If these are not configured you'll get prompted with instructions. Talk to your user success engineer (when your data is hosted by Edge Impulse), or your system administrator to configure this.
After your portal is created a link is shown. This link contains an authentication token, and can be shared directly with the third party.
Click the link to open the portal. If you ever forget the link: no worries. Click the ⋮ next to your portal, and choose View portal.
To upload data you can now drag & drop files or folders to the drop zone on the right, or use Create new folder to first create a folder structure. There's no limit to the number of files you can upload here, and all files are hashed, so if you upload a file that's already present it will be skipped.
Note: Files with the same name but with a different hash are overwritten.
Mount the portal directly into a transformation block via Custom blocks > Transformation blocks > Edit block, and select the portal under mount points.
Here's a Python script which uploads, lists and downloads data to a portal. To upload data you'll need to authenticate with a JWT token, see below this script for more info.
And here's a script to generate JWT tokens:
Custom blocks are cloud jobs that can be hosted and used on Edge Impulse. They serve a dedicated task, are extremely flexible, let you customize your experience, and shorten your time-to-market.
Transformation blocks - to fetch, sort, validate, combine and transform existing data into robust datasets that can be imported into your projects.
Deployment blocks - to create custom deployment targets for your products.
Processing (DSP) blocks - to create and host your custom signal processing techniques and use them directly in your projects.
Learning (ML) blocks - to use your custom models and load pre-trained weights with PyTorch, Keras or scikit-learn.
Organizational datasets contain a powerful query system which lets you explore and slice data. You control the query system through the 'Filter' text box, using a language that is very similar to SQL.
Only available for enterprise customers
Organizational features are only available for enterprise customers. for more information.
For example, here are some queries that you can make:
dataset like '%AMS Activity Study%' - returns all items and files from the study.
bucket_name = 'edge-impulse-health-reference-design' AND --labels sitting,walking - returns data whose label is 'sitting' and 'walking', and that is stored in the 'edge-impulse-health-reference-design' bucket.
metadata->ei_check = 0 - returns data that has a metadata field 'ei_check' which is '0'.
created > DATE('2022-08-01') - returns all data that was created after Aug 1, 2022.
After you've created a filter, you can select one or more data items, and select Actions...>Download selected to create a ZIP file with the data files. The file count reflects the number of files returned by the filter.
The previous queries all returned all files for a data item. But you can also query files through the same filter. In that case the data item will be returned, but only with the files selected. For example:
file_name LIKE '%.png' - returns all files that end with .png.
If you have an interesting query that you'd like to share with your colleagues, you can just share the URL. The query is already added to it automatically.
These are all the available fields in the query interface:
dataset - Dataset.
bucket_id - Bucket ID.
bucket_name - Bucket name.
bucket_path - Path of the data item within the bucket.
id - Data item ID.
name - Data item name.
total_file_count - Number of files for the data item.
total_file_size - Total size of all files for the data item.
created - When the data item was created.
metadata->key - Any item listed under 'metadata'.
file_name - Name of a file.
file_names - All filenames in the data item, that you can use in conjunction with CONTAINS. E.g. find all items with file X, but not file Y: file_names CONTAINS 'x' AND not file_names CONTAINS 'y'.
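For instance, several of the fields above can be combined into a single filter. The values below are illustrative, reusing only the example values already shown in this section:

```
dataset like '%AMS Activity Study%' AND metadata->ei_check = 0 AND created > DATE('2022-08-01') AND file_name LIKE '%.png'
```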
After training and validating your model, you can now deploy it to any device. This makes the model run without an internet connection, minimizes latency, and runs with minimal power consumption.
The Deployment page consists of a variety of deploy options to choose from depending on your target device. Regardless of whether you are using a fully supported development board or not, Edge Impulse provides a C++ library deploy option that you can use to deploy your model on any target (as long as the target has enough compute to handle the task).
The following are the 5 main categories of deploy options currently supported by Edge Impulse:
Deploy as a customizable library
Deploy as a pre-built firmware - for fully supported development boards
Run directly on your phone or computer
Use Edge Impulse for Linux for Linux targets
Create a custom deployment block (Enterprise feature)
This deploy option lets you turn your impulse into a fully optimized source code that can be further customized and integrated with your application. This option supports the following libraries:
You can run your impulse locally as an Arduino library. This packages all of your signal processing blocks, configuration and learning blocks up into a single package.
To deploy as an Arduino library, select Arduino library on the Deployment page and click Build to create the library. Download the .ZIP file and import it as a library in the Arduino IDE, then run your application.
You can run your Impulse as a C++ library. This packages all of your signal processing blocks, configuration and learning blocks up into a single package that can be easily ported to your custom applications.
If you want to deploy your impulse to an STM32 MCU, you can use the Cube.MX CMSIS-PACK. This packages all your signal processing blocks, configuration and learning blocks up into a single package. You can include this package in any STM32 project with a single function call.
When you want to deploy your impulse to a web app you can use the WebAssembly library. This packages all your signal processing blocks, configuration and learning blocks up into a single package that can run without any compilation.
For this option, you can use a ready-to-go binary for your development board that bundles signal processing blocks, configuration and learning blocks up into a single package. This option is currently only available for fully supported development boards as shown in the image below:
To deploy your model using a ready-to-go binary, select your target device and click Build. Flash the downloaded firmware to your device, then run the following command:
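For boards flashed with a Studio-built binary this is typically the impulse runner from the Edge Impulse CLI:

```
edge-impulse-run-impulse
```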
The impulse runner shows the results of your impulse running on your development board. This only applies to ready-to-go binaries built from the studio.
If you are developing for Linux based devices, you can use Edge Impulse for Linux for deployment. It contains tools which let you collect data from any microphone or camera, can be used with the Node.js, Python, Go and C++ SDKs to collect new data from any sensor, and can run impulses with full hardware acceleration - with easy integration points to write your own applications.
You can run your impulse directly on your computer or mobile phone without the need for an additional app. To run on your computer, simply select 'Computer' and click 'Switch to classification mode'. To run on your mobile phone, select 'Mobile phone', scan the QR code, and click 'Switch to classification mode'.
To activate the EON Compiler, select your preferred deployment option, enable the EON™ Compiler option, and click Build to build your impulse for deployment.
To give you an idea of how your impulse will utilize the compute resources of your target device, Edge Impulse also estimates the latency, flash, and RAM your impulse will consume, even before you deploy it locally. This can save you a lot of engineering time otherwise spent on repeated iterations and experiments.
You can also select whether to run the unquantized float32 or the quantized int8 models as shown in the image below.
The above confusion matrix is based only on the test data, to show how your model performs on unseen real-world data. It can also help you determine whether your model has overfit the training data, which is a common occurrence.
Transformation blocks take raw data from your organizational datasets and convert the data into a different dataset or files that can be loaded in an Edge Impulse project. You can use transformation blocks to only include certain parts of individual data files, calculate long-running features like a running mean or derivatives, or efficiently generate features with different window lengths. Transformation blocks can be written in any language, and run on the Edge Impulse infrastructure.
In this tutorial we build a Python-based transformation block that loads Parquet files, calculates features from the Parquet file, and then writes a new file back to your dataset. If you haven't done so, go through first.
Only available for enterprise customers
Organizational features are only available for enterprise customers. for more information.
You'll need:
The Edge Impulse CLI. If you receive any warnings that's fine. Run edge-impulse-blocks afterwards to verify that the CLI was installed correctly.
The gestures.parquet file, which you can use to test the transformation block. This contains some data from the dataset in Parquet format.
Transformation blocks use Docker containers, a virtualization technique which lets developers package up an application with all dependencies in a single package. If you want to test your blocks locally you'll also need (this is not a requirement) Docker desktop installed on your machine.
1.1 - Parquet schema
This is the Parquet schema for the gestures.parquet file which we'll transform:
To build a transformation block open a command prompt or terminal window, create a new folder, and run:
This will prompt you to log in, and enter the details for your block. E.g.:
Then, create the following files in this directory:
2.1 - Dockerfile
We're building a Python-based transformation block. The Dockerfile describes our base image (Python 3.7.5), our dependencies (in requirements.txt) and which script to run (transform.py).
Note: Do not use a WORKDIR under /home! The /home path will be mounted in by Edge Impulse, making your files inaccessible.
ENTRYPOINT vs RUN / CMD
If you use a different programming language, make sure to use ENTRYPOINT to specify the application to execute, rather than RUN or CMD.
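As a rough sketch based on the description above (file names follow this tutorial; adapt as needed for your own block), the Dockerfile could look like this:

```dockerfile
# Python base image as described above
FROM python:3.7.5

# Keep the working directory outside /home (see the note above)
WORKDIR /app

# Install the dependencies listed in requirements.txt
COPY requirements.txt ./
RUN pip3 install --no-cache-dir -r requirements.txt

# Copy the rest of the block's source code
COPY . ./

# Use ENTRYPOINT (not RUN or CMD) to specify the application to execute
ENTRYPOINT [ "python3", "transform.py" ]
```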
2.2 - requirements.txt
This file describes the dependencies for the block. We'll be using pandas and pyarrow to parse the Parquet file, and numpy to do some calculations.
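A matching requirements.txt simply lists those three packages. The version pins below are illustrative, not the tutorial's exact versions:

```
pandas==1.3.5
pyarrow==6.0.1
numpy==1.21.5
```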
2.3 - transform.py
This file includes the actual application. Transformation blocks are invoked with the following parameters (as command line arguments):
--in-file or --in-directory - A file (if the block operates on a file), or a directory (if the block operates on a data item) from the organizational dataset. In this case the gestures.parquet file.
--out-directory - Directory to write files to.
--hmac-key - You can use this HMAC key to sign the output files. This is not used in this tutorial.
--metadata - Key/value pairs containing the metadata for the data item, plus additional metadata about the data item in the dataItemInfo key. E.g.:
{ "subject": "AAA001", "ei_check": "1", "dataItemInfo": { "id": 101, "dataset": "Human Activity 2022", "bucketName": "edge-impulse-tutorial", "bucketPath": "janjongboom/human_activity/AAA001/", "created": "2022-03-07T09:20:59.772Z", "totalFileCount": 14, "totalFileSize": 6347421 } }
Add the following content. This takes in the Parquet file, groups data by their label, and then calculates the RMS over the X, Y and Z axes of the accelerometer.
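The tutorial's exact script is not reproduced here, but a minimal sketch of such a transform.py could look like the following. The column names X, Y, Z and label, and the output file name, are assumptions about the schema rather than the tutorial's exact code:

```python
import argparse
import os

import numpy as np
import pandas as pd

# Transformation blocks are invoked with these command line arguments (see above)
parser = argparse.ArgumentParser(description='Gestures RMS transformation (sketch)')
parser.add_argument('--in-file', type=str, required=True)
parser.add_argument('--out-directory', type=str, required=True)
parser.add_argument('--hmac-key', type=str, required=False)
parser.add_argument('--metadata', type=str, required=False)
args, _ = parser.parse_known_args()

os.makedirs(args.out_directory, exist_ok=True)

# Load the Parquet file (pandas reads it via pyarrow)
df = pd.read_parquet(args.in_file)

# Group by label and calculate the RMS over the X, Y and Z accelerometer axes
rms = df.groupby('label')[['X', 'Y', 'Z']].apply(
    lambda g: np.sqrt(np.mean(np.square(g), axis=0)))

# Write a new Parquet file to the output directory
out_file = os.path.join(args.out_directory, 'gestures-rms.parquet')
rms.reset_index().to_parquet(out_file)
print('Written output to', out_file)
```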
2.4 - Building and testing the container
On your local machine
To test the transformation block locally, if you have Python and all dependencies installed, just run:
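With the file names used in this tutorial, that invocation would look something like:

```
python3 transform.py --in-file gestures.parquet --out-directory out/
```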
Docker
You can also build the container locally via Docker, and test the block. The added benefit is that you don't need any dependencies installed on your local computer, and can thus test that you've included everything that's needed for the block. This requires Docker desktop to be installed.
To build the container and test the block, open a command prompt or terminal window and navigate to the source directory. First, build the container:
Then, run the container (make sure gestures.parquet is in the same directory):
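As a sketch (the image tag is arbitrary and the mount path is an assumption, not the tutorial's exact commands), these two steps could look like:

```
docker build -t gestures-transform .
docker run --rm -v "$PWD:/data" gestures-transform --in-file /data/gestures.parquet --out-directory /data/out
```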
Seeing the output
This process has generated a new Parquet file in the out/ directory containing the RMS of the X, Y and Z axes. If you inspect the content of the file (e.g. using parquet-tools) you'll see the output:
Success!
With the block ready we can push it to your organization. Open a command prompt or terminal window, navigate to the folder you created earlier, and run:
This packages up your folder, sends it to Edge Impulse where it'll be built, and finally adds the block to your organization.
The transformation block is now available in Edge Impulse under Data transformation > Transformation blocks.
If you make any changes to the block, just re-run edge-impulse-blocks push and the block will be updated.
Next, upload the gestures.parquet file, by going to Data > Add data... > Add data item, setting name as 'Gestures', dataset to 'Transform tutorial', and selecting the Parquet file.
This makes the gestures.parquet file available from the Data page.
With the Parquet file in Edge Impulse and the transformation block configured you can now create a new job. Go to Data, and select the Parquet file by setting the filter to dataset = 'Transform tutorial'.
Click the checkbox next to the data item, and select Transform selected (1 file). On the 'Create transformation job' page select 'Import data into Dataset'. Under 'output dataset', select 'Same dataset as source', and under 'Transformation block' select the new transformation block.
Click Start transformation job to start the job. This pulls the data in, starts a transformation job and finally uploads the data back to your dataset. If you have multiple files selected the transformations will also run in parallel.
You can now find the transformed file back in your dataset:
Updating metadata from a transformation block
You can update the metadata of blocks directly from a transformation block by creating an ei-metadata.json file in the output directory. The metadata is then applied to the new data item automatically when the transform job finishes. The ei-metadata.json file has the following structure:
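The exact specification is not reproduced here, but a minimal sketch could look like the following. The version field and the example metadata keys are assumptions; the action field is described in the notes below:

```json
{
    "version": 1,
    "action": "add",
    "metadata": {
        "ei_check": "1",
        "processed_by": "rms-transform"
    }
}
```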
Some notes:
If action is set to add, the metadata keys are added to the data item. If action is set to replace, all existing metadata keys are removed.
Environmental variables
Transformation blocks get access to the following environmental variables, which let you authenticate with the Edge Impulse API. This way you don't have to inject these credentials into the block. The variables are:
EI_API_KEY - an API key with 'member' privileges for the organization.
EI_ORGANIZATION_ID - the organization ID that the block runs in.
EI_API_ENDPOINT - the API endpoint (default: https://studio.edgeimpulse.com/v1).
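For example, a block could use these variables to call the Edge Impulse organization API. This is only a sketch: the endpoint path below is an assumption, so check the API reference for the exact route:

```python
import os

import requests  # assumed to be listed in the block's dependencies

api_key = os.environ['EI_API_KEY']
org_id = os.environ['EI_ORGANIZATION_ID']
endpoint = os.environ.get('EI_API_ENDPOINT', 'https://studio.edgeimpulse.com/v1')

# Example: list data items in the organization (path is an assumption)
r = requests.get(f'{endpoint}/api/organizations/{org_id}/data',
                 headers={'x-api-key': api_key})
r.raise_for_status()
print(r.json())
```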
Within an organization you can work on one or more projects with multiple people. These can be colleagues, outside researchers, or even members of the community. They will only get access to the specific data in the project, and not to any of the raw data in your organizational datasets.
Only available for enterprise customers
Organizational features are only available for enterprise customers. for more information.
To invite a user to an organization, click on the "Add user" button, enter the email address and select the role:
Each one of the users can have different roles:
Admins have full rights on the organization.
Members have full access to the datasets and custom blocks, but cannot join a project without being invited.
Guests have only limited access to the selected datasets.
To give someone access, go to your project's dashboard, and find the "Collaborators" widget. Click the '+' icon, and type the username or e-mail address of the other user. This user needs to have an Edge Impulse account already.
Only available for enterprise customers
Organizational features are only available for enterprise customers. for more information.
You can optionally show a check mark in the list of data items, and show a check list for data items. This can be used to quickly view which data items are complete (if you need to capture data from multiple sources) or whether items are in the right format.
Checklists look trivial, but are actually very powerful as they give quick insights into dataset issues. Missing these issues until after the study is done can be very expensive.
Checklists are written to ei-metadata.json and are automatically picked up by the UI.
Checklists are driven by the metadata for a data item. Set the ei_check metadata item to either 0 or 1 to show a check mark in the list. Set an ei_check_KEYNAME metadata item to 0 or 1 to show the item in the check list.
To query for items with or without a check mark, use a filter in the form of:
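For example, using the ei_check metadata key described above:

```
metadata->ei_check = 1
```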
For the reference design described and used in the previous pages, the combiner takes in a data item, and writes out:
A checklist, e.g.:
✔ - PPG file present
✔ - Accelerometer file present
✘ - Correlation between Polar/PPG HR is at least 0.5
If the checklist is OK, a combined.parquet file.
A hr.png file with the correlation between HR found from PPG, and HR from the reference device. This is useful for two reasons:
If the correlation is too low, we're looking at the wrong file, or data is missing.
It verifies whether the PPG => HR algorithm actually works.
This is the specification for the deployment-metadata.json file from .
In this section, we will show how to synchronize research data with a bucket in your organizational dataset. The goal of this step is to gather data from different sources and sort them to obtain a sorted dataset (that we will then validate in the next section).
Only available for enterprise customers
Organizational features are only available for enterprise customers. for more information.
The reference design described in the previous pages consists of 10 subjects performing 1.5 - 2 hours of activities in a research lab. Participants have a study ID (e.g. AMS_001) that is used to refer to the participant. For each participant we have 4 CSV files:
accelerometer.csv - data from the wearable end device.
ppg.csv - data from the wearable end device.
polar_h10.csv - reference data from a commercial reference device (Polar H10).
labels.csv - labels of the activity, as recorded by the research lab.
We've mimicked a proper research study, and have split the data up into two locations.
accelerometer.csv / ppg.csv - live in the company data lake in S3. The data lake uses an internal structure with non-human readable IDs for each participant (e.g. 2E93ZX for anonymized data):
polar_h10.csv / labels.csv - are uploaded by the research partner to an upload portal. The files are prefixed with the study ID:
To create the mapping between the study ID and the internal data lake ID we use a study master sheet. It contains information about all participants, ID mapping, and metadata. E.g.:
Notes: This master sheet was made using a Google Sheet but can be anything. All data (data lake, portal, output) are hosted in an Edge Impulse S3 bucket but can be stored anywhere (see below).
With the storage bucket in place you can create your first dataset. Datasets in Edge Impulse have three layers:
The dataset, a larger set of data items, grouped together.
Data item, an item with metadata and files attached.
Data file, the actual files.
No required format for data files
There is no required format for data files. You can upload data in any format, whether it's CSV, Parquet, or a proprietary data format.
There are three ways of uploading data into your organization. You can either:
Upload data directly to the storage bucket (recommended method). In this case use Add data... > Add dataset from bucket and the data will be discovered automatically.
Creating a new structure in S3 like this:
Syncing the S3 folder with a research dataset in your Edge Impulse organization (like AMS Activity Study 2022).
Updating the metadata with the metadata from the master sheet (Age, BMI, etc.).
With the data sorted we then:
Combine the data into a single Parquet file. This is essentially the contract we have for our dataset. By settling on a standard format (strongly typed, same column names everywhere) this data is now ready to be used for ML, new algorithm development, etc. Because we also add metadata for each file here, we're very quickly building up a valuable R&D datastore.
Building data pipelines is a very useful feature where you can stack several transformation blocks, similar to the . Pipelines can be used in standalone mode (just execute several transformation jobs in a pipeline), to feed a dataset, or to feed a project.
Only available for enterprise customers
Organizational features are only available for enterprise customers. for more information.
The examples in the screenshots below show how to create and use a pipeline to create the 'AMS Activity 2022' dataset.
To create a new pipeline, click on '+ Add a new pipeline':
In your organization workspace, go to Custom blocks -> Transformation and select Run job on the job you want to add.
Select Copy as pipeline step and paste it to the configuration json file.
You can then paste the copied step directly into the corresponding field.
Below, you have the option to feed the data to either an organization dataset or an Edge Impulse project.
By default, your pipeline will run every day. To schedule your pipeline jobs, click on the ⋮ button and select Edit pipeline.
Once the pipeline has successfully finished, it can send an email to the users listed under 'Users to notify'.
Once your pipeline is set, you can run it directly from the UI, from external sources or by scheduling the task.
To run your pipeline from the Edge Impulse Studio, click on the ⋮ button and select Run pipeline now.
To run your pipeline from code, click on the ⋮ button and select Run pipeline from code. This will display an overlay with curl, Node.js and Python code samples.
You will need to create an API key to run the pipeline from code.
Another useful feature is to create a webhook that calls a URL when the pipeline has finished running. It will run a POST request containing the following information:
Click on the DSP and Neural Network tabs within your Edge Impulse project to see the parameters the EON Tuner has generated and selected for your dataset, use case, and target device hardware.
If you want to process data in a portal as part of a you can either:
Mount the bucket that the portal is in, as a transformation block. This will also give you access to all other data in the bucket, very useful if you need to sync other data (see ).
If the data in your portal is already in the right format you can also directly import the uploaded data to your project. In your project view, go to , select 'Upload portal' and follow the steps of the wizard:
If you need a secure way for external parties to contribute data to your datasets then upload portals are the way to go. They offer a friendly user interface, upload data directly into your storage buckets, and give you an easy way to use the data directly in Edge Impulse.
Any questions, or interested in the enterprise version of Edge Impulse? for more information.
For a full tutorial on how to run your impulse locally as an Arduino library, have a look at .
Visit for a deep dive on how to deploy your impulse as a C++ library.
Have a look at for a deep dive on how to deploy your impulse on STM32 based targets using the Cube.MX CMSIS-PACK.
Have a look at for a deep dive on how you can run your impulse to classify sensor data in your Node.js application.
For a deep dive on how to deploy your impulse to Linux targets using Edge Impulse for Linux, you can visit the .
When building your impulse for deployment, Edge Impulse gives you the option of adding another layer of optimization to your impulse using the . The EON Compiler lets you run neural networks in 25-55% less RAM, and up to 35% less flash, while retaining the same accuracy, compared to TensorFlow Lite for Microcontrollers.
Transformation blocks are a powerful feature which let you set up a data pipeline to turn raw data into actionable machine learning features. It also gives you a reproducible way of transforming many files at once, and is programmable through the so you can automatically convert new incoming data. If you're interested in transformation blocks or any of the other enterprise features,
To make it easy to create these lists on the fly you can set these metadata items directly from a .
Data is stored in storage buckets, which can either be hosted by Edge Impulse, or in your own infrastructure. If you choose to host the data yourself your infrastructure should be available through the , and you are responsible for setting up proper backups. To configure a new storage bucket, head to your organization, choose Data > Buckets, click Add new bucket, and fill in your access credentials. Our solution engineers are also here to help you set up the buckets for you.
Upload data through the .
Upload the files through the .
The sorter is the first step of the . Its job is to fetch the data from all locations (here: internal data lake, portal, metadata from study master sheet) and create a research dataset in Edge Impulse. It does this by:
Need to verify that the data is correct (see )
All these steps can be run through different and executed one after the other using .
| Requirement | Minimum | Recommended |
| --- | --- | --- |
| Memory footprint (RAM) | 256 KB - 64x64 pixels (B&W, buffer included) | ≥ 512 KB - 96x96 pixels (B&W, buffer included) |
| Latency (100% load) | 80 MHz - < 1 fps | > 80 MHz + acceleration - ~15 fps @ 480 MHz, 40-60 fps on a Raspberry Pi 4 |
This is a list of development boards that are fully supported by Edge Impulse. These boards come with a special firmware which enables data collection from all their sensors, allows you to build new ready-to-go binaries that include your trained impulse, and come with examples on integrating your impulse with your custom firmware. These boards are the perfect way to start building machine learning solutions on real embedded hardware.
Different development board or custom PCB? No problem! You can upload data to Edge Impulse in a variety of ways, such as using the Data forwarder, the Edge Impulse for Linux SDK, or by uploading files directly (e.g. CSV, JPG, WAV).
From there, your trained model can be deployed as a C++ library. It requires some effort, but most build systems (for computers, smartphones, and microcontrollers) will work with our C++ library. This, of course, requires that your build system has a C++ compiler and that there is enough flash/RAM on your device to run the library/model. Also, if you feel like porting the official Edge Impulse firmware to your own board, use this porting guide.
Just want to experience Edge Impulse? You can also use your Mobile phone!
* May not be available on the board and may require attaching additional sensors; see the dedicated page for more information.
** RAM used for the latency calculation, may differ from the datasheet.
*** ROM used for the latency calculation, may differ from the datasheet.
Different development board or different sensors? No problem, you can always collect data using the Data forwarder or the Edge Impulse for Linux SDK, and deploy your model back to the device with the Running your impulse locally tutorials. Also, if you feel like porting your board, use this Porting guide.
The Nicla Vision is a ready-to-use, standalone camera for analyzing and processing images on the Edge. Thanks to its 2MP color camera, smart 6-axis motion sensor, integrated microphone, and distance sensor, it is suitable for asset tracking, object recognition, and predictive maintenance. Some of its key features include:
Powerful microcontroller equipped with a 2MP color camera
Tiny form factor of 22.86 x 22.86 mm
Integrated microphone, distance sensor, and intelligent 6-axis motion sensor
Onboard Wi-Fi and Bluetooth® Low Energy connectivity
Standalone when battery-powered
Expand existing projects with sensing capabilities
Enable fast Machine Vision prototyping
Compatible with Nicla, Portenta, and MKR products
Its exceptional capabilities are supported by a powerful STMicroelectronics STM32H747AII6 Dual ARM® Cortex® processor, combining an M7 core up to 480 MHz and an M4 core up to 240 MHz. Despite its industrial strength, it keeps energy consumption low for battery-powered standalone applications.
The Arduino Nicla Vision is available for around 95 EUR from the Arduino Store.
To set this device up in Edge Impulse, you will need to install the following software:
Here's an instruction video for Windows.
The Arduino website has instructions for macOS and Linux.
On Linux:
GNU Screen: install for example via sudo apt install screen.
Problems installing the CLI?
See the Installation and troubleshooting guide.
There are two ways to connect the Nicla Vision to Edge Impulse:
Using the official Edge Impulse firmware - it supports all onboard sensors, including camera.
Using an ingestion script. This supports analog, IMU, proximity sensors and microphone (limited to 8 kHz), but not the camera. It is only recommended if you want to modify the ingestion flow for third-party sensors.
Use a micro-USB cable to connect the development board to your computer. Under normal circumstances, the flash process should work without entering the bootloader manually. However, if you run into difficulties flashing the board, you can enter the bootloader by pressing RESET twice. The onboard LED should start pulsating to indicate this.
The development board does not come with the right firmware yet. To update the firmware:
Download the latest Edge Impulse firmware, and unzip the file.
Open the flash script for your operating system (flash_windows.bat, flash_mac.command or flash_linux.sh) to flash the firmware.
Wait until flashing is complete, and press the RESET button once to launch the new firmware.
From a command prompt or terminal, run:
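For a board running the official Edge Impulse firmware, the command in question is typically the Edge Impulse CLI daemon:

```
edge-impulse-daemon
```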
This will start a wizard which will ask you to log in and choose an Edge Impulse project. If you want to switch projects run the command with --clean
.
Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI. See this blog post for more information.
Use a micro-USB cable to connect the development board to your computer.
The development board does not come with the right firmware yet. To update the firmware:
Download the latest Edge Impulse ingestion sketches and unzip the file.
Open the nicla_vision_ingestion.ino (for the IMU/proximity sensor) or nicla_vision_ingestion_mic.ino (for the microphone) sketch in a text editor or the Arduino IDE.
For IMU/proximity sensor data ingestion into your Edge Impulse project, at the top of the file, select 1 or multiple sensors by un-commenting the defines and select the desired sample frequency (in Hz). For example, for the accelerometer sensor:
For microphone data ingestion, you do not need to change the default parameters in the nicla_vision_ingestion_mic.ino
sketch.
Then, from your sketch's directory, run the Arduino CLI to compile:
Then flash to your Nicla Vision using the Arduino CLI:
Alternatively if you open the sketch in the Arduino IDE, you can compile and upload the sketch from there.
Wait until flashing is complete, and press the RESET button once to launch the new firmware.
From a command prompt or terminal, run:
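Because the ingestion sketch streams sensor values over serial, the command used here is typically the data forwarder from the Edge Impulse CLI:

```
edge-impulse-data-forwarder
```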
This will start a wizard which will ask you to log in, and choose an Edge Impulse project. You will also name your sensor's axes (depending on which sensor you selected in your compiled nicla_vision_ingestion.ino
sketch). If you want to switch projects/sensors run the command with --clean
. Please refer to the table below for the names used for each axis corresponding to the type of sensor:
Note: These exact axis names are required for the Edge Impulse Arduino library deployment example applications for the Nicla Vision.
From a command prompt or terminal, run:
This will start a wizard which will ask you to log in and choose an Edge Impulse project. You will also name your sensor axes - in the case of the microphone, you need to enter audio
. If you want to switch projects/sensors run the command with --clean
.
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
The above screenshots are for Edge Impulse Ingestion scripts and Data forwarder. If you use the official Edge Impulse firmware for the Nicla Vision, the content will be slightly different.
With everything set up you can now build your first machine learning model with these tutorials:
Looking to connect different sensors? Use the nicla_vision_ingestion.ino
sketch and the Edge Impulse data forwarder to easily send data from any sensor on the Nicla Vision into your Edge Impulse project.
With the impulse designed, trained and verified you can deploy this model back to your Arduino Nicla Vision. This makes the model run without an internet connection, minimizes latency, and runs with minimum power consumption. Edge Impulse can package the complete impulse - including the signal processing code, neural network weights, and classification code - up into a single library that you can run on your development board.
Use the Running your impulse locally: On your Arduino tutorial and select one of the Nicla Vision examples.
The Himax WE-I Plus is a tiny development board with a camera, a microphone, an accelerometer and a very fast DSP - and it's fully supported by Edge Impulse. You'll be able to sample raw data, build models, and deploy trained machine learning models directly from the studio. It's available for around 65 USD from Sparkfun.
The Edge Impulse firmware for this development board is open source and hosted on GitHub: edgeimpulse/firmware-himax-we-i-plus.
To set this device up in Edge Impulse, you will need to install the following software:
On Linux:
GNU Screen: install for example via sudo apt install screen.
Problems installing the CLI?
See the Installation and troubleshooting guide.
With all the software in place it's time to connect the development board to Edge Impulse.
Use a micro-USB cable to connect the development board to your computer.
The development board does not come with the right firmware yet. To update the firmware:
Download the latest Edge Impulse firmware, and unzip the file.
Open the flash script for your operating system (flash_windows.bat, flash_mac.command or flash_linux.sh) to flash the firmware.
Wait until flashing is complete, and press the RESET button once to launch the new firmware.
From a command prompt or terminal, run:
This will start a wizard which will ask you to log in, and choose an Edge Impulse project. If you want to switch projects run the command with --clean
.
Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI. See this blog post for more information.
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
With everything set up you can now build your first machine learning model with these tutorials:
Looking to connect different sensors? The Data forwarder lets you easily send data from any sensor into Edge Impulse.
If you export to the Himax WE-I Plus you could receive the error: "All licenses are in use by other developers.". Unfortunately we have a limited number of licenses for the MetaWare compiler and these are shared between all Studio users. Try again in a little bit, or export your project as a C++ Library, add it to the edgeimpulse/firmware-himax-we-i-plus project and compile locally.
If no device shows up in your OS (ie: COMxx, /dev/tty.usbxx) after connecting the board and your USB cable supports data transfer, you may need to install FTDI VCP driver.
The Portenta H7 is a powerful development board from Arduino with both a Cortex-M7 microcontroller and a Cortex-M4 microcontroller, a BLE/WiFi radio, and an extension slot to connect the Portenta vision shield - which adds a camera and dual microphones. At the moment the Portenta H7 is partially supported by Edge Impulse, letting you collect data from the camera, build computer vision models, and deploy trained machine learning models back to the development board. The Portenta H7 and the vision shield are available directly from Arduino for ~$150 in total.
There are two versions of the vision shield: one that has an Ethernet connection and one with a LoRa radio. Both of these can be used with Edge Impulse.
The Edge Impulse firmware for this development board is open source and hosted on GitHub: edgeimpulse/firmware-arduino-portenta-h7.
To set this device up in Edge Impulse, you will need to install the following software:
Here's an instruction video for Windows.
The Arduino website has instructions for macOS and Linux.
On Linux:
GNU Screen: install for example via sudo apt install screen.
Problems installing the CLI?
See the Installation and troubleshooting guide.
With all the software in place it's time to connect the development board to Edge Impulse.
Mount the vision shield using the two edge connectors on the back of the Portenta H7.
Use a USB-C cable to connect the development board to your computer. Then, double-tap the RESET button to put the device into bootloader mode. You should see the green LED on the front pulsating.
The development board does not come with the right firmware yet. To update the firmware:
Download the latest Edge Impulse firmware, and unzip the file.
Double press on the RESET button on your board to put it in the bootloader mode.
Open the flash script for your operating system (flash_windows.bat, flash_mac.command or flash_linux.sh) to flash the firmware.
Wait until flashing is complete, and press the RESET button once to launch the new firmware.
From a command prompt or terminal, run:
This will start a wizard which will ask you to log in, and choose an Edge Impulse project. If you want to switch projects run the command with --clean
.
Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI. See this blog post for more information.
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
With everything set up you can now build your first machine learning model with these tutorials:
Looking to connect different sensors? The Data forwarder lets you easily send data from any sensor into Edge Impulse.
Download your custom firmware from the Deployment tab in the Studio and install the firmware with the same method as in the "Update the firmware" section and run the edge-impulse-run-impulse
command:
Note that it may take up to 10 minutes to compile the firmware for the Arduino Portenta H7.
Use the Running your impulse locally: On your Arduino tutorial and select one of the portenta examples:
For an end-to-end example that classifies data and then sends the result over LoRaWAN, please see the example-portenta-lorawan example.
If you come across this issue:
You probably forgot to double press the RESET button before running the flash script.
The Nicla Sense ME is a tiny, low-power tool that sets a new standard for intelligent sensing solutions. With the simplicity of integration and scalability of the Arduino ecosystem, the board combines four state-of-the-art sensors from Bosch Sensortec:
BHI260AP motion sensor system with integrated AI.
BMM150 magnetometer.
BMP390 pressure sensor.
BME688 4-in-1 gas sensor with AI and integrated high-linearity, as well as high-accuracy pressure, humidity and temperature sensors.
Designed to easily analyze motion and the surrounding environment – hence the “M” and “E” in the name – it measures rotation, acceleration, pressure, humidity, temperature, air quality and CO2 levels by introducing completely new Bosch Sensortec sensors on the market.
Its tiny size and robust design make it suitable for projects that need to combine sensor fusion and AI capabilities on the edge, thanks to a strong computational power and low-consumption combination that can even lead to standalone applications when battery operated.
The Arduino Nicla Sense ME is available for around 55 USD from the Arduino Store.
To set this device up in Edge Impulse, you will need to install the following software:
Here's an instruction video for Windows.
The Arduino website has instructions for macOS and Linux.
On Linux:
GNU Screen: install for example via sudo apt install screen.
Problems installing the CLI?
See the Installation and troubleshooting guide.
With all the software in place it's time to connect the development board to Edge Impulse.
Use a micro-USB cable to connect the development board to your computer.
The development board does not come with the right firmware yet. To update the firmware:
Open the nicla_sense_ingestion.ino
sketch in a text editor or the Arduino IDE.
For data ingestion into your Edge Impulse project, at the top of the file, select 1 or multiple sensors by un-commenting the defines and select a desired sample frequency (in Hz). For example, for the Environmental sensors:
Then, from your sketch's directory, run the Arduino CLI to compile:
Then flash to your Nicla Sense using the Arduino CLI:
Wait until flashing is complete, and press the RESET button once to launch the new firmware.
From a command prompt or terminal, run:
This will start a wizard which will ask you to log in, and choose an Edge Impulse project. You will also name your sensor's axes (depending on which sensor you selected in your compiled nicla_sense_ingestion.ino
sketch). If you want to switch projects/sensors run the command with --clean
. Please refer to the table below for the names used for each axis corresponding to the type of sensor:
Note: These exact axis names are required to run the Edge Impulse Arduino library deployment example applications for the Nicla Sense without any changes.
Else, when deploying the model, you will see an error like the following:
If your axis names are different, when using the generated Arduino Library for the inference, you can modify the eiSensors nicla_sensors[]
(near line 70) in the sketch example to add your custom names. e.g.:
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
With everything set up you can now build your first machine learning model with the Edge Impulse continuous motion recognition tutorial.
Looking to connect different sensors? Use the nicla_sense_ingestion
sketch and the Edge Impulse Data forwarder to easily send data from any sensor on the Nicla Sense into your Edge Impulse project.
With the impulse designed, trained and verified you can deploy this model back to your Arduino Nicla Sense ME. This makes the model run without an internet connection, minimizes latency, and runs with minimum power consumption. Edge Impulse can package the complete impulse - including the signal processing code, neural network weights, and classification code - up into a single library that you can run on your development board.
Use the Running your impulse locally: On your Arduino tutorial and select one of the Nicla Sense examples.
The Arduino Nano 33 BLE Sense is a tiny development board with a Cortex-M4 microcontroller, motion sensors, a microphone and BLE - and it's fully supported by Edge Impulse. You'll be able to sample raw data, build models, and deploy trained machine learning models directly from the studio. It's available for around 30 USD from Arduino and a wide range of distributors.
You can also use the Arduino Tiny Machine Learning Kit to run image classification models on the edge with the Arduino Nano and attached OV7675 camera module (or connect the hardware together via jumper wire and a breadboard if purchased separately).
The Edge Impulse firmware for this development board is open source and hosted on GitHub: edgeimpulse/firmware-arduino-nano-33-ble-sense.
To set this device up in Edge Impulse, you will need to install the following software:
Here's an instruction video for Windows.
The Arduino website has instructions for macOS and Linux.
On Linux:
GNU Screen: install for example via sudo apt install screen.
Problems installing the CLI?
See the Installation and troubleshooting guide.
With all the software in place it's time to connect the development board to Edge Impulse.
Use a micro-USB cable to connect the development board to your computer. Then press RESET twice to launch into the bootloader. The on-board LED should start pulsating to indicate this.
The development board does not come with the right firmware yet. To update the firmware:
Download the latest Edge Impulse firmware, and unzip the file.
Open the flash script for your operating system (flash_windows.bat, flash_mac.command or flash_linux.sh) to flash the firmware.
Wait until flashing is complete, and press the RESET button once to launch the new firmware.
From a command prompt or terminal, run:
This will start a wizard which will ask you to log in, and choose an Edge Impulse project. If you want to switch projects run the command with --clean
.
Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI. See this blog post for more information.
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
With everything set up you can now build your first machine learning model with these tutorials:
Looking to connect different sensors? The Data forwarder lets you easily send data from any sensor into Edge Impulse.
You will need the following hardware:
Arduino Nano 33 BLE Sense board with headers.
OV7675 camera module.
Micro-USB cable.
Solderless breadboard and female-to-male jumper wires.
First, slot the Arduino Nano 33 BLE Sense board into a solderless breadboard:
With female-to-male jumper wire, use the following wiring diagram, pinout diagrams, and connection table to link the OV7675 camera module to the microcontroller board via the solderless breadboard:
Download the full pinout diagram of the Arduino Nano 33 BLE Sense here.
Finally, use a micro-USB cable to connect the Arduino Nano 33 BLE Sense development board to your computer.
Now build & train your own image classification model and deploy to the Arduino Nano 33 BLE Sense with Edge Impulse!
Espressif ESP-EYE (ESP32) is a compact development board based on Espressif's ESP32 chip, equipped with a 2-Megapixel camera and a microphone. ESP-EYE also offers plenty of storage, with 8 MB PSRAM and 4 MB SPI flash - and it's fully supported by Edge Impulse. You'll be able to sample raw data, build models, and deploy trained machine learning models directly from the studio. It's available for around 22 USD from Mouser and a wide range of distributors.
There are plenty of other boards built with the ESP32 chip - and of course there are custom designs utilizing the ESP32 SoM. The Edge Impulse firmware was tested with ESP-EYE and ESP FireBeetle boards, but it can be modified to work with other ESP32 designs. Read more on that in the Using with other boards section of this documentation.
The Edge Impulse firmware for this development board is open source and hosted on GitHub: edgeimpulse/firmware-espressif-esp32.
To set this device up in Edge Impulse, you will need to install the following software:
Python 3.
The ESP documentation website has instructions for macOS and Linux.
On Linux:
GNU Screen: install for example via sudo apt install screen.
Problems installing the CLI?
See the Installation and troubleshooting guide.
With all the software in place it's time to connect the development board to Edge Impulse.
Use a micro-USB cable to connect the development board to your computer.
The development board does not come with the right firmware yet. To update the firmware:
Download the latest Edge Impulse firmware, and unzip the file.
Open the flash script for your operating system (flash_windows.bat, flash_mac.command or flash_linux.sh) to flash the firmware.
Wait until flashing is complete.
From a command prompt or terminal, run:
This will start a wizard which will ask you to log in, and choose an Edge Impulse project. If you want to switch projects run the command with --clean
.
Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI. See this blog post for more information.
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
With everything set up you can now build your first machine learning model with these tutorials:
Looking to connect different sensors? The Data forwarder lets you easily send data from any sensor into Edge Impulse.
The standard firmware supports the following sensors:
Camera: OV2640, OV3660, OV5640 modules from Omnivision
Microphone: I2S microphone on ESP-EYE (MIC8-4X3-1P0)
LIS3DHTR module connected to I2C (SCL pin 22, SDA pin 21)
Any analog sensor, connected to A0
The analog sensor and LIS3DHTR module were tested on ESP32 FireBeetle board and Grove LIS3DHTR module.
ESP32 is a very popular chip, both in community projects and in industry, due to its high performance, low price, and the large amount of documentation and support available. There are other camera-enabled development boards based on the ESP32 which can use the Edge Impulse firmware after applying certain changes, e.g.:
AI-Thinker ESP-CAM
M5STACK ESP32 PSRAM Timer Camera X (OV3660)
M5STACK ESP32 Camera Module Development Board (OV2640)
The pins used for camera connection on different development boards are not the same, therefore you will need to change the #define here to fit your development board, compile and flash the firmware. Specifically for AI-Thinker ESP-CAM, since this board needs an external USB to TTL Serial Cable to upload the code/communicate with the board, the data transfer baud rate must be changed to 115200 here.
The analog sensor and LIS3DH accelerometer can be used on any other development board without changes, as long as the interface pins are not changed. If the I2C/ADC pins that the accelerometer/analog sensor are connected to differ from those described in the Sensors available section, you will need to change the values in the LIS3DHTR component for ESP32, recompile the firmware and flash it to your board.
Additionally, since the Edge Impulse firmware is open source and publicly available, if you have made modifications or added support for new sensors, we encourage you to open a PR in the firmware repository!
To deploy your impulse on your ESP32 board, please see:
Generate an Edge Impulse firmware (ESP-EYE only)
Download a C++ library (using ESP-IDF)
Download an Arduino library
CY8CKIT-062S2 Pioneer Kit and CY8CKIT-028-SENSE expansion kit required
This guide assumes you have the IoT sense expansion kit (CY8CKIT-028-SENSE) attached to a PSoC® 62S2 Wi-Fi® BLUETOOTH® Pioneer Kit
The Infineon Semiconductor PSoC® 62S2 Wi-Fi® BLUETOOTH® Pioneer Kit (Cypress CY8CKIT-062S2) enables the evaluation and development of applications using the PSoC 62 Series MCU. This low-cost hardware platform enables the design and debug of the PSoC 62 MCU and the Murata 1LV Module (CYW43012 Wi-Fi + Bluetooth Combo Chip). The PSoC 6 MCU is Infineon's latest, ultra-low-power PSoC specifically designed for wearables and IoT products. The board features a PSoC 6 MCU, and a CYW43012 Wi-Fi/Bluetooth combo module. Infineon CYW43012 is a 28nm, ultra-low-power device that supports single-stream, dual-band IEEE 802.11n-compliant Wi-Fi MAC/baseband/radio and Bluetooth 5.0 BR/EDR/LE. When paired with the IoT sense expansion kit, the PSoC® 62S2 Wi-Fi® BLUETOOTH® Pioneer Kit can be used to easily interface a variety of sensors with the PSoC™ 6 MCU platform, specifically targeted for audio and machine learning applications which are fully supported by Edge Impulse! You'll be able to sample raw data as well as build and deploy trained machine learning models to your PSoC® 62S2 Wi-Fi® BLUETOOTH® Pioneer Kit, directly from the Edge Impulse Studio.
The Edge Impulse firmware for this development board is open source and hosted on GitHub: edgeimpulse/firmware-infineon-cy8ckit-062s2.
To set this device up with Edge Impulse, you will need to install the following software:
Infineon CyProgrammer. A utility program we will use to flash firmware images onto the target.
The Edge Impulse CLI which will enable you to connect your CY8CKIT-062S2 Pioneer Kit directly to Edge Impulse Studio, so that you can collect raw data and trigger in-system inferences.
Problems installing the CLI?
See the Installation and troubleshooting guide.
Edge Impulse Studio can collect data directly from your CY8CKIT-062S2 Pioneer Kit and also help you trigger in-system inferences to debug your model, but in order to allow Edge Impulse Studio to interact with your CY8CKIT-062S2 Pioneer Kit you first need to flash it with our base firmware image.
Download the latest Edge Impulse firmware and unzip it to obtain the firmware-infineon-cy8ckit-062s2.hex file, which we will be using in the following steps.
Use a micro-USB cable to connect the CY8CKIT-062S2 Pioneer Kit to your development computer (where you downloaded and installed Infineon CyProgrammer).
You can use Infineon CyProgrammer to flash your CY8CKIT-062S2 Pioneer Kit with our base firmware image. To do this, first select your board from the dropdown list on the top left corner. Make sure to select the item that starts with CY8CKIT-062S2-43012:
Then select the base firmware image file you downloaded in the first step above (i.e., the file named firmware-infineon-cy8ckit-062s2.hex). You can now press the Connect button to connect to the board, and finally the Program button to load the base firmware image onto the CY8CKIT-062S2 Pioneer Kit.
Keep Infineon CyProgrammer Handy
Infineon CyProgrammer will be needed to upload any other project built on Edge Impulse, but the base firmware image only has to be loaded once.
With all the software in place, it's time to connect the CY8CKIT-062S2 Pioneer Kit to Edge Impulse.
Use a micro-USB cable to connect the development board to your computer.
From a command prompt or terminal, run:
This will start a wizard which will ask you to log in, and choose an Edge Impulse project. If you want to switch projects run the command with --clean.
Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI. See this blog post for more information.
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices on the left sidebar. The device will be listed there:
With everything set up you can now build your first machine learning model with these tutorials:
Looking to connect different sensors? The Data forwarder lets you easily send data from any sensor into Edge Impulse.
The Nordic Semiconductor nRF9160 DK is a development board with an nRF9160 SIP incorporating a Cortex-M33 for your application, a full LTE-M/NB-IoT modem with GPS along with 1 MB of flash and 256 KB RAM. It also includes an nRF52840 board controller with Bluetooth Low Energy connectivity. The Development Kit is fully supported by Edge Impulse. You'll be able to sample raw data, build models, and deploy trained machine learning models directly from the studio. As the nRF9160 DK does not have any built-in sensors we recommend you to pair this development board with the X-NUCLEO-IKS02A1 shield (with a MEMS accelerometer and a MEMS microphone). The nRF9160 DK is available for around 150 USD from a variety of distributors including Digikey.
If you don't have the X-NUCLEO-IKS02A1 shield you can use the Data forwarder to capture data from any other sensor, and then follow the Running your impulse locally: On your Zephyr-based Nordic Semiconductor development board tutorial to run your impulse. Or, you can modify the example firmware (based on nRF Connect) to interact with other accelerometers or PDM microphones that are supported by Zephyr.
The Edge Impulse firmware for this development board is open source and hosted on GitHub: edgeimpulse/firmware-nrf-91.
To set this device up in Edge Impulse, you will need to install the following software:
On Linux:
GNU Screen: install for example via sudo apt install screen.
Problems installing the CLI?
See the Installation and troubleshooting guide.
With all the software in place it's time to connect the development board to Edge Impulse.
Remove the pin header protectors on the nRF9160 DK and plug the X-NUCLEO-IKS02A1 shield into the development board.
Note: Make sure that the shield does not touch any of the pins in the middle of the development board. This might cause issues when flashing the board or running applications. You can also remove the shield before flashing the board.
Use a micro-USB cable to connect the development board to your computer. There are two USB ports on the development board, use the one on the short side of the board. Then, set the power switch to 'on'.
The development board does not come with the right firmware yet. To update the firmware:
The development board is mounted as a USB mass-storage device (like a USB flash drive), with the name JLINK. Make sure you can see this drive.
Install the nRF Command Line Tools.
Flash the board controller; you only need to do this once. Go to step 4 if you've performed this step before.
Ensure that the PROG/DEBUG switch is in the nRF52 position.
Copy board-controller.bin to the JLINK mass storage device.
Flash the application:
Ensure that the PROG/DEBUG switch is in the nRF91 position.
Run the flash script for your operating system.
Wait 20 seconds and press the BOOT/RESET button.
From a command prompt or terminal, run:
This starts a wizard which asks you to log in and choose an Edge Impulse project. If you want to switch projects run the command with --clean.
The nRF9160 DK exposes multiple UARTs. If prompted, choose the top one:
Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI. See this blog post for more information.
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
With everything set up you can now build your first machine learning model with these tutorials:
Looking to connect different sensors? The Data forwarder lets you easily send data from any sensor into Edge Impulse.
The OpenMV Cam is a small and low-power development board with a Cortex-M7 microcontroller supporting MicroPython, a μSD card socket and a camera module capable of taking 5MP images - and it's fully supported by Edge Impulse. You'll be able to sample raw data, build models, and deploy trained machine learning models through the studio and the OpenMV IDE. It is available for 80 USD directly from OpenMV.
To set this device up in Edge Impulse, you will need to install the following software:
Problems installing the CLI?
See the installation and troubleshooting guide.
With all the software in place it's time to connect the development board to Edge Impulse. To make this easy we've put some tutorials together which take you through all the steps to acquire data, train a model, and deploy this model back to your device.
Adding sight to your sensors - end-to-end tutorial.
Collecting image data with the OpenMV Cam H7 Plus - collecting datasets using the OpenMV IDE.
Running your impulse on your OpenMV camera - run your trained impulse on the OpenMV Cam H7 Plus.
Grove - Vision AI Module is a thumb-sized board based on the Himax HX6537-A processor, equipped with a 2-Megapixel OV2640 camera, microphone, 3-axis accelerometer and 3-axis gyroscope. It offers 32 MB of SPI flash storage, comes pre-installed with ML algorithms for face recognition and people detection, and supports customized models as well. It is compatible with the XIAO ecosystem and Arduino, all of which makes it perfect for getting started with AI-powered camera projects!
It is fully supported by Edge Impulse which means you will be able to sample raw data from the camera, build models, and deploy trained machine learning models to the module directly from the studio without any programming required. Grove - Vision AI Module is available for purchase directly from Seeed Studio Bazaar.
Quick links access:
Firmware source code: Github repository
Pre-compiled firmware: seeed-grove-vision-ai.zip
To set this board up in Edge Impulse, you will need to install the following software:
On Linux:
GNU Screen: install for example via sudo apt install screen.
Problems installing the Edge Impulse CLI?
See the Installation and troubleshooting guide.
With all the software in place it's time to connect the board to Edge Impulse.
BL702 is the USB-UART chip which enables communication between the PC and the Himax chip. You need to update this firmware in order for the Edge Impulse firmware to work properly.
Download BL702-firmware-grove-vision-ai.zip and extract it to obtain the tinyuf2-grove_vision_ai.bin file.
Connect the board to the PC via a USB Type-C cable while holding down the Boot button on the board.
Open the previously installed Bouffalo Lab Dev Cube software, select BL702/704/706, and then click Finish.
Go to the MCU tab. Under Image file, click Browse and select the firmware you just downloaded.
Click Refresh, choose the Port related to the connected board, set Chip Erase to True, click Open UART, click Create & Download and wait for the process to complete.
You will see All Success in the output if it went well.
Note: If the flashing throws an error, try clicking Create & Download multiple times until you see the All Success message.
The board does not come with the right Edge Impulse firmware yet. To update the firmware:
Download the latest Edge Impulse firmware and extract it to obtain the firmware.uf2 file.
Connect the board to the PC again via the USB Type-C cable and double-click the Boot button on the board to enter mass storage mode.
After this you will see a new storage drive shown in your file explorer as GROVEAI. Drag and drop the firmware.uf2 file onto the GROVEAI drive.
Once the copy is finished the GROVEAI drive will disappear, which is how you can check that the copy was successful.
From a command prompt or terminal, run:
This will start a wizard which will ask you to log in, and choose an Edge Impulse project. If you want to switch projects run the command with --clean.
Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your board, without the need for the Edge Impulse CLI. See this blog post for more information.
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
With everything set up you can now build and run your first machine learning model with these tutorials:
Looking to connect different sensors? The Data forwarder lets you easily send data from any sensor into Edge Impulse.
After building the machine learning model and downloading the Edge Impulse firmware from Edge Impulse Studio, deploy the model uf2 to Grove - Vision AI by following steps 1 and 2 under Update Edge Impulse firmware section.
If you want to compile the Edge Impulse firmware from source code, you can visit this GitHub repo and follow the instructions included in the README. The model used for the official firmware can be found in this public project.
The Nordic Semiconductor nRF52840 DK is a development board with a Cortex-M4 microcontroller, QSPI flash, and an integrated BLE radio - and it's fully supported by Edge Impulse. You'll be able to sample raw data, build models, and deploy trained machine learning models directly from the studio. As the nRF52840 DK does not have any built-in sensors we recommend you to pair this development board with the X-NUCLEO-IKS02A1 shield (with a MEMS accelerometer and a MEMS microphone). The nRF52840 DK is available for around 50 USD from a variety of distributors including Digikey.
If you don't have the X-NUCLEO-IKS02A1 shield you can use the Data forwarder to capture data from any other sensor, and then follow the Running your impulse locally: On your Zephyr-based Nordic Semiconductor development board tutorial to run your impulse. Or, you can modify the example firmware (based on nRF Connect) to interact with other accelerometers or PDM microphones that are supported by Zephyr.
The Edge Impulse firmware for this development board is open source and hosted on GitHub: edgeimpulse/firmware-nrf52840-5340.
To set this device up in Edge Impulse, you will need to install the following software:
On Linux:
GNU Screen: install for example via sudo apt install screen.
Problems installing the CLI?
See the Installation and troubleshooting guide.
With all the software in place it's time to connect the development board to Edge Impulse.
Remove the pin header protectors on the nRF52840 DK and plug the X-NUCLEO-IKS02A1 shield into the development board.
Note: Make sure that the shield does not touch any of the pins in the middle of the development board. This might cause issues when flashing the board or running applications.
Use a micro-USB cable to connect the development board to your computer. There are two USB ports on the development board, use the one on the short side of the board. Then, set the power switch to 'on'.
The development board does not come with the right firmware yet. To update the firmware:
The development board is mounted as a USB mass-storage device (like a USB flash drive), with the name JLINK. Make sure you can see this drive.
If this is not the case, see No JLINK drive at the bottom of this page.
Drag the nrf52840-dk.bin file to the JLINK drive.
Wait 20 seconds and press the BOOT/RESET button.
From a command prompt or terminal, run:
This will start a wizard which will ask you to log in and choose an Edge Impulse project. If you want to switch projects run the command with --clean.
Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI. See this blog post for more information.
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
With everything set up you can now build your first machine learning model with these tutorials:
Looking to connect different sensors? The Data forwarder lets you easily send data from any sensor into Edge Impulse.
If you don't see the JLINK drive show up when you connect your nRF52840 DK, you'll have to update the interface firmware.
Set the power switch to 'off'.
Hold BOOT/RESET while you set the power switch to 'on'.
Your development board should be mounted as BOOTLOADER.
Download the latest Interface MCU firmware and drag the .bin file onto the BOOTLOADER drive.
After 20 seconds disconnect the USB cable, and plug the cable back in.
The development board should now be mounted as JLINK.
If your board fails to flash new firmware (a FAIL.txt file might appear on the JLINK drive) you can also flash using nrfjprog.
Install the nRF Command Line Tools.
Flash new firmware via:
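As a sketch of what that looks like with nrfjprog (the image file name below is illustrative; the exact file and options depend on the firmware release you downloaded, so verify them against the release notes):

    # erase the chip, program the Edge Impulse image, then reset
    nrfjprog -f NRF52 --eraseall
    nrfjprog -f NRF52 --program nrf52840-dk.hex --verify
    nrfjprog -f NRF52 --reset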
The Nordic Thingy:53™ is an easy-to-use prototyping platform; it makes it possible to create prototypes and proofs-of-concept without the need to build custom hardware. The Thingy:53 is built around the nRF5340 SoC. The capacity of its dual Arm Cortex-M33 processors enables it to do embedded machine learning (ML), both collecting data and running trained ML models on the device. The Bluetooth Low Energy radio allows it to connect to smart phones, tablets, laptops and similar devices, without the need for a wired connection. Other protocols like Thread, Zigbee and proprietary 2.4 GHz protocols are also supported by the radio. It also includes a wealth of integrated sensors, an NFC antenna, and two buttons and one RGB LED that simplify input and output.
Nordic's Thingy:53 is fully supported by Edge Impulse and every Thingy:53 is shipped with Edge Impulse firmware already flashed. You'll be able to sample raw data, build models, and deploy trained machine learning models directly out-of-the-box via the Edge Impulse Studio or the Nordic nRF Edge Impulse iPhone and Android apps over BLE connection. The Thingy:53 is available for around 120 USD from a variety of distributors.
The Edge Impulse firmware for this development board is open source and hosted on GitHub: edgeimpulse/firmware-nordic-thingy53.
To set this device up in Edge Impulse via USB serial or external debug probe, you will need to install the following software:
nRF Connect for Desktop v3.11.1 (only needed to update device firmware through USB or external debug probe).
On Linux:
GNU Screen: install for example via sudo apt install screen.
Problems installing the CLI?
See the Installation and troubleshooting guide.
Brand new Thingy:53 devices will work out-of-the-box with the Edge Impulse Studio and the Nordic nRF Edge Impulse iPhone and Android apps. However, if your device has been flashed with some other firmware, then follow the steps below to update your device to the latest Edge Impulse firmware.
Use a USB cable to connect the development board to your computer. Then, set the power switch to 'on'.
Download the latest Edge Impulse firmware:
Edge Impulse firmware: nordic-thingy53-full.zip - *-full.zip contains HEX files to upgrade the device through the external probe.
Edge Impulse firmware: nordic-thingy53-dfu.zip - *-dfu.zip contains a dfu_application.zip package to upgrade the already flashed device through the Serial/USB bootloader.
Follow Nordic's instructions to update the firmware on the Thingy:53 through your choice of debugging connection:
See the section below on Connecting to the nRF Edge Impulse mobile application.
With all the software in place it's time to connect the development board to Edge Impulse. From a command prompt or terminal, run:
This starts a wizard which asks you to log in and choose an Edge Impulse project. If you want to switch projects run the command with --clean.
If prompted to select a device, choose ZEPHYR:
Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI. See this blog post for more information.
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
With everything set up you can now build your first machine learning model with this tutorial:
Looking to connect different sensors? The Data forwarder lets you easily send data from any sensor into Edge Impulse.
Now that you have created an Edge Impulse account and trained your first Edge Impulse machine learning model, using the Nordic nRF Edge Impulse app you can deploy your impulse to your Nordic Thingy:53 and acquire/upload new sensor data into your Edge Impulse projects.
Select the Devices tab to connect your Thingy:53 device to your mobile phone:
To remove your connected Thingy:53 from your project, select the connected device name and scroll to the bottom of the device page to remove it.
To view existing data samples in your Edge Impulse project, select the Data Acquisition tab. To record and upload a new data sample into your project, click on the "+" button at the top right of the app. Select your sensor, type in the sample label, and choose a sample length and frequency, then select Start Sampling.
Build and deploy your Edge Impulse model to your Thingy:53 via the Deployment tab. Select your project from the top drop-down, select your connected Thingy:53 device, and click Build:
The app will start building your project and uploading the firmware to the connected Thingy:53:
If you encounter connection errors during deployment, please see Troubleshooting.
Every Thingy:53 is shipped with a default Edge Impulse model. This model is created from the Tutorial: Continuous motion recognition and its corresponding Edge Impulse project.
Select the Inferencing tab to view the inferencing results of the model flashed to the connected Thingy:53:
Select the Settings tab to view your logged-in account information, BLE scanner settings, and application version. Click on your account name to view your Edge Impulse projects and log out of your account.
Lost BLE connection to device
Reconnect your device by selecting your device name on the Devices tab and clicking "Reconnect".
Make sure power cables are plugged in properly.
Do not use iPhone/Android app multitasking during data acquisition, firmware deployment, or inferencing tasks, as the BLE streaming connection will be closed.
The Nordic Semiconductor nRF5340 DK is a development board with dual Cortex-M33 microcontrollers, QSPI flash, and an integrated BLE radio - and it's fully supported by Edge Impulse. You'll be able to sample raw data, build models, and deploy trained machine learning models directly from the studio. As the nRF5340 DK does not have any built-in sensors we recommend you to pair this development board with the X-NUCLEO-IKS02A1 shield (with a MEMS accelerometer and a MEMS microphone). The nRF5340 DK is available for around 50 USD from a variety of distributors.
If you don't have the X-NUCLEO-IKS02A1 shield you can use the Data forwarder to capture data from any other sensor, and then follow the Running your impulse locally: On your Zephyr-based Nordic Semiconductor development board tutorial to run your impulse. Or, you can modify the example firmware (based on nRF Connect) to interact with other accelerometers or PDM microphones that are supported by Zephyr.
The Edge Impulse firmware for this development board is open source and hosted on GitHub: edgeimpulse/firmware-nrf52840-5340.
To set this device up in Edge Impulse, you will need to install the following software:
On Linux:
GNU Screen: install for example via sudo apt install screen.
Problems installing the CLI?
See the Installation and troubleshooting guide.
With all the software in place it's time to connect the development board to Edge Impulse.
Remove the pin header protectors on the nRF5340 DK and plug the X-NUCLEO-IKS02A1 shield into the development board.
Note: Make sure that the shield does not touch any of the pins in the middle of the development board. This might cause issues when flashing the board or running applications.
Use a micro-USB cable to connect the development board to your computer. There are two USB ports on the development board, use the one on the short side of the board. Then, set the power switch to 'on'.
The development board does not come with the right firmware yet. To update the firmware:
The development board is mounted as a USB mass-storage device (like a USB flash drive), with the name JLINK. Make sure you can see this drive.
Drag the nrf5340-dk.bin file to the JLINK drive.
Wait 20 seconds and press the BOOT/RESET button.
From a command prompt or terminal, run:
This starts a wizard which asks you to log in and choose an Edge Impulse project. If you want to switch projects run the command with --clean.
The nRF5340 DK exposes multiple UARTs. If prompted, choose the bottom one:
Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI. See this blog post for more information.
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
With everything set up you can now build your first machine learning model with these tutorials:
Looking to connect different sensors? The Data forwarder lets you easily send data from any sensor into Edge Impulse.
If your board fails to flash new firmware (a FAIL.txt file might appear on the JLINK drive) you can also flash using nrfjprog.
Install the nRF Command Line Tools.
Flash new firmware via:
The Synaptics Katana KA10000 board is a low-power AI evaluation kit from Synaptics that has the KA10000 AI Neural Network processor onboard. The evaluation kit is provided with a separate Himax HM01B0 QVGA monochrome camera module and two onboard zero-power Vesper microphones. The board has an embedded ST LIS2DW12 accelerometer and an optional TI OPT3001 ambient light sensor. Connectivity is provided by an IEEE 802.11n ultra-low-power WiFi module integrated with Bluetooth 5.x, in addition to 4 Peripheral Module (PMOD) connectors providing I2C, UART, GPIO and I2S/SPI interfaces.
The package contains several accessories:
The Himax image sensor.
The PMOD-I2C USB firmware configuration board.
The PMOD-UART USB adapter.
2 AAA batteries
Enclosure.
The Edge Impulse firmware for this board is open source and hosted on GitHub: .
To set this device up in Edge Impulse, you will need to install the following software:
On Linux:
GNU Screen: install for example via sudo apt install screen.
In order to update the firmware, it is necessary to use the PMOD-I2C USB firmware configuration board. The PMOD-I2C board is connected to the Katana board on the north-right PMOD-I2C interface (as shown in the image at the top of this page); then use a USB-C cable to connect the firmware configuration board to the host PC.
In addition to the PMOD-I2C configuration board, you need to connect the PMOD-UART extension to the Katana board, which is located on the left side of the board. Then use a micro-USB cable to connect the board to your computer.
The board originally ships with sound detection firmware by default. You can upload new firmware to the flash memory by following these instructions:
Verify that you have correctly connected the firmware configuration board.
Run the flash script for your operating system (flash_windows.bat, flash_mac.command or flash_linux.sh) to flash the firmware.
Wait until flashing is complete.
From a command prompt or terminal, run:
This will start a wizard which will ask you to log in, and choose an Edge Impulse project. If you want to switch projects run the command with --clean.
With everything set up you can now build your first machine learning model with these tutorials, and board-specific public projects:
The Sony Spresense is a small but powerful development board with a 6-core Cortex-M4F microcontroller and integrated GPS, and a wide variety of add-on modules, including an extension board with headphone jack, SD card slot and microphone pins, a camera board, a sensor board with accelerometer, pressure, and geomagnetism sensors, and a Wi-Fi board - and it's fully supported by Edge Impulse. You'll be able to sample raw data, build models, and deploy trained machine learning models directly from the studio.
To get started with the Sony Spresense and Edge Impulse you'll need:
The Spresense main development board - available for around 55 USD from a wide range of distributors.
The Spresense extension board - to connect external sensors.
A micro-SD card to store samples.
In addition you'll want some sensors; these ones are fully supported (note that you can collect data from any sensor on the Spresense with the Data forwarder):
For image models: the Spresense camera board.
For accelerometer models: the Spresense sensor board.
For audio models: an electret microphone and a 2.2K Ohm resistor, wired to the extension board's audio channel A (following the wiring instructions).
Note: for audio models you must also have a FAT-formatted SD card for the extension board, with the Spresense's DSP files included in a BIN folder on the card.
The Edge Impulse firmware for this development board is open source and hosted on GitHub: .
To set this device up in Edge Impulse, you will need to install the following software:
On Linux:
GNU Screen: install for example via sudo apt install screen.
Problems installing the CLI?
With all the software in place it's time to connect the development board to Edge Impulse.
Make sure the SD card is formatted as FAT before inserting it into the Spresense.
Use a micro-USB cable to connect the main development board (not the extension board) to your computer.
The development board does not come with the right firmware yet. To update the firmware:
Open the flash script for your operating system (flash_windows.bat, flash_mac.command or flash_linux.sh) to flash the firmware.
Wait until flashing is complete. The on-board LEDs should stop blinking to indicate that the new firmware is running.
From a command prompt or terminal, run:
Mac: Device choice
If you have a choice of serial ports and are not sure which one to use, pick /dev/tty.SLAB_USBtoUART or /dev/cu.usbserial-*
This will start a wizard which will ask you to log in and choose an Edge Impulse project. If you want to switch projects run the command with --clean.
With everything set up you can now build your first machine learning model with these tutorials:
If you see an error related to pyserial, upgrade pyserial:
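For example (use pip instead of pip3 if that is how Python 3's pip is exposed on your system):

    pip3 install --upgrade pyserial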
If the edge-impulse-daemon or edge-impulse-run-impulse commands do not start it might be because of an error interacting with the SD card or because your board has an old version of the bootloader. To see the debug logs, run:
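One way to do that is to attach GNU Screen to the Spresense's serial port; the device path and baud rate below are typical values and may differ on your machine:

    screen /dev/tty.SLAB_USBtoUART 115200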
And press the RESET button on the board. If you see Welcome to nash you'll need to update the bootloader. To do so:
Install and launch the Arduino IDE.
Go to Preferences and under 'Additional Boards Manager URLs' add https://github.com/sonydevworld/spresense-arduino-compatible/releases/download/generic/package_spresense_index.json (if there's already text in this text box, add a comma before adding the new URL).
Then go to Tools > Boards > Board manager, search for 'Spresense' and click Install.
Select the right board via: Tools > Boards > Spresense boards > Spresense.
Select your serial port via: Tools > Port and selecting the serial port for the Spresense board.
Select the Spresense programmer via: Tools > Programmer > Spresense firmware updater.
Update the bootloader via Tools > Burn bootloader.
The TI CC1352P LaunchPad is a development board equipped with the multiprotocol wireless CC1352P microcontroller. The LaunchPad, when paired with the BOOSTXL-SENSORS and CC3200AUDBOOST booster packs, is fully supported by Edge Impulse, and is able to sample accelerometer & microphone data, build models, and deploy directly to the device without any programming required. The LaunchPad and both booster packs are available for purchase directly from Texas Instruments.
If you don't have either booster pack or are using different sensing hardware, you can use the Data forwarder to capture data from any other sensor type, and then follow the Running your impulse locally tutorial to run your impulse. Or, you can clone and modify the open source firmware project on GitHub.
The Edge Impulse firmware for this development board is open source and hosted on GitHub: .
To set this device up in Edge Impulse, you will need to install the following software:
Add the installation directory to your PATH
On Linux:
GNU Screen: install for example via sudo apt install screen.
Problems installing the Edge Impulse CLI?
With all the software in place it's time to connect the development board to Edge Impulse.
To interface the Launchpad with sensor hardware, you will need to either connect the BOOSTXL-SENSORS to collect accelerometer data, or the CC3200AUDBOOST to collect audio data. Follow the guides below based on what data you want to collect.
Before you start
2. Connect the development board to your computer
Use a micro-USB cable to connect the development board to your computer.
3. Update the firmware
The development board does not come with the right firmware yet. To update the firmware:
Open the flash script for your operating system (flash_windows.bat, flash_mac.command or flash_linux.sh) to flash the firmware.
Wait until flashing is complete, and press the RESET button once to launch the new firmware.
Problems flashing firmware onto the Launchpad?
4. Setting keys
From a command prompt or terminal, run:
This will start a wizard which will ask you to log in, and choose an Edge Impulse project. If you want to switch projects run the command with --clean.
Which device do you want to connect to?
The Launchpad enumerates two serial ports. The first is the Application/User UART, which the edge-impulse firmware communicates through. The other is an Auxiliary Data Port, which is unused.
When running the edge-impulse-daemon you will be prompted on which serial port to connect to. On Mac & Linux, this will appear as:
Generally, select the lower numbered serial port. This usually corresponds with the Application/User UART. On Windows, the serial port may also be verified in the Device Manager.
5. Verifying that the device is connected
With everything set up you can now build and run your first machine learning model with these tutorials:
Failed to flash
If the UniFlash CLI is not added to your PATH, the install scripts will fail. To fix this, add the installation directory of UniFlash (for example /Applications/ti/uniflash_6.4.0 on macOS) to your PATH.
If during flashing you encounter further issues, ensure:
The device is properly connected and/or the cable is not damaged.
You have the proper permissions to access the USB device and run scripts. On macOS you can manually approve blocked scripts via System Preferences > Security Settings > Unlock Icon.
On Linux you may want to try copying tools/71-ti-permissions.rules to /etc/udev/rules.d/. Then re-attach the USB cable and try again.
The Silicon Labs Thunderboard Sense 2 is a complete development board with a Cortex-M4 microcontroller, a wide variety of sensors, a microphone, Bluetooth Low Energy and a battery holder - and it's fully supported by Edge Impulse. You'll be able to sample raw data, build models, and deploy trained machine learning models directly from the studio - and even stream your machine learning results over BLE to a phone. It's available for around 20 USD directly from Silicon Labs.
The Edge Impulse firmware for this development board is open source and hosted on GitHub: .
To set this device up in Edge Impulse, you will need to install the following software:
On Linux:
GNU Screen: install for example via sudo apt install screen.
Problems installing the CLI?
With all the software in place it's time to connect the development board to Edge Impulse.
Use a micro-USB cable to connect the development board to your computer. The development board should mount as a USB mass-storage device (like a USB flash drive), with the name TB004. Make sure you can see this drive.
The development board does not come with the right firmware yet. To update the firmware:
Drag the silabs-thunderboard-sense2.bin file to the TB004 drive.
Wait 30 seconds.
From a command prompt or terminal, run:
This will start a wizard which will ask you to log in and choose an Edge Impulse project. If you want to switch projects run the command with --clean.
With everything set up you can now build your first machine learning model with these tutorials:
To fix this error, install the Simplicity Studio 5 IDE and flash the binary through the IDE's built in "Upload application..." menu under "Debug Adapters", and select your Edge Impulse firmware to flash:
Your Edge Impulse inferencing application should then run successfully with edge-impulse-run-impulse.
The Raspberry Pi RP2040 is the debut microcontroller from Raspberry Pi - and it's fully supported by Edge Impulse. You'll be able to sample raw data, build models, and deploy trained machine learning models directly from the studio. It's available for around $4 from the Raspberry Pi Foundation and a wide range of distributors.
To get started with the Raspberry Pi RP2040 and Edge Impulse you'll need:
A Raspberry Pi Pico. The pre-built firmware and Edge Impulse Studio exported binary are tailored for the Raspberry Pi Pico, but with a few simple steps you can collect data and run your models with other RP2040-based boards, such as the ones listed further down this page.
(Optional) If you are using the Raspberry Pi Pico, an expansion shield makes it easier to connect external sensors for data collection/inference.
The Edge Impulse firmware for this development board is open source and hosted on GitHub: .
To set this device up in Edge Impulse, you will need to install the following software:
If you'd like to interact with the board using a set of pre-defined AT commands (not necessary for the standard ML workflow), you will need to also install a serial communication program, for example minicom or picocom, or use the Serial Monitor from the Arduino IDE (if installed).
On Linux:
GNU Screen: install for example via sudo apt install screen.
Problems installing the CLI?
With all the software in place, it's time to connect the development board to Edge Impulse.
Use a micro-USB cable to connect the development board to your computer while holding down the BOOTSEL button, forcing the Raspberry Pi Pico into USB Mass Storage Mode.
The development board does not come with the right firmware yet. To update the firmware:
Drag the ei_rp2040_firmware.uf2 file from the folder to the USB Mass Storage device.
Wait until flashing is complete, then unplug and replug your board to launch the new firmware.
From a command prompt or terminal, run:
This will start a wizard which will ask you to log in, and choose an Edge Impulse project. If you want to switch projects run the command with --clean.
With everything set up you can now build your first machine learning model. Since the Raspberry Pi Pico does not have any built-in sensors, we decided to support the following ones out of the box with the pre-built firmware:
Analog signal sensor (pin A0).
Once you have the compatible sensors, you can then follow these tutorials:
Support for Arduino RP2040 Connect was added to the official RP2040 firmware for Edge Impulse. That includes data acquisition and model inference support for:
onboard MP34DT05 microphone
onboard ST LSM6DSOX 6-axis IMU
The sensors described above can still be connected.
While the RP2040 is a relatively new microcontroller, it has already been used to build several boards:
The official Raspberry Pi Pico RP2040
Arducam Pico4ML (Camera, screen and microphone)
Seeed Studio XIAO RP2040 (extremely small footprint)
Black Adafruit Feather RP2040 (built-in LiPoly charger)
And others. While the pre-built Edge Impulse firmware is mainly tested with the Pico board, it is compatible with other boards, with the exception of I2C sensors and the microphone - different boards use different pins for peripherals, so if you'd like to use LSM6DS3/LSM6DSOX accelerometer & gyroscope modules or a microphone, you will need to change the pin values in the Edge Impulse RP2040 firmware source code, recompile it and upload it to the board.
You can use your Linux x86_64 device or computer as a fully-supported development environment for Edge Impulse for Linux. This lets you sample raw data, build models, and deploy trained machine learning models directly from the Studio. If you have a webcam and microphone plugged into your system, they are automatically detected and can be used to build models.
Instruction set architectures
If you are not sure about your instruction set architectures, use:
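For example:

    uname -m

An output of x86_64 indicates a 64-bit Intel/AMD system.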
To set this device up in Edge Impulse, run the following commands:
Ubuntu/Debian:
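A sketch of the usual install flow on Ubuntu/Debian follows; the Node.js setup URL and the dependency list are indicative and may lag behind the current Edge Impulse for Linux docs:

    # install Node.js plus the sox/GStreamer dependencies, then the Edge Impulse for Linux CLI
    curl -sL https://deb.nodesource.com/setup_16.x | sudo bash -
    sudo apt install -y gcc g++ make build-essential nodejs sox gstreamer1.0-tools gstreamer1.0-plugins-good gstreamer1.0-plugins-base gstreamer1.0-plugins-base-apps
    sudo npm install edge-impulse-linux -g --unsafe-perm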
With all software set up, connect your camera or microphone to your operating system (see 'Next steps' further on this page if you want to connect a different sensor), and run:
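The wizard referred to here is started with the Edge Impulse for Linux CLI:

    edge-impulse-linux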
This will start a wizard which will ask you to log in, and choose an Edge Impulse project. If you want to switch projects run the command with --clean.
With everything set up you can now build your first machine learning model with these tutorials:
To run your impulse locally on your Linux platform, run:
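The runner from the Edge Impulse for Linux CLI is the usual way to do this:

    edge-impulse-linux-runner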
If you have an image model then you can get a peek of what your device sees by being on the same network as your device, and finding the 'Want to see a feed of the camera and live classification in your browser' message in the console. Open the URL in a browser and both the camera feed and the classification are shown:
The Nordic Semiconductor Thingy:91 is an easy-to-use battery-operated prototyping platform for cellular IoT using LTE-M, NB-IoT and GPS. It is ideal for creating Proof-of-Concept (PoC), demos and initial prototypes in your cIoT development phase. The Thingy:91 is built around the nRF9160 SiP and is certified for a broad range of LTE bands globally, meaning the Nordic Thingy:91 can be used just about anywhere in the world. There is also an nRF52840 multiprotocol SoC on the Thingy:91, which offers the option of adding Bluetooth Low Energy connectivity to your project.
Nordic's Thingy:91 is fully supported by Edge Impulse. You'll be able to sample raw data, build models, and deploy trained machine learning models directly from the studio. The Thingy:91 is available for around 120 USD from a variety of distributors.
The Edge Impulse firmware for this development board is open source and hosted on GitHub: .
To set this device up in Edge Impulse, you will need to install the following software:
On Linux:
GNU Screen: install for example via sudo apt install screen.
Problems installing the CLI?
Before you start a new project, you need to update the Thingy:91 firmware to our latest build.
Use a micro-USB cable to connect the development board to your computer. Then, set the power switch to 'on'.
firmware.hex: the Edge Impulse firmware image for the nRF9160 SoC, and
connectivity-bridge.hex: a connectivity application for the nRF52840 that you only need on older boards (hardware version < 1.4).
Open nRF Connect for Desktop and launch the Programmer application.
Scroll down in the menu on the right and make sure Enable MCUboot is selected.
Switch off the Nordic Thingy:91.
Press the multi-function button (SW3) while switching SW1 to the ON position.
In the Programmer navigation bar, click Select device.
In the menu on the right, click Add HEX file > Browse, and select the firmware.hex file from the firmware previously downloaded.
Scroll down in the menu on the right to Device and click Write:
In the MCUboot DFU window, click Write. When the update is complete, a Completed successfully message appears.
You can now disconnect the board.
Thingy:91 hardware version < 1.4.0
Updating the firmware with older hardware versions may fail. Moreover, even if the update works, the device may later fail to connect to Edge Impulse Studio:
With all the software in place it's time to connect the development board to Edge Impulse. From a command prompt or terminal, run:
This starts a wizard which asks you to log in and choose an Edge Impulse project. If you want to switch projects run the command with --clean.
The Thingy:91 exposes multiple UARTs. If prompted, choose the first one:
With everything set up you can now build your first machine learning model with this tutorial:
The Silicon Labs xG24 Dev Kit (xG24-DK2601B) is a compact, feature-packed development platform built for the EFR32MG24 Cortex-M33 microcontroller. It provides the fastest path to develop and prototype wireless IoT products. This development platform supports up to +10 dBm output power and includes support for the 20-bit ADC as well as the xG24's AI/ML hardware accelerator. The platform also features a wide variety of sensors, a microphone, Bluetooth Low Energy and a battery holder - and it's fully supported by Edge Impulse! You'll be able to sample raw data as well as build and deploy trained machine learning models directly from the Edge Impulse Studio - and even stream your machine learning results over BLE to a phone.
The Edge Impulse firmware for this development board is open source and hosted on GitHub: .
To set this device up with Edge Impulse, you will need to install the following software:
Problems installing the CLI?
Edge Impulse Studio can collect data directly from your xG24 Dev Kit and also help you trigger in-system inferences to debug your model, but in order to allow Edge Impulse Studio to interact with your xG24 Dev Kit you first need to flash it with our base firmware image.
Then go to the "Flash" section on the left sidebar, and select the base firmware image file you downloaded in the first step above (i.e., the file named firmware-xg24.hex). You can now press the Flash button to load the base firmware image onto the xG24 Dev Kit.
Keep Simplicity Commander Handy
Simplicity Commander will be needed to upload any other project built on Edge Impulse, but the base firmware image only has to be loaded once.
With all the software in place, it's time to connect the xG24 Dev Kit to Edge Impulse.
Use a micro-USB cable to connect the development board to your computer.
From a command prompt or terminal, run:
This will start a wizard which will ask you to log in, and choose an Edge Impulse project. If you want to switch projects run the command with --clean.
With everything set up you can now build your first machine learning model with these tutorials:
The Raspberry Pi 4 is a versatile Linux development board with a quad-core processor running at 1.5GHz, a GPIO header to connect sensors, and the ability to easily add an external microphone or camera - and it's fully supported by Edge Impulse. You'll be able to sample raw data, build models, and deploy trained machine learning models directly from the Studio. The Raspberry Pi 4 is available from 35 USD from a wide range of distributors.
In addition to the Raspberry Pi 4 we recommend that you also add a camera and / or a microphone. Most popular USB webcams and the Raspberry Pi Camera Module work fine on the development board out of the box.
You can set up your Raspberry Pi without a screen. To do so:
Raspberry Pi OS - Bullseye release
The latest release of Raspberry Pi OS requires Edge Impulse Linux CLI version >= 1.3.0.
After flashing the OS, find the boot mass-storage device on your computer, and create a new file called wpa_supplicant.conf in the boot drive. Add the following code:
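A minimal example of what wpa_supplicant.conf can contain (the country line is an assumption; set it to your own two-letter country code):

    ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
    update_config=1
    country=<Insert 2-letter ISO 3166-1 country code>
    network={
        ssid="<Name of your WiFi network>"
        psk="<Password of your WiFi network>"
    }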
(Replace the fields marked with <> with your WiFi credentials.)
Next, create a new file called ssh in the boot drive. You can leave this file empty.
Plug the SD card into your Raspberry Pi 4, and let the device boot up.
Find the IP address of your Raspberry Pi. You can either do this through the DHCP logs in your router, or by scanning your network. E.g. on macOS and Linux via:
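One way to scan is to look for the common Raspberry Pi MAC address prefixes in your ARP table (the prefixes below cover most, but not all, Raspberry Pi models):

    arp -na | grep -iE "b8:27:eb|dc:a6:32|e4:5f:01"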
Here 192.168.1.19 is your IP address.
Connect to the Raspberry Pi over SSH. Open a terminal window and run:
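For example, with the default pi user and the address found above:

    ssh pi@192.168.1.19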
Log in with password raspberry.
If you have a screen and a keyboard / mouse attached to your Raspberry Pi:
Plug the SD card into your Raspberry Pi 4, and let the device boot up.
Connect to your WiFi network.
Click the 'Terminal' icon in the top bar of the Raspberry Pi.
To set this device up in Edge Impulse, run the following commands:
If you have a Raspberry Pi Camera Module, you also need to activate it first. Run the following command:
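The menu described in the next step is the one provided by raspi-config:

    sudo raspi-config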
Use the cursor keys to select and open Interfacing Options, and then select Camera and follow the prompt to enable the camera. Then reboot the Raspberry Pi.
If you want to install Edge Impulse on your Raspberry Pi using Docker you can run the following commands:
Once on the Docker container, run:
and
You should now be able to run the Edge Impulse CLI tools from the container running on your Raspberry Pi.
Note that this will only work with an external USB camera.
With all software set up, connect your camera or microphone to your Raspberry Pi (see 'Next steps' further on this page if you want to connect a different sensor), and run:
This will start a wizard which will ask you to log in, and choose an Edge Impulse project. If you want to switch projects run the command with --clean.
With everything set up you can now build your first machine learning model with these tutorials:
To run your impulse locally, just connect to your Raspberry Pi again, and run:
If you have an image model then you can get a peek of what your device sees by being on the same network as your device, and finding the 'Want to see a feed of the camera and live classification in your browser' message in the console. Open the URL in a browser and both the camera feed and the classification are shown:
The Jetson Nano is an embedded Linux dev kit featuring a GPU-accelerated processor (NVIDIA Tegra) targeted at edge AI applications. You can easily add a USB external microphone or camera - and it's fully supported by Edge Impulse. You'll be able to sample raw data, build models, and deploy trained machine learning models directly from the Studio. The Jetson Nano is available from 59 USD from a wide range of distributors.
In addition to the Jetson Nano we recommend that you also add a camera and / or a microphone. Most popular USB webcams work fine on the development board out of the box.
Powering your Jetson
Although powering your Jetson via USB is technically supported, some users report on forums that they have issues using USB power. If you have any issues such as the board resetting or becoming unresponsive, consider powering via a 5V, 4A power supply on the DC barrel connector (suitable power supplies are readily available for sale). Don't forget to change the jumper!
An added bonus of powering via the DC barrel plug is that you can carry out your first boot without an external monitor or keyboard.
Issue the following command to check:
The result should look similar to this:
To set this device up in Edge Impulse, run the following commands (from any folder). When prompted, enter the password you created for the user on your Jetson in step 1. The entire script takes a few minutes to run (using a fast microSD card).
With all software set up, connect your camera or microphone to your Jetson (see 'Next steps' further on this page if you want to connect a different sensor), and run:
This will start a wizard which will ask you to log in, and choose an Edge Impulse project. If you want to switch projects run the command with --clean.
With everything set up you can now build your first machine learning model with these tutorials:
To run your impulse locally, just connect to your Jetson again, and run:
If you have an image model then you can get a peek of what your device sees by being on the same network as your device, and finding the 'Want to see a feed of the camera and live classification in your browser' message in the console. Open the URL in a browser and both the camera feed and the classification are shown:
This is probably caused by a missing dependency on libjpeg. If you run:
The end of the output should show support for file import/export with libjpeg, like so:
If you don't see jpeg support as "yes", rerun the setup script and take note of any errors.
If you encounter this error, ensure that your entire home directory is owned by you (especially the .config folder):
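A typical way to do that (adjust the paths if your home directory lives somewhere non-standard):

    sudo chown -R $(whoami) $HOME/.config
    # or, to cover the whole home directory:
    sudo chown -R $(whoami) $HOME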
By default, the Jetson Nano enables a number of aggressive power saving features to disable and slow down hardware that is detected to be not in use. Experience indicates that sometimes the GPU cannot power up fast enough, nor stay on long enough, to enjoy best performance. You can run a script to enable maximum performance on your Jetson Nano.
ONLY DO THIS IF YOU ARE POWERING YOUR JETSON NANO FROM A DEDICATED POWER SUPPLY. DO NOT RUN THIS SCRIPT WHILE POWERING YOUR JETSON NANO THROUGH USB.
To enable maximum performance, run:
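The script referenced here is not shown, but the same effect can typically be achieved with NVIDIA's stock power tools by selecting the maximum power mode and locking the clocks (again, only with a dedicated power supply):

    sudo nvpmodel -m 0     # select the maximum power mode (MAXN on the Nano)
    sudo jetson_clocks     # lock clocks to their maximum for the selected mode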
If you see an error similar to this when running Linux C++ SDK examples with GPU acceleration,
The Syntiant TinyML Board is a tiny development board with a microphone and accelerometer, USB host microcontroller and an always-on Neural Decision Processor™, featuring ultra-low-power consumption, a fully connected neural network architecture, and full support from Edge Impulse. You'll be able to sample raw data, build models, and deploy trained embedded machine learning models directly from the Edge Impulse studio to create the next generation of low-power, high-performance audio interfaces.
The Edge Impulse firmware for this development board is open source and hosted on .
IMU data acquisition - SD Card
An SD Card is required to use IMU data acquisition as the internal RAM of the MCU is too small. You don't need the SD Card for inferencing only or for audio projects.
To set this device up in Edge Impulse, you will need to install the following software:
Select one of the 2 firmwares below for audio or IMU projects:
Insert the SD card if you need IMU data acquisition and connect the USB cable to your computer. Double-click on the script for your OS. The script will flash the Arduino firmware and a default model onto the NDP101 chip.
Flashing issues
0x000000: read 0x04 != expected 0x01
Some flashing issues can occur on the Serial Flash. In this case, open a Serial Terminal on the TinyML board and send the command: :F. This will erase the Serial Flash and should fix the flashing issue.
Connect the Syntiant TinyML Board directly to your computer's USB port. Linux, Mac OS, and Windows 10 platforms are supported.
Audio - USB microphone (macOS/Linux only)
Check that the Syntiant TinyML enumerates as "TinyML" or "Arduino MKRZero". For example, in Mac OS you'll find it under System Preferences/Sound:
Audio acquisition - Windows OS
Using the Syntiant TinyML board as an external microphone for data collection doesn't currently work on Windows OS.
IMU
From a command prompt or terminal, run:
This will start a wizard which will ask you to log in and choose an Edge Impulse project. If you want to switch projects run the command with --clean.
With everything set up you can now build your first machine learning model and evaluate it using the Syntiant TinyML Board with this tutorial:
How to label my classes? The NDP101 chip expects one and only one negative class, and it should be the last in the list. For instance, if your original dataset looks like: yes, no, unknown, noise and you only want to detect the keywords 'yes' and 'no', merge the 'unknown' and 'noise' labels into a single class such as z_openset (we prefix it with 'z' in order to get this class last in the list).
The ST IoT Discovery Kit (also known as the B-L475E-IOT01A) is a development board with a Cortex-M4 microcontroller, MEMS motion sensors, a microphone and WiFi - and it's fully supported by Edge Impulse. You'll be able to sample raw data, build models, and deploy trained machine learning models directly from the studio. It's available for around 50 USD from a variety of distributors.
The Edge Impulse firmware for this development board is open source and hosted on GitHub: .
Two variants of this board
There are two variants of this board, the B-L475E-IOT01A1 (US region) and the B-L475E-IOT01A2 (EU region) - the only difference is the sub-GHz radio. Both are usable in Edge Impulse.
To set this device up in Edge Impulse, you will need to install the following software:
On Windows:
On Linux:
GNU Screen: install for example via sudo apt install screen
Problems installing the CLI? See the Installation and troubleshooting guide.
With all the software in place it's time to connect the development board to Edge Impulse.
Use a micro-USB cable to connect the development board to your computer. There are two USB ports on the development board, use the one the furthest from the buttons.
The development board does not come with the right firmware yet. To update the firmware:
The development board is mounted as a USB mass-storage device (like a USB flash drive), with the name DIS_L4IOT. Make sure you can see this drive.
Drag the DISCO-L475VG-IOT01A.bin file to the DIS_L4IOT drive.
Wait until the LED stops flashing red and green.
From a command prompt or terminal, run:
This will start a wizard which will ask you to log in, choose an Edge Impulse project, and set up your WiFi network. If you want to switch projects, run the command with --clean.
With everything set up you can now build your first machine learning model with these tutorials:
If you experience the following error when attempting to connect to a WiFi network:
If the LED does not flash red and green when you copy the .bin file to the device, and instead shows a solid red color, and you are unable to connect the device with Edge Impulse, there may be an issue with your device's native firmware.
To restore functionality, use the following tool from ST to update your board to the latest version:
You might need to set up udev rules on Linux before being able to talk to the device. Create a file named /etc/udev/rules.d/50-stlink.rules and add the following content:
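The exact rule content is not reproduced in this extract; a minimal sketch that grants access to ST-Link probes (USB vendor ID 0483) would look like the following - adjust it to the rules recommended by ST for your specific probe:

```
# /etc/udev/rules.d/50-stlink.rules - minimal sketch, not the full ST rule set
SUBSYSTEMS=="usb", ATTRS{idVendor}=="0483", MODE:="0666"
```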
Then unplug the development board and plug it back in.
Community board
This is a community board by Arducam, and it's not maintained by Edge Impulse. For support, head to the Arducam forum.
The Arducam Pico4ML TinyML Dev Kit is a development board from Arducam with a RP2040 microcontroller, QVGA camera, Bluetooth module (depending on your version), LCD screen, onboard microphone, accelerometer, gyroscope, and compass. Arducam has created in-depth tutorials on how to get started using the Pico4ML Dev Kit with Edge Impulse, including how to collect new data and how to train and deploy your Edge Impulse models to the Pico4ML. The Arducam Pico4ML TinyML Dev Kit comes in two versions, available for 55 USD and 50 USD respectively.
Open the app and login with your edgeimpulse.com credentials:
Select your Thingy:53 project from the drop-down menu at the top:
Download the latest Edge Impulse firmware, and unzip the file.
Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI.
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
Eggs AI:
Tutorial: Adding sight to your sensors (Synaptics KA10000):
Looking to connect different sensors? The Data forwarder lets you easily send data from any sensor into Edge Impulse.
See the Installation and troubleshooting guide.
Install .
Download the latest Edge Impulse firmware, and unzip the file.
Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI.
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
Looking to connect different sensors? The Data forwarder lets you easily send data from any sensor into Edge Impulse.
Then update the firmware again (following the steps above).
Install the desktop version for your operating system. See the vendor's documentation for more details.
See the Installation and troubleshooting guide.
The Launchpad jumper connections should be in their original configuration out of the box. If you have already modified the jumper connections, see the Launchpad's documentation for the original configuration.
You will need five extra jumper wires to connect the CC3200AUDBOOST to the Launchpad, as described below.
The CC3200AUDBOOST board requires modifications to interface properly with the CC1352P series of Launchpads. The full documentation regarding these modifications is available from Texas Instruments, and a summary of the steps to configure the board is shown below.
The pin connections shown below are required by TI to interface between the two boards. Connect the pins by using jumper wires and following the diagram. For more information see the CC3200AUDBOOST and Launchpad documentation.
Perform all modifications to the Launchpad and audio booster pack described in the TI documentation.
Download the latest Edge Impulse firmware, and unzip the file.
See the troubleshooting section for more information. If a selected serial port fails to connect, test the other port before checking for other common issues.
Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI.
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
Looking to connect different sensors? The Data forwarder lets you easily send data from any sensor into Edge Impulse, and you can also extend the firmware with support for custom sensors.
Alternatively, the gcc/build/edge-impulse-standalone.out binary file may be flashed to the Launchpad using the UniFlash GUI or web app. See the UniFlash documentation for more info.
See the Installation and troubleshooting guide.
Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI.
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
Looking to connect different sensors? The Data forwarder lets you easily send data from any sensor into Edge Impulse.
Did you know? You can also stream the results of your impulse over BLE to a nearby phone or gateway.
When dragging and dropping an Edge Impulse pre-built .bin firmware file, the binary seems to flash, but when the device reconnects a FAIL.TXT file appears with the contents "Error while connecting to CPU" and the following errors appear from the on-board debugger:
See the Installation and troubleshooting guide.
Download the latest Edge Impulse firmware, and unzip the file.
Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI.
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
(GP16; pin D16 on Grove Shield for Pi Pico).
(GP18; pin D18 on Grove Shield for Pi Pico).
(I2C0).
There is a vast variety of analog sensors that can take advantage of the RP2040's 12-bit ADC (Analog-to-Digital Converter), from common ones, such as light sensors and sound level sensors, to more specialized ones.
Looking to connect different sensors? The Data forwarder lets you easily send data from any sensor into Edge Impulse.
That's all! Your machine is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
Looking to connect different sensors? Our Linux SDK lets you easily send data from any sensor and any programming language (with examples in Node.js, Python, Go and C++) into Edge Impulse.
This will automatically compile your model with full hardware acceleration, download the model to your local machine, and then start classifying. Our Linux SDK has examples on how to integrate the model with your favourite programming language.
Install exactly version 3.7.1. Please follow the instructions below to downgrade, or to do a fresh install of, v3.7.1:
See the Installation and troubleshooting guide.
The extracted archive contains the following files:
In these cases, you will also need to flash connectivity-bridge.hex onto the nRF52840 in the Thingy:91. Follow the same programming steps with the connectivity-bridge.hex file through USB.
If this method doesn't work, you will need to flash the firmware with an external debug probe.
Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI.
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
Looking to connect different sensors? The Data forwarder lets you easily send data from any sensor into Edge Impulse.
A utility program we will use to flash firmware images onto the target.
The Edge Impulse CLI, which will enable you to connect your xG24 Dev Kit directly to Edge Impulse Studio, so that you can collect raw data and trigger in-system inferences.
See the Installation and troubleshooting guide.
Download the latest Edge Impulse firmware, and unzip it to obtain the firmware-xg24.hex file, which we will be using in the following steps.
Use a micro-USB cable to connect the xG24 Dev Kit to your development computer (where you downloaded and installed the flashing utility).
You can use the flashing utility to flash your xG24 Dev Kit with our pre-built Edge Impulse firmware. To do this, first select your board from the dropdown list in the top left corner:
Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI.
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices on the left sidebar. The device will be listed there:
Looking to connect different sensors? The Data forwarder lets you easily send data from any sensor into Edge Impulse.
Flash the image to an SD card.
Flash the image to an SD card.
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
Looking to connect different sensors? Our Linux SDK lets you easily send data from any sensor and any programming language (with examples in Node.js, Python, Go and C++) into Edge Impulse.
This will automatically compile your model with full hardware acceleration, download the model to your Raspberry Pi, and then start classifying. Our Linux SDK has examples on how to integrate the model with your favourite programming language.
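For instance, a minimal Python sketch using the edge_impulse_linux SDK might look like the following; the model path and camera index are assumptions (download a model first with `edge-impulse-linux-runner --download modelfile.eim`):

```python
import cv2
from edge_impulse_linux.image import ImageImpulseRunner

MODEL_PATH = 'modelfile.eim'  # assumed path to a model downloaded with the Linux runner

with ImageImpulseRunner(MODEL_PATH) as runner:
    model_info = runner.init()
    print('Loaded model for project:', model_info['project']['name'])

    camera = cv2.VideoCapture(0)  # first attached camera
    ret, frame = camera.read()
    if ret:
        img = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        features, cropped = runner.get_features_from_image(img)
        result = runner.classify(features)
        print(result['result'])
    camera.release()
```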
Depending on your hardware, follow NVIDIA's setup instructions for your board, covering both "Write Image to SD Card" and "Setup and First Boot." Do not use the latest SD card image; instead, download the 4.5.1 version for your respective board. When finished, you should have a bash prompt via the USB serial port, or via an external monitor and keyboard attached to the Jetson. You will also need to connect your Jetson to the internet via the Ethernet port (there is no WiFi on the Jetson). (After setting up the Jetson the first time via keyboard or the USB serial port, you can SSH in.)
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
Looking to connect different sensors? Our Linux SDK lets you easily send data from any sensor and any programming language (with examples in Node.js, Python, Go and C++) into Edge Impulse.
This will automatically compile your model with full hardware acceleration, download the model to your Jetson, and then start classifying. Our Linux SDK has examples on how to integrate the model with your favourite programming language.
Due to some incompatibilities, we don't run models on the GPU by default. You can enable this by following the TensorRT instructions in the C++ SDK.
then please download and use the SD card image version 4.6.1 for your respective board. The error is likely caused by an incompatible version of NVIDIA's GPU libraries, or the absence of these libraries. If you must use an older JetPack version (4.5.1 is the earliest supported), then you need to rename libei_debug7.a, located in tflite/linux-jetson-nano/, to libei_debug.a and recompile your application code.
Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI.
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
How to use the Arduino CLI with a macOS M1 chip? You will need to install Rosetta 2 to run the Arduino CLI.
Drivers for the development board: run dpinst_amd64 on 64-bit Windows, or dpinst_x86 on 32-bit Windows.
See the Installation and troubleshooting guide.
Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI.
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
Looking to connect different sensors? The Data forwarder lets you easily send data from any sensor into Edge Impulse.
You have hit a known bug in the firmware for this development board's WiFi module that results in a timeout during network scanning if more than 20 WiFi access points are detected. If you are experiencing this issue, you can work around it by reducing the number of access points within range of the device, or by skipping WiFi configuration.
To set up your Arducam Pico4ML TinyML Dev Kit, follow Arducam's getting-started guide.
With everything set up you can now build your first machine learning model with the Edge Impulse tutorials.
Or you can follow Arducam's own tutorials.
Looking to connect different sensors? The Data forwarder lets you easily send data from any sensor into Edge Impulse.
With the impulse designed, trained and verified you can deploy this model back to your Arducam Pico4ML TinyML Dev Kit. This makes the model run without an internet connection, minimizes latency, and runs with minimum power consumption. Edge Impulse can package up the complete impulse, including the signal processing code, neural network weights, and classification code, into a single library that you can run on your development board. See the end of Arducam's tutorial for more information on deploying your model onto the device.
Sensor | Axis names |
---|---|
#define SAMPLE_ACCELEROMETER | accX, accY, accZ |
#define SAMPLE_GYROSCOPE | gyrX, gyrY, gyrZ |
#define SAMPLE_PROXIMITY | cm |
(Table: officially supported development boards with their sensors, memory, storage, and processor architecture.)
Sensor | Axis names |
---|---|
#define SAMPLE_ACCELEROMETER | accX, accY, accZ |
#define SAMPLE_GYROSCOPE | gyrX, gyrY, gyrZ |
#define SAMPLE_ORIENTATION | heading, pitch, roll |
#define SAMPLE_ENVIRONMENTAL | temperature, barometer, humidity, gas |
#define SAMPLE_ROTATION_VECTOR | rotX, rotY, rotZ, rotW |
reComputer for Jetson series are compact edge computers built with NVIDIA advanced AI embedded systems: Jetson-10 (Nano) and Jetson-20 (Xavier NX). With rich extension modules, industrial peripherals, thermal management combined with decades of Seeed’s hardware expertise, reComputer for Jetson is ready to help you accelerate and scale the next-gen AI product emerging in diverse AI scenarios.
You can easily add a USB external microphone or camera - and it's fully supported by Edge Impulse. You'll be able to sample raw data, build models, and deploy trained machine learning models directly from the Studio. Currently, four versions have been launched. See reComputer Series Getting Started web page.
This guide has only been tested with the reComputer J1020.
In addition to the Jetson Nano we recommend that you also add a camera and / or a microphone. Most popular USB webcams work fine on the development board out of the box.
You will also need the following equipment to complete your first boot.
A monitor with HDMI interface. (For the A206 carrier board, a DP interface monitor can also be used.)
A set of mouse and keyboard.
An ethernet cable or an external WiFi adapter (there is no WiFi on the Jetson)
The reComputer ships with an operating system pre-installed. Before using it, you need to complete some necessary configuration steps: follow the reComputer Series Getting Started guide. When completed, open a new terminal by pressing Ctrl + Alt + T. It will look as shown:
Issue the following command to check:
The result should look similar to this:
To set this device up in Edge Impulse, run the following commands (from any folder). When prompted, enter the password you created for the user on your Jetson in step 1. The entire script takes a few minutes to run (using a fast microSD card).
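The exact commands are not reproduced in this extract; at the time of writing, the documented approach was to fetch and run the Edge Impulse Jetson setup script, roughly as follows (verify the URL against the current documentation):

```bash
wget -q -O - https://cdn.edgeimpulse.com/firmware/linux/jetson.sh | bash
```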
With all software set up, connect your camera or microphone to your Jetson (see 'Next steps' further on this page if you want to connect a different sensor), and run:
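For example, using the standard Edge Impulse Linux client:

```bash
edge-impulse-linux
```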
This will start a wizard which will ask you to log in, and choose an Edge Impulse project. If you want to switch projects, run the command with --clean.
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
With everything set up you can now build your first machine learning model with these tutorials:
Looking to connect different sensors? Our Linux SDK lets you easily send data from any sensor and any programming language (with examples in Node.js, Python, Go and C++) into Edge Impulse.
To run your impulse locally, just connect to your Jetson again, and run:
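For example, using the Edge Impulse Linux runner:

```bash
edge-impulse-linux-runner
```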
This will automatically compile your model with full hardware acceleration, download the model to your Jetson, and then start classifying. Our Linux SDK has examples on how to integrate the model with your favourite programming language.
If you have an image model then you can get a peek of what your device sees by being on the same network as your device, and finding the 'Want to see a feed of the camera and live classification in your browser' message in the console. Open the URL in a browser and both the camera feed and the classification are shown:
Due to some incompatibilities we don't run models on the GPU by default. You can enable this by following the TensorRT instructions in the C++ SDK.
This is probably caused by a missing dependency on libjpeg. If you run:
The end of the output should show support for file import/export with libjpeg, like so:
If you don't see jpeg support as "yes", rerun the setup script and take note of any errors.
If you encounter this error, ensure that your entire home directory is owned by you (especially the .config folder):
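One way to do that is shown below; this is a sketch, so adjust the username and path to your setup:

```bash
sudo chown -R $(whoami) $HOME
```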
By default, the Jetson Nano enables a number of aggressive power saving features that disable and slow down hardware it detects as idle. In practice the GPU sometimes cannot power up fast enough, or stay powered long enough, to deliver best performance. You can run a script to enable maximum performance on your Jetson Nano.
ONLY DO THIS IF YOU ARE POWERING YOUR JETSON NANO FROM A DEDICATED POWER SUPPLY. DO NOT RUN THIS SCRIPT WHILE POWERING YOUR JETSON NANO THROUGH USB.
To enable maximum performance, run:
Hackster.io tutorial: Train an embedded Machine Learning model based on Edge Impulse to detect hard hat and deploy it to the reComputer J1010 for Jetson Nano.
Community board
This is a community board by Seeed Studios, and it's not maintained by Edge Impulse. For support head to the Seeed Forum.
The Seeed Wio Terminal is a development board from Seeed Studios with a Cortex-M4 microcontroller, motion sensors, an LCD display, and Grove connectors to easily connect external sensors. Seeed Studio has added support for this development board to Edge Impulse, so you can sample raw data and build machine learning models from the studio. The board is available for 29 USD directly from Seeed.
To set up your Seeed Wio Terminal, follow this guide: Getting started with Edge Impulse - Seeed Wiki.
With everything set up you can now build your first machine learning model with this full end-to-end course from Seeed's EDU team: TinyML with Wio Terminal Course.
Looking to connect different sensors? The Data forwarder lets you easily send data from any sensor into Edge Impulse.
With the impulse designed, trained and verified you can deploy this model back to your Wio Terminal. This makes the model run without an internet connection, minimizes latency, and runs with minimum power consumption. Edge Impulse can package up the complete impulse, including the signal processing code, neural network weights, and classification code, into a single library that you can run on your development board.
The easiest way to deploy your impulse to the Seeed Wio Terminal is via an Arduino library. See Running your impulse locally on your Arduino for more information.
The SK-TDA4VM is a Linux enabled development kit from Texas Instruments with a focus on smart cameras, robots, and ADAS that need multiple connectivity options and ML acceleration. The TDA4VM processor has 8 TOPS of hardware-accelerated AI combined with low power capabilities to make this device capable of many applications.
In order to take full advantage of the TDA4VM's AI hardware acceleration Edge Impulse has integrated TI Deep Learning Library and TDA4VM optimized EdgeAI models for low-to-no-code training and deployments from Edge Impulse Studio.
First, one needs to follow the TDA4VM Getting Started Guide to install the Linux distribution to the SD card of the device.
To set this device up in Edge Impulse, run the following commands on the SK-TDA4VM:
With all software set up, connect your camera or microphone to your operating system (see 'Next steps' further on this page if you want to connect a different sensor), and run:
This will start a wizard which will ask you to log in, and choose an Edge Impulse project. If you want to switch projects, run the command with --clean.
That's all! Your machine is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
With everything set up you can now build your first machine learning model with these tutorials:
Counting objects using FOMO
Looking to connect different sensors? Our Linux SDK lets you easily send data from any sensor and any programming language (with examples in Node.js, Python, Go and C++) into Edge Impulse.
To run your impulse locally, run the following on your Linux platform:
This will automatically compile your model with full hardware acceleration, download the model to your local machine, and then start classifying. Our Linux SDK has examples on how to integrate the model with your favourite programming language.
If you have an image model then you can get a peek of what your device sees by being on the same network as your device, and finding the 'Want to see a feed of the camera and live classification in your browser' message in the console. Open the URL in a browser and both the camera feed and the classification are shown:
Texas Instruments provides a number of models that are optimized to run on the TDA4VM. Those that have Edge Impulse support are found in the links below. Each GitHub repository has instructions on installation to your Edge Impulse project. The original source of these optimized models is the Texas Instruments EdgeAI Model Zoo.
The Renesas RZ/V2L is the latest state-of-the-art general-purpose 64-bit Linux MPU with a dual-core ARM Cortex-A55 processor running at 1.2GHz and ARM Mali-G31 3D graphic engine.
The RZ/V2L EVK consists of a SMARC SOM module and an I/O carrier board that provides a USB serial interface, 2 channel Ethernet interfaces, a camera and an HDMI display interface, in addition to many other interfaces (PMOD, microphone, audio output, etc.). The RZ/V2L EVK can be acquired directly through the Renesas website. Since the RZ/V2L is intended for vision AI, the EVK already contains the Google Coral Camera Module.
The Renesas RZ/V2L board realizes hardware acceleration through the DRP-AI IP that consists of a Dynamically Configurable Processor (DRP), and Multiply and Accumulate unit (AI-MAC). The DRP-AI IP is designed to process the entire neural network plus the required pre- and post-processing steps. Additional optimization techniques reduce power consumption and increase processing performance. This leads to high power efficiency and allows using the MPU without a heat sink.
Note that the DRP-AI is designed for feed-forward neural networks, which are typical of vision-based architectures. For more information about the DRP-AI, please refer to the white paper published by the Renesas team.
The Renesas tool “DRP-AI translator” is used to translate machine learning models and optimize the processing for DRP-AI. The tool is fully supported by Edge Impulse. This means that machine learning models downloaded from the studio can be directly deployed to the RZ/V2L board.
For more technical information about RZ/V2L, please refer to the Renesas RZ/V2L documentation.
Renesas provides a Yocto build system to build all the necessary packages and create the Linux image. In this section, we will build the Linux image with the Edge Impulse CLI tools. Note that Renesas recommends using the Ubuntu 20.04 Linux distribution to build the Linux image; we therefore recommend building the image inside a Docker container if you are not using Ubuntu 20.04.
Install Docker Desktop for macOS and Windows. For Linux, please refer to the Ubuntu installation instructions.
This guide assumes that the user does not have any experience in Yocto. The objective is to provide the user with the necessary configurations to build the Linux image and Edge Impulse CLI. For further details about Yocto please refer to this page.
In order to build the Yocto Image, please download the latest version of RZ/V2L Verified Linux Package (v3.0.0) from the Renesas download section (can be found here). Please create an account on Renesas' website to be able to download the package. Once the package is downloaded, please copy the package to the docker container using this command
In addition to the Verified Linux Package (VLP) v3.0.0, you need to download the DRP-AI package from Renesas' website as well. Please consult the Renesas website for the download link.
Decompress the package using the unzip command. Inside the package, you will find a directory that contains several PDF files. Please refer to the file that ends with rz-v2l-linux.pdf for the build instructions. You also need to do the same for the DRP-AI package. The idea is to extract the Yocto layers for the Linux image and the DRP-AI. Please follow the Renesas documentation to see how to compile these two layers together.
Note that it is recommended to add the Mali GPU support layer and the codec layer to take advantage of GPU hardware acceleration. Installation instructions for the GPU and codec layers can be found on Renesas' website.
Note: The Renesas documentation might refer to adding additional layers such as the ISP layer. Please do not add this layer for now (current version VLP 3.0.0), as the software setup isn't compatible yet. This is expected to change with the next release in 2023.
Please build the Weston image instead of building the minimal image when going through the instructions.
If you are a root user inside a Docker container, you will need to disable the security check in order to allow bitbake to start the build process. This can be done by commenting out the sanity check in poky/meta/conf/sanity.conf as follows:
In addition, it is required to modify the BSP layer as follows in order to be able to start the build:
Yocto configurations without Firefox
Once you finish the build instructions, we need to add the Edge Impulse CLI packages to the Yocto build. The Edge Impulse CLI requires the nodejs and npm packages to be installed, in addition to upgrading the glibc version from 2.28 to 2.31. To do this, add the following configurations at the end of the local.conf file located inside the build directory (build/conf/local.conf).
Yocto configurations with Firefox
This step is optional, but it adds support for the Firefox browser. First, follow the above instructions on installing nodejs and upgrading glibc. Second, follow the instructions on adding HTML5 support from Renesas' website.
Once the image has been built, you will see the images subdirectory inside the build/tmp/deploy directory. To flash the image to an SD card, Renesas has published a guide on their renesas.info website; please refer to section 4 on that page.
If you are inside a Docker container, you will need to copy the build directory from the container to the host. Use the following command to do so; you need to specify the path inside the container and the path on the host:
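A sketch of such a copy, where the container name and both paths are placeholders you need to adapt:

```bash
docker cp <container-name>:/path/to/build/tmp/deploy ~/rzv2l-deploy
```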
If you are not using the docker container then it should be straightforward as described above.
Connecting to the board using screen
The easiest way is to connect through serial to the RZ/V2L board using the USB mini-B port.
After connecting the board with a USB-C cable, please power the board with the red power button.
Please install screen on the host machine and then execute the following command from Linux to access the board:
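For example (the serial device name is an assumption; check `ls /dev/ttyUSB*` after plugging in the board):

```bash
sudo screen /dev/ttyUSB0 115200
```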
You will see the boot process, then you will be asked to log in:
Log in with the username root. There is no password.
Note that it should be possible to use an Ethernet cable and log in via SSH if the SSH daemon is installed on the image. However, for simplicity, we do not cover that here.
Once you have logged in to the board, please run the following command to install Edge Impulse Linux CLI
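The install command is not reproduced in this extract; the usual npm-based installation of the Edge Impulse Linux CLI looks roughly like this:

```bash
npm config set user root && npm install edge-impulse-linux -g --unsafe-perm
```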
With all software set up, connect your Google Coral camera to your Renesas board (see 'Next steps' further on this page if you want to connect a different sensor), and run:
This will start a wizard which will ask you to log in and choose an Edge Impulse project. If you want to switch projects, run the command with --clean.
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
Currently, all Edge Impulse models can run on the RZ/V2L CPU which is a dedicated Cortex A55. In addition, you can bring your own model to Edge Impulse and use it on the device. However, if you would like to benefit from the DRP-AI hardware acceleration support including higher performance and power efficiency, please use one of the following models:
For object detection:
Yolov5 (v5)
FOMO (Faster objects More Objects)
For Image classification:
MobileNet v1, v2
The DRP-AI also supports models built within the Studio using the available layers on the training page.
Note that you have to select the target on the training page before starting training, in order to tell the Studio that you are training the model for the RZ/V2L. This can be done in the top right of the training page.
If you would like to do object detection with YOLOv5 (v5), you need to fix the image resolution in the impulse design to 320x320, otherwise training might fail.
With everything set up you can now build your first machine learning model with these tutorials:
If you are interested in using the EON Tuner to improve the accuracy of your model, note that this is currently only possible for image classification. EON Tuner support for object detection is arriving soon.
If you use the EON Tuner with image classification, you need to filter out the int8 models, since they are not supported by the DRP-AI. You also need to filter out the grayscale models. Note that if you leave the EON Tuner page, the filter resets to the default settings, which means you need to re-apply these filters.
To run your impulse locally, just connect to your Renesas RZ/V2L and run:
This will automatically compile your model with full hardware acceleration and download the model to your Renesas board, and then start classifying.
Or you can select the RZ/V2L board from the deployment page; this will download an eim model that you can use with the above runner as follows:
Go to the deployment page and select:
Then run the following on the RZ/V2L:
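A sketch of running a downloaded model with the runner (the file name is a placeholder):

```bash
edge-impulse-linux-runner --model-file ./model.eim
```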
You will see the model inferencing results in the terminal, and we also stream the results to the local network. This allows you to see the output of the model in real time in your web browser: open the URL shown when you start the runner and you will see both the camera feed and the classification results.
Since the RZ/V2L benefits from hardware acceleration using the DRP-AI, we provide a drp-ai library that uses our C++ Edge Impulse SDK together with model headers that run on the hardware accelerator. If you would like to integrate the model source code into your application and benefit from the DRP-AI, you need to select the drp-ai library.
We have an example showing how to use the drp-ai library in 'Deploy your model as a DRP-AI library'.
You can use any smartphone with a modern browser as a fully-supported client for Edge Impulse. You'll be able to sample raw data (from the accelerometer, microphone and camera), build models, and deploy machine learning models directly from the studio. Your phone will behave like any other device, and data and models that you create using your mobile phone can also be deployed to embedded devices.
The mobile client is open source and hosted on GitHub: edgeimpulse/mobile-client. As there are thousands of different phones and operating system versions we'd love to hear from you there if something is amiss.
There's also a video version of this tutorial:
To connect your mobile phone to Edge Impulse, go to your Edge Impulse project, and head to the Devices page. Then click Connect a new device.
Select Mobile phone, and a QR code will appear. Either scan the QR code with the camera of your phone - many phones will automatically recognize the code and offer to open a browser window - or click on the link above the QR code to open the mobile client.
This opens the mobile client, and registers the device directly. On your phone you see a Connected message.
That's all! Your device is now connected to Edge Impulse. If you return to the Devices page in the studio, your phone now shows as connected. You can change the name of your device by clicking on ⋮.
With everything set up you can now build your first machine learning model with these tutorials:
Your phone will show up like any other device in Edge Impulse, and will automatically ask permission to use sensors.
You might need to enable motion sensors in the Chrome settings via Settings > Site settings > Motion sensors.
With the impulse designed, trained and verified you can deploy this model back to your phone. This makes the model run without an internet connection, minimizes latency, and runs with minimum power consumption. Edge Impulse can package up the complete impulse, including the signal processing code, neural network weights, and classification code, into a single WebAssembly package that you can run straight from the browser.
To do so, just click Switch to classification mode at the bottom of the mobile client. This will first build the impulse, and then samples data from the sensor, run the signal processing code, and then classify the data:
Victory! You're now running your machine learning model locally in your browser - you can even turn on airplane mode and the model will continue running. You can also download the WebAssembly package to include in your own website or Node.js application. 🚀
The AKD1000-powered PCIe boards can be plugged into a developer’s existing linux system to unlock capabilities for a wide array of edge AI applications, including Smart City, Smart Health, Smart Home and Smart Transportation. Linux machines with the AKD1000 are supported by Edge Impulse so that you can sample raw data, build models, and deploy trained embedded machine learning models directly from the Edge Impulse studio to create the next generation of low-power, high-performance ML applications.
To learn more about BrainChip technology please visit BrainChip's website: https://brainchip.com/products/
To enable this device for Edge Impulse deployments you must install the following dependencies on your Linux target that has an Akida PCIe board attached.
Python 3.8: Python 3.8 is required for deployments via the Edge Impulse CLI or AKD1000 deployment blocks because the binary file that is generated is reliant on specific paths generated for the combination of Python 3.8 and Python Akida™ Library 2.2.2 installations. Alternatively, if you intend to write your own code with the Python Akida™ Library or the Edge Impulse SDK via the BrainChip MetaTF Deployment Block option you may use Python 3.7 - 3.10.
Python Akida™ Library 2.2.2: A python package for quick and easy model development, testing, simulation, and deployment for BrainChip devices
Akida™ PCIe drivers: This will build and install the driver on your system to communicate with the above AKD1000 reference PCIe board
Edge Impulse Linux: This will enable you to connect your development system directly to Edge Impulse Studio
With all software set up, connect your camera or microphone to your operating system and run:
This will start a wizard which will ask you to log in, and choose an Edge Impulse project. If you want to switch projects, run the command with --clean.
That's all! Your machine is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
After adding data via Data acquisition and starting an Impulse Design, you can add a BrainChip Akida™ Learning Block. The types of Learning Blocks visible depend on the type of data collected. Using BrainChip Akida™ Learning Blocks ensures that models generated for deployment will be compatible with BrainChip Akida™ devices.
In the Learning Block of the Impulse Design you can compare the Float, Quantized, and Akida™ versions of a model. If you added a Processing Block to your Impulse Design you will need to generate features before you can train your model. If the project uses a transfer learning block, you may be able to select a base model from BrainChip's model zoo to transfer learn from. More models will be available in the future, but if you have a specific request please let us know via the Edge Impulse forums.
In order to achieve full hardware acceleration models must be converted from their original format to run on an AKD1000. This can be done by selecting the BrainChip MetaTF Block from the Deployment Screen. This will generate a .zip file with models that can be used in your application for the AKD1000. The block uses the CNN2SNN toolkit to convert quantized models to SNN models compatible for the AKD1000. One can then develop an application using the Akida™ python package that will call the Akida™ formatted model found inside the .zip file.
Alternatively, you can use the AKD1000 Block to generate a pre-built binary that can be used by the Edge Impulse Linux CLI to run on your Linux installation with a AKD1000 Mini PCIe present.
The output from this block is an .eim file that, once saved, can be run with the following command:
We have multiple projects that are available to clone immediately to quickly train and deploy models for the AKD1000.
If you have an image model then you can get a peek of what your device sees by being on the same network as your device, and finding the 'Want to see a feed of the camera and live classification in your browser' message in the console. Open the URL in a browser and both the camera feed and the classification are shown:
Errors related to the akida library are mainly related to initialization of the Akida™ NSoC and model, and can be caused by a missing Akida Python library. Please check whether you have the Akida™ Python library installed:
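For example:

```bash
pip show akida
```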
Example output:
If you don't have the library (WARNING: Package(s) not found: akida), then install it:
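For example, matching the Akida™ library version listed in the requirements above:

```bash
pip install akida==2.2.2
```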
If you have the library, then check if the EIM artifact is looking for the library in the correct place. First, download your EIM model using Edge Impulse Linux CLI tools:
Then run the EIM model with the debug option:
Now check whether the Location directory reported by the pip show akida command is listed in your sys.path output. If it is not (this usually happens when you are using a Python virtual environment), export PYTHONPATH:
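A sketch, where the path is the Location reported by `pip show akida` (yours will differ):

```bash
export PYTHONPATH=/home/user/.local/lib/python3.8/site-packages:$PYTHONPATH
```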
And try to run the model with edge-impulse-linux-runner once again.
If the previous step didn't help, try to get additional debug data. With your EIM model downloaded, open one terminal window and do:
Then in another terminal:
This should give you additional info in the first terminal about the possible root of your issue.
This error could mean that your camera is in use by another process. Check that you don't have another application open that is using the camera. This error can also occur when a previous attempt to run edge-impulse-linux-runner failed with an exception. In that case, check whether a gst-launch-1.0 process is still running. For example:
In this case, the first number (here 5615) is the process ID. Kill the process:
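For example, using the PID from the output above (substitute your own):

```bash
kill -9 5615
```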
And try to run the model with edge-impulse-linux-runner once again.
The Audio Syntiant processing block extracts time and frequency features from a signal. It is similar to the Audio MFE block but performs additional processing specific to the Syntiant NDP101 chip. This block can be used only with Syntiant targets.
Log Mel-filterbank energy features
Frame length: The length of each frame in seconds
Frame stride: The step between successive frames in seconds
Filter number (fixed): The number of triangular filters applied to the spectrogram
FFT length (fixed): The FFT size
Low frequency (fixed): Lowest band edge of Mel-scale filterbanks
High frequency (fixed): Highest band edge of Mel-scale filterbanks
Coefficient: Pre-emphasis coefficient
Sampling frequency
The Audio Syntiant block only supports a 16 kHz frequency. You can adjust the sampling frequency in the "Create Impulse" section.
The Spectrogram processing block extracts time and frequency features from a signal. It performs well on audio data for non-voice recognition use cases, or on any sensor data with continuous frequencies.
GitHub repository containing all DSP block code: edgeimpulse/processing-blocks.
Spectrogram
Frame length: The length of each frame in seconds
Frame stride: The step between successive frames in seconds
FFT size: The size of the FFT for each frame. Will zero pad or clip if frame length in samples does not equal FFT size.
Normalization
Noise floor (dB): signals lower than this level will be dropped
It first divides the window into multiple overlapping frames. The size and number of frames can be adjusted with the parameters Frame length and Frame stride. For example, with a window of 1 second, a frame length of 0.02 s and a stride of 0.01 s, it will create 99 time frames.
An FFT is then calculated for each frame. The number of frequency features for each frame is equal to the FFT size parameter divided by 2 plus 1. We recommend keeping the FFT size a power of 2 for performance reasons. Finally, the Noise floor value is applied to the power spectrum.
The features generated by the Spectrogram block are equal to the number of generated time frames times the number of frequency features.
Frequency bands and frame length
There is a connection between the FFT size parameter and the frame length. The frame length will be cropped or padded to the FFT size value before applying the FFT. For example, with an 8 kHz sampling frequency and a time frame of 0.02 s, each time frame contains 160 samples (8k * 0.02). If your FFT size is set to 128, time frames will be cropped to 128 samples. If your FFT size is set to 256, time frames will be padded with zeros.
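As a quick sanity check, the frame and feature counts from the examples above can be computed like this:

```python
# Numbers taken from the examples in this section:
# 1 s window at 8 kHz, 0.02 s frame length, 0.01 s frame stride, FFT size 256.
sampling_rate_hz = 8000
window_s, frame_length_s, frame_stride_s = 1.0, 0.02, 0.01
fft_size = 256

samples_per_frame = int(sampling_rate_hz * frame_length_s)                  # 160 samples, zero-padded to 256
num_frames = int(round((window_s - frame_length_s) / frame_stride_s)) + 1   # 99 time frames
freq_features = fft_size // 2 + 1                                           # 129 frequency features per frame

print(num_frames, freq_features, num_frames * freq_features)                # total features in the spectrogram
```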
After extracting meaningful features from the raw signal using signal processing, you can now train your model using a learning block. We provide a number of pre-defined learning blocks:
Miss an architecture? You can bring your own model, written in PyTorch, Keras or scikit-learn.
For most of the learning blocks (except K-means Anomaly Detection), you can use the Switch to expert mode button to access the full Keras API for custom architectures and more.
Feature extraction is a proprietary algorithm from Syntiant; however, the parameters are very close to those of the Audio MFE block. The pre-emphasis coefficient is applied first to amplify higher frequencies. The signal is then divided into overlapping frames, defined by the Frame length and Frame stride, to extract speech features.
SKU | Equipped Module | Operating Carrier Board | Power Interface |
---|---|---|---|
110061362 | Jetson Nano 4GB | J1010 Carrier Board | Type-C connector |
110061361 | Jetson Nano 4GB | Jetson A206 | DC power adapter |
110061363 | Jetson Xavier NX 8GB | Jetson A206 | DC power adapter |
110061401 | Jetson Xavier NX 16GB | Jetson A206 | DC power adapter |
Want to use a novel ML architecture, or load your own transfer learning models into Edge Impulse? Bring your own model! It's easy to bring in any training pipeline into the Studio, as long as you can output TFLite or ONNX files. We have end-to-end examples of doing this in Keras, PyTorch and scikit-learn.
If you just want to modify the neural network architecture or loss function, you can also use expert mode directly in the Studio, without having to bring your own model. Go to any ML block, select three dots, and select Switch to Keras (expert) mode.
This page describes the input and output formats if you want to bring your own model, but a good way to start building a custom learning block is by modifying one of the following example repositories:
YOLOv5 - wraps the Ultralytics YOLOv5 repository (trained with PyTorch) to train a custom transfer learning model.
EfficientNet - a Keras implementation of transfer learning with EfficientNet B0.
Keras - a basic multi-layer perceptron in Keras and TensorFlow.
PyTorch - a basic multi-layer perceptron in PyTorch.
Scikit-learn - trains a logistic regression model using scikit-learn, then outputs a TFLite file for inferencing using jax.
Any built-in block in the Edge Impulse Studio (e.g. classifiers, regression models or FOMO blocks) can be edited locally, and then pushed back as a custom block. This is great if you want to make heavy modifications to these training pipelines, for example to do custom data augmentation. To download a block, go to any ML block in your project, click the three dots, select Edit block locally, and follow the instructions in the README.
Training pipelines in Edge Impulse are built on top of Docker containers, a virtualization technique which lets developers package up an application with all dependencies in a single package. To train your own model you'll need to wrap all the required packages, your scripts, and (if you use transfer learning) your pre-trained weights into this container. When running in Edge Impulse the container does not have network access, so make sure you don't download dependencies while running (fine when building the container).
A typical Dockerfile might look like the following (see the example repositories for more information):
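A minimal sketch of such a Dockerfile; the base image, requirements.txt and train.py are placeholders for illustration:

```dockerfile
# Minimal sketch - adapt the base image and file names to your pipeline
FROM python:3.10

WORKDIR /app

# Install all dependencies at build time; the container has no network access while training
COPY requirements.txt ./
RUN pip3 install --no-cache-dir -r requirements.txt

# Copy training scripts and any pre-trained weights
COPY . ./

# Tell Edge Impulse which file to run
ENTRYPOINT ["python3", "-u", "train.py"]
```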
Important: ENTRYPOINT
It's important to create an ENTRYPOINT at the end of the Dockerfile to specify which file to run.
GPU Support
If you want to have GPU support (only for enterprise customers), you'll need the CUDA packages installed. If you export a learning block from the Studio it will already have the right base packages, so use that Dockerfile as a starting point.
The entrypoint (see above in the Dockerfile) will be called with these four parameters:
--data-directory - where you can find the data (see below for the input/output formats).
--epochs - the number of epochs to train for (set by the user in the UI).
--learning-rate - the learning rate to train with (set by the user in the UI).
--out-directory - where to write the TFLite or ONNX files (see below for the input/output formats).
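A sketch of how a Python entrypoint (here an assumed train.py) could parse these parameters:

```python
import argparse

parser = argparse.ArgumentParser(description='Custom Edge Impulse learning block')
parser.add_argument('--data-directory', type=str, required=True)
parser.add_argument('--epochs', type=int, required=True)
parser.add_argument('--learning-rate', type=float, required=True)
parser.add_argument('--out-directory', type=str, required=True)
args, _ = parser.parse_known_args()

print('Data in:', args.data_directory)
print('Training for', args.epochs, 'epochs at learning rate', args.learning_rate)
print('Writing models to:', args.out_directory)
```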
We realise that not every ML model requires setting epochs and learning rate, and we also realise that you might want to add extra options to the UI. Longer term we'll implement a parameter system similar to what custom processing blocks use.
The data directory contains your dataset, after running any DSP blocks, and already split in a train/validation set:
X_split_train.npy
Y_split_train.npy
X_split_test.npy
Y_split_test.npy
The X_*.npy files are float32 Numpy arrays, already in the right shape (e.g. if you're training on 96x96 RGB images this will be of shape (n, 96, 96, 3)). You can typically load these without any modification into your training pipeline (see the notes after this section for caveats).
The Y_*.npy files are either:
1) int32 Numpy arrays, with four columns (label_index, sample_id, sample_slice_start_ms, sample_slice_end_ms).
2) A JSON array in the form of:
[{ "sampleId": 234731, "boundingBoxes": [{ "label": 1, "x": 260, "y": 313, "w": 234, "h": 261 }] } ]
2) is sent if your dataset has bounding boxes, in all other cases 1) is sent.
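For example, loading the training split for a classifier (format 1 above) could look like this; the directory name is an assumption and should be whatever was passed via --data-directory:

```python
import os
import numpy as np

data_dir = 'data'  # value of --data-directory

X_train = np.load(os.path.join(data_dir, 'X_split_train.npy'))
Y_train = np.load(os.path.join(data_dir, 'Y_split_train.npy'))

# Format 1: int32 array with columns (label_index, sample_id, start_ms, end_ms);
# for a classifier only the first column is typically needed.
labels = Y_train[:, 0]

print('X shape:', X_train.shape)
print('unique labels:', np.unique(labels))
```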
To get new data for your project, just run (requires Edge Impulse CLI v1.16 or higher):
This regenerates features (if necessary) and then downloads the updated dataset.
The input features for vision models are a 3D vector of shape (WIDTH, HEIGHT, CHANNELS), where the channel data is in RGB format and each pixel is scaled 0..1.
If the input to your model is different (e.g. BGR, or scaled 0..255) you'll need to transform the input. This needs to happen as part of your neural network, as the input will always be as stated above. Here's how you can do that:
If you have a model that requires the input to be scaled 0..255 (e.g. EfficientNet) you can inject a Mul layer that multiplies the input by 255 before passing it to the first hidden layer of your network.
In Keras you do this by adding a Rescaling layer after training your model. Here's a Keras example using EfficientNet.
For PyTorch you do this by first converting the trained model to ONNX, then injecting a Mul operator to the trained ONNX file. Example.
If you have a model that requires BGR input, rather than RGB input (e.g. Resnet50), you'll need to transpose the first and last channels.
In Keras you do this by adding a lambda layer. Example using Resnet50.
For PyTorch you do this by first converting the trained model to ONNX, then transposing using scc4onnx.
If you have a model that requires input to be scaled differently (e.g. Resnet50) you can typically do a matrix subtract or matrix multiplication layer. Here's an example in Keras for Resnet50.
An end-to-end example showing how to move and verify normalization code from a Python function to a neural network graph (using Resnet50 in Keras) can be found in this gist.
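As a sketch of the Rescaling approach described above (model and file names are placeholders, not from the original guide):

```python
import tensorflow as tf

# An already-trained Keras model that expects input scaled 0..255
trained_model = tf.keras.models.load_model('trained_model_0_255.h5')

# Wrap it so the deployed graph accepts 0..1 input (as Edge Impulse provides)
# and multiplies by 255 before the original layers.
inputs = tf.keras.Input(shape=(96, 96, 3))
x = tf.keras.layers.Rescaling(scale=255.0)(inputs)
outputs = trained_model(x)
wrapped = tf.keras.Model(inputs, outputs)

wrapped.save('saved_model')  # export for TFLite conversion
```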
Internally, Edge Impulse vision models require the input shape to be (n, Height, Width, Channels) (NHWC). PyTorch uses (n, Channels, Height, Width) (NCHW) internally, and thus this needs to be converted when you train a model. We do this automatically when you output an ONNX file in NCHW format, but this is done by injecting a number of Transpose layers (which lowers performance). If your training pipeline natively supports outputting TFLite / SavedModel files in NHWC format then please do that (e.g. Ultralytics YOLOv5 does this in their tf.py file).
The training pipeline can output either TFLite or ONNX files:
If you output TFLite files
model.tflite - a TFLite file with float32 inputs and outputs.
model_quantized_int8_io.tflite - a quantized TFLite file with int8 inputs and outputs.
saved_model.zip - a TensorFlow saved model (optional).
At least one of the TFLite files is required.
If you output ONNX files
model.onnx - an ONNX file with float16 or float32 inputs and outputs.
We automatically convert this file to both unquantized and quantized TFLite files after training.
I'm using scikit-learn, I don't have TFLite or ONNX files...
If you have a training pipeline that cannot output TFLite files by default (e.g. scikit-learn), you can use jax to implement the inference function; and compile that to TFLite. See our example repository. If there's any TFLite ops in your final model that are not supported by the EON Compiler (so you cannot run on device), then please let us know on the forums.
Host your block directly within Edge Impulse with the Edge Impulse CLI:
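A sketch of the usual flow with the blocks tooling in the Edge Impulse CLI (run from your block's directory):

```bash
edge-impulse-blocks init   # configure the block and link it to your account
edge-impulse-blocks push   # build the container and push the block to Edge Impulse
```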
To edit the block, go to:
Enterprise: go to your organization, Custom blocks > Machine learning.
Developers: click on your photo on the top right corner, select Custom blocks > Machine learning.
The block is now available from inside any of your Edge Impulse projects. Depending on the data your block operates on, you can add it via:
Object Detection: Create impulse > Add learning block > Object Detection (Images), then select the block via 'Choose a different model' on the 'Object detection' page.
Image classification: Create impulse > Add learning block > Transfer learning (Images), then select the block via 'Choose a different model' on the 'Transfer learning' page.
Audio classification: Create impulse > Add learning block > Transfer Learning (Keyword Spotting), then select the block via 'Choose a different model' on the 'Transfer learning' page.
Other (classification): Create impulse > Add learning block > Custom classification, then select the block via 'Choose a different model' on the 'Machine learning' page.
Other (regression): Create impulse > Add learning block > Custom regression, then select the block via 'Choose a different model' on the 'Regression' page.
Unfortunately object detection models typically don't have a standard way to go from neural network output layer to bounding boxes. Currently we support the following types of output layers:
MobileNet SSD
Edge Impulse FOMO
YOLOv5 (compatible with Ultralytics YOLOv5 v6)
YOLOv5 for Renesas DRP-AI
YOLOX
If you have an object detection model with a different output layer then please contact your user success engineer (enterprise) or let us know on the forums (free users) with an example on how to interpret the output, and we can add it.
When training locally you can use the profiling API to get latency, RAM and ROM estimates. This is very useful as you can immediately see whether your model will fit on device. Additionally, you can use this API as part of your experiment tracking (e.g. in Weights & Biases or MLflow) to weed out models that won't fit your latency or memory constraints.
The profiling API expects:
A TFLite file.
A reference device (for latency calculation) - you can get a list of all devices via getProjectInfo in the latencyDevices object.
A reference model (whichever model is closest to your architecture) - you can choose between gestures-large-f32, gestures-large-i8, image-32-32-mobilenet-f32, image-32-32-mobilenet-i8, image-96-96-mobilenet-f32, image-96-96-mobilenet-i8, image-320-320-mobilenet-ssd-f32, keywords-2d-f32, keywords-2d-i8. Make sure to use the i8 models if you have quantized your model.
Here's how you invoke the API from Python:
For many projects, you will need to constrain the EON Tuner to use steps that are defined by your hardware, your customers, or your expertise.
For example:
Your project requires a grayscale camera because you have already purchased the hardware.
Your engineers have already spent hours working on a dedicated digital signal processing method that has been proven to work with your sensor data.
You have the feeling that a particular neural network architecture will be more suited for your project.
This is why we developed an extension of the EON Tuner: the EON Tuner Search Space.
Please read first the EON Tuner documentation to configure your Target, Dataset category and desired Time per inference.
The Search Space works with templates. The templates can be considered as a config file where you define your constraints. Although templates may seem hard to use at first, once you understand the core concept, this tool is extremely powerful!
A blank template looks like the following:
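As an illustration only (the exact schema can change between Studio versions, so treat the built-in templates as the source of truth), a blank template boils down to three empty arrays of blocks:

```json
{
  "inputBlocks": [],
  "dspBlocks": [],
  "learnBlocks": []
}
```

You then fill each array with the candidate blocks you want the EON Tuner to explore.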
To understand the core concepts, we recommend having a look at the available templates. We provide templates for different dataset categories as well as one for your current impulse if it has already been trained.
Elements inside an array are treated as parameters. This means you can stack several combinations of inputBlocks, dspBlocks and learnBlocks in your templates, and each block can contain several elements.
You can easily add pre-defined blocks using the + Add block section.
Example of a template where we constrained the search space to use 96x96 grayscale images to compare a neural network architecture with a transfer learning architecture using MobileNetv1 and v2:
Public project: Cars binary classifier - EON Tuner Search Space
Example of a template where we compare, on the one hand, MFCC vs. MFE pre-processing with a custom NN architecture and, on the other hand, a keyword spotting transfer learning architecture:
Public Project: Keywords Detection - EON Tuner Search Space
Only available for enterprise customers
Support for custom DSP & ML blocks: the EON Tuner can now use custom organization DSP & ML blocks by adding them to the custom search space. This feature is only available to enterprise customers.
Organizational features are only available for enterprise customers. View our pricing for more information.
The parameters set in the custom DSP block are automatically retrieved.
Example using a custom ToF (Time of Flight) pre-processing block:
Example using EfficientNet (available through a custom ML block) on a dataset containing images of 4 cats:
The Ensemble series of fusion processors from Alif Semiconductor utilizes Arm's low-power Cortex-M55 CPUs with dedicated Ethos-U55 microNPUs to run embedded ML workloads quickly and efficiently. The devices feature both 'High Power' cores designed for large model architectures and 'High Efficiency' cores designed for low-power continuous monitoring. The Ensemble E7 is fully supported by Edge Impulse, and features multiple core types, dual MEMS microphones, accelerometers, and a MIPI camera interface.
To get started with the Alif Ensemble E7 and Edge Impulse you'll need:
The Alif Ensemble E7 development board
(Optional) A compatible MIPI Camera
A JTAG-compatible debugger
The Edge Impulse firmware for this development board is open source and hosted on GitHub:
To set this device up in Edge Impulse, you will need to install the following software:
A compatible flash programmer for your JTAG debugger of choice
With all the software in place it's time to connect the development board to Edge Impulse.
Pins GND, TXD, and RXD on the USB bridge should be connected to header pins 19, 7 and 6 on the baseboard, respectively. Then plug the USB bridge into your computer.
Connect your debugger to the 20-pin JTAG header on the baseboard, then connect the debugger's USB cable to your computer.
The development board does not come with the right firmware yet. To update the firmware:
Connect your flash programmer to your debugger of choice, and configure it to select
Select app.axf from the zip folder as the binary/ELF file to flash, then run the Edge Impulse firmware on the device.
From a command prompt or terminal, run:
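Assuming the Edge Impulse CLI is installed on your computer, the command used for fully supported development boards is the device daemon:

```bash
edge-impulse-daemon
```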
Mac: Device choice
If you have a choice of serial ports and are not sure which one to use, pick /dev/tty.FTDI_USBtoUART or /dev/cu.usbserial-*
This will start a wizard which will ask you to log in and choose an Edge Impulse project. If you want to switch projects, run the command with --clean.
With everything set up you can now build your first machine learning model with these tutorials:
Then once you've tested out deployment with the prebuilt Edge Impulse firmware, learn how to integrate Edge Impulse with your own custom Ensemble based application:
For , we recommend
For , we recommend or
Alif provides a guide for configuring the baseboard for MIPI camera support via . Follow this document to connect the camera.
Download the Edge Impulse firmware and unzip the file.
For or , see Alif instructions in .
For , create a new project with the following device settings:
Alternatively, Alif provides a Secure Enclave to manage secure firmware storage and bootup in production environments. Alif also provides documentation on converting .axf files for use with their secure enclave, and then programming the resulting binary regions to the secure enclave in .
Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI. See for more information.
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
Community board
This is a community board by Blues Wireless, and is not maintained by Edge Impulse. For support head to the Blues Wireless homepage.
The Blues Wireless Swan is a development board featuring a 120MHz ARM Cortex-M4 from STMicroelectronics with 2MB of flash and 640KB of RAM. Blues Wireless has created an in-depth tutorial on how to get started using the Swan with Edge Impulse, including how to collect new data from a triple axis accelerometer and how to train and deploy your Edge Impulse models to the Swan. For more details and ordering information, visit the Blues Wireless Swan product page.
To set up your Blues Wireless Swan, follow this complete guide: Using Swan with Edge Impulse.
The Blues Wireless Swan tutorial will guide you through how to create a simple classification model with an accelerometer designed to analyze movement over a brief period of time (2 seconds) and infer how the motion correlates to one of the following four states:
Idle (no motion)
Circle
Slash
An up-and-down motion in the shape of the letter "W"
For more insight into using a triple axis accelerometer to build an embedded machine learning model visit the Edge Impulse continuous motion recognition tutorial.
Looking to connect different sensors? The Data forwarder lets you easily send data from any sensor into Edge Impulse.
With the impulse designed, trained and verified you can deploy this model back to your Blues Wireless Swan. This makes the model run without an internet connection, minimizes latency, and runs with minimum power consumption. Edge Impulse can package the complete impulse - including the signal processing code, neural network weights, and classification code - into a single library that you can run on your development board. See the end of Blues Wireless' [Using Swan with Edge Impulse](https://dev.blues.io/swan/using-swan-with-edge-impulse) tutorial for more information on deploying your model onto the device.
Community board
This is a community board by RAKwireless and is not maintained by Edge Impulse. For support, head to the RAKwireless homepage or the RAKwireless forums.
The RAKwireless WisBlock is a modular development system that lets you combine different cores and sensors to easily construct your next Internet of Things (IoT) device. The following WisBlock cores work with Edge Impulse:
RAK11200 (ESP32)
RAK4631 (nRF52840)
RAK11310 (RP2040)
RAKwireless has created an in-depth tutorial on how to get started using the WisBlock with Edge Impulse, including collecting raw data from a 3-axis accelerometer or a microphone, training a machine learning model, and deploying the model to the WisBlock core.
A WisBlock starter kit can be found in the RAKwireless store.
Install the following software:
Follow the guide for your particular core to collect data, train a machine learning model, and deploy it to your WisBlock:
By the end of the guide, you should have machine learning inference running locally on your WisBlock!
You can use your Intel or M1-based Mac computer as a fully-supported development environment for Edge Impulse for Linux. This lets you sample raw data, build models, and deploy trained machine learning models directly from the Studio. If you have a MacBook, the webcam and microphone of your system are automatically detected and can be used to build models.
To connect your Mac to Edge Impulse:
Last, install the Edge Impulse CLI:
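Assuming you already have Node.js installed, the tooling is typically installed through npm; for a Mac used as an Edge Impulse for Linux device this is the edge-impulse-linux package:

```bash
npm install -g edge-impulse-linux
```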
Problems installing the CLI?
See the Installation and troubleshooting guide.
With the software installed, open a terminal window and run:
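On a Mac set up as an Edge Impulse for Linux device this is presumably the Linux client:

```bash
edge-impulse-linux
```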
This will start a wizard which will ask you to log in, and choose an Edge Impulse project. If you want to switch projects, run the command with --clean.
That's all! Your Mac is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
With everything set up you can now build your first machine learning model with these tutorials:
Looking to connect different sensors? Our Linux SDK lets you easily send data from any sensor and any programming language (with examples in Node.js, Python, Go and C++) into Edge Impulse.
To run your impulse locally, just open a terminal and run:
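Presumably this is the Edge Impulse for Linux runner:

```bash
edge-impulse-linux-runner
```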
This will automatically compile your model with full hardware acceleration, download the model to your Mac, and then start classifying. Our Linux SDK has examples of how to integrate the model with your favourite programming language.
If you have an image model, you can get a peek at what your device sees by being on the same network as your device and finding the 'Want to see a feed of the camera and live classification in your browser' message in the console. Open the URL in a browser and both the camera feed and the classification are shown:
The AM62X Starter Kit from Texas Instruments is a development platform for the AM62X with quad-core Arm Cortex-A53s running at 1.4GHz. This general-purpose microprocessor supports 1080p displays through HDMI, 5MP camera input through MIPI-CSI2 (including Raspberry Pi camera support), and multichannel audio. The Linux distribution for this device comes with TensorFlow Lite, ONNX Runtime, OpenCV, and GStreamer, all with Python bindings and C++ libraries.
Instructions
If you have a development board that is not officially supported by Edge Impulse, no problem. This guide contains information on connecting any device to Edge Impulse.
Edge Impulse can handle data from any device, whether it's coming from a new development board or from a device that has been in production for years. Just post your data to the ingestion service and it will automatically show up in the studio. You can either do this directly from your device (if it has an IP connection) or through an intermediate protocol like a phone application. To deal with data that is already collected, we have tools which can label and import data.
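As a rough sketch of what posting a sample from Python can look like (the header names and payload fields follow the data acquisition format; verify the exact schema in the ingestion service API reference before relying on it):

```python
# Sketch: upload one accelerometer sample to the Edge Impulse ingestion service.
# Header names, payload fields and the unsigned-signature shortcut are assumptions
# based on the data acquisition format -- check the ingestion API reference.
import json
import requests

API_KEY = "ei_..."  # your project API key

sample = {
    "protected": {"ver": "v1", "alg": "none"},  # assumption: unsigned payloads allowed for testing
    "signature": "0" * 64,
    "payload": {
        "device_name": "my-custom-device",
        "device_type": "CUSTOM_BOARD",
        "interval_ms": 10,  # 100 Hz sampling
        "sensors": [
            {"name": "accX", "units": "m/s2"},
            {"name": "accY", "units": "m/s2"},
            {"name": "accZ", "units": "m/s2"},
        ],
        "values": [[0.1, 9.8, -0.2], [0.0, 9.7, -0.1]],
    },
}

resp = requests.post(
    "https://ingestion.edgeimpulse.com/api/training/data",
    headers={
        "x-api-key": API_KEY,
        "x-label": "idle",
        "x-file-name": "idle.json",
        "Content-Type": "application/json",
    },
    data=json.dumps(sample),
)
print(resp.status_code, resp.text)
```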
A quick way of getting data from devices is using the data forwarder. This lets you forward data collected over a serial interface to the studio. This method only works for sensors with lower sampling frequencies (e.g. no audio), does not allow sensor selection, and does not sign data on the device. It is, however, a really easy way to collect data from existing devices with just a few lines of code.
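The forwarder itself is part of the Edge Impulse CLI; with your device attached over serial you start it from a terminal and map the printed values to named sensor axes:

```bash
edge-impulse-data-forwarder
```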
The inferencing SDK enables you to run impulses locally and on-device. The SDK contains efficient native implementations of all processing and learning blocks. It was written in portable C++11 with as few dependencies as possible, and the best way to test whether it works on your platform is through the Deployment page in the studio. From here you can export a library with all blocks, configuration and the SDK. See the tutorials.
If you need to make changes to the SDK to get it to run on your device, we welcome pull requests. We also welcome contributions which add optimized code paths for your specific hardware. The SDK documentation has more information on where to add these.
Devices can be controlled from the studio through the remote management service. This is a service that devices connect to, either over a web socket or through a serial connection (with the help of the Edge Impulse CLI). The studio lists these devices, and can instruct them to start sampling straight from the UI.
To add full support for your development board you'll need to implement the remote management protocol and (if your device has an IP connection) the ingestion service API. Alternatively, you can also implement the web socket protocol through an intermediate layer (like a mobile app). There are end-to-end integration tests available at which validate both the serial and websocket protocols on a development board.
Devices that connect through the data forwarder can be controlled by the studio, but have limited integration: they don't support sensor or frequency selection.
Do you want help porting? Or do you want the best integration in Edge Impulse, including full studio support, with users able to build binaries directly from the UI? Let us know at hello@edgeimpulse.com and we'll let you know the possibilities.