Welcome to Edge Impulse! We enable developers to create the next generation of intelligent device solutions with embedded Machine Learning. In the documentation you'll find user guides, tutorials and API documentation. For support, visit the forums.
If you're new to the idea of embedded machine learning, or machine learning in general, you may enjoy our quick guide: What is embedded ML, anyway?
Follow these three steps to build your first embedded Machine Learning model - no worries, you can use almost any device to get started.
You'll need some data:
If you have an existing development board or device, you can collect data with a few lines of code using the Data forwarder or the Edge Impulse for Linux SDK.
If you want to collect live data from a supported development kit, select your board from the list of fully supported development boards and follow the instructions to connect your board to Edge Impulse.
If you already have a dataset, you can upload it via the Uploader.
If you have a mobile phone you can use it as a sensor to collect data, see Mobile phone.
Try the tutorials on continuous motion recognition, responding to your voice, recognizing sounds from audio, adding sight to your sensors or object detection. These will let you build machine learning models that detect things in your home or office.
After training your model you can run your model on your device:
If you want to integrate the model with your own firmware or project you can export your complete model (including all signal processing code and machine learning models) to a C++ or Arduino library with no external dependencies (open source and royalty-free), see Running your impulse locally.
If you have a fully supported development board (or your mobile phone) you can build new firmware - which includes your model - directly from the UI. It doesn't get easier than that!
If you have a gateway, a computer or a web browser where you want to run your model, you can export to WebAssembly and run it anywhere you can run JavaScript.
We have some great tutorials, but you have full freedom in the models that you design in Edge Impulse. You can plug in new signal processing blocks, and completely new neural networks. See Building custom processing blocks, or click the three dots on a neural network page and select 'Switch to Keras (expert) mode'.
You can access any feature in the Edge Impulse Studio through the Edge Impulse API. We also have the Ingestion service if you want to send data directly, and we have an open Remote management protocol to control devices from the Studio.
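For example, a minimal Python sketch that queries a project over the API might look like the following; the API key, project ID, and exact endpoint path are placeholders you should check against the API reference.

```python
# Minimal sketch: querying the Edge Impulse API with Python's requests library.
# EI_API_KEY and PROJECT_ID are placeholders; verify the endpoint path against
# the current API reference.
import requests

EI_API_KEY = "ei_..."   # project API key (Dashboard > Keys)
PROJECT_ID = 1          # your project ID (Dashboard > Project info)

resp = requests.get(
    f"https://studio.edgeimpulse.com/v1/api/{PROJECT_ID}",
    headers={"x-api-key": EI_API_KEY},
)
resp.raise_for_status()
print(resp.json())      # project metadata as JSON
```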
For larger teams, and companies with lots of data we offer an enterprise version of Edge Impulse. The enterprise version offers team collaboration on projects, a dataset builder that makes your internal data available to your whole team, integrations with your cloud buckets, transformation blocks that let you extract ML features from thousands of files in one go, and custom processing and deployment blocks for your organization. You can find documentation under Organizations or contact us via hello@edgeimpulse.com for more information.
A gentle introduction to the exciting field of embedded machine learning.
Machine learning (ML) is a way of writing computer programs. Specifically, it’s a way of writing programs that process raw data and turn it into information that is meaningful at an application level.
For example, one ML program might be designed to determine when an industrial machine has broken down based on readings from its various sensors, so that it can alert the operator. Another ML program might take raw audio data from a microphone and determine if a word has been spoken, so it can activate a smart home device.
Unlike normal computer programs, the rules of ML programs are not determined by a developer. Instead, ML uses specialized algorithms to learn rules from data, in a process known as training.
In a traditional piece of software, an engineer designs an algorithm that takes an input, applies various rules, and returns an output. The algorithm’s internal operations are planned out by the engineer and implemented explicitly through lines of code. To predict breakdowns in an industrial machine, the engineer would need to understand which measurements in the data indicate a problem and write code that deliberately checks for them.
This approach works fine for many problems. For example, we know that water boils at 100°C at sea level, so it’s easy to write a program that can predict whether water is boiling based on its current temperature and altitude. But in many cases, it can be difficult to know the exact combination of factors that predicts a given state. To continue with our industrial machine example, there might be various different combinations of production rate, temperature, and vibration level that might indicate a problem but are not immediately obvious from looking at the data.
To create an ML program, an engineer first collects a substantial set of training data. They then feed this data into a special kind of algorithm, and let the algorithm discover the rules. This means that as ML engineers, we can create programs that make predictions based on complex data without having to understand all of the complexity ourselves.
Through the training process, the ML algorithm builds a model of the system based on the data we provide. We run data through this model to make predictions, in a process called inference.
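As a toy illustration (not Edge Impulse code), the difference between training and inference looks like this in scikit-learn:

```python
# Illustrative sketch only: "training" learns a model from labeled examples,
# "inference" runs new data through the trained model.
from sklearn.linear_model import LogisticRegression

# Toy training data: [temperature_C, vibration_rms] -> 0 = healthy, 1 = failing
X_train = [[60, 0.1], [62, 0.2], [95, 1.4], [98, 1.7]]
y_train = [0, 0, 1, 1]

model = LogisticRegression().fit(X_train, y_train)   # training
print(model.predict([[97, 1.5]]))                    # inference on new data
```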
There are many different types of machine learning algorithms, each with their own unique benefits and drawbacks. Edge Impulse helps engineers select the right algorithm for a given task.
Machine learning is an excellent tool for solving problems that involve pattern recognition, especially patterns that are complex and might be difficult for a human observer to identify. ML algorithms excel at turning messy, high-bandwidth raw data into usable signals, especially combined with conventional signal processing.
For example, the average person might struggle to recognize the signs of a machine failure given ten different streams of dense, noisy sensor data. However, a machine learning algorithm can often learn to spot the difference.
But ML is not always the best tool for the job. If the rules of a system are well defined and can be easily expressed with hard-coded logic, it’s usually more efficient to work that way.
Limitations of machine learning
Machine learning algorithms are powerful tools, but they can have the following drawbacks:
They output estimates and approximations, not exact answers
ML models can be computationally expensive to run
Training data can be time consuming and expensive to obtain
It can be tempting to try and apply ML everywhere—but if you can solve a problem without ML, it is usually better to do so.
Recent advances in microprocessor architecture and algorithm design have made it possible to run sophisticated machine learning workloads on even the smallest of microcontrollers. Embedded machine learning, also known as TinyML, is the field of machine learning when applied to embedded systems such as these. There are some major advantages to deploying ML on embedded devices; the key advantages are neatly expressed in the acronym BLERP:
Bandwidth—ML algorithms on edge devices can extract meaningful information from data that would otherwise be inaccessible due to bandwidth constraints.
Latency—On-device ML models can respond in real-time to inputs, enabling applications such as autonomous vehicles, which would not be viable if dependent on network latency.
Economics—By processing data on-device, embedded ML systems avoid the costs of transmitting data over a network and processing it in the cloud.
Reliability—Systems controlled by on-device models are inherently more reliable than those which depend on a connection to the cloud.
Privacy—When data is processed on an embedded system and is never transmitted to the cloud, user privacy is protected and there is less chance of abuse.
After creating your Edge Impulse Studio project, you will be directed to the project's dashboard. The dashboard gives a quick overview of your project such as your project ID, the number of devices connected, the amount of data collected, the preferred labeling method, among other editable properties. You can also enable some additional capabilities to your project such as collaboration, making your project public, and showcasing your public projects using Markdown READMEs as we will see.
The figure below shows the various sections and widgets of the dashboard that we will cover here.
The project README enables you to explain the details of your project in a short way. Using this feature, you can add visualizations such as images, GIFs, code snippets, and text to your project in order to bring your colleagues and project viewers up to speed with the important details of your project. In your README you might want to add things like:
What the project does
Why the project is useful
Motivations of the project
How to get started with the project
What sensors and target deployment devices you used
How you plan to improve your project
Where users can get help with your project
To create your first README, navigate to the "about this project" widget and click "add README"
For more README inspiration, check out the public Edge Impulse project tutorials below:
To share your private project with the world, go to your project's dashboard and click Make this project public.
By doing this, all of your data, block configurations, intermediate results, and final models will be shared with the world. Your project will be publicly accessible and can be cloned with a single click with the provided URL:
To add a collaborator, go to your project's dashboard and find the "Collaborators" widget. Click the '+' icon and type the username or e-mail address of the other user. The user will be invited to create an Edge Impulse account if it doesn't exist.
The user will be automatically added to the project and will get an email notification inviting them to start contributing to your project. To remove a user, simply click the three dots beside the user, then click 'Delete' and they will be automatically removed.
The project info widget shows the project's specifications such as the project ID, labeling method, and latency calculations for your target device.
On the labeling method dropdown, you need to specify the type of labeling your dataset and model expect. This can be either one label per data item or bounding boxes. Bounding boxes only work for object detection tasks in the Studio. Note that if you switch labeling methods, some learning blocks will be hidden when building your impulse.
One of the amazing Edge Impulse superpowers is the latency calculation component. This is an approximate time in milliseconds that the trained model and DSP operations will take during inference on the selected target device. This hardware-in-the-loop approach ensures that the target deployment device's compute resources are neither underutilized nor over-utilized. It also saves developers the time spent on numerous inference iterations back and forth in the Studio in search of optimum models.
In the Block Output section, you can download the results of the DSP and ML operations of your impulse.
The downloadable assets include the extracted features, the TensorFlow SavedModel, and both quantized and unquantized TensorFlow Lite models. This is particularly helpful when you want to perform other operations on the block outputs outside the Edge Impulse Studio. For example, if you need a TensorFlow.js model, you just need to download the TensorFlow SavedModel from the dashboard and convert it to the TensorFlow.js model format to be served in a browser.
Changing Performance Settings is only available for enterprise customers
This section consists of editable parameters that directly affect the performance of the studio when building your impulse. Depending on the selected or available settings, your jobs can either be fast or slow.
The use of GPUs for training and parallel DSP jobs is currently an internal experimental feature that will be released soon.
To bring even more flexibility to projects, the administrative zone gives developers the power to enable additional features that are not available in Edge Impulse projects by default. Most of these are advanced features intended for organizations, or sometimes experimental features.
To activate these features, check the boxes next to the specific features you want to use and click Save experiments.
The danger zone widget consists of irrevocable actions that let you:
Delete your project. This action removes all devices, data, and impulses from your project.
Delete all data in this project.
Perform train/test split. This action re-balances your dataset by splitting all your data automatically between the training and testing set, and resets the categories for all data.
Launch the getting started wizard. This will remove all data, and clear out your impulse.
The best way to learn about embedded machine learning is to see it for yourself. To train your own model and deploy it to any device, including your mobile phone, follow our getting started guide.
You can invite up to three collaborators to join and contribute to your project. To have unlimited collaborators, your project needs to be part of an enterprise organization.
The project ID is a unique numerical value that identifies your project. Whenever you have an issue with your project in the Studio, you can share your project ID on the forum for assistance from Edge Impulse staff.
Organizational features are only available for enterprise customers. See the Organizations documentation for more information.
All collected data for each project can be viewed on the Data acquisition tab. You can see how your data has been split for train/test set as well as the data distribution for each class in your dataset. You can also send new sensor data to your project either by file upload, WebUSB, Edge Impulse API, or Edge Impulse CLI.
The panel on the right allows you to collect data directly from any fully supported platform:
Through WebUSB.
Using the Edge Impulse CLI daemon.
From the Edge Impulse for Linux CLI.
The WebUSB and the Edge Impulse daemon work with any fully supported device by flashing the pre-built Edge Impulse firmware to your board. See the list of fully supported boards.
When using the Edge Impulse for Linux CLI, run edge-impulse-linux --clean and it will add your platform to the device list of your project. You will then be able to interact with it from the Record new data panel.
Import from S3 buckets (Enterprise feature).
Upload portals (Enterprise feature).
The train/test split is a technique for training and evaluating the performance of a machine learning algorithm. It indicates how your data is split between training and testing samples. For example, an 80/20 split indicates that 80% of the dataset is used for model training while 20% is used for model testing.
This section also shows how your data samples in each class are distributed to prevent imbalanced datasets which might introduce bias during model training.
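For readers who want to reproduce such a split outside the Studio, here is a hedged scikit-learn sketch of an 80/20 split (in Edge Impulse the split is managed for you):

```python
# Hedged illustration of an 80/20 train/test split with scikit-learn,
# using stand-in data; stratify keeps the class distribution balanced.
from sklearn.model_selection import train_test_split

samples = list(range(100))               # stand-in for 100 data samples
labels  = [i % 2 for i in samples]       # two balanced classes

X_train, X_test, y_train, y_test = train_test_split(
    samples, labels, test_size=0.2, stratify=labels, random_state=42
)
print(len(X_train), len(X_test))         # 80 20
```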
Manually navigating to some categories of data can be time consuming, especially when dealing with a large dataset. The data acquisition filter enables the user to filter data samples based on some criteria of choice. This can be based on:
Label - the class that a sample represents.
Sample name - unique ID representing a sample.
Signature validity
Enabled and disabled samples
Length of sample - duration of a sample.
The filtered samples can then be manipulated by editing their labels, deleting them, or moving them from the training set to the test set and vice versa, as shown in the image above.
The data manipulations above can also be applied at the data sample level by navigating to the individual data sample, clicking ⋮ and selecting the type of action you want to perform on that specific sample. This might be renaming it, editing its label, disabling, cropping, splitting, downloading, or even deleting the sample when desired.
To crop a data sample, go to the sample you want to crop and click ⋮, then select Crop sample. You can specify a length, or drag the handles to resize the window, then move the window around to make your selection.
Made a wrong crop? No problem, just click Crop sample again and you can move your selection around. To undo the crop, just set the sample length to a high number, and the whole sample will be selected again.
Besides cropping you can also split data automatically. Here you can perform one motion repeatedly, or say a keyword over and over again, and the events are detected and can be stored as individual samples. This makes it easy to very quickly build a high-quality dataset of discrete events. To do so head to Data acquisition, record some new data, click ⋮, and select Split sample. You can set the window length, and all events are automatically detected. If you're splitting audio data you can also listen to events by clicking on the window; the audio player is automatically populated with that specific split.
Samples are automatically centered in the window, which might lead to problems on some models (the neural network could learn a shortcut where data in the middle of the window is always associated with a certain label), so you can select "Shift samples" to automatically move the data a little bit around.
Splitting data is - like cropping data - non-destructive. If you're not happy with a split just click Crop sample and you can move the selection around easily.
The labelling queue will only appear on your data acquisition page if you are dealing with an object detection task. The labelling queue shows a list of images that have been staged for annotation in your project.
If you are not dealing with an object detection task, you can simply disable the labelling queue bar by going to Dashboard > Project info > Labeling method and clicking the dropdown and selecting "one label per data item" as shown in the image below.
For more information about the labelling queue and how to perform data annotation using AI assisted labelling on Edge Impulse, you can have a look at our documentation here.
The enterprise version of Edge Impulse offers team collaboration on projects: go to Dashboard, find the Collaborators section, and click the '+' icon. If you have an interesting research or community project we can enable collaboration on the free version of Edge Impulse as well; email hello@edgeimpulse.com.
You can also create a public version of your Edge Impulse project. This makes your project available to the whole world - including your data, your impulse design, your models, and all intermediate information - and can easily be cloned by anyone in the community. To do so, go to Dashboard, and click Make this project public.
We use a wide variety of tools, depending on the machine learning model. For neural networks we typically use TensorFlow and Keras, for object detection models we use TensorFlow with Google's Object Detection API, and for 'classic' non-neural network machine learning algorithms we mainly use sklearn. For neural networks you can see (and modify) the Keras code by clicking ⋮ and selecting Switch to expert mode.
By disabling EON we place the full neural network (architecture and weights) into ROM, and load it on demand. This increases memory usage, but you could just update this section of the ROM (or place the neural network in external flash, or on an SD card) to make it easier to update.
You cannot import a pretrained model, but you can import your model architecture and then retrain. Add a neural network block to your impulse, go to the block, click ⋮, and select Switch to expert mode. You then have access to the full Keras API.
Yes. The enterprise version of Edge Impulse can integrate directly with your cloud service to access and transform data.
Simple answer: To get an indication of time per inference we show performance metrics in every DSP and ML block in the Studio. Multiply this by the active power consumption of your MCU to get an indication of the power cost per inference.
More complicated answer: It depends. Normal techniques to conserve power still apply to ML, so try to do as little as possible (do you need to classify every second, or can you do it once a minute?), be smart about when to run inference (can there be an external trigger like a motion sensor before you run inference on a camera?), and collect data in a lower power mode (don't run at full speed when sampling low resolution data, and see if your sensor can use an interrupt to wake your MCU - rather than polling).
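As a back-of-the-envelope example with made-up numbers (take the latency from the Studio and the current and voltage from your MCU's datasheet):

```python
# Rough energy estimate per inference; all numbers below are example values,
# not measurements of any particular board.
inference_ms = 12.0    # DSP + NN time reported by the Studio (example)
active_ma    = 10.0    # MCU active current in mA (example)
supply_v     = 3.3     # supply voltage in V (example)

energy_mj = (inference_ms / 1000.0) * active_ma * supply_v   # mJ per inference
print(f"~{energy_mj:.2f} mJ per inference")                  # ~0.40 mJ
```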
The minimum hardware requirements for the embedded device depend on the use case: anything from a Cortex-M0+ for vibration analysis, to a Cortex-M4F for audio, a Cortex-M7 for image classification, and a Cortex-A for object detection in video. See our documentation for more details.
Another big part of Edge Impulse are the processing blocks, as they clean up the data and extract important features from your data before passing it to a machine learning model. The source code for these processing blocks can be found on GitHub in the edgeimpulse/processing-blocks repository (and you can build custom processing blocks as well).
The EON Compiler compiles your neural networks to C++ source code, which then gets compiled into your application. This is great if you need the lowest RAM and ROM possible (EON typically uses 30-50% less memory than TensorFlow Lite), but you also lose some flexibility to update your neural networks in the field, as they are now part of your firmware.
Edge Impulse uses a dimensionality reduction algorithm to project high-dimensionality input data into a three-dimensional space. This even works for extremely high-dimensionality data such as images.
See the Edge Impulse for Linux pages.
Using the Edge Impulse Studio data acquisition tools (like WebUSB or the Edge Impulse CLI daemon), you can collect data samples manually with a pre-defined label. If you have a dataset that was collected outside of Edge Impulse, you can upload it using the Uploader, WebUSB, the Edge Impulse API, or the Edge Impulse CLI. You can then use the Edge Impulse Studio to split up your data into labeled chunks, crop your data samples, and more to create high quality machine learning datasets.
If you are working on an object detection project, you will most likely see a "Labelling queue" bar on your data acquisition page. The labelling queue shows you all the data that has not been labelled in your dataset.
Can't see the labeling queue? Go to Dashboard, and under 'Project info > Labeling method' select 'Bounding boxes (object detection)'.
In object detection, labelling is the process of adding a bounding box around specific objects in an image so that your machine learning model can learn and infer from it. The Edge Impulse Studio has a built-in data annotation tool with AI-assisted labelling to assist you in your labelling workflows, as we will see.
In the Edge Impulse studio, labelling your data is as easy as dragging a box around the object, then entering a label and saving as shown below.
However, as simple as the manual labelling process might look, it can become tedious and time consuming, especially when dealing with huge datasets. To make your life easier, the Edge Impulse Studio has a built-in AI-assisted labelling feature to automatically assist you in your labelling workflows.
There are three ways to perform AI-assisted labelling in the Edge Impulse Studio:
Using YOLOv5
Using your own model
Using object tracking
By utilizing an existing library of pre-trained object detection models from YOLOv5 (trained with the COCO dataset), common objects in your images can quickly be identified and labeled in seconds without needing to write any code!
To label your objects with YOLOv5 classification, click the Label suggestions dropdown and select “Classify using YOLOv5.” If your object is more specific than what is auto-labeled by YOLOv5, e.g. “coffee” instead of the generic “cup” class, you can modify the auto-labels to the left of your image. These modifications will automatically apply to future images in your labeling queue.
Click Save labels to move on to your next raw image, and see your fully labeled dataset ready for training in minutes!
You can also use your own trained model to predict and label your new images. From an existing (trained) Edge Impulse object detection project, upload new unlabeled images from the Data Acquisition tab. Then, from the "Labeling queue", click the Label suggestions dropdown and select the option to classify using your own trained model:
You can also upload a few samples to a new object detection project, train a model, then upload more samples to the Data Acquisition tab and use the AI-Assisted Labeling feature for the rest of your dataset. Classifying using your own trained model is especially useful for objects that are not in YOLOv5, such as industrial objects, etc.
Click Save labels to move on to your next raw image, and see your fully labeled dataset ready for training in minutes using your own pre-trained model!
If you have objects that are a similar size or common between images, you can also track your objects between frames within the Edge Impulse Labeling Queue, reducing the amount of time needed to re-label and re-draw bounding boxes over your entire dataset.
Draw your bounding boxes and label your images, then, after clicking Save labels, the objects will be tracked from frame to frame:
Now that your object detection project contains a fully labeled dataset, learn how to train and deploy your model to your edge device: check out our tutorial!
We are excited to see what you build with the AI-Assisted Labeling feature in Edge Impulse, please post your project on our forum or tag us on social media, @Edge Impulse!
The data explorer is a visual tool to explore your dataset, find outliers or mislabeled data, and to help label unlabeled data. The data explorer first tries to extract meaningful features from your data (through signal processing and neural network embeddings) and then uses a dimensionality reduction algorithm to map these features to a 2D space. This gives you a one-look overview of your complete dataset.
To access the data explorer head to Data acquisition, click Data explorer, then select a way to generate the data explorer. Depending on your data you'll see three options:
Using a pre-trained model - here we use a large neural network trained on a varied dataset to generate the embeddings. This works very well if you don't have any labeled data yet, or want to look at new clusters of data. This option is available for keywords and for images.
Using your trained impulse - here we use the neural network block in your impulse to generate the embeddings. This typically creates even better visualizations, but will fail if you have completely new clusters of data as the neural network hasn't learned anything about them. This option is only available if you have a trained impulse.
Using the preprocessing blocks in your impulse - here we skip the embeddings, and just use your selected signal processing blocks to create the data explorer. This creates a similar visualization as the feature explorer but in a 2D space and with extra labeling tools. This is very useful if you don't have any labeled data yet, or if you have new clusters of data that your neural network hasn't learned yet.
Then click Generate data explorer to create the data explorer. If you want to make a different choice after creating the data explorer click ⋮ in the top right corner and select Clear data explorer.
Want to see examples of the same dataset visualized in different ways? Scroll down!
To view an item in your dataset just click on any of the dots (some basic information appears on hover). Information about the sample, and a preview of the data item appears at the bottom of the data explorer. You can click Set label (or l on your keyboard) to set a new label for the data item, or press Delete item (or d on your keyboard) to remove the data item. These changes are queued until you click Save labels (at the top of the data explorer).
The data explorer marks unlabeled data in gray (with an 'Unlabeled' label). To label this data, click on any gray dot, set a label by clicking the Set label button (or by pressing l on your keyboard), and enter a label. Other unlabeled data in the vicinity of this item will automatically be labeled as well. This way you can quickly label clustered data.
To upload unlabeled data you can either:
Use the upload UI and select the 'Leave data unlabeled' option.
Select the items in your dataset under Data acquisition, select all relevant items, click Edit labels and set the label to an empty string.
When uploading data through the ingestion API, set the x-no-label header to 1 and the x-label header to an empty string (see the sketch after this list).
Or, if you want to start from scratch, click the three dots on top of the data explorer, and select Clear all labels.
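For the ingestion API option above, a minimal Python sketch might look like this; the endpoint path and form field name are assumptions to verify against the ingestion API reference:

```python
# Minimal sketch of uploading an unlabeled file through the ingestion API with
# the x-no-label header, as described above. The endpoint path and form field
# name are assumptions - check them against the ingestion API reference.
import requests

EI_API_KEY = "ei_..."   # project API key

with open("sample.wav", "rb") as f:   # placeholder file name
    resp = requests.post(
        "https://ingestion.edgeimpulse.com/api/training/files",
        headers={
            "x-api-key": EI_API_KEY,
            "x-no-label": "1",   # keep the sample unlabeled
            "x-label": "",       # empty label string
        },
        files={"data": ("sample.wav", f, "audio/wav")},
    )
resp.raise_for_status()
print(resp.text)
```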
The data explorer uses a three-stage process:
It runs your data through an input and a DSP block - like any impulse.
It passes the result of 1) through part of a neural network. This forces the neural network to compress the DSP output even further, but to features that are highly specialized to distinguish the exact type of data in your dataset (called 'embeddings').
The embeddings are passed through t-SNE, a dimensionality reduction algorithm.
So what are these embeddings actually? Let's imagine you have the model from the Continuous motion recognition tutorial. Here we slice data up in 2-second windows and run a signal processing step to extract features. Then we use a neural network to classify between motions. This network consists of:
33 input features (from the signal processing step)
A layer with 20 neurons
A layer with 10 neurons
A layer with 4 neurons (the number of different classes)
While training the neural network we try to find the mathematical formula that best maps the input to the output. We do this by tweaking each neuron (each neuron is a parameter in our formula). The interesting part is that each layer of the neural network will start acting like a feature extracting step - just like our signal processing step - but highly tuned for your specific data. For example, in the first layer, it'll learn what features are correlated, in the second it derives new features, and in the final layer, it learns how to distinguish between classes of motions.
In the data explorer we now cut off the final layer of the neural network, and thus we get the derived features back - these are called "embeddings". Contrary to features we extract using signal processing we don't really know what these features are - they're specific to your data. In essence, they provide a peek into the brain of the neural network. Thus, if you see data in the data explorer that you can't easily separate, the neural network probably can't either - and that's a great way to spot outliers - or if there's unlabeled data close to a labeled cluster they're probably very similar - great for labeling unknown data!
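To make the idea concrete, here is a hedged sketch (not Edge Impulse's exact implementation) of extracting embeddings from the penultimate layer of a small Keras network and projecting them with t-SNE:

```python
# Hedged sketch: a small Keras network (33 -> 20 -> 10 -> 4, as in the example
# above), cut off before the final layer, followed by t-SNE. This mirrors the
# idea described in the text, not the Studio's exact code.
import numpy as np
import tensorflow as tf
from sklearn.manifold import TSNE

model = tf.keras.Sequential([
    tf.keras.layers.Dense(20, activation="relu", input_shape=(33,)),
    tf.keras.layers.Dense(10, activation="relu"),
    tf.keras.layers.Dense(4, activation="softmax"),
])
# (normally you would train the model here on your features and labels)

# Cut off the final layer: the 10-neuron layer's output is the "embedding".
embedding_model = tf.keras.Model(model.input, model.layers[-2].output)

features = np.random.rand(200, 33).astype(np.float32)   # stand-in DSP features
embeddings = embedding_model.predict(features)           # shape (200, 10)

coords_2d = TSNE(n_components=2, perplexity=30).fit_transform(embeddings)
print(coords_2d.shape)                                    # (200, 2)
```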
Here's an example of using the data explorer to visualize a very complex computer vision dataset (distinguishing between the four cats of one of our infrastructure engineers).
For less complex datasets, or lower-dimensional data you'll typically see more separation, even without custom models.
If you have any questions about the data explorer or embeddings, we'd be happy to help on the forums or reach out to your solutions engineer. Excited? Talk to us to get access to the data explorer, and finally be able to label all that sensor data you've collected!
After collecting data for your project, you can now create your Impulse. A complete Impulse consists of three main building blocks: an input block, a processing block, and a learning block.
This view is one of the most important: here you will build your own machine learning pipeline.
Impulse example for movement classification using accelerometer data:
Impulse example for object detection using images:
The input block indicates the type of input data you are training your model with. This can be time series (audio, vibration, movements) or images.
The input axes field lists all the axes referenced in your training dataset.
The window size is the size of the raw-data window used for training.
The window increase is used to artificially create more (overlapping) windows from each sample, feeding the learning block with more information.
The frequency is automatically calculated based on your training samples. You can modify this value but you currently cannot use values lower than 0.000016 (less than 1 sample every 60s).
Zero-pad data: adds zero values when raw data is missing.
Below is a sketch to summarize the role of each parameter:
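In addition, here is a hedged code illustration of how the time-series parameters interact, using example values (a 2000 ms window, 80 ms window increase, 62.5 Hz sampling frequency and 3 axes):

```python
# Hedged sketch of the time-series input block arithmetic with example values;
# none of these numbers are defaults, they are just for illustration.
window_size_ms     = 2000
window_increase_ms = 80
frequency_hz       = 62.5
num_axes           = 3
sample_length_ms   = 10000          # one recorded sample of 10 s

raw_features_per_window = int(window_size_ms / 1000 * frequency_hz) * num_axes
num_windows = int((sample_length_ms - window_size_ms) / window_increase_ms) + 1

print(raw_features_per_window)      # 375 raw features per window
print(num_windows)                  # 101 overlapping windows from one sample
```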
Axes: Images
Image width & height: Most of our pre-trained models work with square images.
Resize mode: You have three options: Squash, Fit to the shortest axis, or Fit to the longest axis.
A processing block is basically a feature extractor. It consists of DSP (Digital Signal Processing) operations that are used to extract features that our model learns on. These operations vary depending on the type of data used in your project.
You don't have much experience with DSP? No problem, Edge Impulse usually uses a star to indicate the most recommended processing block based on your input data as shown in the image below.
In the case where the available processing blocks aren't suitable for your application, you can build your own custom processing blocks and import them into your project.
After adding your processing block, it is now time to add a learning block to make your impulse complete. A learning block is simply a neural network that is trained to learn on your data.
Learning blocks vary depending on what you want your model to do. It can be classification, regression, anomaly detection, image transfer learning or object detection. It can also be a custom transfer learning block (enterprise feature)
Learning blocks available with time-series projects:
Learning blocks available with image projects:
Learning blocks available with object detection projects:
The data sources page is much more than just a place to add data from external sources. It lets you create complete, automated data pipelines so you can work on your active learning strategies.
From there, you can import datasets from existing cloud storage buckets, automate and schedule the imports, and trigger actions such as exploring and labeling your new data, retraining your model, automatically building a new deployment, and more.
Click on + Add new data source and select where your data lives:
Click on Next, provide credentials:
Click on Verify credentials:
Here, you have several options to automatically label your data:
Infer from folder name
In the example above, the structure of the folder is the following:
The labels will be picked from the folder names, and the data will be split between your training and testing sets using an 80/20 ratio.
Note that the samples present in an unlabeled/ folder will be kept unlabeled in Edge Impulse Studio.
Alternatively, you can also organize your folder using the following structure to automatically split your dataset between training and testing sets:
Infer from file name:
When using this option, only the file name is taken into account. The part before the first . will be used to set the label. E.g. cars.01741.jpg will set the label to cars.
Keep the data unlabeled:
All the data samples will be unlabeled; you will need to label them manually before using them.
Finally, click on Next, post-sync actions.
From this view, you can automate several actions:
Recreate data explorer
Retrain model
If enabled, your model will be retrained with the same impulse, and you'll get an email with the new validation and test set accuracy.
Note: You will need to have set up and trained your project at least once.
Create new version
Store all data, configuration, intermediate results and final models.
Create new deployment
Builds a new library or binary with your updated model. Requires 'Retrain model' to also be enabled.
Once your pipeline is set, you can run it directly from the UI, from external sources or by scheduling the task.
To run your pipeline from Edge Impulse Studio, click on the ⋮ button and select Run pipeline now.
To run your pipeline from code, click on the ⋮ button and select Run pipeline from code. This will display an overlay with curl, Node.js and Python code samples.
Note that you will need to create an API key to run the pipeline from code.
By default, your pipeline will run every day. To schedule your pipeline jobs, click on the ⋮ button and select Edit pipeline.
Note that free users can only run the pipeline every 4 hours. If you are an enterprise customer, you can run this pipeline up to every minute.
Once the pipeline has successfully finished, you will receive an email like the following:
Another useful feature is the ability to create a webhook that calls a URL when the pipeline has run. It sends a POST request to that URL when the pipeline finishes.
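As a hedged illustration, a minimal receiver that simply logs whatever JSON payload arrives could look like this (standard library only; the actual payload fields are not documented here):

```python
# Minimal sketch of a webhook receiver for the pipeline-finished POST request.
# The handler just prints whatever JSON body arrives; it makes no assumptions
# about the payload's exact fields.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class PipelineWebhook(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        print("Pipeline webhook received:", payload)
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), PipelineWebhook).serve_forever()
```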
As of today, if you want to update your pipeline, you need to edit the configuration JSON available in ⋮ -> Run pipeline from code.
Here is an example of what you can get if all the actions have been selected:
Free projects only have access to the builtinTransformationBlock shown above.
Select "Copy as pipeline step" and paste it to the configuration json file.
There is a wide variety of devices that you can connect to your Edge Impulse project. These devices can help you collect datasets for your project, test your trained ML model, and even deploy your ML model directly to your development board with a pre-built binary application (for fully supported development boards).
On the Devices tab, you'll find a list of all your connected devices and a guide on how to connect new devices that are currently supported by Edge Impulse.
To connect a new device, click on the Connect a new device button on the top right of your screen.
You will get a pop-up with multiple options of devices you can connect to your Edge Impulse project. Available options include:
You can upload an existing dataset to your project directly through the Edge Impulse Studio. The data should be in the Data Acquisition Format (CBOR, JSON, CSV), or as WAV, JPG or PNG files.
To upload data using the uploader, go to the Data acquisition page and click on the uploader button as shown in the image below:
When uploading your data, you can choose the category you want your data to fall in, i.e. the training set or the testing set, or automatically split the dataset between training and testing sets. You can also choose whether to infer labels from the file names or enter a single label that all uploaded files should receive.
The Raw Data block generates windows from data samples without any specific signal processing. It is great for signals that have already been pre-processed and if you just need to feed your data into the Neural Network block.
GitHub repository containing all DSP block code: edgeimpulse/processing-blocks.
Scaling
Scale axes: Multiplies each axis by this number. This can be used to normalize your data between 0 and 1.
The Raw Data block retrieves raw samples and applies the Scaling parameter.
The Spectrogram processing block extracts time and frequency features from a signal. It performs well on audio data for non-voice recognition use cases, or on any sensor data with continuous frequencies.
GitHub repository containing all DSP block code: edgeimpulse/processing-blocks.
Spectrogram
Frame length: The length of each frame in seconds
Frame stride: The step between successive frames in seconds
Frequency bands: The FFT size
Normalization
Noise floor (dB): signals below this level will be dropped
It first divides the window into multiple overlapping frames. The size and number of frames can be adjusted with the parameters Frame length and Frame stride. For example, with a window of 1 second, a frame length of 0.02 s and a stride of 0.01 s, it will create 99 time frames.
Each time frame is then divided into frequency bins using an FFT (Fast Fourier Transform), and we compute its power spectrum. The number of frequency bins equals the Frequency bands parameter divided by 2, plus 1. We recommend keeping the Frequency bands (a.k.a. FFT size) value a power of 2 for performance reasons. Finally, the Noise floor value is applied to the power spectrum.
The features generated by the Spectrogram block are equal to the number of generated time frames times the number of frequency bins.
Frequency bands and frame length
There is a correlation between the Frequency bands (FFT size) parameter and the frame length. The frame length will be cropped or padded to the Frequency bands value while applying the FFT. For example, with an 8 kHz sampling frequency and a time frame of 0.02 s, each time frame contains 160 samples (8k * 0.02). If your FFT size is set to 128, time frames will be cropped to 128 samples. If your FFT size is set to 256, time frames will be padded with zeros.
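The arithmetic above can be summarized in a short sketch, using the same example values:

```python
# Hedged sketch of the spectrogram block arithmetic described above:
# 1 s window, 0.02 s frames, 0.01 s stride, FFT size 128, 8 kHz sampling.
window_s     = 1.0
frame_length = 0.02
frame_stride = 0.01
fft_size     = 128          # "Frequency bands" parameter
sampling_hz  = 8000

num_frames   = round((window_s - frame_length) / frame_stride) + 1   # 99
freq_bins    = fft_size // 2 + 1                                      # 65
num_features = num_frames * freq_bins                                 # 6435

samples_per_frame = round(sampling_hz * frame_length)                 # 160
# 160 > 128, so each frame would be cropped to the FFT size of 128 samples.
print(num_frames, freq_bins, num_features, samples_per_frame)
```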
The Image block is dedicated to computer vision applications. It normalizes image data, and optionally reduces the color depth.
GitHub repository containing all DSP block code: edgeimpulse/processing-blocks.
Color depth: Color depth to use (RGB or grayscale)
The Image block performs normalization, converting each pixel's channel of the image to a float value between 0 and 1. If Grayscale is selected, each pixel is converted to a single value using a standard RGB-to-luma conversion (Y' component only).
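A hedged sketch of this normalization in Python; the grayscale weights shown are the common Y' luma coefficients and are an assumption here, the authoritative implementation lives in the processing-blocks repository:

```python
# Hedged sketch of the Image block's normalization: channels scaled to [0, 1],
# plus an optional grayscale conversion using assumed standard luma weights.
import numpy as np

image = np.random.randint(0, 256, size=(96, 96, 3), dtype=np.uint8)  # RGB input

normalized = image.astype(np.float32) / 255.0          # each channel in [0, 1]
grayscale  = (0.299 * normalized[..., 0]                # R
            + 0.587 * normalized[..., 1]                # G
            + 0.114 * normalized[..., 2])               # B -> Y' only
print(normalized.shape, grayscale.shape)                # (96, 96, 3) (96, 96)
```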
The data explorer gives you a one-look view of your dataset, letting you quickly label unknown data. If you enable this action you'll also get an email with a screenshot of the data explorer whenever there's new data.
Note that you can also define who receives the email. The users have to be part of your project.
If you are part of an organization, you can use your custom transformation jobs in the pipeline. In your organization workspace, go to "Custom blocks -> Transformation" and select "Run job" on the job you want to add.
The Flatten block performs statistical analysis on the signal. It is useful for slow-moving averages like temperature data, in combination with other blocks.
GitHub repository containing all DSP block code: edgeimpulse/processing-blocks.
Scaling
Scale axes: Multiplies axes by this number
Method
Average: Calculates the average value for the window
Minimum: Calculates the minimum value in the window
Maximum: Calculates the maximum value in the window
Root-mean square: Calculates the RMS value of the window
Standard deviation: Calculates the standard deviation of the window
Skewness: Calculates the skewness of the window
Kurtosis: Calculates the kurtosis of the window
The Flatten block first rescales the axes of the signal if the scaling value is different from 1. Statistical analysis is then performed on each window, computing between 1 and 7 features for each axis, depending on the number of selected methods.
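A hedged sketch of these statistics for one axis of one window, using NumPy and SciPy:

```python
# Hedged sketch of the per-axis, per-window statistics the Flatten block lists
# above; the window length and scale below are example values.
import numpy as np
from scipy.stats import skew, kurtosis

window = np.random.randn(125)      # one axis of one window (example length)
scale  = 1.0                       # "Scale axes" parameter

x = window * scale
features = [
    np.mean(x),                    # Average
    np.min(x),                     # Minimum
    np.max(x),                     # Maximum
    np.sqrt(np.mean(x ** 2)),      # Root-mean square
    np.std(x),                     # Standard deviation
    skew(x),                       # Skewness
    kurtosis(x),                   # Kurtosis
]
print(len(features))               # up to 7 features per axis
```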
The IMU Syntiant block rescales raw data to 8-bit values to match the NDP101 chip input requirements.
Scaling
Scale 16 bits to 8 bits: Scales data to 8-bit values in the [-1, 1] range; raw data is divided by 2g (2 * 9.80665). When using the official Edge Impulse firmware, this parameter should be enabled, as the raw data is not rescaled. If this parameter is disabled the data samples will not be rescaled; disable it only if your raw data samples are already normalized to the [-1, 1] range.
The IMU Syntiant block retrieves raw samples and applies the Scale 16 bits to 8 bits parameter.
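A minimal sketch of that scaling step, assuming a raw accelerometer value in m/s²:

```python
# Hedged sketch of the "Scale 16 bits to 8 bits" step described above:
# a raw accelerometer value divided by 2g to land in the [-1, 1] range.
raw_sample = 12.4                        # m/s^2, example raw value
scaled = raw_sample / (2 * 9.80665)      # ~0.63, within [-1, 1]
print(scaled)
```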
Extracting meaningful features from your data is crucial to building small and reliable machine learning models, and in Edge Impulse this is done through processing blocks. We ship a number of processing blocks for common sensor data (such as vibration and audio):
The source code of these blocks are available in the Edge Impulse processing blocks GitHub repository.
If you have a very specific sensor, want to apply custom filters, or are implementing the latest research in digital signal processing, follow our tutorial on Building custom processing blocks.
Similarly to the Spectrogram block, the Audio MFE processing block extracts time and frequency features from a signal. However, it uses a non-linear scale in the frequency domain, called the Mel scale. It performs well on audio data, mostly for non-voice recognition use cases where the sounds to be classified can be distinguished by the human ear.
GitHub repository containing all DSP block code: edgeimpulse/processing-blocks.
Mel-filterbank energy features
Frame length: The length of each frame in seconds
Frame stride: The step between successive frames in seconds
Filter number: The number of triangular filters applied to the spectrogram
FFT length: The FFT size
Low frequency: Lowest band edge of Mel-scale filterbanks
High frequency: Highest band edge of Mel-scale filterbanks
Normalization
Noise floor (dB): signals below this level will be dropped
The feature extraction is similar to the Spectrogram block (the Frame length, Frame stride, and FFT length parameters are the same) but it adds two extra steps.
After computing the spectrogram, triangular filters are applied on a Mel scale to extract frequency bands. They are configured with the parameters Filter number, Low frequency and High frequency to select the frequency band and the number of frequency features to be extracted. The Mel scale is a perceptual scale of pitches judged by listeners to be equal in distance from one another. The idea is to extract more features (more filter banks) in the lower frequencies and fewer in the high frequencies, so it performs well on sounds that can be distinguished by the human ear.
The last step is to perform a local mean normalization of the signal, applying the Noise floor value to the power spectrum.
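A hedged sketch of the Mel-scale idea behind this block (the standard Hz-to-Mel mapping; the filter parameters below are example values, not the block's defaults):

```python
# Hedged sketch: the standard Hz <-> Mel mapping places filter banks densely
# at low frequencies and sparsely at high frequencies.
import numpy as np

def hz_to_mel(f_hz):
    return 2595.0 * np.log10(1.0 + f_hz / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

low_hz, high_hz, n_filters = 300.0, 8000.0, 40        # example parameters
mel_points = np.linspace(hz_to_mel(low_hz), hz_to_mel(high_hz), n_filters + 2)
centers_hz = mel_to_hz(mel_points)
print(np.round(centers_hz[:5]))    # filter edges cluster in the low frequencies
```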
After extracting meaningful features from the raw signal using signal processing, you can now train your model using a learning block. We provide a number of pre-defined learning blocks:
Classification
Regression
Anomaly detection (K-means)
Transfer learning (images)
Object detection
Custom transfer learning blocks (Enterprise feature).
For most of the learning blocks (except K-means Anomaly Detection), you can use the Switch to expert mode button to access the full Keras API for custom architectures and more.
The two most common image processing problems are image classification and object detection.
Image classification takes an image as an input and outputs what type of object is in the image. This technique works great, even on microcontrollers, as long as we only need to detect a single object in the image.
On the other hand, object detection takes an image and outputs information about the class, number, and position (and, potentially, size) of the objects in the image.
Edge Impulse provides two different methods to perform object detection:
| | Using MobileNetV2 SSD FPN | Using FOMO |
| --- | --- | --- |
| Labelling method | Bounding boxes | Bounding boxes |
| Input size | 320x320 | Square (any size) |
| Image format | RGB | Greyscale & RGB |
| Output | Bounding boxes | Centroids |
| MCU | ❌ | ✅ |
| CPU/GPU | ✅ | ✅ |
| Limitations | Works best with big objects; models use high compute resources (in the edge computing world); image size is fixed | Works best when objects have similar sizes & shapes; the size of the objects is not available; objects should not be too close to each other |
Solving regression problems is one of the most common applications for machine learning models, especially in supervised machine learning. Models are trained to understand the relationship between independent variables and an outcome or dependent variable. The model can then be leveraged to predict the outcome of new and unseen input data, or to fill a gap in missing data.
To build a regression model you collect data as usual, but rather than setting the label to a text value, you set it to a numeric value.
You can use any of the built-in signal processing blocks to pre-process your vibration, audio or image data, or use custom processing blocks to extract novel features from other types of sensor data.
You have full freedom in modifying your neural network architecture - whether visually or through writing Keras code.
Number of training cycles: Each time the training algorithm makes one complete pass through all of the training data with back-propagation and updates the model's parameters as it goes, it is known as an epoch or training cycle.
Learning rate: The learning rate controls how much the model's internal parameters are updated during each step of the training process. You can also see it as how fast the neural network will learn. If the network overfits quickly, you can reduce the learning rate.
Validation set size: The percentage of your training set held apart for validation; a good default is 20%.
Auto-balance dataset: Mixes in more copies of data from classes that are uncommon. This might help make the model more robust against overfitting if you have little data for some classes.
If you want to see the accuracy of your model across your test dataset, go to Model testing. You can adjust the Maximum error percentage by clicking on "⋮" button.
If you have selected the Classification learning block in the Create impulse page, a NN Classifier page will show up in the menu on the left. This page becomes available after you've extracted your features from your DSP block.
Tutorials
Want to see the Classification block in action? Check out our tutorials:
The basic idea is that a neural network classifier will take some input data, and output a probability score that indicates how likely it is that the input data belongs to a particular class.
So how does a neural network know what to predict? The neural network consists of a number of layers, each of which is made up of a number of neurons. The neurons in the first layer are connected to the neurons in the second layer, and so on. The weight of a connection between two neurons is randomly determined at the beginning of the training process. The neural network is then given a set of training data: examples together with the correct answers it is supposed to predict. The network's output is compared to the correct answer and, based on the results, the weights of the connections between the neurons are adjusted. This process is repeated many times, until the network has learned to predict the correct answer for the training data.
A particular arrangement of layers is referred to as an architecture, and different architectures are useful for different tasks. This way, after a lot of iterations, the neural network learns; and will eventually become much better at predicting new data.
On this page, you can configure the model and the training process, and get an overview of your model's performance.
Number of training cycles: Each time the training algorithm makes one complete pass through all of the training data with back-propagation and updates the model's parameters as it goes, it is known as an epoch or training cycle.
Learning rate: The learning rate controls how much the model's internal parameters are updated during each step of the training process. You can also see it as how fast the neural network will learn. If the network overfits quickly, you can reduce the learning rate.
Validation set size: The percentage of your training set held apart for validation; a good default is 20%.
Auto-balance dataset: Mixes in more copies of data from classes that are uncommon. This might help make the model more robust against overfitting if you have little data for some classes.
Depending on your project type, we may offer to choose between different architecture presets to help you get started.
The neural network architecture takes your extracted features as inputs and passes them through each layer of your architecture. In the classification case, the last layer is a softmax layer. It is this last layer that gives the probability of belonging to each of the classes.
From the visual (simple) mode, you can add the following layers:
If you have advanced knowledge in machine learning and Keras, you can switch to the Expert mode and access the full Keras API to use custom architectures:
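As a hedged illustration of the kind of Keras code you can write in expert mode (the dataset variables, layer sizes, and training values below are placeholders, not the exact code the Studio generates):

```python
# Hedged sketch of an expert-mode-style Keras model; X_train/Y_train are
# stand-in data, and the hyperparameters mirror the Studio's training settings:
# epochs = "Number of training cycles", learning_rate = "Learning rate",
# validation_split = "Validation set size".
import numpy as np
import tensorflow as tf

input_length = 33                      # number of DSP features (example)
num_classes  = 4
X_train = np.random.rand(100, input_length).astype(np.float32)   # stand-in
Y_train = tf.keras.utils.to_categorical(np.random.randint(0, num_classes, 100))

model = tf.keras.Sequential([
    tf.keras.layers.Dense(20, activation="relu", input_shape=(input_length,)),
    tf.keras.layers.Dropout(0.25),
    tf.keras.layers.Dense(10, activation="relu"),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.0005),
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)
model.fit(X_train, Y_train, epochs=30, validation_split=0.2, verbose=0)
```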
This panel displays the output logs during the training. The previous training logs can also be retrieved from the Jobs tab in the Dashboard page (enterprise feature).
This section gives an overview of your model's performance and helps you evaluate your model. It can help you determine whether the model is capable of meeting your needs or whether you need to try other hyperparameters and architectures.
From the Last training performance panel you can retrieve your validation accuracy and loss.
The Confusion matrix is one of the most useful tools to evaluate a model. It tabulates all of the correct and incorrect responses a model produces given a set of data. The labels on the side correspond to the actual labels in each sample, and the labels on the top correspond to the predicted labels from the model.
The feature explorer, like in the processing block views, indicates the spatial distribution of your input features. On this page, you can visualize which ones have been correctly classified and which ones have not.
On-device performance: Based on the target you chose in the Dashboard page, we will output estimations for the inferencing time, peak RAM usage and flash usage. This will help you validate that your model will be able to run on your device based on its constraints.
Neural networks are great, but they have one big flaw: they're terrible at dealing with data they have never seen before (like a new gesture). Neural networks cannot judge this, as they are only aware of the training data. If you give them something unlike anything they have seen before, they'll still classify it as one of the known classes.
Tutorial
Want to see the Anomaly Detection in action? Check out our Continuous Motion Recognition tutorial.
K-means clustering
This method looks at the data points in a dataset and groups those that are similar into a predefined number K of clusters. A threshold value can be added to detect anomalies: if the distance between a data point and its nearest centroid is greater than the threshold value, then it is an anomaly.
The main difficulty resides in choosing K, since data in a time series is always changing and different values of K might be ideal at different times. Besides, in more complex scenarios where there are both local and global outliers, many outliers might pass under the radar and be assigned to a cluster.
In most of your DSP blocks, you have an option to calculate the feature importance. Edge Impulse Studio will then output a Feature Importance graphic that will help you determine which axes and values generated from your DSP block are most significant to analyze when you want to do anomaly detection.
This process of generating features and determining the most important features of your data will further reduce the amount of signal analysis needed on the device with new and unseen data.
In your anomaly detection block, you can click on the Select suggested axes button to harness the value of the feature importance output.
Here is the process in the background:
Create X number of clusters and group all the data.
For each of these clusters we store the center and the size of the cluster.
During inference we calculate the closest cluster for a new data point, and show the distance from the edge of the cluster. If it's within a cluster (no anomaly), you thus get a value below 0.
In the picture above, known clusters are in blue and the newly classified data is in orange. It's clearly outside of any known cluster and can thus be tagged as an anomaly.
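Here is a hedged scikit-learn sketch of the same idea, simplified to a single global threshold instead of per-cluster sizes:

```python
# Hedged sketch of K-means-based anomaly detection: fit clusters on "known
# good" feature data, then flag new points whose distance to the nearest
# cluster centre exceeds a threshold. Cluster count, threshold and data are
# example values, not the block's defaults.
import numpy as np
from sklearn.cluster import KMeans

train_features = np.random.randn(500, 3)            # stand-in DSP features
kmeans = KMeans(n_clusters=32, n_init=10).fit(train_features)

def anomaly_score(point, threshold=1.0):
    distances = np.linalg.norm(kmeans.cluster_centers_ - point, axis=1)
    return distances.min() - threshold               # > 0 means outside clusters

print(anomaly_score(np.array([8.0, 8.0, 8.0])))      # far away -> positive score
print(anomaly_score(np.array([0.0, 0.0, 0.0])))      # typical -> likely negative
```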
Tutorial: Continuous Motion Recognition
The Audio Syntiant processing block extracts time and frequency features from a signal. It is similar to the Audio MFE but performs additional processing specific to the Syntiant NDP101 chip. This block can be used only with Syntiant targets.
Log Mel-filterbank energy features
Frame length: The length of each frame in seconds
Frame stride: The step between successive frames in seconds
Filter number (fixed): The number of triangular filters applied to the spectrogram
FFT length (fixed): The FFT size
Low frequency (fixed): Lowest band edge of Mel-scale filterbanks
High frequency (fixed): Highest band edge of Mel-scale filterbanks
Coefficient: Pre-emphasis coefficient
The feature extraction is a proprietary algorithm from Syntiant; however, the parameters are very close to those of the Audio MFE. The pre-emphasis coefficient is applied first to amplify higher frequencies. The signal is then divided into overlapping frames, defined by the Frame length and Frame stride, to extract speech features.
Sampling frequency
The Audio Syntiant block only supports a 16 kHz frequency. You can adjust the sampling frequency in the "Create Impulse" section.
The Audio MFCC block extracts coefficients from an audio signal. Similarly to the Audio MFE block, it uses a non-linear scale called the Mel scale. It is the reference block for speech recognition and can also perform well on some non-human voice use cases.
GitHub repository containing all DSP block code: edgeimpulse/processing-blocks.
Mel Frequency Cepstral Coefficients
Number of coefficients: Number of cepstral coefficients to keep after applying Discrete Cosine Transform
Frame length: The length of each frame in seconds
Frame stride: The step between successive frames in seconds
Filter number: The number of triangular filters applied to the spectrogram
FFT length: The FFT size
Low frequency: Lowest band edge of Mel-scale filterbanks
High frequency: Highest band edge of Mel-scale filterbanks
Window size: The size of the sliding window for local cepstral mean normalization. The window size must be odd.
Pre-emphasis
Coefficient: The pre-emphasis coefficient to apply to the input signal (0 means no filtering)
Note: Shift has been removed and set to 1 for all future projects. Older & existing projects can still change this value or use an existing value.
The feature extraction adds one extra step to the MFE block, resulting in a compressed representation of the filterbanks. A Discrete Cosine Transform is applied to each filterbank to extract cepstral coefficients. Typically 13 coefficients are retained; the rest are discarded as they represent fast changes that are not useful for speech recognition.
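As a rough illustration of this pipeline (not the Studio's implementation), you can compute comparable coefficients with an off-the-shelf library such as librosa; all parameter values below are examples that mirror the block options described above:

```python
# Illustrative MFCC extraction with librosa. This is not the Studio's code and
# the parameter values are only examples mirroring the block options above.
# A synthetic 1-second 16 kHz signal stands in for real audio.
import numpy as np
import librosa

sr = 16000
signal = np.random.randn(sr).astype(np.float32)     # replace with your own audio

pre_emphasis = 0.98
emphasized = np.append(signal[0], signal[1:] - pre_emphasis * signal[:-1])

mfcc = librosa.feature.mfcc(
    y=emphasized,
    sr=sr,
    n_mfcc=13,                   # number of cepstral coefficients to keep
    n_fft=512,                   # FFT length
    win_length=int(0.02 * sr),   # frame length of 0.02 s
    hop_length=int(0.02 * sr),   # frame stride of 0.02 s
    n_mels=32,                   # number of triangular Mel filters
)
print(mfcc.shape)                # (13, number_of_frames)
```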
Training and deploying high-performing ML models is usually a continuous process rather than a one-time exercise. When you validate your model and discover it is overfitting, you might add more diverse data and then retrain the model while keeping the initially configured DSP and neural network blocks.
Also, during inference, if you find that the data distribution has drifted significantly from the initial training distribution, it is good practice to retrain your model on the newer data distribution to maintain high model performance.
The “Retrain Model” feature in the Edge Impulse Studio is useful when adding new data to your project. It takes the known parameters from your selected DSP and ML blocks and uses them to automatically regenerate features and retrain the neural network model in a single step. You can consider this a shortcut for retraining your model, since you don’t need to go through every block in your impulse one by one again.
To retrain your model after adding data, navigate to "Retrain Model" in the Studio and click "Train model".
It's very hard to build a good working computer vision model from scratch, as you need a wide variety of input data to make the model generalize well, and training such models can take days on a GPU. To make this easier and faster we are using transfer learning. This lets you piggyback on a well-trained model, only retraining the upper layers of a neural network, leading to much more reliable models that train in a fraction of the time and work with substantially smaller datasets.
Tutorial
Want to see the MobileNetV2 SSD FPN-Lite models in action? Check out our Detect objects with bounding boxes tutorial.
To build your first object detection model using MobileNetV2 SSD FPN-Lite:
Create a new project in Edge Impulse.
Make sure to set your labeling method to 'Bounding boxes (object detection)'.
Collect and prepare your dataset as in Object detection.
Resize your images to fit 320x320 px.
Add an 'Object Detection (Images)' block to your impulse.
Under Images, choose RGB.
Under Object detection, select 'Choose a different model' and select 'MobileNetV2 SSD FPN-Lite 320x320'
You can start your training with a learning rate of '0.15'
Click on 'Start training'
MobileNetV2 SSD FPN-Lite 320x320 is available with Edge Impulse for Linux
Here, we are using the MobileNetV2 SSD FPN-Lite 320x320 pre-trained model. The model has been trained on the COCO 2017 dataset with images scaled to 320x320 resolution.
In the MobileNetV2 SSD FPN-Lite, we have a base network (MobileNetV2), a detection network (Single Shot Detector or SSD) and a feature extractor (FPN-Lite).
Base network:
MobileNet, like VGG-Net, LeNet, AlexNet, and others, is based on a neural network. The base network provides high-level features for classification or detection. If you add a fully connected layer and a softmax layer at the end of such a network, you have a classifier.
But you can also remove the fully connected and softmax layers and replace them with detection networks, such as SSD or Faster R-CNN, to perform object detection.
Detection network:
The most common detection networks are SSD (Single Shot Detection) and RPN (Regional Proposal Network).
When using SSD, we only need to take one single shot to detect multiple objects within the image. On the other hand, regional proposal networks (RPN) based approaches, such as R-CNN series, need two shots, one for generating region proposals, one for detecting the object of each proposal.
As a consequence, SSD is much faster than RPN-based approaches, but it often trades accuracy for real-time processing speed. It also tends to have issues detecting objects that are too close together or too small.
Feature Pyramid Network:
Detecting objects at different scales is challenging, in particular for small objects. A Feature Pyramid Network (FPN) is a feature extractor designed around the feature pyramid concept to improve accuracy and speed.
Extracting meaningful features from your data is crucial to building small and reliable machine learning models, and in Edge Impulse this is done through processing blocks. We ship a number of processing blocks for common sensor data (such as vibration and audio), but they might not be suitable for all applications. Perhaps you have a very specific sensor, want to apply custom filters, or are implementing the latest research in digital signal processing. In this tutorial you'll learn how to support these use cases by adding custom processing blocks to the studio.
Make sure you followed the Continuous motion recognition tutorial, and have a trained impulse.
Development flow
This tutorial shows you the development flow of building custom processing blocks, and requires you to run the processing block on your own machine or server. Enterprise customers can share processing blocks within their organization, and run these on our infrastructure. See Hosting custom DSP blocks for more details.
Processing blocks take data and configuration parameters in, and return features and visualizations like graphs or images. To communicate with custom processing blocks, Edge Impulse Studio makes HTTP calls to the block, and then uses the response in the UI, when generating features, and when training a machine learning model. Thus, to load a custom processing block we'll need to run a small server that responds to these HTTP calls. You can write this in any language, but we have created an example in Python. To load this example, open a terminal and run:
This creates a copy of the example project locally. Then, you can run the example either through Docker or locally via:
Docker
Locally
Then go to http://localhost:4446 and you should be shown some information about the block.
As this block is running locally, the Studio cannot reach it. To resolve this we can use ngrok, which can make a local port accessible from a public URL. After you've finished development you can move the processing block to a server with a publicly accessible address (or run it on our infrastructure through your enterprise account). To set up a tunnel:
Sign up for ngrok.
Install the ngrok binary for your platform.
Get a URL to access the processing block from the outside world via:
This yields a public URL for your block under Forwarding. Note down the address that includes https://.
Now that the custom processing block is created and accessible to the outside world, you can add it to Edge Impulse. In a project, go to Create Impulse, click Add a processing block, choose Add custom block (in the bottom left corner of the modal), and paste in the public URL of the block:
After you click Add block the block will show like any other processing block.
Add a learning block, then click Save impulse to store the impulse.
Processing blocks have configuration options which are rendered on the block parameter page. These could be filter configurations, scaling options, or control which visualizations are loaded. These options are defined in the parameters.json file. Let's add an option to smooth raw data. Open example-custom-processing-block-python/parameters.json and add a new section under parameters:
Then, open example-custom-processing-block-python/dsp.py and replace its contents with:
Restart the Python script, and then click Custom block in the studio (in the navigation bar). You now have a new option 'Smooth'. Every time an option changes we'll re-run the block, but as we have not written any code to respond to these changes nothing will happen.
We support a number of different types for configuration fields. These are:
int - renders a numeric textbox that expects integers.
float - renders a numeric textbox that expects floating point numbers.
string - renders a textbox that expects a string.
boolean - renders a checkbox.
select - renders a dropdown box. This also requires the parameter valid, which should be an array of valid values. E.g. this renders a dropdown box with options 'low', 'high' and 'none':
To show the user what is happening we can also draw visuals in the processing block. Right now we support graphs (linear and logarithmic) and arbitrary images. By showing a graph of the smoothed sample we can quickly identify what effect the smooth option has on the raw signal. Open dsp.py and replace the content with the following script. It contains a very basic smoothing algorithm and draws a graph:
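In case you don't have the repository at hand, a stripped-down sketch of such a script is shown below. The function signature, the graph keys and the smoothing window are assumptions modeled on the example block, not a copy of the repository code:

```python
# Stripped-down sketch of a smoothing DSP script. The signature and return keys
# are assumptions based on the example block, not the exact repository code.
import numpy as np

def generate_features(draw_graphs, raw_data, axes, sampling_freq, smooth):
    # raw_data is a flat array, interleaved per axis: [x0, y0, z0, x1, y1, z1, ...]
    data = np.array(raw_data, dtype=float).reshape((-1, len(axes)))
    features = []
    graphs = []

    for ix, axis in enumerate(axes):
        sig = data[:, ix]
        if smooth:
            # very basic moving-average smoothing over 5 samples
            sig = np.convolve(sig, np.ones(5) / 5, mode='same')
        features.extend(sig.tolist())
        if draw_graphs:
            graphs.append({'name': 'Smoothed (%s)' % axis,
                           'X': {axis: sig.tolist()}})

    return {'features': features, 'graphs': graphs}

# Quick local check with three axes of random data sampled at 62.5 Hz
out = generate_features(True, np.random.randn(3 * 125), ['accX', 'accY', 'accZ'], 62.5, True)
print(len(out['features']))   # 375
```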
Restart the script, and click the Smooth toggle to observe the difference. Congratulations! You have just created your first custom processing block.
If you extract set features from the signal, like the mean, you can also label the features you return. These labels will be used in the feature explorer. To do so, add a labels array that contains strings mapping back to the features you return (labels and features should have the same length).
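For instance, assuming a block that returns three aggregate values per signal (the function and key names below are illustrative, not the exact example code):

```python
# Illustrative only: naming the returned features so the feature explorer can
# label them. 'features' and 'labels' must have the same length.
import numpy as np

def generate_features(raw_data):
    sig = np.array(raw_data, dtype=float)
    features = [float(sig.mean()),
                float(np.sqrt(np.mean(sig ** 2))),
                float(sig.max())]
    labels = ['Mean', 'RMS', 'Peak']
    return {'features': features, 'labels': labels, 'graphs': []}

print(generate_features(np.random.randn(125))['labels'])
```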
In the previous step we drew a linear graph, but you can also draw logarithmic graphs or even full images. This is done through the type parameter:
This draws a graph with a logarithmic scale:
To show an image you should return the base64 encoded image and its MIME type. Here's how you draw a small PNG image:
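A minimal sketch of what such a return value could look like, assuming matplotlib for rendering; the dictionary keys are assumptions based on the description above, so check the example block for the exact format:

```python
# Sketch of returning a small PNG image from a processing block. The dictionary
# keys used here are assumptions; check the example block for the exact format.
import base64
import io
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
import numpy as np

def render_image_graph(spectrogram):
    fig, ax = plt.subplots(figsize=(2, 2))
    ax.imshow(spectrogram, aspect='auto')
    buf = io.BytesIO()
    fig.savefig(buf, format='png')
    plt.close(fig)
    return {
        'name': 'Spectrogram',
        'image': base64.b64encode(buf.getvalue()).decode('ascii'),
        'imageMimeType': 'image/png',
        'type': 'image',
    }

print(render_image_graph(np.random.rand(32, 50))['imageMimeType'])
```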
If you output high-dimensional data (like a spectrogram or an image) you can enable dimensionality reduction for the feature explorer. This will run UMAP over the data to compress the features into three dimensions. To do so, set:
On the info object in parameters.json.
For all options that you can return in a graph, see the Run DSP return types in the API documentation.
Your custom block behaves exactly the same as any of the built-in blocks. You can process all your data, train neural networks or anomaly blocks, and validate that your model works. However, we cannot automatically generate optimized native code for the block as we do for built-in processing blocks, but we try to help you write this code as much as possible. When you export your project to a C++ library we generate structs for all the configuration options, and you only need to implement the extract_custom_block_features function (you can change this name through the cppType parameter in parameters.json).
An example of this function for the spectral analysis block is listed in the inferencing SDK.
Blog post: Utilize Custom Processing Blocks in Your Image ML Pipelines
With good feature extraction you can make your machine learning models smaller and more reliable, both of which are very important when you want to deploy your model on embedded devices. With custom processing blocks you can now develop new feature extraction pipelines straight from Edge Impulse, whether you're following the latest research, implementing proprietary algorithms, or just exploring your data.
For inspiration we have published all our own blocks here: edgeimpulse/processing-blocks. If you've made an interesting block that you think is valuable for the community, please let us know on the forums or by opening a pull request. We'd be happy to help write efficient native code for the block, and then publish it as a standard block!
Your Edge Impulse organization helps your team with the full lifecycle of your TinyML deployment. It contains tools to collect and maintain large datasets, allows your data scientists to quickly access relevant data through their familiar tools, adds versioning and traceability to your machine learning models, and lets you quickly create new Edge Impulse projects for on-device deployment.
Only available for enterprise customers
Organizational features are only available for enterprise customers. View our pricing for more information.
To get started, follow these tutorials:
Collaborating on projects - to work with your colleagues on one project.
Building your first dataset - to build up a shared dataset for your organization.
Upload portals - to allow external parties to securely contribute data to your datasets.
Creating a transformation block - to quickly extract features from your dataset.
Building deployment blocks - to create custom deployment targets for your products.
Hosting custom DSP blocks - to create and host your custom signal processing techniques and use them directly in your projects.
Adding custom transfer learning models - to use your custom neural networks architectures and load pre-trained weights.
When creating an impulse to solve an image classification problem, you will most likely want to use transfer learning as your learning block. This is especially true when working with a relatively small dataset.
Transfer learning is the process of taking features learned on one problem and leveraging them on a new but related problem. Most of the time these features are learned from large-scale datasets with common objects, which makes it faster and more accurate to tune and adapt them to new tasks.
To choose transfer learning as your learning block, go to Create impulse, click on 'Add a Learning Block', and select 'Transfer Learning'.
To choose your preferred pretrained network, go to Transfer learning on the left side of your screen and click 'Choose a different model'. A pop-up will appear with a list of models to choose from, as shown in the image below.
Edge Impulse uses state-of-the-art MobileNetV1 and V2 architectures trained on the ImageNet dataset as its pretrained networks for you to fine-tune for your specific application. The pretrained networks come with varying input sizes, ranging from 96x96 to 320x320, and both RGB and grayscale images for you to choose from, depending on your application and target deployment hardware.
Before you start training your model, you need to set the following neural network configurations:
Number of training cycles: Each time the training algorithm makes one complete pass through all of the training data with back-propagation and updates the model's parameters as it goes, it is known as an epoch or training cycle.
Learning rate: The learning rate controls how much the model's internal parameters are updated during each step of the training process. You can also see it as how fast the neural network will learn. If the network overfits quickly, you can reduce the learning rate.
Validation set size: The percentage of your training set held apart for validation, a good default is 20%.
You might also need to enable auto-balance to prevent model bias, or enable data augmentation to increase the size of your dataset and make it more diverse in order to prevent overfitting.
The preset configurations just don't work for your model? No worries, Expert Mode is for you! The Expert mode gives you full control of your model so that you can configure it however you want. To enable the expert mode, just click on the "⋮" button and toggle the expert mode.
You can use the expert mode to change your loss function, optimizer, print your model architecture and even set an early stopping callback to prevent overfitting your model.
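As an illustration of the kind of code you might add there (standard Keras calls, not the Studio's generated script; the tiny model and random data below are stand-ins so the snippet runs on its own):

```python
# Illustration of typical expert-mode tweaks: a custom optimizer, an
# EarlyStopping callback, and printing the architecture. In the Studio the
# model and datasets come from the generated expert-mode script; here they are
# replaced with stand-ins so the snippet is self-contained.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(20, activation='relu', input_shape=(33,)),
    tf.keras.layers.Dense(3, activation='softmax'),
])

callbacks = [
    tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=5,
                                     restore_best_weights=True),
]

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.0005),
              loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()                      # print the model architecture

x = np.random.randn(100, 33).astype('float32')
y = tf.keras.utils.to_categorical(np.random.randint(0, 3, 100), 3)
model.fit(x, y, validation_split=0.2, epochs=20, callbacks=callbacks)
```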
Live classification lets you validate your model with data captured directly from any device or supported development board. This gives you a picture on how your model will perform with real world data. To achieve this, go to Live classification and connect the device or development board you want to capture data from.
All of your connected devices and sensors will appear under Devices as shown below. The devices can be connected through the Edge Impulse CLI or WebUSB:
To perform live classification using your phone, go to Devices and click Connect a new device then select "Use your mobile phone". Scan the QR code using your phone then click Switch to classification mode and start sampling.
To perform live classification using your computer, go to Devices and click Connect a new device then select "Use your computer". Give permissions on your computer then click Switch to classification mode and start sampling.
When collecting data, we split the dataset into training and testing sets. The model is trained with only the training set, and the testing set is used to validate how well the model performs on unseen data. This ensures that the model has not learned to overfit the training data, which is a common occurrence.
To test your model, go to Model testing, and click Test all. The model will classify all of the test set samples and give you an overall accuracy of how your model performed.
This is also accompanied by a confusion matrix to show you how your model performs for each class.
To see a classification in detail, go to the individual sample you want to evaluate, click the three dots next to it, then select Show classification. This will open a new window that displays the expected outcome and the predicted output of your model with its accuracy. This detailed view can also give you a hint as to why an item has been misclassified.
Every learning block has a threshold. This can be the minimum confidence that a neural network needs to have, or the maximum anomaly score before a sample is tagged as an anomaly. You can configure these thresholds to tweak the sensitivity of these learning blocks. This affects both live classification and model testing.
The Spectral features block extracts frequency and power characteristics of a signal. Low-pass and high-pass filters can also be applied to filter out unwanted frequencies. It is great for analyzing repetitive patterns in a signal, such as movements or vibrations from an accelerometer.
GitHub repository containing all DSP block code: edgeimpulse/processing-blocks.
Scaling
Scale axes: Multiplies axes by this number to scale data from the sensor
Filter
Type: Type of filter to apply to the raw data (low-pass, high-pass, or none)
Cut-off frequency: Cut-off frequency of the filter in hertz
Order: Order of the Butterworth filter
Spectral power
FFT length: The FFT size
No. of peaks: Number of spectral power peaks to extract
Peaks threshold: Minimum threshold to extract a peak (frequency domain normalized to [0, 1])
Power edges: Splits the power spectral density in various buckets (V**2/Hz unit)
The spectral analysis block generates 3 types of features per axis:
The root mean square of the filter output (1 scalar value)
The frequency and height of spectral power peaks
The average power spectral density for each bucket
The raw signal is first scaled up or down based on the Scale axes value and offset by its mean value. A Butterworth filter is then applied (except if None selected); the order of the filter indicates how steep the slope is at the cut-off frequency.
At this point the root mean square of the filter output is added to the features' list.
The filter output is then used to extract:
Spectral power peaks: after performing the FFT, the No. of peaks peaks with the highest magnitude are stored in the features' list (frequency and height). Peaks threshold can be tuned to define a minimum value to extract a peak.
PSD for each bucket: after computing the power spectral density, the signal is divided into power buckets, defined by the Power edges parameter. Each sample frequency of the PSD is added to a power bucket. The power buckets are then averaged and added to the features' list. This process extracts a power representation of the signal.
Let's consider an input signal with 3 axes and the following parameters:
No. of peaks = 3
Power edges = 0.1, 0.5, 1.0, 2.0, 5.0
The number of generated features per axis is:
1 value for RMS of the filter output
6 values for power peaks (frequency & height for each peak)
4 values for power buckets (number of Power edges - 1)
33 features are generated in total for the input signal.
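If you want to sanity-check this count, here is a rough sketch that mimics the feature layout described above. It is illustrative only, not the block's implementation, and the filtering step is omitted:

```python
# Illustrative recreation of the feature layout described above; not the exact
# implementation of the Spectral features block (filtering omitted).
import numpy as np
from scipy import signal as sps

def spectral_features(axis_data, fs, n_peaks=3, power_edges=(0.1, 0.5, 1.0, 2.0, 5.0)):
    x = axis_data - axis_data.mean()
    features = [np.sqrt(np.mean(x ** 2))]          # 1 value: RMS of the signal

    freqs, psd = sps.periodogram(x, fs)
    # 2*n_peaks values: frequency and height of the largest spectral peaks
    for ix in sorted(np.argsort(psd)[-n_peaks:]):
        features += [freqs[ix], psd[ix]]

    # len(power_edges) - 1 values: average PSD per bucket
    for lo, hi in zip(power_edges[:-1], power_edges[1:]):
        mask = (freqs >= lo) & (freqs < hi)
        features.append(psd[mask].mean() if mask.any() else 0.0)

    return features

fs = 62.5
data = np.random.randn(3, 125)                     # 3 axes, 2 seconds of data
print(sum(len(spectral_features(a, fs)) for a in data))   # 3 * (1 + 6 + 4) = 33
```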
Edge Impulse FOMO (Faster Objects, More Objects) is a novel machine learning algorithm that brings object detection to highly constrained devices. It lets you count objects, find the location of objects in an image, and track multiple objects in real-time using up to 30x less processing power and memory than MobileNet SSD or YOLOv5.
Tutorial
Want to see the FOMO in action? Check out our Detect objects with centroids (FOMO) tutorial.
For example, FOMO lets you do 60 fps object detection on a Raspberry Pi 4:
And here's FOMO doing 30 fps object detection on an Arduino Nicla Vision (Cortex-M7 MCU), using 245K of RAM.
You can find the complete Edge Impulse project with the beers vs. cans model, including all data and configuration here: https://studio.edgeimpulse.com/public/89078/latest.
So how does that work? First, a small primer. Let's say you want to detect whether you see a face in front of your sensor. You can approach this in two ways. You can train a simple binary classifier, which says either "face" or "no face", or you can train a complex object detection model which tells you "I see a face at this x, y point and of this size". Object detection is thus great when you need to know the exact location of something, or if you want to count multiple things (the simple classifier cannot do that) - but it's computationally much more intensive, and you typically need much more data for it.
The design goal for FOMO was to get the best of both worlds: the computational power required for simple image classification, but with the additional information on location and object count that object detection gives us.
The first thing to realize is that while the output of the image classifier is "face" / "no face" (and thus no locality is preserved in the outcome) the underlying neural network architecture consists of a number of convolutional layers. A way to think about these layers is that every layer creates a diffused lower-resolution image of the previous layer. E.g. if you have a 16x16 image the width/height of the layers may be:
16x16
4x4
1x1
Each 'pixel' in the second layer maps roughly to a 4x4 block of pixels in the input layer, and the interesting part is that locality is somewhat preserved. The 'pixel' in layer 2 at (0, 0) will roughly map back to the top left corner of the input image. The deeper you go in a normal image classification network, the less of this locality (or "receptive field") is preserved until you finally have just 1 outcome.
FOMO uses the same architecture, but cuts off the last layers of a standard image classification model and replaces this layer with a per-region class probability map (e.g. a 4x4 map in the example above). It then has a custom loss function which forces the network to fully preserve the locality in the final layer. This essentially gives you a heatmap of where the objects are.
The resolution of the heat map is determined by where you cut off the layers of the network. For the FOMO model trained above (on the beer bottles) we do this when the size of the heat map is 8x smaller than the input image (input image of 160x160 will yield a 20x20 heat map), but this is configurable. When you set this to 1:1 this actually gives you pixel-level segmentation and the ability to count a lot of small objects.
A difference between FOMO and other object detection algorithms is that it does not output bounding boxes, but it's easy to go from heat map to bounding boxes. Just draw a box around a highlighted area.
However, when working with early customers we realized that bounding boxes are merely an implementation detail of other object detection networks, and are not a typical requirement. Very often the size of objects is not important as cameras are in fixed locations (and objects thus fixed size), but rather you just want the location and the count of objects.
Thus, we now train on the centroids of objects. This makes it much easier to count objects that are close (every activation in the heat map is an object), and the convolutional nature of the neural network ensures we look around the centroid for the object anyway.
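As an illustration of that post-processing step (not FOMO's internal code), turning a per-class heat map into object centroids can be as simple as thresholding it and taking the center of each connected region:

```python
# Illustrative post-processing: threshold a per-class heat map and extract
# object centroids. Not FOMO's internal implementation.
import numpy as np
from scipy import ndimage

def heatmap_to_centroids(heatmap, threshold=0.5):
    # heatmap: 2D array of per-cell probabilities for one class
    mask = heatmap > threshold
    labeled, n_objects = ndimage.label(mask)
    centroids = ndimage.center_of_mass(heatmap, labeled, range(1, n_objects + 1))
    return centroids  # list of (row, col) positions in heat-map coordinates

heatmap = np.zeros((12, 12))
heatmap[2, 3] = 0.9        # one activation = one object
heatmap[8, 9] = 0.8
print(heatmap_to_centroids(heatmap))   # [(2.0, 3.0), (8.0, 9.0)]
```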
A downside of the heat map is that each cell acts as its own classifier. E.g. if your classes are "lamp", "plant" and "background", each cell will be either lamp, plant, or background. It's thus not possible to detect objects with overlapping centroids. You can see this in the Raspberry Pi 4 video above at 00:18 where the beer bottles are too close together. This can be solved by using a higher resolution heat map.
A really cool benefit of FOMO is that it's fully convolutional. If you set an image:heat map factor of 8 you can throw in a 96x96 image (outputs 12x12 heat map), a 320x320 image (outputs 40x40 heat map), or even a 1024x1024 image (outputs 128x128 heat map). This makes FOMO incredibly flexible, and useful even if you have very large images that need to be analyzed (e.g. in fault detection where the faults might be very, very small). You can even train on smaller patches, and then scale up during inference.
Additionally FOMO is compatible with any MobileNetV2 model. Depending on where the model needs to run you can pick a model with a higher or lower alpha, and transfer learning also works (although you need to train your base models specifically with FOMO in mind). This makes it easy for end customers to use their existing models and fine-tune them with FOMO to also add locality (e.g. we have customers with large transfer learning models for wildlife detection).
Together this gives FOMO the capabilities to scale from the smallest microcontrollers all the way to full gateways or GPUs. Just some numbers:
The video on the top classifies 60 times / second on a stock Raspberry Pi 4 (160x160 grayscale input, MobileNetV2 0.1 alpha). This is 20x faster than MobileNet SSD which does ~3 frames/second.
The second video on the top classifies 30 times / second on an Arduino Nicla Vision board (Cortex-M7 MCU running at 480MHz) in ~240K of RAM (96x96 grayscale input, MobileNetV2 0.35 alpha).
During Edge Impulse Imagine we demonstrated a FOMO model running on a Himax WE-I Plus doing 14 frames per second on a DSP (video). This model ran in under 150KB of RAM (96x96 grayscale input, MobileNetV2 0.1 alpha). [1]
The smallest version of FOMO (96x96 grayscale input, MobileNetV2 0.05 alpha) runs in <100KB RAM and ~10 fps. on a Cortex-M4F at 80MHz. [1]
[1] Models compiled using EON Compiler.
To build your first FOMO models:
Create a new project in Edge Impulse.
Make sure to set your labeling method to 'Bounding boxes (object detection)'.
Collect and prepare your dataset as in Object detection
Add an 'Object Detection (Images)' block to your impulse.
Under Images, select 'Grayscale'
Under Object detection, select 'Choose a different model' and select one of the FOMO models.
Make sure to lower the learning rate to 0.001 to start.
FOMO is currently compatible with all fully-supported development boards that have a camera, and with Edge Impulse for Linux (any client). Of course, you can export your model as a C++ Library and integrate it as usual on any device or development board, the output format of models is compatible with normal object detection models; and our SDK runs on almost anything under the sun (see Running your impulse locally for an overview) from RTOS's to bare-metal to special accelerators and GPUs.
Additional configuration for FOMO can be accessed via expert mode.
FOMO is sensitive to the ratio of objects to background cells in the labelled data. By default the configuration weights object output cells 100x in the loss function (object_weight=100), as a way of balancing what is usually a majority of background. This value was chosen as a sweet spot for a number of example use cases. In scenarios where the objects to detect are relatively rare, this value can be increased, e.g. to 1000, to have the model focus even more on object detection (at the expense of potentially more false detections).
FOMO uses MobileNetV2 as a base model for its trunk and by default does a spatial reduction of 1/8th from input to output (e.g. a 96x96 input results in a 12x12 output). This is implemented by cutting MobileNet off at the intermediate layer block_6_expand_relu. Choosing a different cut_point results in a different spatial reduction; e.g. if we cut higher, at block_3_expand_relu, FOMO will instead only do a spatial reduction of 1/4 (i.e. a 96x96 input results in a 24x24 output).
Note though: this means taking much less of the MobileNet backbone, and results in a model with only half the parameters. Switching to a higher alpha may counteract this parameter reduction. Later FOMO releases will counter this parameter reduction with a UNet-style architecture.
FOMO can be thought of logically as the first section of MobileNetV2 followed by a standard classifier where the classifier is applied in a fully convolutional fashion.
In the default configuration this FOMO classifier is equivalent to a single dense layer with 32 nodes followed by a classifier with num_classes outputs.
For a three way classifier, using the default cut point, the result is a classifier head with ~3200 parameters.
We have the option of increasing the capacity of this classifier head by either 1) increasing the number of filters in the Conv2D layer, 2) adding additional layers, or 3) doing both.
For example we might change the number of filters from 32 to 16, as well as adding another convolutional layer, as follows.
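A hedged sketch of what such a head could look like in Keras is shown below; the input shape, layer names and filter counts are illustrative and will differ from the Studio's generated expert-mode code:

```python
# Illustrative sketch of a FOMO-style fully convolutional classifier head with
# 16 filters plus an extra convolutional layer. Shapes and names are examples,
# not the Studio's generated code.
import tensorflow as tf

num_classes = 3          # including background
trunk_output = tf.keras.Input(shape=(12, 12, 96))   # e.g. MobileNetV2 cut-point features

x = tf.keras.layers.Conv2D(16, 1, activation='relu')(trunk_output)
x = tf.keras.layers.Conv2D(16, 1, activation='relu')(x)           # extra layer
logits = tf.keras.layers.Conv2D(num_classes, 1, activation='softmax')(x)

head = tf.keras.Model(trunk_output, logits)
head.summary()   # per-cell class probability map of shape (12, 12, num_classes)
```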
For some problems an additional layer can improve performance, and in this case it actually uses fewer parameters. It can, though, potentially take longer to train and require more data. In future releases the tuning of this aspect of FOMO will be handled by the EON Tuner.
The EON Tuner helps you find and select the best embedded machine learning model for your application within the constraints of your target device. The EON Tuner analyzes your input data, potential signal processing blocks, and neural network architectures - and gives you an overview of possible model architectures that will fit your chosen device's latency and memory requirements.
First, make sure you have an audio, motion, or image classification project in your Edge Impulse account to run the EON Tuner with. No projects yet? Follow one of our tutorials to get started:
Log in to the Edge Impulse Studio and open a project.
Select the EON Tuner tab.
Click the Configure target button to select your model’s dataset category, target device, and time per inference (in ms).
Click on the Dataset category dropdown and select the use case unique to your motion, audio, or image classification project.
Click Save and then select Start EON Tuner
Wait for the EON Tuner to finish running, then click the Select button next to your preferred DSP/neural network model architecture to save as your project’s primary blocks:
Click on the DSP and Neural Network tabs within your Edge Impulse project to see the parameters the EON Tuner has generated and selected for your dataset, use case, and target device hardware
Now you’re ready to deploy your automatically configured Edge Impulse model to your target edge device!
The EON Tuner performs end-to-end optimizations, from the digital signal processing (DSP) algorithm to the machine learning model, helping you find the ideal trade-off between these two blocks to achieve optimal performance on your target hardware. The unique features and options available in the EON Tuner are described below.
The Tuner can directly analyze the performance on any device fully supported by Edge Impulse. If you are targeting a different device, select a similar class of processor or leave the target as the default. You'll have the opportunity to further refine the EON tuner results to fit your specific target and application later.
The EON Tuner currently supports three different types of sensor data: motion, images, and audio. From these, the tuner can optimize for different types of common applications or dataset categories.
The EON Tuner evaluates different configurations for creating samples from your dataset. For time series data, the tuner tests different sample window sizes and increment amounts. For image data, the tuner compares different image resolutions.
Depending on the selected dataset category, the EON Tuner considers a variety of Processing blocks when evaluating model architectures. The EON Tuner will test different parameters and configurations of these processing blocks.
Different model architectures, hyperparameters, and even data augmentation techniques are evaluated by the EON Tuner. The tuner combines these different neural networks with the processing and input options described above, and then compares the end-to-end performance.
During operation, the tuner first generates many variations of input, processing, and learning blocks. It then schedules training and testing of each variation. The top-level progress bar shows tests started (blue stripes) as well as completed tests (solid blue), relative to the total number of generated variations.
Detailed logs of the run are also available. To view them, click on the button next to Target shown below.
As results become available, they will appear in the tuner window. Each result shows the on-device performance and accuracy, as well as details on the input, processing, and learning blocks used. Clicking Select sets a result as your project's primary impulse, and from there you can view or modify the design in the Impulse Design tabs.
While the EON Tuner is running, you can filter results by job status, processing block, and learning block categories.
View options control what information is shown in the tuner results. You can choose which dataset is used when displaying model accuracy, as well as whether to show the performance of the unoptimized float32 or the quantized int8 version of the neural network.
Sorting options are available to find the parameters best suited to a given application or hardware target. For constrained devices, sort by RAM to show options with the smallest memory footprint, or sort by latency to find models with the lowest number of operations per inference. It's also possible to sort by label, finding the best model for identifying a specific class.
The selected sorting criteria will be shown in the top left corner of each result.
After training and validating your model, you can now deploy it to any device. This makes the model run without an internet connection, minimizes latency, and runs with minimal power consumption.
The Deployment page offers a variety of deploy options to choose from, depending on your target device. Regardless of whether you are using a fully supported development board or not, Edge Impulse provides a C++ library deploy option that you can use to deploy your model on any target (as long as the target has enough compute to handle the task).
The following are the 4 main categories of deploy options currently supported by Edge Impulse:
Deploy as a customizable library
Deploy as a pre-built firmware - for fully supported development boards
Run directly on your phone or computer
Use Edge Impulse for Linux for Linux targets
This deploy option lets you turn your impulse into fully optimized source code that can be further customized and integrated with your application. This option supports the following libraries:
You can run your impulse locally as an Arduino library. This packages all of your signal processing blocks, configuration and learning blocks up into a single package.
To deploy as an Arduino library, select Arduino library on the Deployment page and click Build to create the library. Download the .ZIP file and import it as a sketch in your Arduino IDE then run your application.
For a full tutorial on how to run your impulse locally as an Arduino library, have a look at Running your impulse locally - Arduino.
You can run your Impulse as a C++ library. This packages all of your signal processing blocks, configuration and learning blocks up into a single package that can be easily ported to your custom applications.
Visit Running your impulse locally for a deep dive on how to deploy your impulse as a C++ library.
If you want to deploy your impulse to an STM32 MCU, you can use the Cube.MX CMSIS-PACK. This packages all your signal processing blocks, configuration and learning blocks up into a single package. You can include this package in any STM32 project with a single function call.
Have a look at Running your impulse locally - using CubeAI for a deep dive on how to deploy your impulse on STM32 based targets using the Cube.MX CMSIS-PACK.
When you want to deploy your impulse to a web app you can use the WebAssembly library. This packages all your signal processing blocks, configuration and learning blocks up into a single package that can run without any compilation.
Have a look at Running your impulse locally - through WebAssembly (Browser) for a deep dive on how you can run your impulse to classify sensor data in your Node.js application.
For this option, you can use a ready-to-go binary for your development board that bundles signal processing blocks, configuration and learning blocks up into a single package. This option is currently only available for fully supported development boards as shown in the image below:
To deploy your model using ready-to-go binaries, select your target device and click "Build". Flash the downloaded firmware to your device, then run the following command:
The impulse runner shows the results of your impulse running on your development board. This only applies to ready-to-go binaries built from the studio.
If you are developing for Linux based devices, you can use Edge Impulse for Linux for deployment. It contains tools which let you collect data from any microphone or camera, can be used with the Node.js, Python, Go and C++ SDKs to collect new data from any sensor, and can run impulses with full hardware acceleration - with easy integration points to write your own applications.
For a deep dive on how to deploy your impulse to Linux targets using Edge Impulse for Linux, you can visit the Edge Impulse for Linux tutorial.
You can run your impulse directly on your computer or mobile phone without the need for an additional app. To run on your computer, simply select "Computer" and click "Switch to classification mode". To run on your mobile phone, select "Mobile phone", scan the QR code, and click "Switch to classification mode".
When building your impulse for deployment, Edge Impulse gives you the option of adding another layer of optimization to your impulse using the EON compiler. The EON Compiler lets you run neural networks in 25-55% less RAM, and up to 35% less flash, while retaining the same accuracy, compared to TensorFlow Lite for Microcontrollers.
To activate the EON Compiler, select your preferred deployment option, enable the EON™ Compiler option, and click 'Build' to build your impulse for deployment.
To give you an idea of how your impulse will utilize the compute resources of your target device, Edge Impulse also estimates the latency, flash, and RAM usage on your target device even before you deploy your impulse locally. This can save you a lot of engineering time otherwise spent on repeated iterations and experiments.
You can also select whether to run the unquantized float32 or the quantized int8 models as shown in the image below.
The above confusion matrix is based only on the test data, to help you see how your model performs on unseen, real-world data. It can also help you determine whether your model has learned to overfit your training data, which is a common occurrence.
Organizational datasets allow you to build a large collection of organized sensor data that is internal to your organization. This data can then be used to create new Edge Impulse projects, imported in Pandas or Matlab for internal exploration by your data scientists, or be processed and shared with partners. Data files within the datasets can be stored on-premise or in your own cloud infrastructure.
In this tutorial we'll set up a first dataset, explore the powerful query tool, and show how to create new Edge Impulse projects from raw data.
Only available for enterprise customers
Organizational features are only available for enterprise customers. View our pricing for more information.
Data is stored in storage buckets, which can either be hosted by Edge Impulse, or in your own infrastructure. If you choose to host the data yourself your infrastructure should be available through the S3 API, and you are responsible for setting up proper backups. To configure a new storage bucket, head to your organization, choose Data > Buckets, click Add new bucket, and fill in your access credentials. Make sure to name your storage bucket Internal datasets, as we'll need it to upload data later.
With the storage bucket in place you can upload your first dataset. Datasets in Edge Impulse have three layers: 1) the dataset, a larger set of data items, grouped together. 2) data item, an item with metadata and files attached. 3) data file, the actual files. For example, if we're collecting data on physical activities from many subjects, we can have:
Dataset: 'Activities Field Study September 1994'.
Data item: 'Forrest Gump Running', with metadata fields "name=Forrest Gump" and "activity=running".
Data file: 'running01.parquet', with raw sensor data.
Data file: 'running02.parquet', with raw sensor data.
From here you can query and group the data. For example, you can retrieve all data from the 'Activities Field Study September 1994' dataset that was tagged with the 'running' activity. Or, you can select all the files that are smaller than 1MB and were generated by 'Forrest Gump' over all datasets.
For this tutorial we'll use a dataset containing 9 minutes of accelerometer data for a gesture recognition system. Download the dataset and unzip it in a convenient location.
No required format for data files
There is no required format for data files. You can upload data in any format, whether it's CSV, Parquet, or a proprietary data format.
There are three ways of uploading data to your dataset. You can either:
Upload the files directly with the UI (we'll do this in this tutorial).
Upload data through the Edge Impulse API.
Or, upload data directly to the storage bucket (recommended for large datasets). In this case use Add data... > Add dataset from bucket and the data will be discovered automatically.
For this dataset we want to create four data items, one for every class ('idle', 'snake', 'updown', 'wave'). On the Data page, select Add data... > Add data item, set the name to 'Idle', the dataset to 'Gestures study', the metadata to { "gesture": "idle" }, and select all 'idle' files.
Do the same for the 'snake', 'updown' and 'wave' data, so you end up with four data items with 70 files in total.
Organizational datasets contain a powerful query system which lets you explore and slice data. You control the query system through the 'Filter' text box, and you use a language which is very similar to SQL (documentation). For example, here are some queries that you can make:
dataset = 'Gestures study' - returns all items and files from the study.
bucket_name = 'Internal datasets' AND name IN ('Updown', 'Snake') - returns data whose name is either 'Updown' or 'Snake', and that is stored in the 'Internal datasets' bucket.
metadata->gesture = 'updown' - returns data that has a metadata field 'gesture' which contains 'updown'.
created > DATE('2020-03-01') - returns all data that was created after March 1, 2020.
After you've created a filter, you can select one or more data items, and select Download selected to create a ZIP file with the data files. The file count reflects the number of files returned by the filter.
The previous queries all returned all files for a data item. But you can also query files through the same filter. In that case the data item will be returned, but only with the files selected. For example:
file_name LIKE '%.0.cbor' - returns all files that end with .0.cbor.
If you have an interesting query that you'd like to share with your colleagues, you can just share the URL. The query is already added to it automatically.
These are all the available fields in the query interface:
dataset - Dataset.
bucket_id - Bucket ID.
bucket_name - Bucket name.
bucket_path - Path of the data item within the bucket.
id - Data item ID.
name - Data item name.
total_file_count - Number of files for the data item.
total_file_size - Total size of all files for the data item.
created - When the data item was created.
metadata->key - Any item listed under 'metadata'.
file_name - Name of a file.
file_names - All filenames in the data item, which you can use in conjunction with CONTAINS. E.g. find all items with file X, but not file Y: file_names CONTAINS 'x' AND NOT file_names CONTAINS 'y'.
If you have an interesting subset of data and want to train a machine learning model on it, you can export the data into a new Edge Impulse project. This makes a copy of the data that you can then manipulate and explore like any other project, or share with outside researchers without any risk of leaking the rest of your dataset. The data is also stripped of any metadata, like the name of the data item, or any metadata that you attached to the files.
Edge Impulse data acquisition format
This section only applies if your data is already in either the Edge Impulse Data acquisition format (CBOR and JSON both work), or in WAV, JPG or PNG format. For other data you'll need to use a transformation block before being able to create a new project.
Let's put this in practice. You need to select some data for the new project. Go to the Data page and set the filter to:
Then, select all items and click Transform selected (70 files)
This redirects you to the 'Transformation job' page. Under 'Import data into', select 'Project'. Under 'Project' select '+ Create new project', and enter a name. Next, select the category. This determines whether this is 'training' or 'testing' data, or that the data should be split up between these two categories. For now, select 'Split'. Then, click Create project to import the data.
This pulls down the gesture data from the bucket, and then imports it into the project. You don't need to stay on the page, the job will continue running in the background.
If you now go back to your project you have a copy of the organizational dataset at your disposal, ready to build your next machine learning model. You can also add colleagues or outside collaborators to this specific project by going to Dashboard, and selecting the "Collaborators" widget. And if you want to do another experiment with the same data, you can easily create a new project with the same flow without any fear of changing the source data. 🚀
Any questions, or interested in the enterprise version of Edge Impulse? Contact us for more information.
You can optionally show a check mark in the list of data items, and show a check list for data items. This can be used to quickly view which data items are complete (if you need to capture data from multiple sources) or whether items are in the right format.
Checklists are driven by the metadata for a data item. Set the ei_check metadata item to either 0 or 1 to show a check mark in the list. Set an ei_check_KEYNAME metadata item to 0 or 1 to show the item in the check list.
To query for items with or without a check mark, use a filter in the form of:
To make it easy to create these lists on the fly you can set these metadata items directly from a transformation block.
Within an organization you can work on one project with multiple people. These can be colleagues, outside researchers, or even members of the community. They will only get access to the specific data in the project, and not to any of the raw data in your organizational datasets.
Only available for enterprise customers
Organizational features are only available for enterprise customers. View our pricing for more information.
To give someone access, go to your project's dashboard, and find the "Collaborators" widget. Click the '+' icon, and type the username or e-mail address of the other user. This user needs to have an Edge Impulse account already.
Upload portals are a secure way to let external parties upload data to your datasets. Through an upload portal they get an easy user interface to add data, but they have no access to the content of the dataset, nor can they delete any files. Data that is uploaded through the portal can be stored on-premise or in your own cloud infrastructure.
In this tutorial we'll set up an upload portal, show you how to add new data, and how to show this data in Edge Impulse for further processing.
Only available for enterprise customers
Organizational features are only available for enterprise customers. View our pricing for more information.
Data is stored in storage buckets, which can either be hosted by Edge Impulse, or in your own infrastructure. Follow the tutorial to set up your storage bucket.
With your storage bucket configured you're ready to set up your first upload portal. In your organization go to Data > Upload portals and choose Create new upload portal. Here, select a name, a description, the storage bucket, and a path in the storage bucket.
Note: You'll need to enable CORS headers on the bucket. If these are not configured you'll get prompted with instructions. Talk to your user success engineer (when your data is hosted by Edge Impulse), or your system administrator to configure this.
After your portal is created a link is shown. This link contains an authentication token, and can be shared directly with the third party.
Click the link to open the portal. If you ever forget the link: no worries. Click the ⋮
next to your portal, and choose View portal.
To upload data you can now drag & drop files or folders to the drop zone on the right, or use Create new folder to first create a folder structure. There's no limit to the number of files you can upload here, and all files are hashed, so if you upload a file that's already present the file will be skipped.
Note: Files with the same name but with a different hash are overwritten.
To view the uploaded data in your dataset now go to your organization, and select Data. Then select Add data > Add dataset from bucket. Here, enter a name for the dataset, and select the same bucket path as you used for the portal. Then click Add data.
You now have the data in your Edge Impulse organization, ready to be applied to your next machine learning project.
Transformation blocks take raw data from your organizational datasets and convert the data into files that can be loaded in an Edge Impulse project. You can use transformation blocks to only include certain parts of individual data files, calculate long-running features like a running mean or derivatives, or efficiently generate features with different window lengths. Transformation blocks can be written in any language, and run on the Edge Impulse infrastructure.
In this tutorial we build a Python-based transformation block that loads Parquet files, splits the data into thirty second windows, and uploads the data to a new project.
Want more? We also have an end-to-end example transformation block that mixes noise into an audio dataset.
You'll need:
The Edge Impulse CLI. If you receive any warnings that's fine. Run edge-impulse-blocks afterwards to verify that the CLI was installed correctly.
The gestures.parquet file, which you can use to test the transformation block. This contains some data from the dataset in Parquet format.
Transformation blocks use Docker containers, a virtualization technique which lets developers package up an application with all dependencies in a single package. If you want to test your blocks locally you'll also need (this is not a requirement):
Docker desktop installed on your machine.
This is the Parquet schema for the gestures.parquet file which we'll want to transform into data for a project:
To build a transformation block open a command prompt or terminal window, create a new folder, and run:
This will prompt you to log in, and enter the details for your block. E.g.:
Then, create the following files in this directory:
Dockerfile
We're building a Python based transformation block. The Dockerfile describes our base image (Python 3.7.5), our dependencies (in requirements.txt) and which script to run (transform.py).
Note: Do not use a WORKDIR under /home! The /home path will be mounted in by Edge Impulse, making your files inaccessible.
ENTRYPOINT vs RUN / CMD
If you use a different programming language, make sure to use ENTRYPOINT to specify the application to execute, rather than RUN or CMD.
requirements.txt
This file describes the dependencies for the block. We'll be using pandas and pyarrow to parse the Parquet file, and numpy to do some calculations.
transform.py
This file includes the actual application. Transformation blocks are invoked with three parameters (as command line arguments):
--in-file - A file from the organizational dataset. In this case the gestures.parquet file.
--out-directory - Directory to write files to.
--hmac-key - You can use this HMAC key to sign the output files.
Add the following content:
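If you don't have the example at hand, a rough, hypothetical sketch of what such a script could look like is shown below. The schema assumptions (a millisecond timestamp column plus sensor axes), the sampling interval, and the simplified output format are all illustrative, and signing with the HMAC key is omitted:

```python
# Hypothetical sketch of a transform.py: read a Parquet file, split it into
# thirty-second windows, and write simplified JSON files to the out directory.
# Schema, interval and output format are assumptions, not the real example code.
import argparse, json, math, os
import pandas as pd

parser = argparse.ArgumentParser(description='Organization transformation block')
parser.add_argument('--in-file', type=str, required=True)
parser.add_argument('--out-directory', type=str, required=True)
parser.add_argument('--hmac-key', type=str, required=False)
args, _ = parser.parse_known_args()

os.makedirs(args.out_directory, exist_ok=True)

df = pd.read_parquet(args.in_file)     # pandas uses pyarrow under the hood

interval_ms = 16                       # assumed sampling interval of the source data
window_ms = 30 * 1000                  # thirty-second windows
rows_per_window = window_ms // interval_ms

axes = [c for c in df.columns if c != 'timestamp']
values = df[axes].values.tolist()

for i in range(math.ceil(len(values) / rows_per_window)):
    window = values[i * rows_per_window:(i + 1) * rows_per_window]
    # Simplified data acquisition payload; HMAC signing is omitted in this sketch
    payload = {
        'protected': {'ver': 'v1', 'alg': 'none'},
        'signature': '0' * 64,
        'payload': {
            'device_type': 'transformation-block',
            'interval_ms': interval_ms,
            'sensors': [{'name': a, 'units': 'm/s2'} for a in axes],
            'values': window,
        },
    }
    with open(os.path.join(args.out_directory, 'gestures.%d.json' % i), 'w') as f:
        json.dump(payload, f)
```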
On your local machine
To test the transformation block locally, if you have Python and all dependencies installed, just run:
This generates a number of JSON files in the out/ directory. You can test the import into an Edge Impulse project via:
Docker
You can also build the container locally via Docker, and test the block. The added benefit is that you don't need any dependencies installed on your local computer, and can thus test that you've included everything that's needed for the block. This requires Docker desktop to be installed.
First, build the container:
Then, run the container (make sure gestures.parquet is in the same directory):
This generates a number of JSON files in the out/ directory. You can test the import into an Edge Impulse project via:
With the block ready we can push it to your organization. Open a command prompt or terminal window, navigate to the folder you created earlier, and run:
This packages up your folder, sends it to Edge Impulse where it'll be built, and finally is added to your organization.
The transformation block is now available in Edge Impulse under Data transformation > Transformation blocks.
If you make any changes to the block, just re-run edge-impulse-blocks push and the block will be updated.
Next, upload the gestures.parquet file by going to Data > Add data... > Add data item, setting the name to 'Gestures', the dataset to 'Transform tutorial', and selecting the Parquet file.
This makes the gestures.parquet file available from the Data page.
With the Parquet file in Edge Impulse and the transformation block configured you can now create a new job. Go to Data, and select the Parquet file by setting the filter to dataset = 'Transform tutorial'
.
Click the checkbox next to the data item, and select Transform selected (1 file). On the 'Create transformation job' page, select 'Import data into Project'. Then under 'Project', select '+ Create new project' and enter a name. Under 'Transformation block' select the new transformation block.
Click Start transformation job to start the job. This pulls the data in, starts a transformation job and finally imports the data into the new project. If you have multiple files selected the transformations will also run in parallel.
Transformation blocks get access to the following environment variables, which let you authenticate with the Edge Impulse API. This way you don't have to inject these credentials into the block. The variables are:
EI_API_KEY
- an API key with 'member' privileges for the organization.
EI_ORGANIZATION_ID
- the organization ID that the block runs in.
EI_API_ENDPOINT
- the API endpoint (default: https://studio.edgeimpulse.com/v1).
is available for everyone but has to be self-hosted. If you want to host it on Edge Impulse infrastructure, you can do that within your organization interface.
In this tutorial, you'll learn how to use to push your custom DSP block to your organization and how to make this processing block available in the Studio for all users in the organization.
The Custom Processing block we are using for this tutorial can be found here: . It is written in Python. Note that one of the benefits of custom blocks is that you can write them in any language: we host a Docker container, so you are not tied to a specific runtime.
Only available for enterprise customers
Organizational features are only available for enterprise customers. for more information.
You'll need:
The . If you receive any warnings, that's fine. Run edge-impulse-blocks afterwards to verify that the CLI was installed correctly.
installed on your machine. Custom blocks use Docker containers, a virtualization technique which lets developers package up an application with all dependencies in a single package. If you want to test your blocks locally (this is not a requirement), you'll also need:
A running with Docker.
Inside your Custom DSP block folder, run the following command:
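Assuming you use the Edge Impulse CLI blocks tool, that command is:

```
edge-impulse-blocks init
```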
The output will look like this:
Modify or update your custom code if needed and run the following command:
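And to push the updated block:

```
edge-impulse-blocks push
```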
The output will look similar to this:
That's it, now your custom DSP block is hosted in your organization. To make sure it is up and running, go to Custom blocks > DSP in your organization and you will see the following screen:
To use your DSP block, simply add it as a processing block in the Create impulse view:
Transformation blocks take raw data from your organizational datasets and convert the data into files that can be loaded in an Edge Impulse project. You can use transformation blocks to only include certain parts of individual data files, calculate long-running features like a running mean or derivatives, or efficiently generate features with different window lengths. Transformation blocks can be written in any language, and run on the Edge Impulse infrastructure.
In this tutorial we build a Python-based transformation block that loads Parquet files, calculates features from the Parquet file, and then writes a new file back to your dataset. If you haven't done so, go through first.
From dataset to project
You can also transform data in your organizational dataset into an Edge Impulse project. See
You'll need:
The .
If you receive any warnings, that's fine. Run edge-impulse-blocks afterwards to verify that the CLI was installed correctly.
The file which you can use to test the transformation block. This contains some data from the dataset in Parquet format.
Transformation blocks use Docker containers, a virtualization technique which lets developers package up an application with all dependencies in a single package. If you want to test your blocks locally (this is not a requirement), you'll also need:
installed on your machine.
This is the Parquet schema for the gestures.parquet
file which we'll transform:
To build a transformation block open a command prompt or terminal window, create a new folder, and run:
This will prompt you to log in, and enter the details for your block. E.g.:
Then, create the following files in this directory:
Dockerfile
We're building a Python based transformation block. The Dockerfile describes our base image (Python 3.7.5), our dependencies (in requirements.txt
) and which script to run (transform.py
).
Note: Do not use a WORKDIR
under /home
! The /home
path will be mounted in by Edge Impulse, making your files inaccessible.
ENTRYPOINT vs RUN / CMD
If you use a different programming language, make sure to use ENTRYPOINT
to specify the application to execute, rather than RUN
or CMD
.
requirements.txt
This file describes the dependencies for the block. We'll be using pandas
and pyarrow
to parse the Parquet file, and numpy
to do some calculations.
transform.py
This file includes the actual application. Transformation blocks are invoked with three parameters (as command line arguments):
--in-file
- A file from the organizational dataset. In this case the gestures.parquet
file.
--out-directory
- Directory to write files to.
--hmac-key
- You can use this HMAC key to sign the output files. This is not used in this tutorial.
--metadata
- Additional key/value pairs defined for the incoming item(s). This is not used in this tutorial.
Add the following content. This takes in the Parquet file, groups data by their label, and then calculates the RMS over the X, Y and Z axes of the accelerometer.
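The sketch below shows the general shape of such a script, assuming the same accX/accY/accZ column names as before (adjust these to your own schema):

```
import argparse, os

import numpy as np
import pandas as pd

# Command line arguments, as documented above
parser = argparse.ArgumentParser(description='RMS transformation block (sketch)')
parser.add_argument('--in-file', type=str, required=True)
parser.add_argument('--out-directory', type=str, required=True)
parser.add_argument('--hmac-key', type=str, required=False)
parser.add_argument('--metadata', type=str, required=False)
args, _ = parser.parse_known_args()

os.makedirs(args.out_directory, exist_ok=True)

df = pd.read_parquet(args.in_file)

def rms(series):
    return np.sqrt(np.mean(np.square(series)))

# Group by label and calculate the RMS over the accelerometer axes
out = df.groupby('label')[['accX', 'accY', 'accZ']].agg(rms)
out.columns = ['rmsX', 'rmsY', 'rmsZ']

# Write a new Parquet file into the output directory
out_file = os.path.join(args.out_directory, 'gestures-rms.parquet')
out.reset_index().to_parquet(out_file, index=False)
print('Written', out_file)
```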
On your local machine
To test the transformation block locally, if you have Python and all dependencies installed, just run:
Docker
You can also build the container locally via Docker, and test the block. The added benefit is that you don't need any dependencies installed on your local computer, and can thus test that you've included everything that's needed for the block. This requires Docker desktop to be installed.
To build the container and test the block, open a command prompt or terminal window and navigate to the source directory. First, build the container:
Then, run the container (make sure gestures.parquet
is in the same directory):
Seeing the output
This process has generated a new Parquet file in the out/
directory containing the RMS of the X, Y and Z axes. If you inspect the content of the file (e.g. using parquet-tools) you'll see the output:
Success!
With the block ready we can push it to your organization. Open a command prompt or terminal window, navigate to the folder you created earlier, and run:
This packages up your folder, sends it to Edge Impulse where it'll be built, and finally adds it to your organization.
The transformation block is now available in Edge Impulse under Data transformation > Transformation blocks.
If you make any changes to the block, just re-run edge-impulse-blocks push
and the block will be updated.
Next, upload the gestures.parquet
file, by going to Data > Add data... > Add data item, setting the name to 'Gestures' and the dataset to 'Transform tutorial', and selecting the Parquet file.
This makes the gestures.parquet
file available from the Data page.
With the Parquet file in Edge Impulse and the transformation block configured you can now create a new job. Go to Data, and select the Parquet file by setting the filter to dataset = 'Transform tutorial'
.
Click the checkbox next to the data item, and select Transform selected (1 file). On the 'Create transformation job' page select 'Import data into Dataset'. Under 'output dataset', select 'Same dataset as source', and under 'Transformation block' select the new transformation block.
Click Start transformation job to start the job. This pulls the data in, starts a transformation job and finally uploads the data back to your dataset. If you have multiple files selected the transformations will also run in parallel.
You can now find the transformed file back in your dataset:
You can update the metadata of data items directly from a transformation block by creating an ei-metadata.json file in the output directory. The metadata is then applied to the new data item automatically when the transform job finishes. The ei-metadata.json file has the following structure:
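A minimal example, based on the notes below (the version field and the metadata key name are illustrative):

```
{
    "version": 1,
    "action": "add",
    "metadata": {
        "processed-by": "rms-transformation-block"
    }
}
```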
Some notes:
If action
is set to add
the metadata keys are added to the data item. If action
is set to replace
all existing metadata keys are removed.
Transformation blocks get access to the following environment variables, which let you authenticate with the Edge Impulse API. This way you don't have to inject these credentials into the block. The variables are:
EI_API_KEY
- an API key with 'member' privileges for the organization.
EI_ORGANIZATION_ID
- the organization ID that the block runs in.
EI_API_ENDPOINT
- the API endpoint (default: https://studio.edgeimpulse.com/v1).
Transformation blocks take raw data from your organizational datasets and convert the data into files that can be loaded in an Edge Impulse project. You can use transformation blocks to only include certain parts of individual data files, calculate long-running features like a running mean or derivatives, or efficiently generate features with different window lengths. Transformation blocks can be written in any language, and run on the Edge Impulse infrastructure.
Only available for enterprise customers
Organizational features are only available for enterprise customers. for more information.
Transformation blocks can output data to either:
A project - here the data in your dataset is placed in an Edge Impulse project, and you'll have all the normal features from the studio available to build your machine learning models. This is great if you already have an idea what you want with the data, and are looking for a reproducible pipeline.
Back in the dataset - here data is placed back in the dataset. This is great for extracting long running features, batch jobs, or combining data from multiple sources - even when you don't want to place the data in a project yet.
For both of these you can find tutorials here:
Among the most powerful features in Edge Impulse are the built-in deployment targets (under Deployment in the Studio), which let you create ready-to-go binaries for development boards, or custom libraries for a wide variety of targets that incorporate your trained impulse. You can also create custom deployment blocks for your organization. This lets developers quickly iterate on products without getting your embedded engineers involved, lets your customers build personalized firmware using their own data, or lets you create custom libraries.
In this tutorial you'll learn how to use custom deployment blocks to create a new deployment target, and how to make this target available in the Studio for all users in the organization.
Only available for enterprise customers
Organizational features are only available for enterprise customers. for more information.
You'll need:
The .
If you receive any warnings, that's fine. Run edge-impulse-blocks afterwards to verify that the CLI was installed correctly.
Deployment blocks use Docker containers, a virtualization technique which lets developers package up an application with all dependencies in a single package. If you want to test your blocks locally (this is not a requirement), you'll also need:
installed on your machine.
Then, create a new folder on your computer named custom-deploy-block
.
When a user deploys with a custom deployment block, two things happen:
A package is created that contains information about the deployment (like the sensors used, frequency of the data, etc.), any trained neural network in .tflite and SavedModel formats, the Edge Impulse SDK, and all DSP and ML blocks as C++ code.
This package is then consumed by the custom deployment block, which can incorporate it with a base firmware, or repackage it into a new library.
To obtain this package go to your project's Dashboard, look for Administrative zone, enable Custom deploys, and click Save.
If you now go to the Deployment page, a new option appears under 'Create library':
Once you click Build you'll receive a ZIP file containing five items:
trained.tflite
- if you have a neural network in the project this contains the neural network in .tflite format. This network is already fully quantized if you choose the int8
optimization, otherwise this is the float32
model.
trained.savedmodel.zip
- if you have a neural network in the project this contains the full TensorFlow SavedModel. Note that we might update the TensorFlow version used to train these networks at any time, so rely on the compiled model or the TFLite file where possible.
model-parameters
- impulse and block configuration in C++ format. Can be used by the SDK to quickly run your impulse.
tflite-model
- neural network as source code in a way that can be used by the SDK to quickly run your impulse.
Store the unzipped file under custom-deploy-block/input
.
With the basic information in place we can create a new deployment block. Here we'll build a standalone application that runs our impulse on Linux, very useful when running your impulse on a gateway or desktop computer. First, open a command prompt or terminal window, navigate to the custom-deploy-block
folder (that you created under 1.), and run:
This will prompt you to log in, and enter the details for your block.
Unzip under custom-deploy-block/app
.
To build this application we need to combine the application with the edge-impulse-sdk
, model-parameters
and tflite-model
folder, and invoke the (already included) Makefile.
To build the application we use Docker, a virtualization technique which lets developers package up an application with all dependencies in a single package. In this container we'll place the build tools required for this application, and scripts to combine the trained impulse with the base application.
First, let's create a small build script. As a parameter you'll receive --metadata
which points to the deployment information. This also contains information on the input and output folders that you need to read from and write to.
Create a new file called custom-deploy-block/build.py
and add:
build.py
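A minimal sketch of such a build script is shown below. Rather than reading the folder locations out of the metadata file (see the metadata specification for those fields), this sketch simply derives the input and output folders from the location of the metadata file itself; treat that, and the hard-coded folder names, as simplifications:

```
import argparse, json, os, shutil, subprocess

parser = argparse.ArgumentParser(description='Custom deployment block (sketch)')
parser.add_argument('--metadata', type=str, required=True,
                    help='Path to deployment-metadata.json')
args = parser.parse_args()

with open(args.metadata) as f:
    metadata = json.load(f)
print('Loaded deployment metadata with keys:', sorted(metadata.keys()))

# Derive the input/output folders from the metadata file location (simplification)
input_dir = os.path.dirname(os.path.abspath(args.metadata))
output_dir = os.path.join(os.path.dirname(input_dir), 'output')
app_dir = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'app')

# Combine the base application with the SDK, impulse parameters and model
for folder in ['edge-impulse-sdk', 'model-parameters', 'tflite-model']:
    src = os.path.join(input_dir, folder)
    dst = os.path.join(app_dir, folder)
    if os.path.exists(dst):
        shutil.rmtree(dst)
    shutil.copytree(src, dst)

# Build the standalone application using its Makefile
subprocess.run(['make', '-C', app_dir, '-j'], check=True)

# Package everything up as output/deploy.zip
os.makedirs(output_dir, exist_ok=True)
shutil.make_archive(os.path.join(output_dir, 'deploy'), 'zip', app_dir)
print('Wrote', os.path.join(output_dir, 'deploy.zip'))
```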
Next, we need to create a Dockerfile, which contains all dependencies for the build. These include GNU Make, a compiler, and both the build script and the base application.
Create a new file called custom-deploy-block/Dockerfile
and add:
Dockerfile
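A minimal Dockerfile matching that description might look like the following sketch (the base image and package choices are illustrative):

```
FROM python:3.7.5

# Build tools for the Makefile-based application
RUN apt-get update && apt-get install -y --no-install-recommends build-essential && \
    rm -rf /var/lib/apt/lists/*

# The build script and the base application
COPY build.py /build.py
COPY app /app

ENTRYPOINT [ "python3", "/build.py" ]
```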
To test the build script we first build the container, then invoke it with the files from the input
directory. Open a command prompt or terminal, navigate to the custom-deploy-block
folder and:
Build the container:
Invoke the build script - this mounts the current directory in the container under /home
, and then passes the downloaded metadata file to the container:
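For example (the image tag is arbitrary, and the metadata path assumes you stored the unzipped deployment package under input/ as described above):

```
docker build -t custom-deploy-block .
docker run --rm -v "$PWD":/home custom-deploy-block --metadata /home/input/deployment-metadata.json
```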
Or if you run Windows or macOS, you can use Docker to run this application:
With the deployment block ready you can make it available in Edge Impulse. Open a command prompt or terminal window, navigate to the folder you created earlier, and run:
This packages up your folder, sends it to Edge Impulse where it'll be built, and finally adds it to your organization. The deployment block is now available in Edge Impulse under Deployment blocks. You can go here to set the logo, update the description, and set extra command line parameters.
The deployment block is automatically available for all organizational projects. Go to the Deployment page on a project, and you'll find a new section 'Custom targets'. Select your new deployment target and click Build.
And now you'll have a freshly built binary from your own deployment block!
Custom deployment blocks are a powerful tool for your organization. They let you build binaries for unreleased products, let you package up impulses as custom libraries, or can let your customers deploy to private targets (if you add an external collaborator to a project they'll have access to the blocks as well). Because deployment blocks are integrated with your project and hosted by Edge Impulse, everyone, from FAE to R&D developer, can now iterate on on-device models without getting your embedded engineers involved.
You can run the 'Add dataset from bucket' step any time new data was added. Data is automatically de-duplicated and new files will be picked up. You can also automate this through our .
If you need a secure way for external parties to contribute data to your datasets then upload portals are the way to go. They offer a friendly user interface, upload data directly into your storage buckets, and give you an easy way to use the data directly in Edge Impulse.
Any questions, or interested in the enterprise version of Edge Impulse? for more information.
Transformation blocks are a powerful feature which let you set up a data pipeline to turn raw data into actionable machine learning features. It also gives you a reproducible way of transforming many files at once, and is programmable through the so you can automatically convert new incoming data. Want more? We also have an end-to-end example transformation block that mixes noise into an audio dataset: .
If you're interested in transformation blocks or any of the other enterprise features,
Full instruction on how to build processing blocks:
Blog post:
Transformation blocks are a powerful feature which let you set up a data pipeline to turn raw data into actionable machine learning features. It also gives you a reproducible way of transforming many files at once, and is programmable through the so you can automatically convert new incoming data. If you're interested in transformation blocks or any of the other enterprise features,
deployment-metadata.json
- this contains all information about the deployment, like the names of all classes, the frequency of the data, full impulse configuration, and quantization parameters. A specification can be found here: .
edge-impulse-sdk
- a copy of the latest .
Next, we'll add the application. The base application can be found at .
.
Voila. You now have an output
folder which contains a ZIP file. Unzip output/deploy.zip
and now you have a standalone application which runs your impulse. If you run Linux you can invoke this application directly (grab some data from 'Live classification' for the features, see ):
Deployment blocks do not have access to the internet by default. If you need this, or if you need to pull additional information from the project (e.g. access to DSP blocks) you can set the 'privileged' flag on a deployment block. This will enable outside internet access, and will pass in the project.apiKey
parameter in the (if a development API key is set) that you can use to authenticate with the .
You can also use custom deployment blocks with the other organizational features, and can use this to set up powerful pipelines automating , , training new impulses and then deploying back to your device - either through the UI, or via the API. If you're interested in deployment blocks or any of the other enterprise features,
This is the specification for the deployment-metadata.json
file from Building deployment blocks.
There is a list of development boards that are fully supported by Edge Impulse. These boards come with a special firmware which enables data collection from all their sensors, allows you to build new ready-to-go binaries that include your trained impulse, and come with examples on integrating your impulse with your custom firmware. These boards are the perfect way to start building Machine Learning solutions on real embedded hardware.
Different development board? No problem, you can always collect data using the Data forwarder or the Edge Impulse for Linux SDK, and deploy your model back to the device with the Running your impulse locally tutorials. Also, if you feel like porting your board, use this Porting guide.
Just want to experience Edge Impulse? You can also use your Mobile phone!
Enterprise customers can add fully custom learning models in Edge Impulse. These models can be represented in any deep learning framework as long as it can output TFLite files.
Only available for enterprise customers
Organizational features are only available for enterprise customers. View our pricing for more information.
Custom learning blocks for organizations go beyond a project's expert mode, which lets you design your own neural network architectures on a project-by-project basis, but has some caveats:
You can't load pretrained weights from expert mode, and
Your expert mode model needs to be representable in Keras / TensorFlow.
| | Custom learning blocks | Expert mode |
| --- | --- | --- |
| Load pretrained weights | ✅ | ❌ |
| Use any ML framework to define your model | ✅ | Keras only |
This tutorial describes how to build these models. Alternatively, we've put together three example projects, which bring these models into Edge Impulse:
YOLOv5 - brings a YOLOv5 transfer learning model (trained with PyTorch) into Edge Impulse
Keras - shows how to bring custom Keras blocks into Edge Impulse.
PyTorch - shows how to bring custom PyTorch blocks into Edge Impulse.
The Edge Impulse CLI
If you receive any warnings, that's fine. Run edge-impulse-blocks afterwards to verify that the CLI was installed correctly.
Docker desktop - Custom learning models use Docker containers, a virtualization technique which lets developers package up an application with all dependencies in a single package.
To bring custom learning models into Edge Impulse you'll need to encapsulate your training pipeline into a container. This container takes in the training data, trains the model and then spits out TFLite files.
To see the data that will be passed into the container:
Create a project in Edge Impulse and add your data.
Under Create impulse add the DSP block that you'll use (e.g. 'Image' for images, or 'Spectrogram' for audio) and a random neural network block.
Generate features for the DSP block.
Now go to Dashboard, and under 'Download block data' grab the two items marked 'Training data' / 'Training labels'.
This data is in the following formats:
Training data: a numpy matrix with one sample per row containing the output of the DSP block (use np.load('X_train_features.npy')
to see).
Training labels
If you're using object detection: a JSON file (despite the '.npy' extension) containing structured information about the bounding boxes. Every item in the samples
array maps to one row in the data matrix. The label
is the index of the class, and is 1-based.
If you're not using object detection: a numpy matrix with one sample per row, and the first column of every row is the index of the class. The last three columns are the sampleId and the start / end time of the sample (when going through time series). During training you can typically discard this.
This data is passed into the container as files, located here:
Data: /home/X_train_features.npy
Labels: /home/y_train.npy
After training you need to output a TFLite file. You need to write these here:
Float32 (unquantized) model: /home/model.tflite
Int8 quantized model with int8 inputs: /home/model_quantized_int8_io.tflite
Docker containers are a virtualization technique which lets developers package up an application with all dependencies in a single package. To train your custom model you'll need to wrap all the required packages, your scripts, and (if you use transfer learning) your pretrained weights into this container. When running in Edge Impulse the container does not have network access, so make sure you don't download dependencies while running (fine when building the container).
A typical Dockerfile might look like:
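For example, a minimal sketch for a TensorFlow/Keras pipeline (the base image is illustrative, and it assumes a requirements.txt listing your framework, e.g. tensorflow and numpy):

```
FROM python:3.8

# Install dependencies at build time - the container has no network access
# when it runs inside Edge Impulse
COPY requirements.txt ./
RUN pip3 install --no-cache-dir -r requirements.txt

# The training script
COPY train.py ./

# ENTRYPOINT so Edge Impulse can pass the training arguments to the script
ENTRYPOINT [ "python3", "-u", "train.py" ]
```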
It's important to create an ENTRYPOINT
at the end of the Dockerfile to specify which file to run.
The train script will receive the following arguments:
--epochs <epochs>
- number of epochs to train (e.g. 50
).
--learning-rate <lr>
- learning rate (e.g. 0.001
).
--validation-set-size <size>
- size of the validation set (e.g. 0.2
for 20% of total training set).
--input-shape <shape>
- shape of the training data (e.g. (320,320,3)
for a 320x320 RGB image).
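Putting the documented arguments and file locations together, a minimal Keras-based train script could look like the sketch below. The tiny dense network is a placeholder for your own architecture, and the 1-based label assumption, the input-shape parsing and the representative dataset used for int8 quantization are simplifications:

```
import argparse, ast

import numpy as np
import tensorflow as tf

# Arguments as documented above
parser = argparse.ArgumentParser(description='Custom learning block (sketch)')
parser.add_argument('--epochs', type=int, default=50)
parser.add_argument('--learning-rate', type=float, default=0.001)
parser.add_argument('--validation-set-size', type=float, default=0.2)
parser.add_argument('--input-shape', type=str, required=True)
args, _ = parser.parse_known_args()

input_shape = ast.literal_eval(args.input_shape)   # e.g. "(320,320,3)" -> (320, 320, 3)
if isinstance(input_shape, int):
    input_shape = (input_shape,)

# Training data and labels as provided by Edge Impulse (classification case)
X = np.load('/home/X_train_features.npy')
X = X.reshape((X.shape[0],) + tuple(input_shape))
Y = np.load('/home/y_train.npy')[:, 0]             # first column is the class index (assumed 1-based here)
num_classes = int(Y.max())

# Placeholder architecture - replace with your own model / framework
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=input_shape),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(32, activation='relu'),
    tf.keras.layers.Dense(num_classes, activation='softmax'),
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=args.learning_rate),
              loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(X, Y - 1, epochs=args.epochs, validation_split=args.validation_set_size)

# Float32 (unquantized) model
converter = tf.lite.TFLiteConverter.from_keras_model(model)
with open('/home/model.tflite', 'wb') as f:
    f.write(converter.convert())

# Int8 quantized model with int8 inputs/outputs
def representative_dataset():
    for row in X[:100]:
        yield [np.expand_dims(row, axis=0).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
with open('/home/model_quantized_int8_io.tflite', 'wb') as f:
    f.write(converter.convert())
```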
To run the container, first create a home
folder in your script directory and copy the training data / training labels here (named X_train_features.npy
and y_train.npy
). Then build and run the container:
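For example (the image tag and argument values are illustrative; --input-shape should match the output of your DSP block):

```
docker build -t custom-learning-block .
docker run --rm -v "$PWD/home":/home custom-learning-block \
    --epochs 50 --learning-rate 0.001 --validation-set-size 0.2 --input-shape "(33,)"
```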
This should train the model and spit out .tflite files.
If your block works you can bring it into Edge Impulse via:
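This is again handled by the blocks tool in the Edge Impulse CLI:

```
edge-impulse-blocks init
edge-impulse-blocks push
```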
To edit the block, go to your organization, Custom blocks > Transfer learning models.
The block is now available for every Edge Impulse project under your organization.
Unfortunately object detection models typically don't have a standard way to go from neural network output layer to bounding boxes. Currently we support the following types of output layers:
MobileNet SSD
Edge Impulse FOMO
YOLOv5
If you have an object detection model with a different output layer then please contact your user success engineer with an example on how to interpret the output, and we can add it.
The Arduino Nano 33 BLE Sense is a tiny development board with a Cortex-M4 microcontroller, motion sensors, a microphone and BLE - and it's fully supported by Edge Impulse. You'll be able to sample raw data, build models, and deploy trained machine learning models directly from the studio. It's available for around 30 USD from Arduino and a wide range of distributors.
You can also use the Arduino Tiny Machine Learning Kit to run image classification models on the edge with the Arduino Nano and attached OV7675 camera module (or connect the hardware together via jumper wire and a breadboard if purchased separately).
The Edge Impulse firmware for this development board is open source and hosted on GitHub: edgeimpulse/firmware-arduino-nano-33-ble-sense.
To set this device up in Edge Impulse, you will need to install the following software:
Here's an instruction video for Windows.
The Arduino website has instructions for macOS and Linux.
On Linux:
GNU Screen: install for example via sudo apt install screen
.
Problems installing the CLI?
See the Installation and troubleshooting guide.
With all the software in place it's time to connect the development board to Edge Impulse.
Use a micro-USB cable to connect the development board to your computer. Then press RESET twice to launch into the bootloader. The on-board LED should start pulsating to indicate this.
The development board does not come with the right firmware yet. To update the firmware:
Download the latest Edge Impulse firmware, and unzip the file.
Open the flash script for your operating system (flash_windows.bat
, flash_mac.command
or flash_linux.sh
) to flash the firmware.
Wait until flashing is complete, and press the RESET button once to launch the new firmware.
From a command prompt or terminal, run:
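That command is the Edge Impulse daemon from the CLI:

```
edge-impulse-daemon
```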
This will start a wizard which will ask you to log in, and choose an Edge Impulse project. If you want to switch projects run the command with --clean
.
Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI. See this blog post for more information.
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
With everything set up you can now build your first machine learning model with these tutorials:
Looking to connect different sensors? The Data forwarder lets you easily send data from any sensor into Edge Impulse.
You will need the following hardware:
Arduino Nano 33 BLE Sense board with headers.
OV7675 camera module.
Micro-USB cable.
Solderless breadboard and female-to-male jumper wires.
First, slot the Arduino Nano 33 BLE Sense board into a solderless breadboard:
With female-to-male jumper wire, use the following wiring diagram, pinout diagrams, and connection table to link the OV7675 camera module to the microcontroller board via the solderless breadboard:
Download the full pinout diagram of the Arduino Nano 33 BLE Sense here.
Finally, use a micro-USB cable to connect the Arduino Nano 33 BLE Sense development board to your computer.
Now build & train your own image classification model and deploy to the Arduino Nano 33 BLE Sense with Edge Impulse!
The Himax WE-I Plus is a tiny development board with a camera, a microphone, an accelerometer and a very fast DSP - and it's fully supported by Edge Impulse. You'll be able to sample raw data, build models, and deploy trained machine learning models directly from the studio. It's available for around 65 USD from .
The Edge Impulse firmware for this development board is open source and hosted on GitHub: .
To set this device up in Edge Impulse, you will need to install the following software:
On Linux:
GNU Screen: install for example via sudo apt install screen
.
Problems installing the CLI?
With all the software in place it's time to connect the development board to Edge Impulse.
Use a micro-USB cable to connect the development board to your computer.
The development board does not come with the right firmware yet. To update the firmware:
Open the flash script for your operating system (flash_windows.bat
, flash_mac.command
or flash_linux.sh
) to flash the firmware.
Wait until flashing is complete, and press the RESET button once to launch the new firmware.
From a command prompt or terminal, run:
This will start a wizard which will ask you to log in, and choose an Edge Impulse project. If you want to switch projects run the command with --clean
.
With everything set up you can now build your first machine learning model with these tutorials:
Espressif ESP-EYE (ESP32) is a compact development board based on Espressif's ESP32 chip, equipped with a 2-Megapixel camera and a microphone. ESP-EYE also offers plenty of storage, with 8 MB PSRAM and 4 MB SPI flash - and it's fully supported by Edge Impulse. You'll be able to sample raw data, build models, and deploy trained machine learning models directly from the studio. It's available for around 22 USD from and a wide range of distributors.
There are plenty of other boards built with the ESP32 chip - and of course there are custom designs utilizing the ESP32 SoM. The Edge Impulse firmware was tested with the ESP-EYE and ESP FireBeetle boards, but it is possible to modify the firmware for use with other ESP32 designs. Read more on that in the section of this documentation.
The Edge Impulse firmware for this development board is open source and hosted on GitHub: .
To set this device up in Edge Impulse, you will need to install the following software:
Python 3.
On Linux:
GNU Screen: install for example via sudo apt install screen
.
Problems installing the CLI?
With all the software in place it's time to connect the development board to Edge Impulse.
Use a micro-USB cable to connect the development board to your computer.
The development board does not come with the right firmware yet. To update the firmware:
Open the flash script for your operating system (flash_windows.bat
, flash_mac.command
or flash_linux.sh
) to flash the firmware.
Wait until flashing is complete.
From a command prompt or terminal, run:
This will start a wizard which will ask you to log in, and choose an Edge Impulse project. If you want to switch projects run the command with --clean
.
With everything set up you can now build your first machine learning model with these tutorials:
The standard firmware supports the following sensors:
Camera: OV2640, OV3660, OV5640 modules from Omnivision
Microphone: I2S microphone on ESP-EYE (MIC8-4X3-1P0)
LIS3DHTR module connected to I2C (SCL pin 22, SDA pin 21)
Any analog sensor, connected to A0
ESP32 is a very popular chip, both in community projects and in industry, due to its high performance, low price and the large amount of documentation/support available. There are other camera-enabled development boards based on ESP32 which can use the Edge Impulse firmware after applying certain changes, e.g.
AI-Thinker ESP-CAM
M5STACK ESP32 PSRAM Timer Camera X (OV3660)
M5STACK ESP32 Camera Module Development Board (OV2640)
Additionally, since the Edge Impulse firmware is open source and available to the public, if you have made modifications or added new sensor capabilities, we encourage you to open a PR in the firmware repository!
CY8CKIT-062S2 Pioneer Kit and CY8CKIT-028-SENSE expansion kit required
This guide assumes you have the CY8CKIT-028-SENSE expansion kit attached to a CY8CKIT-062S2 Pioneer Kit.
The Infineon Semiconductor CY8CKIT-062S2 Pioneer Kit enables the evaluation and development of applications using the PSoC 62 Series MCU. This low-cost hardware platform enables the design and debug of the PSoC 62 MCU and the Murata 1LV Module (CYW43012 Wi-Fi + Bluetooth Combo Chip). The PSoC 6 MCU is Infineon's latest, ultra-low-power PSoC specifically designed for wearables and IoT products. The board features a PSoC 6 MCU, and a CYW43012 Wi-Fi/Bluetooth combo module. Infineon CYW43012 is a 28nm, ultra-low-power device that supports single-stream, dual-band IEEE 802.11n-compliant Wi-Fi MAC/baseband/radio and Bluetooth 5.0 BR/EDR/LE. When paired with the CY8CKIT-028-SENSE expansion kit, the PSoC® 62S2 Wi-Fi® BLUETOOTH® Pioneer Kit can be used to easily interface a variety of sensors with the PSoC™ 6 MCU platform, specifically targeted for audio and machine learning applications which are fully supported by Edge Impulse! You'll be able to sample raw data as well as build and deploy trained machine learning models to your PSoC® 62S2 Wi-Fi® BLUETOOTH® Pioneer Kit, directly from the Edge Impulse Studio.
The Edge Impulse firmware for this development board is open source and hosted on GitHub: .
To set this device up with Edge Impulse, you will need to install the following software:
Problems installing the CLI?
Then select the base firmware image file you downloaded in the first step above (i.e., the file named firmware-infineon-cy8ckit-062s2.hex
). You can now press the Connect
button to connect to the board, and finally the Program
button to load the base firmware image onto the CY8CKIT-062S2 Pioneer Kit.
With all the software in place, it's time to connect the CY8CKIT-062S2 Pioneer Kit to Edge Impulse.
Use a micro-USB cable to connect the development board to your computer.
From a command prompt or terminal, run:
This will start a wizard which will ask you to log in, and choose an Edge Impulse project. If you want to switch projects run the command with --clean
.
With everything set up you can now build your first machine learning model with these tutorials:
The Nicla Sense ME is a tiny, low-power tool that sets a new standard for intelligent sensing solutions. With the simplicity of integration and scalability of the Arduino ecosystem, the board combines four state-of-the-art sensors from Bosch Sensortec:
BHI260AP motion sensor system with integrated AI.
BMM150 magnetometer.
BMP390 pressure sensor.
BME688 4-in-1 gas sensor with AI and integrated high-linearity, as well as high-accuracy pressure, humidity and temperature sensors.
Designed to easily analyze motion and the surrounding environment – hence the “M” and “E” in the name – it measures rotation, acceleration, pressure, humidity, temperature, air quality and CO2 levels by introducing completely new Bosch Sensortec sensors on the market.
Its tiny size and robust design make it suitable for projects that need to combine sensor fusion and AI capabilities on the edge, thanks to a strong computational power and low-consumption combination that can even lead to standalone applications when battery operated.
The Arduino Nicla Sense ME is available for around 55 USD from the .
To set this device up in Edge Impulse, you will need to install the following software:
On Linux:
GNU Screen: install for example via sudo apt install screen
.
Problems installing the CLI?
With all the software in place it's time to connect the development board to Edge Impulse.
Use a micro-USB cable to connect the development board to your computer.
The development board does not come with the right firmware yet. To update the firmware:
Open the nicla_sense_ingestion.ino
sketch in a text editor or the Arduino IDE.
For data ingestion into your Edge Impulse project, at the top of the file, select 1 or multiple sensors by un-commenting the defines and select a desired sample frequency (in Hz). For example, for the Environmental sensors:
Then, from your sketch's directory, run the Arduino CLI to compile:
Then flash to your Nicla Sense using the Arduino CLI:
Wait until flashing is complete, and press the RESET button once to launch the new firmware.
From a command prompt or terminal, run:
This will start a wizard which will ask you to log in, and choose an Edge Impulse project. You will also name your sensor's axes (depending on which sensor you selected in your compiled nicla_sense_ingestion.ino
sketch). If you want to switch projects/sensors run the command with --clean
. Please refer to the following table for the names used for each axis corresponding to the type of sensor:
Note: These exact axis names are required for the Edge Impulse Arduino library deployment example applications for the Nicla Sense.
With the impulse designed, trained and verified you can deploy this model back to your Arduino Nicla Sense ME. This makes the model run without an internet connection, minimizes latency, and runs with minimum power consumption. Edge Impulse can package the complete impulse - including the signal processing code, neural network weights, and classification code - up into a single library that you can run on your development board.
The Nicla Vision is a ready-to-use, standalone camera for analyzing and processing images on the Edge. Thanks to its 2MP color camera, smart 6-axis motion sensor, integrated microphone, and distance sensor, it is suitable for asset tracking, object recognition, and predictive maintenance. Some of its key features include:
Powerful microcontroller equipped with a 2MP color camera
Tiny form factor of 22.86 x 22.86 mm
Integrated microphone, distance sensor, and intelligent 6-axis motion sensor
Onboard Wi-Fi and Bluetooth® Low Energy connectivity
Standalone when battery-powered
Expand existing projects with sensing capabilities
Enable fast Machine Vision prototyping
Compatible with Nicla, Portenta, and MKR products
Its exceptional capabilities are supported by a powerful STMicroelectronics STM32H747AII6 Dual ARM® Cortex® processor, combining an M7 core up to 480 MHz and an M4 core up to 240 MHz. Despite its industrial strength, it keeps energy consumption low for battery-powered standalone applications.
The Arduino Nicla Vision is available for around 95 EUR from the .
To set this device up in Edge Impulse, you will need to install the following software:
On Linux:
GNU Screen: install for example via sudo apt install screen
.
Problems installing the CLI?
With all the software in place it's time to connect the development board to Edge Impulse.
Use a micro-USB cable to connect the development board to your computer.
The development board does not come with the right firmware yet. To update the firmware:
Open the nicla_vision_ingestion.ino
(for IMU/proximity sensor) or nicla_vision_ingestion_mic.ino
(for microphone) sketch in a text editor or the Arduino IDE.
For IMU/proximity sensor data ingestion into your Edge Impulse project, at the top of the file, select 1 or multiple sensors by un-commenting the defines and select a desired sample frequency (in Hz). For example, for the accelerometer sensor:
For microphone data ingestion, you do not need to change the default parameters in nicla_vision_ingestion_mic.ino
sketch.
Then, from your sketch's directory, run the Arduino CLI to compile:
Then flash to your Nicla Vision using the Arduino CLI:
Alternatively, if you opened the sketch in Arduino IDE, you can compile and upload the sketch from there.
Wait until flashing is complete, and press the RESET button once to launch the new firmware.
From a command prompt or terminal, run:
This will start a wizard which will ask you to log in, and choose an Edge Impulse project. You will also name your sensor's axes (depending on which sensor you selected in your compiled nicla_vision_ingestion.ino
sketch). If you want to switch projects/sensors run the command with --clean
. Please refer to the following table for the names used for each axis corresponding to the type of sensor:
Note: These exact axis names are required for the Edge Impulse Arduino library deployment example applications for the Nicla Vision.
From a command prompt or terminal, run:
This will start a wizard which will ask you to log in, and choose an Edge Impulse project. You will also name your sensor axes - in the case of the microphone, you need to enter audio. If you want to switch projects/sensors run the command with --clean. Please refer to the following table for the names used for each axis corresponding to the type of sensor:
Note: This exact axis name is required for the Edge Impulse Arduino library deployment example application for the Nicla Vision microphone ingestion.
With everything set up you can now build your first machine learning model with these tutorials:
With the impulse designed, trained and verified you can deploy this model back to your Arduino Nicla Vision. This makes the model run without an internet connection, minimizes latency, and runs with minimum power consumption. Edge Impulse can package the complete impulse - including the signal processing code, neural network weights, and classification code - up into a single library that you can run on your development board.
The Portenta H7 is a powerful development board from Arduino with both a Cortex-M7 microcontroller and a Cortex-M4 microcontroller, a BLE/WiFi radio, and an extension slot to connect the Portenta vision shield - which adds a camera and dual microphones. At the moment the Portenta H7 is partially supported by Edge Impulse, letting you collect data from the camera, build computer vision models, and deploy trained machine learning models back to the development board. The Portenta H7 and the vision shield are available directly from Arduino for ~$150 in total.
There are two versions of the vision shield: one that has an Ethernet connection and one with a LoRa radio. Both of these can be used with Edge Impulse.
The Edge Impulse firmware for this development board is open source and hosted on GitHub: .
To set this device up in Edge Impulse, you will need to install the following software:
On Linux:
GNU Screen: install for example via sudo apt install screen
.
Problems installing the CLI?
With all the software in place it's time to connect the development board to Edge Impulse.
Attach the vision shield to the Portenta H7 using the two edge connectors on the back of the board.
Use a USB-C cable to connect the development board to your computer. Then, double-tap the RESET button to put the device into bootloader mode. You should see the green LED on the front pulsating.
The development board does not come with the right firmware yet. To update the firmware:
Double press on the RESET button on your board to put it in the bootloader mode.
Open the flash script for your operating system (flash_windows.bat
, flash_mac.command
or flash_linux.sh
) to flash the firmware.
Wait until flashing is complete, and press the RESET button once to launch the new firmware.
From a command prompt or terminal, run:
This will start a wizard which will ask you to log in, and choose an Edge Impulse project. If you want to switch projects run the command with --clean
.
With everything set up you can now build your first machine learning model with these tutorials:
Download your custom firmware from the Deployment tab in the Studio and install the firmware with the same method as in the "Update the firmware" section and run the edge-impulse-run-impulse
command:
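That is, from a command prompt or terminal:

```
edge-impulse-run-impulse
```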
Note that it may take up to 10 minutes to compile the firmware for the Arduino Portenta H7.
If you come across this issue:
You probably forgot to double press the RESET button before running the flash script.
The Nordic Semiconductor nRF5340 DK is a development board with dual Cortex-M33 microcontrollers, QSPI flash, and an integrated BLE radio - and it's fully supported by Edge Impulse. You'll be able to sample raw data, build models, and deploy trained machine learning models directly from the studio. As the nRF5340 DK does not have any built-in sensors we recommend you to pair this development board with the shield (with a MEMS accelerometer and a MEMS microphone). The nRF5340 DK is available for around 50 USD from a variety of distributors.
If you don't have the X-NUCLEO-IKS02A1 shield you can use the to capture data from any other sensor, and then follow the tutorial to run your impulse. Or, you can modify the example firmware (based on nRF Connect) to interact with other accelerometers or PDM microphones that are supported by Zephyr.
The Edge Impulse firmware for this development board is open source and hosted on GitHub: .
To set this device up in Edge Impulse, you will need to install the following software:
On Linux:
GNU Screen: install for example via sudo apt install screen
.
Problems installing the CLI?
With all the software in place it's time to connect the development board to Edge Impulse.
Remove the pin header protectors on the nRF5340 DK and plug the X-NUCLEO-IKS02A1 shield into the development board.
Note: Make sure that the shield does not touch any of the pins in the middle of the development board. This might cause issues when flashing the board or running applications.
Use a micro-USB cable to connect the development board to your computer. There are two USB ports on the development board, use the one on the short side of the board. Then, set the power switch to 'on'.
The development board does not come with the right firmware yet. To update the firmware:
The development board is mounted as a USB mass-storage device (like a USB flash drive), with the name JLINK
. Make sure you can see this drive.
Drag the nrf5340-dk.bin
file to the JLINK
drive.
Wait 20 seconds and press the BOOT/RESET button.
From a command prompt or terminal, run:
This starts a wizard which asks you to log in and choose an Edge Impulse project. If you want to switch projects run the command with --clean
.
The nRF5340 DK exposes multiple UARTs. If prompted, choose the bottom one:
With everything set up you can now build your first machine learning model with these tutorials:
If your board fails to flash new firmware (a FAIL.txt
file might appear on the JLINK
drive) you can also flash using nrfjprog
.
Flash new firmware via:
The Nordic Semiconductor nRF9160 DK is a development board with an nRF9160 SIP incorporating a Cortex M-33 for your application, a full LTE-M/NB-IoT modem with GPS along with 1 MB of flash and 256 KB RAM. It also includes an nRF52840 board controller with Bluetooth Low Energy connectivity. The Development Kit is fully supported by Edge Impulse. You'll be able to sample raw data, build models, and deploy trained machine learning models directly from the studio. As the nRF9160 DK does not have any built-in sensors we recommend you to pair this development board with the shield (with a MEMS accelerometer and a MEMS microphone). The nRF9160 DK is available for around 150 USD from a variety of distributors including .
If you don't have the X-NUCLEO-IKS02A1 shield you can use the to capture data from any other sensor, and then follow the tutorial to run your impulse. Or, you can modify the example firmware (based on nRF Connect) to interact with other accelerometers or PDM microphones that are supported by Zephyr.
The Edge Impulse firmware for this development board is open source and hosted on GitHub: .
To set this device up in Edge Impulse, you will need to install the following software:
On Linux:
GNU Screen: install for example via sudo apt install screen
.
Problems installing the CLI?
With all the software in place it's time to connect the development board to Edge Impulse.
Remove the pin header protectors on the nRF9160 DK and plug the X-NUCLEO-IKS02A1 shield into the development board.
Note: Make sure that the shield does not touch any of the pins in the middle of the development board. This might cause issues when flashing the board or running applications. You can also remove the shield before flashing the board.
Use a micro-USB cable to connect the development board to your computer. There are two USB ports on the development board, use the one on the short side of the board. Then, set the power switch to 'on'.
The development board does not come with the right firmware yet. To update the firmware:
The development board is mounted as a USB mass-storage device (like a USB flash drive), with the name JLINK
. Make sure you can see this drive.
Flash the board controller (you only need to do this once). Go to step 4 if you've performed this step before.
Ensure that the PROG/DEBUG
switch is in nRF52
position.
Copy board-controller.bin
to the JLINK
mass storage device.
Flash the application:
Ensure that the PROG/DEBUG
switch is in nRF91
position.
Run the flash script for your Operating System.
Wait 20 seconds and press the BOOT/RESET button.
From a command prompt or terminal, run:
This starts a wizard which asks you to log in and choose an Edge Impulse project. If you want to switch projects run the command with --clean
.
The nRF9160 DK exposes multiple UARTs. If prompted, choose the top one:
With everything set up you can now build your first machine learning model with these tutorials:
The Nordic Semiconductor nRF52840 DK is a development board with a Cortex-M4 microcontroller, QSPI flash, and an integrated BLE radio - and it's fully supported by Edge Impulse. You'll be able to sample raw data, build models, and deploy trained machine learning models directly from the studio. As the nRF52840 DK does not have any built-in sensors we recommend you to pair this development board with the shield (with a MEMS accelerometer and a MEMS microphone). The nRF52840 DK is available for around 50 USD from a variety of distributors including .
If you don't have the X-NUCLEO-IKS02A1 shield you can use the to capture data from any other sensor, and then follow the tutorial to run your impulse. Or, you can modify the example firmware (based on nRF Connect) to interact with other accelerometers or PDM microphones that are supported by Zephyr.
The Edge Impulse firmware for this development board is open source and hosted on GitHub: .
To set this device up in Edge Impulse, you will need to install the following software:
On Linux:
GNU Screen: install for example via sudo apt install screen
.
Problems installing the CLI?
With all the software in place it's time to connect the development board to Edge Impulse.
Remove the pin header protectors on the nRF52840 DK and plug the X-NUCLEO-IKS02A1 shield into the development board.
Note: Make sure that the shield does not touch any of the pins in the middle of the development board. This might cause issues when flashing the board or running applications.
Use a micro-USB cable to connect the development board to your computer. There are two USB ports on the development board, use the one on the short side of the board. Then, set the power switch to 'on'.
The development board does not come with the right firmware yet. To update the firmware:
The development board is mounted as a USB mass-storage device (like a USB flash drive), with the name JLINK
. Make sure you can see this drive.
Drag the nrf52840-dk.bin
file to the JLINK
drive.
Wait 20 seconds and press the BOOT/RESET button.
From a command prompt or terminal, run:
This will start a wizard which will ask you to log in and choose an Edge Impulse project. If you want to switch projects run the command with --clean
.
With everything set up you can now build your first machine learning model with these tutorials:
If you don't see the JLINK
drive show up when you connect your nRF52840 DK you'll have to update the interface firmware.
Set the power switch to 'off'.
Hold BOOT/RESET while you set the power switch to 'on'.
Your development board should be mounted as BOOTLOADER
.
After 20 seconds disconnect the USB cable, and plug the cable back in.
The development board should now be mounted as JLINK
.
If your board fails to flash new firmware (a FAIL.txt
file might appear on the JLINK
drive) you can also flash using nrfjprog
.
Flash new firmware via:
.
See the guide.
, and unzip the file.
Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI. See for more information.
That's all! Your device is now connected to Edge Impulse. To verify this, go to , and click Devices. The device will be listed here.
Looking to connect different sensors? The lets you easily send data from any sensor into Edge Impulse.
If you export to the Himax WE-I Plus you could receive the error: "All licenses are in use by other developers.". Unfortunately we have a limited number of licenses for the MetaWare compiler and these are shared between all Studio users. Try again in a little bit, or export your project as a C++ Library, add it to the project and compile locally.
If no device shows up in your OS (ie: COMxx, /dev/tty.usbxx) after connecting the board and your USB cable supports data transfer, you may need to install .
.
.
The has instructions for macOS and Linux.
See the guide.
, and unzip the file.
Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI. See for more information.
That's all! Your device is now connected to Edge Impulse. To verify this, go to , and click Devices. The device will be listed here.
.
.
.
.
.
Looking to connect different sensors? The lets you easily send data from any sensor into Edge Impulse.
The analog sensor and LIS3DHTR module were tested on ESP32 FireBeetle board and .
The pins used for the camera connection on different development boards are not the same, therefore you will need to change the #define to fit your development board, then compile and flash the firmware. Specifically for the AI-Thinker ESP-CAM, since this board needs an external USB-to-TTL serial cable to upload code to/communicate with the board, the data transfer baud rate must be changed to 115200.
The analog sensor and LIS3DH accelerometer can be used on any other development board without changes, as long as the interface pins are not changed. If the I2C/ADC pins that the accelerometer/analog sensor are connected to differ from those described in the Sensors available section, you will need to update the pin definitions in the LIS3DHTR component for ESP32, then compile and flash it to your board.
. A utility program we will use to flash firmware images onto the target.
The which will enable you to connect your CY8CKIT-062S2 Pioneer Kit directly to Edge Impulse Studio, so that you can collect raw data and trigger in-system inferences.
See the guide.
Edge Impulse Studio can collect data directly from your CY8CKIT-062S2 Pioneer Kit and also help you trigger in-system inferences to debug your model, but in order to allow Edge Impulse Studio to interact with your CY8CKIT-062S2 Pioneer Kit you first need to flash it with our .
, and unzip the file. Once downloaded, unzip it to obtain the firmware-infineon-cy8ckit-062s2.hex
file, which we will be using in the following steps.
Use a micro-USB cable to connect the CY8CKIT-062S2 Pioneer Kit to your development computer (where you downloaded and installed ).
You can use to flash your CY8CKIT-062S2 Pioneer Kit with our . To do this, first select your board from the dropdown list on the top left corner. Make sure to select the item that starts with CY8CKIT-062S2-43012
:
Keep Handy
will be needed to upload any other project built on Edge Impulse, but the base firmware image only has to be loaded once.
Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI. See for more information.
That's all! Your device is now connected to Edge Impulse. To verify this, go to , and click Devices on the left sidebar. The device will be listed there:
.
.
.
Looking to connect different sensors? The lets you easily send data from any sensor into Edge Impulse.
.
.
Here's an .
The has instructions for macOS and Linux.
See the guide.
.
That's all! Your device is now connected to Edge Impulse. To verify this, go to , and click Devices. The device will be listed here.
With everything set up you can now build your first machine learning model with the .
Looking to connect different sensors? Use the nicla_sense_ingestion
sketch and the Edge Impulse to easily send data from any sensor on the Nicla Sense into your Edge Impulse project.
Use the tutorial and select one of the Nicla Sense examples.
The Edge Impulse CLI installation guide has instructions for macOS and Linux.
See the Installation and troubleshooting guide.
Download the latest Edge Impulse firmware and unzip the file.
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
Looking to connect different sensors? Use the nicla_vision_ingestion
sketch and the Edge Impulse Data forwarder to easily send data from any sensor on the Nicla Vision into your Edge Impulse project.
Use the tutorial and select one of the Nicla Vision examples.
The Edge Impulse CLI installation guide has instructions for macOS and Linux.
See the Installation and troubleshooting guide.
Download the latest Edge Impulse firmware, and unzip the file.
Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI. See this blog post for more information.
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
Looking to connect different sensors? The Data forwarder lets you easily send data from any sensor into Edge Impulse.
Use the tutorial and select one of the Portenta examples:
For an end-to-end example that classifies data and then sends the result over LoRaWAN, please see the example.
See the Installation and troubleshooting guide.
Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI. See this blog post for more information.
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
Looking to connect different sensors? The Data forwarder lets you easily send data from any sensor into Edge Impulse.
Install the .
See the Installation and troubleshooting guide.
Install the .
Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI. See this blog post for more information.
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
Looking to connect different sensors? The Data forwarder lets you easily send data from any sensor into Edge Impulse.
See the Installation and troubleshooting guide.
If this is not the case, see the troubleshooting section at the bottom of this page.
Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI. See this blog post for more information.
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
Looking to connect different sensors? The Data forwarder lets you easily send data from any sensor into Edge Impulse.
Download the latest Edge Impulse firmware and drag the .bin file onto the BOOTLOADER drive.
Install the .
Sensor and axis names:
#define SAMPLE_ACCELEROMETER: accX, accY, accZ
#define SAMPLE_GYROSCOPE: gyrX, gyrY, gyrZ
#define SAMPLE_ORIENTATION: heading, pitch, roll
#define SAMPLE_ENVIRONMENTAL: temperature, barometer, humidity, gas
#define SAMPLE_ROTATION_VECTOR: rotX, rotY, rotZ, rotW
Sensor and axis names:
#define SAMPLE_ACCELEROMETER: accX, accY, accZ
#define SAMPLE_GYROSCOPE: gyrX, gyrY, gyrZ
#define SAMPLE_PROXIMITY: cm
The OpenMV Cam is a small and low-power development board with a Cortex-M7 microcontroller supporting MicroPython, a μSD card socket and a camera module capable of taking 5MP images - and it's fully supported by Edge Impulse. You'll be able to sample raw data, build models, and deploy trained machine learning models through the studio and the OpenMV IDE. It is available for 80 USD directly from OpenMV.
To set this device up in Edge Impulse, you will need to install the following software:
Problems installing the CLI?
See the installation and troubleshooting guide.
With all the software in place it's time to connect the development board to Edge Impulse. To make this easy we've put some tutorials together which take you through all the steps to acquire data, train a model, and deploy this model back to your device.
Adding sight to your sensors - end-to-end tutorial.
Collecting image data with the OpenMV Cam H7 Plus - collecting datasets using the OpenMV IDE.
Running your impulse on your OpenMV camera - run your trained impulse on the OpenMV Cam H7 Plus.
The ST IoT Discovery Kit (also known as the B-L475E-IOT01A) is a development board with a Cortex-M4 microcontroller, MEMS motion sensors, a microphone and WiFi - and it's fully supported by Edge Impulse. You'll be able to sample raw data, build models, and deploy trained machine learning models directly from the studio. It's available for around 50 USD from a variety of distributors including Digikey.
The Edge Impulse firmware for this development board is open source and hosted on GitHub: edgeimpulse/firmware-st-b-l475e-iot01a.
Two variants of this board
There are two variants of this board, the B-L475E-IOT01A1 (US region) and the B-L475E-IOT01A2 (EU region) - the only difference is the sub-GHz radio. Both are usable in Edge Impulse.
To set this device up in Edge Impulse, you will need to install the following software:
On Windows:
ST Link - drivers for the development board. Run dpinst_amd64
on 64-bit Windows, or dpinst_x86
on 32-bit Windows.
On Linux:
GNU Screen: install for example via sudo apt install screen
.
Problems installing the CLI?
See the Installation and troubleshooting guide.
With all the software in place it's time to connect the development board to Edge Impulse.
Use a micro-USB cable to connect the development board to your computer. There are two USB ports on the development board; use the one furthest from the buttons.
The development board does not come with the right firmware yet. To update the firmware:
The development board is mounted as a USB mass-storage device (like a USB flash drive), with the name DIS_L4IOT
. Make sure you can see this drive.
Drag the DISCO-L475VG-IOT01A.bin
file to the DIS_L4IOT
drive.
Wait until the LED stops flashing red and green.
From a command prompt or terminal, run:
This will start a wizard which will ask you to log in, choose an Edge Impulse project, and set up your WiFi network. If you want to switch projects run the command with --clean
.
Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI. See this blog post for more information.
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
With everything set up you can now build your first machine learning model with these tutorials:
Looking to connect different sensors? The Data forwarder lets you easily send data from any sensor into Edge Impulse.
If you experience the following error when attempting to connect to a WiFi network:
You have hit a known issue with the firmware for this development board's WiFi module that results in a timeout during network scanning if there are more than 20 WiFi access points detected. If you are experiencing this issue, you can work around it by attempting to reduce the number of access points within range of the device, or by skipping WiFi configuration.
If the LED does not flash red and green when you copy the .bin
file to the device and instead is a solid red color, and you are unable to connect the device with Edge Impulse, there may be an issue with your device's native firmware.
To restore functionality, use the following tool from ST to update your board to the latest version:
You might need to set up udev rules on Linux before being able to talk to the device. Create a file named /etc/udev/rules.d/50-stlink.rules
and add the following content:
Then unplug the development board and plug it back in.
The Silicon Labs xG24 Dev Kit (xG24-DK2601B) is a compact, feature-packed development platform built for the EFR32MG24 Cortex-M33 microcontroller. It provides the fastest path to develop and prototype wireless IoT products. This development platform supports up to +10 dBm output power and includes support for the 20-bit ADC as well as the xG24's AI/ML hardware accelerator. The platform also features a wide variety of sensors, a microphone, Bluetooth Low Energy and a battery holder - and it's fully supported by Edge Impulse! You'll be able to sample raw data as well as build and deploy trained machine learning models directly from the Edge Impulse Studio - and even stream your machine learning results over BLE to a phone.
The Edge Impulse firmware for this development board is open source and hosted on GitHub: edgeimpulse/firmware-silabs-xg24.
To set this device up with Edge Impulse, you will need to install the following software:
Simplicity Commander. A utility program we will use to flash firmware images onto the target.
The Edge Impulse CLI which will enable you to connect your xG24 Dev Kit directly to Edge Impulse Studio, so that you can collect raw data and trigger in-system inferences.
Problems installing the CLI?
See the Installation and troubleshooting guide.
Edge Impulse Studio can collect data directly from your xG24 Dev Kit and also help you trigger in-system inferences to debug your model, but in order to allow Edge Impulse Studio to interact with your xG24 Dev Kit you first need to flash it with our base firmware image.
Download the latest Edge Impulse firmware and unzip it to obtain the firmware-xg24.hex
file, which we will be using in the following steps.
Use a micro-USB cable to connect the xG24 Dev Kit to your development computer (where you downloaded and installed Simplicity Commander).
You can use Simplicity Commander to flash your xG24 Dev Kit with our base firmware image. To do this, first select your board from the dropdown list on the top left corner:
Then go to the "Flash" section on the left sidebar, and select the base firmware image file you downloaded in the first step above (i.e., the file named firmware-xg24.hex
). You can now press the Flash
button to load the base firmware image onto the xG24 Dev Kit.
Keep Simplicity Commander Handy
Simplicity Commander will be needed to upload any other project built on Edge Impulse, but the base firmware image only has to be loaded once.
With all the software in place, it's time to connect the xG24 Dev Kit to Edge Impulse.
Use a micro-USB cable to connect the development board to your computer.
From a command prompt or terminal, run:
This will start a wizard which will ask you to log in, and choose an Edge Impulse project. If you want to switch projects run the command with --clean
.
Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI. See this blog post for more information.
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices on the left sidebar. The device will be listed there:
With everything set up you can now build your first machine learning model with these tutorials:
Looking to connect different sensors? The Data forwarder lets you easily send data from any sensor into Edge Impulse.
The Nordic Semiconductor Thingy:91 is an easy-to-use battery-operated prototyping platform for cellular IoT using LTE-M, NB-IoT and GPS. It is ideal for creating Proof-of-Concept (PoC), demos and initial prototypes in your cIoT development phase. Thingy:91 is built around the nRF9160 SiP and is certified for a broad range of LTE bands globally, meaning the Nordic Thingy:91 can be used just about anywhere in the world. There is an nRF52840 multiprotocol SoC on the Thingy:91. This offers the option of adding Bluetooth Low Energy connectivity to your project.
Nordic's Thingy:91 is fully supported by Edge Impulse. You'll be able to sample raw data, build models, and deploy trained machine learning models directly from the studio. Thingy:91 is available for around 120 USD from a variety of distributors.
The Edge Impulse firmware for this development board is open source and hosted on GitHub: edgeimpulse/firmware-nordic-thingy91.
To set this device up in Edge Impulse, you will need to install the following software:
nRF Connect for Desktop v3.7.1 - install exactly version 3.7.1. Please follow the instructions below to downgrade or newly install v3.7.1:
On Linux:
GNU Screen: install for example via sudo apt install screen
.
Problems installing the CLI?
See the Installation and troubleshooting guide.
Before you start a new project, you need to update the Thingy:91 firmware to our latest build.
Use a micro-USB cable to connect the development board to your computer. Then, set the power switch to 'on'.
Download the latest Edge Impulse firmware. The extracted archive contains the following files:
firmware.hex
: the Edge Impulse firmware image for the nRF9160 SoC, and
connectivity-bridge.hex
: a connectivity application for the nRF52840 that you only need on older boards (hardware version < 1.4)
Open nRF Connect for Desktop and launch the Programmer application.
Scroll down in the menu on the right and make sure Enable MCUboot is selected.
Switch off the Nordic Thingy:91.
Press the multi-function button (SW3) while switching SW1 to the ON position.
In the Programmer navigation bar, click Select device.
In the menu on the right, click Add HEX file > Browse, and select the firmware.hex file from the firmware previously downloaded.
Scroll down in the menu on the right to Device and click Write:
In the MCUboot DFU window, click Write. When the update is complete, a Completed successfully message appears.
You can now disconnect the board.
Thingy:91 hardware version < 1.4.0
Updating the firmware with older hardware versions may fail. Moreover, even if the update works, the device may later fail to connect to Edge Impulse Studio:
In these cases, you will also need to flash the connectivity-bridge.hex
onto the nRF52840 in the Thingy:91. Follow the steps here to update the nRF52840 SOC application with the connectivity-bridge.hex
file through USB.
If this method doesn't work, you will need to flash both hex files using an external probe.
With all the software in place it's time to connect the development board to Edge Impulse. From a command prompt or terminal, run:
This starts a wizard which asks you to log in and choose an Edge Impulse project. If you want to switch projects run the command with --clean
.
The Thingy:91 exposes multiple UARTs. If prompted, choose the first one:
Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI. See this blog post for more information.
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
With everything set up you can now build your first machine learning model with this tutorial:
Looking to connect different sensors? The Data forwarder lets you easily send data from any sensor into Edge Impulse.
The Silicon Labs Thunderboard Sense 2 is a complete development board with a Cortex-M4 microcontroller, a wide variety of sensors, a microphone, Bluetooth Low Energy and a battery holder - and it's fully supported by Edge Impulse. You'll be able to sample raw data, build models, and deploy trained machine learning models directly from the studio - and even stream your machine learning results over BLE to a phone. It's available for around 20 USD directly from Silicon Labs.
The Edge Impulse firmware for this development board is open source and hosted on GitHub: edgeimpulse/firmware-silabs-thunderboard-sense-2.
To set this device up in Edge Impulse, you will need to install the following software:
On Linux:
GNU Screen: install for example via sudo apt install screen
.
Problems installing the CLI?
See the Installation and troubleshooting guide.
With all the software in place it's time to connect the development board to Edge Impulse.
Use a micro-USB cable to connect the development board to your computer. The development board should mount as a USB mass-storage device (like a USB flash drive), with the name TB004
. Make sure you can see this drive.
The development board does not come with the right firmware yet. To update the firmware:
Drag the silabs-thunderboard-sense2.bin
file to the TB004
drive.
Wait 30 seconds.
From a command prompt or terminal, run:
This will start a wizard which will ask you to log in and choose an Edge Impulse project. If you want to switch projects run the command with --clean
.
Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI. See this blog post for more information.
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
With everything set up you can now build your first machine learning model with these tutorials:
Looking to connect different sensors? The Data forwarder lets you easily send data from any sensor into Edge Impulse.
Did you know? You can also stream the results of your impulse over BLE to a nearby phone or gateway: see Streaming results over BLE to your phone.
When dragging and dropping an Edge Impulse pre-built .bin firmware file, the binary seems to flash, but when the device reconnects a FAIL.TXT file appears with the contents "Error while connecting to CPU" and the following errors appear from the Edge Impulse CLI impulse runner:
To fix this error, install the Simplicity Studio 5 IDE and flash the binary through the IDE's built-in "Upload application..." menu under "Debug Adapters", selecting your Edge Impulse firmware to flash:
Your Edge Impulse inferencing application should then run successfully with edge-impulse-run-impulse
.
Sony's Spresense is a small but powerful development board with a 6-core Cortex-M4F microcontroller and integrated GPS, and a wide variety of add-on modules including an extension board with a headphone jack, SD card slot and microphone pins, a camera board, a sensor board with accelerometer, pressure, and geomagnetism sensors, and a Wi-Fi board - and it's fully supported by Edge Impulse. You'll be able to sample raw data, build models, and deploy trained machine learning models directly from the studio.
To get started with the Sony Spresense and Edge Impulse you'll need:
The Spresense main development board - available for around 55 USD from a wide range of distributors.
The Spresense extension board - to connect external sensors.
A micro-SD card to store samples.
In addition you'll want some sensors. These ones are fully supported (note that you can collect data from any sensor on the Spresense with the data forwarder):
For image models: the Spresense CXD5602PWBCAM1 camera add-on.
For accelerometer models: the Spresense Sensor EVK-70 add-on.
For audio models: an electret microphone and a 2.2K Ohm resistor, wired to the extension board's audio channel A, following this schematic (picture here).
Note: for audio models you must also have a FAT formatted SD card for the extension board, with the Spresense's DSP files included in a BIN
folder on the card, see instructions here and a screenshot of the SD card directory here.
The Edge Impulse firmware for this development board is open source and hosted on GitHub: edgeimpulse/firmware-sony-spresense.
To set this device up in Edge Impulse, you will need to install the following software:
On Linux:
GNU Screen: install for example via sudo apt install screen
.
Problems installing the CLI?
See the Installation and troubleshooting guide.
With all the software in place it's time to connect the development board to Edge Impulse.
Make sure the SD card is formatted as FAT before inserting it into the Spresense.
Use a micro-USB cable to connect the main development board (not the extension board) to your computer.
The development board does not come with the right firmware yet. To update the firmware:
Install Python 3.7 or higher.
Download the latest Edge Impulse firmware, and unzip the file.
Open the flash script for your operating system (flash_windows.bat
, flash_mac.command
or flash_linux.sh
) to flash the firmware.
Wait until flashing is complete. The on-board LEDs should stop blinking to indicate that the new firmware is running.
From a command prompt or terminal, run:
Mac: Device choice
If you have a choice of serial ports and are not sure which one to use, pick /dev/tty.SLAB_USBtoUART or /dev/cu.usbserial-*
This will start a wizard which will ask you to log in and choose an Edge Impulse project. If you want to switch projects run the command with --clean
.
Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI. See this blog post for more information.
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
With everything set up you can now build your first machine learning model with these tutorials:
Looking to connect different sensors? The Data forwarder lets you easily send data from any sensor into Edge Impulse.
If you see:
Upgrade pyserial:
If the edge-impulse-daemon
or edge-impulse-run-impulse
commands do not start, it might be because of an error interacting with the SD card or because your board has an old version of the bootloader. To see the debug logs, run:
And press the RESET button on the board. If you see Welcome to nash
you'll need to update the bootloader. To do so:
Install and launch the Arduino IDE.
Go to Preferences and under 'Additional Boards Manager URLs' add https://github.com/sonydevworld/spresense-arduino-compatible/releases/download/generic/package_spresense_index.json
(if there's already text in this text box, add a ,
before adding the new URL).
Then go to Tools > Boards > Board manager, search for 'Spresense' and click Install.
Select the right board via: Tools > Boards > Spresense boards > Spresense.
Select your serial port via: Tools > Port and selecting the serial port for the Spresense board.
Select the Spresense programmer via: Tools > Programmer > Spresense firmware updater.
Update the bootloader via Tools > Burn bootloader.
Then update the firmware again (from step 3: Update the bootloader and the firmware).
The Raspberry Pi RP2040 is the debut microcontroller from Raspberry Pi - and it's fully supported by Edge Impulse. You'll be able to sample raw data, build models, and deploy trained machine learning models directly from the studio. It's available for around 4 USD from the Raspberry Pi Foundation and a wide range of distributors.
To get started with the Raspberry Pi RP2040 and Edge Impulse you'll need:
A Raspberry Pi Pico. The pre-built firmware and the Edge Impulse Studio exported binary are tailored for the Raspberry Pi Pico, but with a few simple steps you can collect data and run your models on other RP2040-based boards, such as the Arduino RP2040 Connect or the Seeed XIAO RP2040. For more details, check out the list of RP2040-based boards further down this page.
(Optional) If you are using the Raspberry Pi Pico, the Grove Shield for Pi Pico makes it easier to connect external sensors for data collection/inference.
The Edge Impulse firmware for this development board is open source and hosted on GitHub.
To set this device up in Edge Impulse, you will need to install the following software:
If you'd like to interact with the board using a set of pre-defined AT commands (not necessary for the standard ML workflow), you will also need to install a serial communication program, for example minicom
, picocom
or use Serial Monitor from Arduino IDE (if installed).
On Linux:
GNU Screen: install for example via sudo apt install screen
.
Problems installing the CLI?
With all the software in place, it's time to connect the development board to Edge Impulse.
Use a micro-USB cable to connect the development board to your computer while holding down the BOOTSEL button, forcing the Raspberry Pi Pico into USB Mass Storage Mode.
The development board does not come with the right firmware yet. To update the firmware:
Drag the ei_rp2040_firmware.uf2
file from the downloaded archive to the USB Mass Storage device.
Wait until flashing is complete, then unplug and replug your board to launch the new firmware.
From a command prompt or terminal, run:
This will start a wizard which will ask you to log in, and choose an Edge Impulse project. If you want to switch projects run the command with --clean
.
With everything set up you can now build your first machine learning model. Since the Raspberry Pi Pico does not have any built-in sensors, the following sensors are supported out of the box by the pre-built firmware:
Analog signal sensor (pin A0).
Once you have the compatible sensors, you can then follow these tutorials:
While the RP2040 is a relatively new microcontroller, it has already been used to build several boards:
The official Raspberry Pi Pico RP2040
Arduino RP2040 Connect (WiFi, Bluetooth, onboard sensors)
Seeed Studio XIAO RP2040 (extremely small footprint)
Black Adafruit Feather RP2040 (built-in LiPoly charger)
And others. While the pre-built Edge Impulse firmware is mainly tested with the Pico board, it is compatible with other boards, with the exception of I2C sensors - different boards use different pins for I2C, so if you'd like to use the LSM6DS3 or LSM6DSOX accelerometer & gyroscope modules, you will need to change the I2C pin values in the Edge Impulse RP2040 firmware source code, recompile it and upload it to the board.
You can use your Linux x86_64 device or computer as a fully-supported development environment for Edge Impulse for Linux. This lets you sample raw data, build models, and deploy trained machine learning models directly from the Studio. If you have a webcam and microphone plugged into your system, they are automatically detected and can be used to build models.
Instruction set architectures
If you are not sure about your instruction set architectures, use:
To set this device up in Edge Impulse, run the following commands:
Ubuntu/Debian:
With all software set up, connect your camera and microphone to your operating system (see 'Next steps' further on this page if you want to connect a different sensor), and run:
This will start a wizard which will ask you to log in, and choose an Edge Impulse project. If you want to switch projects run the command with --clean
.
With everything set up you can now build your first machine learning model with these tutorials:
To run your impulse locally, run the following on your Linux platform:
If you have an image model then you can get a peek at what your device sees by being on the same network as your device, and finding the 'Want to see a feed of the camera and live classification in your browser' message in the console. Open the URL in a browser and both the camera feed and the classification are shown:
The Syntiant TinyML Board is a tiny development board with a microphone and accelerometer, a USB host microcontroller and an always-on Neural Decision Processor™, featuring ultra-low-power consumption and a fully connected neural network architecture - and it's fully supported by Edge Impulse. You'll be able to sample raw data, build models, and deploy trained embedded machine learning models directly from the Edge Impulse studio to create the next generation of low-power, high-performance audio interfaces.
The Edge Impulse firmware for this development board is open source and hosted on GitHub.
IMU data acquisition - SD Card
An SD Card is required to use IMU data acquisition as the internal RAM of the MCU is too small. You don't need the SD Card for inferencing only or for audio projects.
To set this device up in Edge Impulse, you will need to install the following software:
Select one of the two firmware images below for audio or IMU projects:
Insert the SD card if you need IMU data acquisition, and connect the USB cable to your computer. Double-click the script for your OS. The script will flash the Arduino firmware and a default model onto the NDP101 chip.
Flashing issues
0x000000: read 0x04 != expected 0x01
Some flashing issues can occur on the Serial Flash. In this case, open a Serial Terminal on the TinyML board and send the command: :F. This will erase the Serial Flash and should fix the flashing issue.
Connect the Syntiant TinyML Board directly to your computer's USB port. Linux, Mac OS, and Windows 10 platforms are supported.
Audio - USB microphone (macOS/Linux only)
Check that the Syntiant TinyML enumerates as "TinyML" or "Arduino MKRZero". For example, in Mac OS you'll find it under System Preferences/Sound:
Audio acquisition - Windows OS
Using the Syntiant TinyML board as an external microphone for data collection doesn't currently work on Windows OS.
IMU
From a command prompt or terminal, run:
This will start a wizard which will ask you to log in and choose an Edge Impulse project. If you want to switch projects run the command with --clean
.
With everything set up you can now build your first machine learning model and evaluate it using the Syntiant TinyML Board with this tutorial:
Board is detected as MKRZero and not TinyML: when compiling using the Arduino IDE, the board name will change from TinyML to MKRZero as it automatically retrieves the name from the board type. This doesn't affect the execution of the firmware.
How to label my classes? The NDP101 chip expects one and only one negative class, and it should be the last in the list. For instance, if your original dataset looks like: yes, no, unknown, noise
and you only want to detect the keywords 'yes' and 'no', merge the 'unknown' and 'noise' labels into a single class such as z_openset
(we prefix it with 'z' so that this class comes last in the list).
The TI LAUNCHXL-CC1352P Launchpad is a development board equipped with the multiprotocol wireless CC1352P microcontroller. The Launchpad, when paired with the BOOSTXL-SENSORS and CC3200AUDBOOST booster packs, is fully supported by Edge Impulse, and is able to sample accelerometer & microphone data, build models, and deploy directly to the device without any programming required. The Launchpad and booster pack boards are available for purchase directly from Texas Instruments.
If you don't have either booster pack or are using different sensing hardware, you can use the Data forwarder to capture data from any other sensor type, and then follow the Running your impulse locally tutorial to run your impulse. Or, you can clone and modify the open source firmware project on GitHub.
The Edge Impulse firmware for this development board is open source and hosted on GitHub.
To set this device up in Edge Impulse, you will need to install the following software:
Add the installation directory to your PATH
On Linux:
GNU Screen: install for example via sudo apt install screen
.
Problems installing the Edge Impulse CLI?
With all the software in place it's time to connect the development board to Edge Impulse.
To interface the Launchpad with sensor hardware, you will need to either connect the BOOSTXL-SENSORS to collect accelerometer data, or the CC3200AUDBOOST to collect audio data. Follow the guides below based on what data you want to collect.
Before you start
2. Connect the development board to your computer
Use a micro-USB cable to connect the development board to your computer.
3. Update the firmware
The development board does not come with the right firmware yet. To update the firmware:
Open the flash script for your operating system (flash_windows.bat
, flash_mac.command
or flash_linux.sh
) to flash the firmware.
Wait until flashing is complete, and press the RESET button once to launch the new firmware.
Problems flashing firmware onto the Launchpad?
3. Setting keys
From a command prompt or terminal, run:
This will start a wizard which will ask you to log in, and choose an Edge Impulse project. If you want to switch projects run the command with --clean
.
Which device do you want to connect to?
The Launchpad enumerates two serial ports. The first is the Application/User UART, which the edge-impulse firmware communicates through. The other is an Auxiliary Data Port, which is unused.
When running the edge-impulse-daemon
you will be prompted on which serial port to connect to. On Mac & Linux, this will appear as:
Generally, select the lower numbered serial port. This usually corresponds with the Application/User UART. On Windows, the serial port may also be verified in the Device Manager.
4. Verifying that the device is connected
With everything set up you can now build and run your first machine learning model with these tutorials:
Failed to flash
If the UniFlash CLI is not added to your PATH, the install scripts will fail. To fix this, add the installation directory of UniFlash (example /Applications/ti/uniflash_6.4.0
on macOS) to your PATH on:
If during flashing you encounter further issues, ensure:
The device is properly connected and/or the cable is not damaged.
You have the proper permissions to access the USB device and run scripts. On macOS you can manually approve blocked scripts via System Preferences->Security Settings->Unlock Icon
On Linux, you may want to try copying tools/71-ti-permissions.rules to /etc/udev/rules.d/. Then re-attach the USB cable and try again.
The Jetson Nano is an embedded Linux dev kit featuring a GPU-accelerated processor (NVIDIA Tegra) targeted at edge AI applications. You can easily add a USB external microphone or camera - and it's fully supported by Edge Impulse. You'll be able to sample raw data, build models, and deploy trained machine learning models directly from the Studio. The Jetson Nano is available from 59 USD from a wide range of distributors.
In addition to the Jetson Nano we recommend that you also add a camera and / or a microphone. Most popular USB webcams work fine on the development board out of the box.
Powering your Jetson
Although powering your Jetson via USB is technically supported, some users report on forums that they have issues using USB power. If you have any issues such as the board resetting or becoming unresponsive, consider powering via a 5V, 4A power supply on the DC barrel connector. Don't forget to change the jumper!
An added bonus of powering via the DC barrel plug: you can carry out your first boot without an external monitor or keyboard.
Issue the following command to check:
The result should look similar to this:
To set this device up in Edge Impulse, run the following commands (from any folder). When prompted, enter the password you created for the user on your Jetson in step 1. The entire script takes a few minutes to run (using a fast microSD card).
With all software set up, connect your camera and microphone to your Jetson (see 'Next steps' further on this page if you want to connect a different sensor), and run:
This will start a wizard which will ask you to log in, and choose an Edge Impulse project. If you want to switch projects run the command with --clean
.
With everything set up you can now build your first machine learning model with these tutorials:
To run your impulse locally, just connect to your Jetson again, and run:
If you have an image model then you can get a peek at what your device sees by being on the same network as your device, and finding the 'Want to see a feed of the camera and live classification in your browser' message in the console. Open the URL in a browser and both the camera feed and the classification are shown:
This is probably caused by a missing dependency on libjpeg. If you run:
The end of the output should show support for file import/export with libjpeg, like so:
If you don't see jpeg support as "yes", rerun the setup script and take note of any errors.
If you encounter this error, ensure that your entire home directory is owned by you (especially the .config folder):
By default, the Jetson Nano enables a number of aggressive power saving features to disable and slow down hardware that is detected to be not in use. Experience indicates that sometimes the GPU cannot power up fast enough, nor stay on long enough, to enjoy best performance. You can run a script to enable maximum performance on your Jetson Nano.
ONLY DO THIS IF YOU ARE POWERING YOUR JETSON NANO FROM A DEDICATED POWER SUPPLY. DO NOT RUN THIS SCRIPT WHILE POWERING YOUR JETSON NANO THROUGH USB.
To enable maximum performance, run:
You can use your Intel or M1-based Mac computer as a fully-supported development environment for Edge Impulse for Linux. This lets you sample raw data, build models, and deploy trained machine learning models directly from the Studio. If you have a Macbook, the webcam and microphone of your system are automatically detected and can be used to build models.
To connect your Mac to Edge Impulse:
Open a terminal window and install the dependencies:
Last, install the Edge Impulse CLI:
Problems installing the CLI?
With the software installed, open a terminal window and run:
This will start a wizard which will ask you to log in, and choose an Edge Impulse project. If you want to switch projects run the command with --clean
.
With everything set up you can now build your first machine learning model with these tutorials:
To run your impulse locally, just open a terminal and run:
If you have an image model then you can get a peek at what your device sees by being on the same network as your device, and finding the 'Want to see a feed of the camera and live classification in your browser' message in the console. Open the URL in a browser and both the camera feed and the classification are shown:
See the Installation and troubleshooting guide.
Download the latest Edge Impulse firmware, and unzip the file.
Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI. See this blog post for more information.
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
(GP16; pin D16 on Grove Shield for Pi Pico).
(GP18; pin D18 on Grove Shield for Pi Pico).
(I2C0).
There is a vast variety of analog signal sensors that can take advantage of the RP2040 10-bit ADC (Analog to Digital Converter), from common ones, such as light sensors and sound level sensors, to more specialized ones.
Looking to connect different sensors? The Data forwarder lets you easily send data from any sensor into Edge Impulse.
That's all! Your machine is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
Looking to connect different sensors? Our Linux SDK lets you easily send data from any sensor and any programming language (with examples in Node.js, Python, Go and C++) into Edge Impulse.
This will automatically compile your model with full hardware acceleration, download the model to your local machine, and then start classifying. Our Linux SDK has examples on how to integrate the model with your favourite programming language.
Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI. See this blog post for more information.
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
How to use the Arduino CLI with a macOS M1 chip? You will need to install Rosetta 2 to run the Arduino CLI.
Install the desktop version for your operating system. See the documentation for more details.
See the Installation and troubleshooting guide.
The Launchpad jumper connections should be in their original configuration out of the box. If you have already modified the jumper connections, see the Launchpad's user guide for the original configuration.
You will need five extra jumper wires to connect the CC3200AUDBOOST to the Launchpad, as described below.
The CC3200AUDBOOST board requires modifications to interface properly with the CC1352P series of Launchpads. The full documentation regarding these modifications is available from Texas Instruments, and a summary of the steps to configure the board is shown below.
The pin connections shown below are required by TI to interface between the two boards. Connect the pins by using jumper wires and following the diagram. For more information, see the CC3200AUDBOOST and Launchpad documentation.
Perform all modifications to the Launchpad and audio booster pack described in the TI documentation.
Download the latest Edge Impulse firmware, and unzip the file.
See the section for more information.
If a selected serial port fails to connect, test the other port before checking for other common issues.
Alternatively, recent versions of Google Chrome and Microsoft Edge can collect data directly from your development board, without the need for the Edge Impulse CLI. See this blog post for more information.
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
Looking to connect different sensors? The Data forwarder lets you easily send data from any sensor into Edge Impulse, and you can also modify the open source firmware to add custom sensors.
Alternatively, the gcc/build/edge-impulse-standalone.out
binary file may be flashed to the Launchpad using the UniFlash GUI or web app. See the UniFlash documentation for more info.
Depending on your hardware, follow NVIDIA's setup instructions for your Jetson model for both "Write Image to SD Card" and "Setup and First Boot." When finished, you should have a bash prompt via the USB serial port, or using an external monitor and keyboard attached to the Jetson. You will also need to connect your Jetson to the internet via the Ethernet port (there is no WiFi on the Jetson). (After setting up the Jetson the first time via keyboard or the USB serial port, you can SSH in.)
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
Looking to connect different sensors? Our Linux SDK lets you easily send data from any sensor and any programming language (with examples in Node.js, Python, Go and C++) into Edge Impulse.
This will automatically compile your model with full hardware acceleration, download the model to your Jetson, and then start classifying. Our Linux SDK has examples on how to integrate the model with your favourite programming language.
Due to some incompatibilities we don't run models on the GPU by default. You can enable this by following the instructions in the C++ SDK.
Install .
Install .
See the Installation and troubleshooting guide.
That's all! Your Mac is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
Looking to connect different sensors? Our Linux SDK lets you easily send data from any sensor and any programming language (with examples in Node.js, Python, Go and C++) into Edge Impulse.
This will automatically compile your model with full hardware acceleration, download the model to your Raspberry Pi, and then start classifying. Our Linux SDK has examples on how to integrate the model with your favourite programming language.
If you have a development board that is not officially supported by Edge Impulse, no problem. This guide contains information on connecting any device to Edge Impulse.
Edge Impulse can handle data from any device, whether it's coming from a new development board or from a device that has been in production for years. Just post your data to the ingestion service and it will automatically show up in the studio. You can either do this directly from your device (if it has an IP connection) or through an intermediate protocol like a phone application. To deal with data that is already collected we have the Uploader tools, which can label and import data.
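To give a feel for what posting data looks like, here is a minimal sketch in Python that sends one short accelerometer sample to the ingestion service using the data acquisition format. The API key, HMAC key, device metadata and sensor values are placeholders; check the ingestion service reference for the full format and signing details.

```python
# Minimal sketch: post one accelerometer sample to the Edge Impulse ingestion service.
# API_KEY, HMAC_KEY, the device metadata and the sensor values are placeholders.
import hashlib
import hmac
import json
import time

import requests

API_KEY = "ei_..."   # project API key (Dashboard > Keys)
HMAC_KEY = "..."     # project HMAC key, used to sign the payload

data = {
    "protected": {"ver": "v1", "alg": "HS256", "iat": int(time.time())},
    "signature": "0" * 64,  # placeholder, replaced after signing below
    "payload": {
        "device_name": "my-device",
        "device_type": "CUSTOM_BOARD",
        "interval_ms": 10,  # 100 Hz sampling
        "sensors": [
            {"name": "accX", "units": "m/s2"},
            {"name": "accY", "units": "m/s2"},
            {"name": "accZ", "units": "m/s2"},
        ],
        "values": [[-9.81, 0.03, 1.21], [-9.83, 0.04, 1.27]],
    },
}

# Sign the encoded message with the HMAC key, then substitute the real signature
encoded = json.dumps(data)
signature = hmac.new(HMAC_KEY.encode(), encoded.encode(), hashlib.sha256).hexdigest()
data["signature"] = signature

res = requests.post(
    "https://ingestion.edgeimpulse.com/api/training/data",
    data=json.dumps(data),
    headers={
        "Content-Type": "application/json",
        "x-file-name": "idle.01",  # the label can be derived from the filename
        "x-api-key": API_KEY,
    },
)
print(res.status_code, res.text)
```

The same request works from any language or device with an HTTP stack; only the JSON body and the two headers matter.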
A quick way of getting data from devices is the Data forwarder. This lets you forward data collected over a serial interface to the studio. This method only works for sensors with lower sampling frequencies (e.g. no audio), does not allow sensor selection, and does not sign data on the device. However, it's a really easy way to collect data from existing devices with just a few lines of code.
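As an illustration of what the forwarder expects, the sketch below prints one comma-separated sample per line at a fixed interval, which is all the CLI needs to derive the sampling frequency and the number of axes. The read_accelerometer() helper is a hypothetical stand-in for your own sensor driver; on a MicroPython-capable board the print() output goes over the USB serial port that the forwarder listens on.

```python
# Illustration of the Data forwarder line format: one sample per line,
# comma-separated values (one column per axis), emitted at a fixed interval.
# read_accelerometer() is a hypothetical placeholder for a real sensor driver.
import random
import time

INTERVAL_MS = 10  # 100 Hz

def read_accelerometer():
    # Placeholder: return fake accX, accY, accZ readings in m/s2
    return (random.uniform(-2, 2), random.uniform(-2, 2), random.uniform(-2, 2))

while True:
    acc_x, acc_y, acc_z = read_accelerometer()
    # On a MicroPython board this line is written to the USB serial port
    print("{:.4f},{:.4f},{:.4f}".format(acc_x, acc_y, acc_z))
    time.sleep(INTERVAL_MS / 1000)
```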
The Inferencing SDK enables you to run impulses locally and on-device. The SDK contains efficient native implementations of all processing and learning blocks. The SDK is written in portable C++11 with as few dependencies as possible, and the best way of testing whether it works on your platform is through the Deployment page in the studio. From here you can export a library with all blocks, configuration and the SDK. See the Running your impulse locally tutorials.
If you need to make changes to the SDK to get it to run on your device we welcome contributions. We also welcome contributions which add optimized code paths for your specific hardware. The SDK documentation has more information on where to add these.
Devices can be controlled from the studio through the Remote management interface. This is a service that devices connect to, either over a websocket or through a serial connection (with the help of the Serial daemon). The studio lists these devices, and can instruct them to start sampling straight from the UI.
To add full support for your development board you'll need to implement the serial protocol and (if your device has an IP connection) the websocket protocol. Alternatively, you can implement the websocket protocol through an intermediate layer (like a mobile phone app). There are end-to-end integration tests available at edgeimpulse/integration-tests-firmware which validate both the serial and websocket protocols on a development board.
Devices that connect through the data forwarder can be controlled by the studio, but have a limited integration. They don't support sensor or frequency selection.
Do you want help porting? Or want to get the best integration in Edge Impulse, including full studio support, and want to let users build binaries directly from the UI? Let us know at hello@edgeimpulse.com and we'll let you know the possibilities.
Community board
This is a community board by Arducam, and it's not maintained by Edge Impulse. For support head to the Arducam support page.
The Arducam Pico4ML TinyML Dev Kit is a development board from Arducam with an RP2040 microcontroller, QVGA camera, Bluetooth module (depending on your version), LCD screen, onboard microphone, accelerometer, gyroscope, and compass. Arducam has created in-depth tutorials on how to get started using the Pico4ML Dev Kit with Edge Impulse, including how to collect new data and how to train and deploy your Edge Impulse models to the Pico4ML. The Arducam Pico4ML TinyML Dev Kit comes in two versions: the version with BLE is available for 55 USD and the version without BLE is available for 50 USD.
To set up your Arducam Pico4ML TinyML Dev Kit, follow this guide: Arducam: How to use Edge Impulse to train machine learning models for Raspberry Pico.
With everything set up you can now build your first machine learning model with the Edge Impulse continuous motion recognition tutorial.
Or you can follow Arducam's tutorial on How to build a Magic Wand with Edge Impulse for Arducam Pico4ML-BLE.
Looking to connect different sensors? The Data forwarder lets you easily send data from any sensor into Edge Impulse.
With the impulse designed, trained and verified you can deploy this model back to your Arducam Pico4ML TinyML Dev Kit. This makes the model run without an internet connection, minimizes latency, and runs with minimum power consumption. Edge Impulse can package the complete impulse - including the signal processing code, neural network weights, and classification code - into a single library that you can run on your development board. See the end of Arducam's How to use Edge Impulse to train machine learning models for Raspberry Pico tutorial for more information on deploying your model onto the device.
The Raspberry Pi 4 is a versatile Linux development board with a quad-core processor running at 1.5GHz, a GPIO header to connect sensors, and the ability to easily add an external microphone or camera - and it's fully supported by Edge Impulse. You'll be able to sample raw data, build models, and deploy trained machine learning models directly from the Studio. The Raspberry Pi 4 is available from 35 USD from a wide range of distributors, including DigiKey.
In addition to the Raspberry Pi 4 we recommend that you also add a camera and / or a microphone. Most popular USB webcams and the Camera Module work fine on the development board out of the box.
You can set up your Raspberry Pi without a screen. To do so:
Raspberry Pi OS - Bullseye release
The latest release of Raspberry Pi OS requires Edge Impulse Linux CLI version >= 1.3.0.
Flash the Raspberry Pi OS image to an SD card.
After flashing the OS, find the boot
mass-storage device on your computer, and create a new file called wpa_supplicant.conf in the boot
drive. Add the following code:
(Replace the fields marked with <>
with your WiFi credentials)
Next, create a new file called ssh
in the boot
drive. You can leave this file empty.
Plug the SD card into your Raspberry Pi 4, and let the device boot up.
Find the IP address of your Raspberry Pi. You can either do this through the DHCP logs in your router, or by scanning your network. E.g. on macOS and Linux via:
Here 192.168.1.19
is your IP address.
Connect to the Raspberry Pi over SSH. Open a terminal window and run:
Log in with password raspberry
.
If you have a screen and a keyboard / mouse attached to your Raspberry Pi:
Flash the Raspberry Pi OS image to an SD card.
Plug the SD card into your Raspberry Pi 4, and let the device boot up.
Connect to your WiFi network.
Click the 'Terminal' icon in the top bar of the Raspberry Pi.
To set this device up in Edge Impulse, run the following commands:
If you have a Raspberry Pi Camera Module, you also need to activate it first. Run the following command:
Use the cursor keys to select and open Interfacing Options, then select Camera and follow the prompt to enable the camera. Then reboot the Raspberry Pi.
If you want to install Edge Impulse on your Raspberry Pi using Docker you can run the following commands:
Once on the Docker container, run:
and
You should now be able to run the Edge Impulse CLI tools from the container running on your Raspberry Pi.
Note that this will only work with an external USB camera.
With all software set up, connect your camera and microphone to your Raspberry Pi (see 'Next steps' further on this page if you want to connect a different sensor), and run:
This will start a wizard which will ask you to log in, and choose an Edge Impulse project. If you want to switch projects run the command with --clean
.
That's all! Your device is now connected to Edge Impulse. To verify this, go to your Edge Impulse project, and click Devices. The device will be listed here.
With everything set up you can now build your first machine learning model with these tutorials:
Looking to connect different sensors? Our Linux SDK lets you easily send data from any sensor and any programming language (with examples in Node.js, Python, Go and C++) into Edge Impulse.
To run your impulse locally, just connect to your Raspberry Pi again, and run:
This will automatically compile your model with full hardware acceleration, download the model to your Raspberry Pi, and then start classifying. Our Linux SDK has examples on how to integrate the model with your favourite programming language.
If you have an image model then you can get a peek at what your device sees by being on the same network as your device, and finding the 'Want to see a feed of the camera and live classification in your browser' message in the console. Open the URL in a browser and both the camera feed and the classification are shown:
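If you'd rather drive the camera from your own Python application, the Linux Python SDK ships an image classification example that follows roughly the pattern below. This is a sketch based on that example: the model path is a placeholder, 0 selects the first attached camera, and the exact method names may differ between SDK versions, so check the examples bundled with the SDK.

```python
# Rough sketch based on the Linux Python SDK's image classification example.
# 'modelfile.eim' is a placeholder; method names may vary between SDK versions.
from edge_impulse_linux.image import ImageImpulseRunner

MODEL_PATH = "modelfile.eim"
CAMERA_ID = 0  # first attached camera

with ImageImpulseRunner(MODEL_PATH) as runner:
    model_info = runner.init()
    print("Loaded", model_info["project"]["name"])

    # classifier() grabs frames, runs the DSP + model, and yields (result, frame)
    for res, frame in runner.classifier(CAMERA_ID):
        result = res["result"]
        if "classification" in result:
            scores = result["classification"]
            best = max(scores, key=scores.get)
            print("%s (%.2f)" % (best, scores[best]))
        elif "bounding_boxes" in result:
            for bb in result["bounding_boxes"]:
                print("%s (%.2f) at x=%d, y=%d" % (bb["label"], bb["value"], bb["x"], bb["y"]))
```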
Community board
This is a community board by Blues Wireless, and is not maintained by Edge Impulse. For support head to the Blues Wireless homepage.
The Blues Wireless Swan is a development board featuring a 120MHz ARM Cortex-M4 from STMicroelectronics with 2MB of flash and 640KB of RAM. Blues Wireless has created an in-depth tutorial on how to get started using the Swan with Edge Impulse, including how to collect new data from a triple axis accelerometer and how to train and deploy your Edge Impulse models to the Swan. For more details and ordering information, visit the Blues Wireless Swan product page.
To set up your Blues Wireless Swan, follow this complete guide: Using Swan with Edge Impulse.
The Blues Wireless Swan tutorial will guide you through how to create a simple classification model with an accelerometer designed to analyze movement over a brief period of time (2 seconds) and infer how the motion correlates to one of the following four states:
Idle (no motion)
Circle
Slash
An up-and-down motion in the shape of the letter "W"
For more insight into using a triple axis accelerometer to build an embedded machine learning model visit the Edge Impulse continuous motion recognition tutorial.
Looking to connect different sensors? The Data forwarder lets you easily send data from any sensor into Edge Impulse.
With the impulse designed, trained and verified you can deploy this model back to your Blues Wireless Swan. This makes the model run without an internet connection, minimizes latency, and runs with minimum power consumption. Edge Impulse can package the complete impulse - including the signal processing code, neural network weights, and classification code - into a single library that you can run on your development board. See the end of Blues Wireless' Using Swan with Edge Impulse tutorial (https://dev.blues.io/get-started/swan/using-swan-with-edge-impulse) for more information on deploying your model onto the device.
Community board
This is a community board by Seeed Studios, and it's not maintained by Edge Impulse. For support head to the Seeed Forum.
The Seeed Wio Terminal is a development board from Seeed Studios with a Cortex-M4 microcontroller, motion sensors, an LCD display, and Grove connectors to easily connect external sensors. Seeed Studio has added support for this development board to Edge Impulse, so you can sample raw data and build machine learning models from the studio. The board is available for 29 USD directly from Seeed.
To set up your Seeed Wio Terminal, follow this guide: Getting started with Edge Impulse - Seeed Wiki.
With everything set up you can now build your first machine learning model with this full end-to-end course from Seeed's EDU team: TinyML with Wio Terminal Course.
Looking to connect different sensors? The Data forwarder lets you easily send data from any sensor into Edge Impulse.
With the impulse designed, trained and verified you can deploy this model back to your Wio Terminal. This makes the model run without an internet connection, minimizes latency, and runs with minimum power consumption. Edge Impulse can package the complete impulse - including the signal processing code, neural network weights, and classification code - into a single library that you can run on your development board.
The easiest way to deploy your impulse to the Seeed Wio Terminal is via an Arduino library. See Running your impulse locally on your Arduino for more information.
You can use any smartphone with a modern browser as a fully-supported client for Edge Impulse. You'll be able to sample raw data (from the accelerometer, microphone and camera), build models, and deploy machine learning models directly from the studio. Your phone will behave like any other device, and data and models that you create using your mobile phone can also be deployed to embedded devices.
The mobile client is open source and hosted on GitHub: edgeimpulse/mobile-client. As there are thousands of different phones and operating system versions we'd love to hear from you there if something is amiss.
There's also a video version of this tutorial:
To connect your mobile phone to Edge Impulse, go to your Edge Impulse project, and head to the Devices page. Then click Connect a new device.
Select Mobile phone, and a QR code will appear. Either scan the QR code with the camera of your phone - many phones will automatically recognize the code and offer to open a browser window - or click on the link above the QR code to open the mobile client.
This opens the mobile client, and registers the device directly. On your phone you see a Connected message.
That's all! Your device is now connected to Edge Impulse. If you return to the Devices page in the studio, your phone now shows as connected. You can change the name of your device by clicking on ⋮
.
With everything set up you can now build your first machine learning model with these tutorials:
Your phone will show up like any other device in Edge Impulse, and will automatically ask permission to use sensors.
You might need to enable motion sensors in the Chrome settings via Settings > Site settings > Motion sensors.
With the impulse designed, trained and verified you can deploy this model back to your phone. This makes the model run without an internet connection, minimizes latency, and runs with minimum power consumption. Edge Impulse can package the complete impulse - including the signal processing code, neural network weights, and classification code - into a single WebAssembly package that you can run straight from the browser.
To do so, just click Switch to classification mode at the bottom of the mobile client. This will first build the impulse, then sample data from the sensor, run the signal processing code, and classify the data:
Victory! You're now running your machine learning model locally in your browser - you can even turn on airplane mode and the model will continue running. You can also download the WebAssembly package to include in your own website or Node.js application. 🚀
Edge Impulse for Linux is the easiest way to build Machine Learning solutions on real embedded hardware. It contains tools which let you collect data from any microphone or camera, can be used with the Node.js, Python, Go and C++ SDKs to collect new data from any sensor, and can run impulses with full hardware acceleration - with easy integration points to write your own applications.
This is a list of development boards that are fully supported by Edge Impulse for Linux. Follow the instructions for your board to get started.
Different development board? Probably no problem! You can use the Linux x86_64 getting started guide to set up the Edge Impulse for Linux CLI tool, and you can run your impulse on any x86_64, ARMv7 or AARCH64 Linux target. For support, please head to the forums.
To build your own applications, or collect data from new sensors, you can use the high-level language SDKs. These use full hardware acceleration, and let you integrate your Edge Impulse models in a few lines of code:
Node.js SDK
Python SDK
Go SDK
C++ SDK
Edge Impulse for Linux models are delivered in .eim
format. This is an executable that contains your signal processing and ML code, compiled with optimizations for your processor or GPU (e.g. NEON instructions on ARM cores), plus a very simple IPC layer (over a Unix socket). Because the model file is completely self-contained and depends on nothing except glibc, you don't need specific TensorFlow versions, you avoid Python dependency hell, and you never have to worry about whether you're running at full native speed.
This library lets you run machine learning models and collect sensor data on machines using Node.js. The SDK is open source and hosted on GitHub.
Add the library to your application via:
Before you can classify data you'll first need to collect it. If you want to collect data from the camera or microphone on your system you can use the Edge Impulse CLI, and if you want to collect data from different sensors (like accelerometers or proprietary control systems) you can do so in a few lines of code.
To collect data from the camera or microphone, follow the getting started guide for your development board.
To collect data from other sensors you'll need to write some code where you instantiate a DataForwarder
object, write data samples, and finally call finalize()
which uploads the data to Edge Impulse.
To classify data (whether this is from the camera, the microphone, or a custom sensor) you'll need a model file. This model file contains all signal processing code, classical ML algorithms and neural networks - and typically contains hardware optimizations to run as fast as possible. To grab a model file:
Train your model in Edge Impulse.
Install the Edge Impulse for Linux CLI.
Download the model file via:
This downloads the file into modelfile.eim
. (Want to switch projects? Add --clean
)
Then you can start classifying realtime sensor data. We have examples for:
This Edge Impulse CLI is used to control local devices, act as a proxy to synchronise data for devices that don't have an internet connection, and to upload and convert local files. The CLI consists of seven tools:
edge-impulse-daemon - configures devices over serial, and acts as a proxy for devices that do not have an IP connection.
edge-impulse-uploader - allows uploading and signing local files.
edge-impulse-data-forwarder - a very easy way to collect data from any device over a serial connection, and forward the data to Edge Impulse.
edge-impulse-run-impulse - show the impulse running on your device.
edge-impulse-blocks - create organizational transformation, custom dsp, custom deployment and custom transfer learning blocks.
himax-flash-tool - to flash the Himax WE-I Plus.
Did you know you can also connect devices directly to your browser?
Recent versions of Google Chrome and Microsoft Edge can connect directly to fully-supported development boards, without the CLI. See this blog post for more information.
This library lets you run machine learning models and collect sensor data on machines using Python. The SDK is open source and hosted on GitHub.
Install a recent version of Python 3 (>= 3.7).
Install the SDK
Raspberry Pi
Jetson Nano
Other platforms
Clone this repository to get the examples:
Before you can classify data you'll first need to collect it. If you want to collect data from the camera or microphone on your system you can use the Edge Impulse CLI, and if you want to collect data from different sensors (like accelerometers or proprietary control systems) you can do so in a few lines of code.
To classify data (whether this is from the camera, the microphone, or a custom sensor) you'll need a model file. This model file contains all signal processing code, classical ML algorithms and neural networks - and typically contains hardware optimizations to run as fast as possible. To grab a model file:
Train your model in Edge Impulse.
Download the model file via:
This downloads the file into modelfile.eim
. (Want to switch projects? Add --clean
)
Then you can start classifying realtime sensor data. We have examples for:
If you see this error you can re-install portaudio via:
This error shows when you want to gain access to the camera or the microphone on macOS from a virtual shell (like the terminal in Visual Studio Code). Try to run the command from the normal Terminal.app.
This library lets you run machine learning models and collect sensor data on machines using C++. The SDK is open source and hosted on GitHub.
Install GNU Make and a recent C++ compiler (tested with GCC 8 on the Raspberry Pi, and Clang on other targets).
Clone this repository and initialize the submodules:
If you want to use the audio or camera examples, you'll need to install libasound2 and OpenCV 4. You can do so via:
Linux
macOS
Note that you cannot run any of the audio examples on macOS, as these depend on libasound2, which is not available there.
Before you can classify data you'll first need to collect it. If you want to collect data from the camera or microphone on your system you can use the Edge Impulse CLI, and if you want to collect data from different sensors (like accelerometers or proprietary control systems) you can do so in a few lines of code.
This repository comes with four classification examples:
To build an application:
Export your trained impulse as a C++ Library from the Edge Impulse Studio (see the Deployment page) and copy the folders into this repository.
Build the application via:
Replace APP_CUSTOM=1
with the application you want to build. See 'Hardware acceleration' below for the hardware specific flags. You probably want these.
The application is in the build directory:
For many targets there is hardware acceleration available. To enable this:
Raspberry Pi 4 (and other Armv7l Linux targets)
Build with the following flags:
Jetson Nano (and other AARCH64 targets)
Install Clang:
Build with the following flags:
Linux x86 targets
Build with the following flags:
Intel-based Macs
Build with the following flags:
M1-based Macs
Build with the following flags:
Note that this does build an x86 binary, but it runs very fast through Rosetta.
On the Jetson Nano you can also build with support for TensorRT, this fully leverages the GPU on the Jetson Nano. Unfortunately this is currently not available for object detection models - which is why this is not enabled by default. To build with TensorRT:
Go to the Deployment page in the Edge Impulse Studio.
Select the 'TensorRT library', and the 'float32' optimizations.
Build the library and copy the folders into this repository.
Download the shared libraries via:
Build your application with:
Note that there is significant ramp up time required for TensorRT. The first time you run a new model the model needs to be optimized - which might take up to 30 seconds, then on every startup the model needs to be loaded in - which might take up to 5 seconds. After this, the GPU seems to be warming up, so expect full performance about 2 minutes in. To do a fair performance comparison you probably want to use the custom application (no camera / microphone overhead) and run the classification in a loop.
You can also build .eim files for high-level languages using TensorRT via:
Long warm-up time and under-performance
By default, the Jetson Nano enables a number of aggressive power saving features to disable and slow down hardware that is detected to be not in use. In practice the GPU sometimes cannot power up fast enough, or stay powered long enough, to deliver full performance. You can run a script to enable maximum performance on your Jetson Nano.
ONLY DO THIS IF YOU ARE POWERING YOUR JETSON NANO FROM A DEDICATED POWER SUPPLY. DO NOT RUN THIS SCRIPT WHILE POWERING YOUR JETSON NANO THROUGH USB.
To enable maximum performance, run:
The model will be placed in build/model.eim
and can be used directly by your application.
This library lets you run machine learning models and collect sensor data on machines using Go. The SDK is open source and hosted on GitHub.
Install a recent version of Go.
Clone this repository:
Find the example that you want to build and run go build
:
Run the example:
And follow instructions.
This SDK is also published to pkg.go.dev, so you can pull the package from there too.
Before you can classify data you'll first need to collect it. If you want to collect data from the camera or microphone on your system you can use the Edge Impulse CLI, and if you want to collect data from different sensors (like accelerometers or proprietary control systems) you can do so in a few lines of code.
To collect data from the camera or microphone, follow the getting started guide for your development board.
To classify data (whether this is from the camera, the microphone, or a custom sensor) you'll need a model file. This model file contains all signal processing code, classical ML algorithms and neural networks - and typically contains hardware optimizations to run as fast as possible. To grab a model file:
Train your model in Edge Impulse.
Download the model file via:
This downloads the file into modelfile.eim
. (Want to switch projects? Add --clean
)
Then you can start classifying realtime sensor data. We have examples for:
The Node.js / Python / Go SDKs talk to the model through the IPC layer to run inference, so these SDKs are very thin, and just need the ability to spawn a binary. The SDKs are open source if you want to take a look, for example at the Node.js SDK.
You can download .eim
files using the Edge Impulse for Linux CLI or from the Studio (go to Dashboard, then enable 'Show Linux deploy options', and they'll be listed under Deployment). You can also build the .eim
files yourself with the Edge Impulse for Linux C++ SDK.
- grabs data from the microphone and classifies it in realtime.
- as above, but shows how to use the moving-average filter to smooth your data and reduce false positives.
- grabs data from a webcam and classifies it in realtime.
- classifies custom sensor data.
To collect data from the camera or microphone, follow the getting started guide for your development board.
To collect data from other sensors you'll need to write some code to collect the data from an external sensor, wrap it in the Edge Impulse Data Acquisition format, and upload the data to the Ingestion service.
Install the Edge Impulse for Linux CLI.
- grabs data from the microphone and classifies it in realtime.
- grabs data from a webcam and classifies it in realtime.
- classifies custom sensor data.
To collect data from the camera or microphone, follow the getting started guide for your development board.
To collect data from other sensors you'll need to write some code to collect the data from an external sensor, wrap it in the Edge Impulse Data Acquisition format, and upload the data to the Ingestion service. The repository contains examples that you can build via:
- classify custom sensor data (APP_CUSTOM=1
).
- realtime audio classification (APP_AUDIO=1
).
- realtime image classification (APP_CAMERA=1
).
- builds an .eim file to be used from Node.js, Go or Python (APP_EIM=1
).
See the section below for information on enabling GPUs. To build with hardware extensions for running on the CPU:
To build Edge Impulse for Linux models (.eim files) that can be used by the Python, Node.js or Go SDKs, build with APP_EIM=1
:
To collect data from other sensors you'll need to write some code to collect the data from an external sensor, wrap it in the Edge Impulse Data Acquisition format, and upload the data to the Ingestion service.
- grabs data from the microphone and classifies it in realtime.
- grabs data from a webcam and classifies it in realtime.
- classifies custom sensor data.
The impulse runner shows the results of your impulse running on your development board. This only applies to ready-to-go binaries built from the studio.
You start the impulse via:
This will sample data from your real sensors, classify the data, then print the results. E.g.:
--debug
- run the impulse in debug mode, this will print the intermediate DSP results. For image models, a live feed of the camera and inference results will also be locally hosted and available in your browser.
--continuous
- run the impulse in continuous mode (not available on all platforms).
The serial daemon is used to onboard new devices, configure upload settings, and acts as a proxy for devices without an IP connection.
Recent versions of Google Chrome and Microsoft Edge can connect directly to fully-supported development boards, without the serial daemon. See this blog post for more information.
To use the daemon, connect a fully-supported development board to your computer and run:
The daemon will ask you for the server you want to connect to, prompt you to log in, and then configure the device. If your device does not have the right firmware yet, it will also prompt you to upgrade this.
This is an example of the output of the daemon:
Note: Your credentials are never stored. When you log in these are exchanged for a token. This token is used to further authenticate requests.
To clear the configuration, run:
This resets both the daemon configuration as well as the on-device configuration. If you still run into issues, you can connect to the device using a serial monitor (on baud rate 115,200) and run AT+CLEARCONFIG
. This removes all configuration from the device.
If your device is not connected to the remote management interface - for example because it does not have an IP connection, or because WiFi is out of range - the daemon will act as a proxy. It will register with Edge Impulse on behalf of the device, and proxy events through over serial. For this to work your device needs to support the Edge Impulse AT command set, please refer to the documentation for more information.
To skip any wizards (except for the login prompt) you can run the daemon in silent mode via:
This is useful in environments where there is no internet connection, as the daemon won't prompt to connect to WiFi.
You can use one device for many projects. To switch projects run:
And select the new project. The device will remain listed in the old project, and if you switch back will retain the same name and last seen date.
If you are using the ST B-L475E-IOT01A development board, you may experience the following error when attempting to connect to a WiFi network:
There is a known issue with the firmware for this development board's WiFi module that results in a timeout during network scanning if there are more than 20 WiFi access points detected. If you are experiencing this issue, you can work around it by attempting to reduce the number of access points within range of the device, or by skipping WiFi configuration.
The data forwarder is used to easily relay data from any device to Edge Impulse over serial. Devices write sensor values over a serial connection, and the data forwarder collects, signs and sends the data to the ingestion service. The data forwarder is useful to quickly enable data collection from a wide variety of development boards without having to port the full remote management protocol and serial protocol, but it only supports collecting data at relatively low frequencies.
To use the data forwarder, load an application (examples for Arduino, Mbed OS and Zephyr below) on your development board, and run:
The data forwarder will ask you for the server you want to connect to, prompt you to log in, and then configure the device.
This is an example of the output of the forwarder:
Note: Your credentials are never stored. When you log in these are exchanged for a token. This token is used to further authenticate requests.
To clear the configuration, run:
To override the frequency, use:
To set a different baud rate, use:
The protocol is very simple. The device should send data on baud rate 115,200 with one line per reading, and individual sensor data should be split with either a ,
or a TAB
. For example, this is data from a 3-axis accelerometer:
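(Illustrative values only; your sensor's units and ranges will differ.)

```
-0.12,0.03,9.81
-0.13,0.05,9.79
-0.10,0.02,9.83
```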
The data forwarder will automatically determine the sampling rate and the number of sensors based on the output. If you load a new application where the sampling frequency or the number of axes changes, the data forwarder will automatically be reconfigured.
This is an example of a sketch that reads data from an accelerometer (tested on the Arduino Nano 33 BLE):
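The original sketch isn't reproduced here; a minimal version looks roughly like the following, assuming the Arduino_LSM9DS1 library that ships with the Nano 33 BLE (adjust the IMU calls for your own board):

```cpp
// Data forwarder protocol: one line per reading at 115,200 baud,
// individual sensor values separated by commas.
#include <Arduino_LSM9DS1.h>

#define FREQUENCY_HZ   50
#define INTERVAL_MS    (1000 / FREQUENCY_HZ)

static unsigned long last_interval_ms = 0;

void setup() {
    Serial.begin(115200);
    while (!Serial);

    if (!IMU.begin()) {
        Serial.println("Failed to initialize IMU!");
        while (1);
    }
}

void loop() {
    float x, y, z;

    if (millis() > last_interval_ms + INTERVAL_MS) {
        last_interval_ms = millis();

        if (IMU.accelerationAvailable()) {
            IMU.readAcceleration(x, y, z);

            // One reading per line, comma-separated
            Serial.print(x);
            Serial.print(',');
            Serial.print(y);
            Serial.print(',');
            Serial.println(z);
        }
    }
}
```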
This is an example of an Mbed OS application that reads data from an accelerometer (tested on the ST IoT Discovery Kit):
There's also a complete example that samples data from both the accelerometer and the gyroscope here: edgeimpulse/example-dataforwarder-mbed.
This is an example of a Zephyr application that reads data from an accelerometer (tested on the Nordic Semiconductor nRF52840 DK with ST X-NUCLEO-IKS02A1 shield), based on the sensorhub example:
There's also a complete example that samples data from the accelerometer here: edgeimpulse/example-dataforwarder-zephyr.
Using the Data Forwarder, you can relay data from multiple sensors. You can check Benjamin Cabe's artificial nose for a complete example using NO2, CO, C2H5OH and VOC sensors on a WIO Terminal.
You may also have sensors with different sampling frequencies, such as:
accelerometer: 3 axis sampled at 100Hz
RMS current sensor: 1 axis sampled at 5Hz
In this case, you should first upscale to the highest frequency to keep the finest granularity: upscale the RMS sensor to 100 Hz by duplicating each value 20 times (100 / 5). You could also smooth the values between samples.
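As an illustration, duplicating each low-rate value to match the higher sampling rate could look like this (a hypothetical helper, not part of the SDK):

```cpp
#include <vector>
#include <cstddef>

// Upscale a 5 Hz stream to 100 Hz by repeating each value 20 times (100 / 5),
// so both sensor streams stay aligned sample-for-sample.
std::vector<float> upscale(const std::vector<float> &low_rate, size_t factor) {
    std::vector<float> out;
    out.reserve(low_rate.size() * factor);
    for (float v : low_rate) {
        for (size_t i = 0; i < factor; i++) {
            out.push_back(v);
        }
    }
    return out;
}
```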
To classify data you first deploy your project by following the steps in Running your impulse locally - which contains examples for a wide variety of platforms. Then, declare a features
array, fill it with sensor data, and run the classifier. Here are examples for Arduino, Mbed and Zephyr - but the same applies to any other platform.
Note: These examples collect a full frame of data, then classify this data. This might not be what you want (as classification blocks the collection thread). See Continuous audio sampling for an example on how to implement continuous classification.
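For reference, a minimal single-shot example in plain C++ might look like the sketch below. The SDK names used here (run_classifier, signal_t, ei_impulse_result_t) are described later in this document; the placeholder feature values are assumptions you replace with your own:

```cpp
#include <cstdio>
#include <cstring>

#include "edge-impulse-sdk/classifier/ei_run_classifier.h"

// Placeholder: replace with the raw features you copied from the studio
static float features[] = {
    0.0f, 0.0f, 0.0f
};

static int raw_feature_get_data(size_t offset, size_t length, float *out_ptr) {
    memcpy(out_ptr, features + offset, length * sizeof(float));
    return 0;
}

int main() {
    signal_t signal;
    signal.total_length = sizeof(features) / sizeof(features[0]);
    signal.get_data = &raw_feature_get_data;

    ei_impulse_result_t result = { 0 };
    EI_IMPULSE_ERROR res = run_classifier(&signal, &result, false /* debug */);
    if (res != EI_IMPULSE_OK) {
        printf("run_classifier failed (%d)\n", (int)res);
        return 1;
    }

    for (size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++) {
        printf("%s: %.5f\n", result.classification[ix].label, result.classification[ix].value);
    }
    return 0;
}
```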
Before adding the classifier in Zephyr:
Copy the extracted C++ library into your Zephyr project, and add the following to your CMakeLists.txt
file (where ./model
is where you extracted the library).
Enable C++ and set the stack size of the main thread to at least 4K, by adding the following to prj.conf
:
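A minimal prj.conf sketch (exact option names depend on your Zephyr version; verify against your tree):

```
# Enable C++ support for the Edge Impulse SDK
CONFIG_CPLUSPLUS=y
CONFIG_LIB_CPLUSPLUS=y
CONFIG_NEWLIB_LIBC=y

# The classifier needs a reasonably large main stack
CONFIG_MAIN_STACK_SIZE=4096
```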
If you're on a Cortex-M target, enable hardware acceleration by adding the following defines to your CMakeLists.txt
file:
Then, run the following application:
If you are running the data forwarder on a Windows system, you need to update PowerShell's execution policy to allow running scripts:
The blocks CLI tool creates different block types that are used in organizational features such as:
Transformation blocks - to transform large sets of data efficiently.
Deployment blocks - to build personalized firmware using your own data or to create custom libraries.
Custom DSP blocks - to create and host your custom signal processing techniques and use it directly in your projects.
Custom learning models - to use your custom neural networks architectures and load pretrained weights.
With the blocks CLI tool, you can create new blocks, run them locally, and push them to the Edge Impulse infrastructure so we can host them for you. Edge Impulse blocks can be written in any language, and are based on Docker containers for maximum flexibility.
As an example here, we will show how to create a transformation block.
You can create a new block by running:
When you're done developing the block you can push it to Edge Impulse via:
The metadata about the block (which organization it belongs to, block ID) is saved in .ei-block-config
, which you should commit. To view this data in a convenient format, run:
Rather than only running custom blocks in the cloud, the edge-impulse-blocks runner
command lets developers download, configure, and run custom blocks entirely on their local machine, making testing and development much faster. The options depend on the type of block being run, and they can be viewed by using the help menu:
As seen above, the runner
accepts a list of relevant option flags along with a variable number of extra arguments that get passed to the Docker container at runtime for extra flexibility. As an example, here is what happens when edge-impulse-blocks runner
is used on a file transformation block:
Best of all, the runner
only downloads data when it isn't present locally, thus saving time and bandwidth.
Transformation blocks use Docker containers, a virtualization technique which lets developers package up an application with all dependencies in a single package. Thus, every block needs at least a Dockerfile
. This is a file describing how to build the container that powers the block, and it has information about the dependencies for the block - like a list of Python packages your block needs. This Dockerfile
needs to declare an ENTRYPOINT
: a command that needs to run when the container starts.
An example of a Python container is:
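The original example isn't reproduced here; this is a representative sketch that assumes a requirements.txt and a transform.py in the same folder:

```dockerfile
FROM python:3.7.5

WORKDIR /app

# Install the Python dependencies for the block
COPY requirements.txt ./
RUN pip3 install --no-cache-dir -r requirements.txt

COPY . ./

# The command that runs when the container starts
ENTRYPOINT [ "python3", "transform.py" ]
```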
Which takes a base-image with Python 3.7.5, then installs all dependencies listed in requirements.txt
, and finally starts a script called transform.py
.
Note: Do not use a WORKDIR under /home! The /home path will be mounted in by Edge Impulse, making your files inaccessible.
Note: If you use a different programming language, make sure to use ENTRYPOINT
to specify the application to execute, rather than RUN
or CMD
.
Besides your Dockerfile
you'll also need the application files, in the example above transform.py
and requirements.txt
. You can place these in the same folder.
When pushing a new block all files in your folder are archived and sent to Edge Impulse, where the container is built. You can exclude files by creating a file called .ei-ignore
in the root folder of your block. You can either set absolute paths here, or use wildcards to exclude many files. For example:
To clear the configuration, run:
This resets the CLI configuration and will prompt you to log in again.
You can use an API key to authenticate with:
Note that this resets the CLI configuration and automatically configures your organization.
--dev
- lists development servers, use in conjunction with --clean
.
This Edge Impulse CLI is used to control local devices, act as a proxy to synchronise data for devices that don't have an internet connection, and to upload and convert local files. The CLI consists of seven tools:
edge-impulse-daemon - configures devices over serial, and acts as a proxy for devices that do not have an IP connection.
edge-impulse-uploader - allows uploading and signing local files.
edge-impulse-data-forwarder - a very easy way to collect data from any device over a serial connection, and forward the data to Edge Impulse.
edge-impulse-run-impulse - show the impulse running on your device.
edge-impulse-blocks - create organizational transformation, custom dsp, custom deployment and custom transfer learning blocks.
himax-flash-tool - to flash the Himax WE-I Plus.
Connect to devices without the CLI? Recent versions of Google Chrome and Microsoft Edge can connect directly to fully-supported development boards, without the CLI. See this blog post for more information.
Install Python 3 on your host computer.
Install Node.js v14 or higher on your host computer.
For Windows users, install the Additional Node.js tools (called Tools for Native Modules on newer versions) when prompted.
Install the CLI tools via:
You should now have the tools available in your PATH.
If you haven't already, create an Edge Impulse account. Many of our CLI tools require the user to log in to connect with the Edge Impulse Studio.
Install Python 3 on your host computer.
Install Node.js v14 or higher on your host computer.
Alternatively, run the following commands:
The last command should return the node version, v14 or above.
Let's verify the node installation directory:
If it returns /usr/local/, run the following commands to change npm's default directory:
Install the CLI tools via:
You should now have the tools available in your PATH.
If you haven't already, create an Edge Impulse account. Many of our CLI tools require the user to log in to connect with the Edge Impulse Studio.
If you have issues installing the CLI you can also collect data from fully-supported development boards directly using recent versions of Google Chrome and Microsoft Edge. See this blog post on how to get started.
This error indicates that an issue occurred when installing the edge-impulse-cli for the first time, or that you did not select the additional tools when installing Node.js (they are not selected by default).
Remove Node.js and install it again, selecting the option:
Re-install the CLI via:
If you receive the following error: The tools version "2.0" is unrecognized. Available tools versions are "4.0"
, launch a new command window as administrator and run:
This is an indication that the node_modules
is not owned by you, but rather by root. This is probably not what you want. To fix this, run:
Try to set the npm user to root and re-run the installation command. You can do this via:
If you receive an error such as:
You're running an older version of node-gyp
(a way to build binary packages). Upgrade via:
This error occurs when you have upgraded Node.js since installing the Edge Impulse CLI. Re-install the CLI via:
Which will rebuild the dependencies.
This can happen even though you have Xcode CLT installed if you've updated macOS since your install. Follow this guide to reinstall Xcode CLT.
If you see this error message and you're behind a proxy you will need to set your proxy settings via:
Windows
macOS, Linux
Manually delete the Edge Impulse directory from node_modules
and reinstall:
The uploader signs local files and uploads them to the ingestion service. This is useful to upload existing data sets, or to migrate data between Edge Impulse instances. The uploader currently handles these types of files:
.cbor
- Files in the Edge Impulse Data Acquisition format. The uploader will not re-sign these files, only upload them.
.json
- Files in the Edge Impulse Data Acquisition format. The uploader will not re-sign these files, only upload them.
.wav
- Lossless audio files. It's recommended to use the same frequency for all files in your data set, as signal processing output might be dependent on the frequency.
.jpg
- Image files. It's recommended to use the same aspect ratio for all files in your data set.
You can now also upload data directly from the studio. Go to the Data acquisition page, and click the 'upload' icon. You can select files, the category and the label directly from here.
You can upload files via the Edge Impulse CLI via:
You can upload multiple files in one go via:
The first time you'll be prompted for a server, and your login credentials (see Edge Impulse Daemon for more information).
Files are automatically uploaded to the training
category, but you can override the category with the --category
option. E.g.:
Or set the category to split
to automatically split data between training and testing sets (recommended for a balanced dataset). This is based on the hash of the file, so this is a deterministic process.
A label is automatically inferred from the file name, see the Ingestion service documentation. You can override this with the --label
option. E.g.:
To clear the configuration, run:
This resets the uploader configuration and will prompt you to log in again.
You can use an API key to authenticate with:
Note that this resets the uploader configuration and automatically configures the uploader's account and project.
If you want to upload data for object detection, the uploader can label the data for you as it uploads it. In order to do this, all you need is to create a bounding_boxes.labels
file in the same folder as your image files. The contents of this file are formatted as JSON with the following structure:
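A representative sketch of the structure (key names other than boundingBoxes follow the studio export format; verify with an export as described below):

```json
{
    "version": 1,
    "type": "bounding-box-labels",
    "boundingBoxes": {
        "my-image-01.jpg": [
            { "label": "cup", "x": 119, "y": 64, "width": 206, "height": 291 }
        ],
        "my-image-02.jpg": [
            { "label": "cup", "x": 24, "y": 56, "width": 133, "height": 201 }
        ]
    }
}
```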
You can have multiple keys under the boundingBoxes
object, one for each file name. If you have data in multiple folders, you can create a bounding_boxes.labels
in each folder.
You don't need to upload bounding_boxes.labels
When uploading one or more images, we check whether a labels file is present in the same folder, and automatically attach the bounding boxes to the image.
So you can just do:
or
Also note that this feature is currently only supported by the uploader, you cannot yet upload object detection data via the studio.
Let the Studio do the work for you!
Unsure about the structure of the bounding boxes file? Label some data in the studio, then export this data by selecting Dashboard > Export. The bounding_boxes.labels
file will be included in the exported archive.
The uploader also supports the OpenMV dataset format. Pass in the option --format-openmv
and pass the folder of your dataset in to automatically upload data. Data is automatically split between testing and training sets. E.g.:
--silent
- omits information on startup. Still prints progress information.
--dev
- lists development servers, use in conjunction with --clean
.
--hmac-key <key>
- set the HMAC key, only used for files that need to be signed such as wav
files.
--concurrency <count>
- number of files to upload in parallel (default: 20).
--progress-start-ix <index>
- when set, the progress index will start at this number. Useful to split up large uploads in multiple commands while the user still sees this as one command.
--progress-end-ix <index>
- when set, the progress index will end at this number. Useful to split up large uploads in multiple commands while the user still sees this as one command.
--progress-interval <interval>
- when set, the uploader will not print an update for every line, but every interval
period (in ms.).
--allow-duplicates
- to avoid pollution of your dataset with duplicates, the hash of a file is checked before uploading against known files in your dataset. Enable this flag to skip this check.
When using command line wildcards to upload large datasets you may encounter an error similar to this one:
This happens if the number of .wav
files exceeds the total number of arguments allowed for a single command on your shell. You can easily work around this shell limitation by using the find
command to call the uploader for manageable batches of files:
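For example (a sketch; adjust the file pattern and batch size to your dataset):

```sh
# Upload WAV files in batches of 50 per uploader invocation
find . -name "*.wav" -print0 | xargs -0 -n 50 edge-impulse-uploader
```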
You can include any necessary flags by appending them to the xargs
portion, for example if you wish to specify a category
:
The Himax flash tool uploads new binaries to the Himax WE-I Plus over a serial connection.
You upload a new binary via:
This will yield a response like this:
--baud-rate <n>
- sets the baud rate of the bootloader. This should only be used during development.
--verbose
- enable debug logs, including all communication received from the device.
Impulses can be deployed as a C++ library. This packages all your signal processing blocks, configuration and learning blocks up into a single package. You can include this package in your own application to run the impulse locally. In this tutorial you'll export an impulse, and build a desktop application to classify sensor data.
Even though this is a C++ library you can link to it from C applications. See 'Using the library from C' below.
Knowledge required
This tutorial assumes that you know how to build C++ applications, and works on macOS, Linux and Windows. If you're unfamiliar with these tools you can build binaries directly for your development board from the Deployment page in the studio.
Note: This tutorial provides the instructions necessary to build the C++ SDK library locally on your desktop. If you would like a full explanation of the Makefile and how to use the library, please see the full C++ library documentation.
Looking for examples that integrate with sensors? See the Edge Impulse for Linux documentation.
Make sure you followed one of the tutorials and have a trained impulse. Also install the following software:
macOS, Linux
GNU Make - to build the application. Make sure make is in your PATH.
A modern C++ compiler. The default LLVM version on macOS works, but on Linux upgrade to LLVM 9.
Windows
MinGW-W64, which includes both GNU Make and a compiler. Make sure mingw32-make is in your PATH.
We created an example repository which contains a Makefile and a small CLI example application, which takes the raw features as an argument, and prints out the final classification. Clone or download this repository: edgeimpulse/example-standalone-inferencing.
Head over to your Edge Impulse project, and go to Deployment. From here you can create the full library which contains the impulse and all external required libraries. Select C++ library, and click Build to create the library.
Download the .zip
file and place the contents in the 'example-standalone-inferencing' folder (which you downloaded above). Your final folder structure should look like this:
To get inference to work, we need to add raw data from one of our samples to main.cpp. Head back to the studio and click on Live classification. Then load a validation sample, and click on a row under 'Detailed result'. Make a note of the classification results, as we want our local application to produce the same numbers from inference.
To verify that the local application classifies the same, we need the raw features for this timestamp. To do so click on the 'Copy to clipboard' button next to 'Raw features'. This will copy the raw values from this validation file, before any signal processing or inferencing happened.
Open source/main.cpp in an editor of your choice. Find the following line:
Paste in your raw sample data where you see // Copy raw features here
:
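For example, the pasted array might look like this (illustrative values only, truncated):

```cpp
static const float features[] = {
    -19.8800, -0.6900, 8.2300,
    -17.6600, -1.1300, 5.9700,
    // ... paste the rest of your raw features here
};
```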
Note: your raw features will be much longer than the placeholder values shown; paste the full list of values you copied from the studio.
Save and exit.
Open a terminal or command prompt, and build the project:
macOS, Linux
Windows
This will first build the inferencing engine, and then build the complete application. After building succeeded you should have a binary in the build/ directory.
Then invoke the local application by calling the binary name:
macOS, Linux
Windows
This will run the signal processing pipeline using the values you provided in the features[]
buffer and then give you the classification output:
Which matches the values we just saw in the studio. You now have your impulse running locally!
The provided methods package all your signal processing blocks, configuration and learning blocks up into a single package. You can include this package in your own application to run the impulse locally.
Impulses can be deployed as a C++ library. The library does not have any external dependencies and can be built with any C++11 compiler.
We have end-to-end guides for:
We also have tutorials for:
Did you know?
The input to the run_classifier
function is always a signal_t
structure with raw sensor values. This structure has two properties:
total_length
- the total number of values. This should be equal to EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE
(from model_metadata.h
). E.g. if you have 3 sensor axes, 100Hz sensor data, and 2 seconds of data this should be 600.
get_data
- a function that retrieves slices of data required by the DSP process. This is used in some DSP algorithms (like all audio-based ones) to page in the required data, and thus saves memory. Using this function you can store (f.e.) the raw data in flash or external RAM, and page it in when required.
For example, this is how you would page in data from flash:
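A sketch; read_from_flash() below is a hypothetical helper you would implement for your own board:

```cpp
#include "edge-impulse-sdk/classifier/ei_run_classifier.h"

// Hypothetical helper that copies `length` float values, starting at float
// index `offset`, from external flash into `out_ptr`.
extern int read_from_flash(size_t offset, size_t length, float *out_ptr);

static int flash_get_data(size_t offset, size_t length, float *out_ptr) {
    return read_from_flash(offset, length, out_ptr);
}

void classify_from_flash() {
    signal_t signal;
    signal.total_length = EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE;
    signal.get_data = &flash_get_data;

    ei_impulse_result_t result = { 0 };
    run_classifier(&signal, &result, false);
}
```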
Signals are always a flat buffer, so if you have multiple sensor data you'll need to flatten it. E.g. for sensor data with three axes:
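A sketch of the flattening step (the helper name is illustrative):

```cpp
#include <cstddef>

// Interleave x/y/z readings into one flat buffer: [ x0, y0, z0, x1, y1, z1, ... ]
void flatten_accelerometer(const float *x, const float *y, const float *z,
                           size_t samples_per_axis, float *features) {
    for (size_t ix = 0; ix < samples_per_axis; ix++) {
        features[ix * 3 + 0] = x[ix];
        features[ix * 3 + 1] = y[ix];
        features[ix * 3 + 2] = z[ix];
    }
}
```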
The signal for image data is also flattened, starting with row 1, then row 2 etc. And every pixel is a single value in HEX format (RRGGBB). E.g.:
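A sketch of the packing step (the helper name is illustrative):

```cpp
#include <cstdint>
#include <cstddef>

// Pack 8-bit R/G/B components into one 0xRRGGBB value per pixel,
// row by row, into the flat signal buffer.
void pack_rgb(const uint8_t *r, const uint8_t *g, const uint8_t *b,
              size_t pixel_count, float *features) {
    for (size_t ix = 0; ix < pixel_count; ix++) {
        uint32_t packed = ((uint32_t)r[ix] << 16) | ((uint32_t)g[ix] << 8) | b[ix];
        features[ix] = (float)packed;
    }
}
```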
If you're doing image classification and have a quantized model, the data is automatically quantized when reading the data from the signal to save memory. This is automatically enabled when you call run_impulse
. To control the size of the buffer that's used to read from the signal in this case you can set the EI_DSP_IMAGE_BUFFER_STATIC_SIZE
macro (which also allocates the buffer statically).
To statically allocate the neural network model, set this macro:
EI_CLASSIFIER_ALLOCATION_STATIC=1
Additionally we support full static allocation for quantized image models. To do so set this macro:
EI_DSP_IMAGE_BUFFER_STATIC_SIZE=1024
Static allocation is not supported for other DSP blocks at the moment.
Impulses can be deployed as a C++ library. This packages all your signal processing blocks, configuration and learning blocks up into a single package. You can include this package in your own application to run the impulse locally. In this tutorial you'll export an impulse, and build an Mbed OS application to classify sensor data.
Knowledge required
This tutorial assumes that you're familiar with Mbed OS, and have installed Mbed CLI. If you're unfamiliar with these tools you can build binaries directly for your development board from the Deployment page in the studio.
Note: Are you looking for an example that has all sensors included? The Edge Impulse firmware for the ST IoT Discovery Kit has that.
Make sure you followed one of the tutorials and have a trained impulse. Also install the following software:
Mbed CLI - make sure mbed is in your PATH.
GNU ARM Embedded Toolchain - make sure arm-none-eabi-gcc is in your PATH.
We created an example repository which contains a small Mbed OS application, which takes the raw features as an argument, and prints out the final classification. Import this repository using Mbed CLI:
Head over to your Edge Impulse project, and go to Deployment. From here you can create the full library which contains the impulse and all external required libraries. Select C++ library and click Build to create the library.
Download the .zip
file and place the contents in the 'example-standalone-inferencing-mbed' folder (which you downloaded above). Your final folder structure should look like this:
With the project ready it's time to verify that the application works. Head back to the studio and click on Live classification. Then load a validation sample, and click on a row under 'Detailed result'.
To verify that the local application classifies the same, we need the raw features for this timestamp. To do so click on the 'Copy to clipboard' button next to 'Raw features'. This will copy the raw values from this validation file, before any signal processing or inferencing happened.
Open main.cpp
and paste the raw features inside the static const float features[]
definition, for example:
Then build and flash the application to your development board with Mbed CLI:
This will run the signal processing pipeline, and then classify the output:
Which matches the values we just saw in the studio. You now have your impulse running on your Mbed-enabled development board!
In a real application, you would want to make the features[]
buffer non-const. You would fill it with samples from your sensor(s) and call run_classifier()
or run_classifier_continuous()
. See Continuous audio sampling for more information.
Even though the impulse is deployed as a C++ application, you can link to it from C applications. This is done by compiling the impulse as a shared library with the EIDSP_SIGNAL_C_FN_POINTER=1
and EI_C_LINKAGE=1
macros, then link to it from a C application. The run_classifier
function can then be invoked from your application. An end-to-end application that demonstrates this and can be used with this tutorial is available in the edgeimpulse/example-standalone-inferencing-c repository on GitHub.
On Linux, you can also run your impulse with our C++, Node.js, Python or Go SDKs.
These tutorials show you how to run your impulse, but you'll need to hook in your sensor data yourself. We have a number of examples on how to do that in the documentation, or you can use the full firmware for any of the fully supported development boards as a starting point - they have everything (including sensor integration) already hooked up. Or keep reading for documentation about the sensor format and inputs that we expect.
You can build binaries for supported development boards straight from the studio. These will include your full impulse.
If you have your data already in RAM you can use the signal_from_buffer function to construct the signal:
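A sketch (function and macro names as used elsewhere in this document; the surrounding function is illustrative):

```cpp
#include "edge-impulse-sdk/classifier/ei_run_classifier.h"

static float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE];

void classify_current_frame() {
    // features[] already holds one full frame of raw sensor data in RAM
    signal_t signal;
    int err = numpy::signal_from_buffer(features, EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE, &signal);
    if (err != 0) {
        return;
    }

    ei_impulse_result_t result = { 0 };
    run_classifier(&signal, &result, false);
}
```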
The get_data
function expects floats to be returned, but the SDK also includes helper functions you can use if your own buffers are int8_t or int16_t (useful to save memory).
We do have an end-to-end example on constructing a signal from a frame buffer in RGB565 format, which is easily adaptable to other image formats; see the example-signal-from-rgb565-frame-buffer repository on GitHub.
To see the output of the impulse, connect to the development board over a serial port on baud rate 115,200 and reset the board (e.g. by pressing the reset button). You can do this with your favourite serial monitor or with the Edge Impulse CLI:
A demonstration on how to plug sensor values into the classifier can be found in the examples above.