🤗 Hugging Face offers a suite of tools that assist with various AI applications. Most notably, they provide a hub for people to share their pre-trained models. In this tutorial, we will demonstrate how to download a simple ResNet model from the Hugging Face hub, profile it, and convert it to a C++ library for use in your edge application. This particular model was trained to identify species of bean plants using the beans dataset.
To learn more about using the Python SDK, please see: Edge Impulse Python SDK Overview
You will need to obtain an API key from an Edge Impulse project. Log into edgeimpulse.com and create a new project. Open the project, navigate to Dashboard and click on the Keys tab to view your API keys. Double-click on the API key to highlight it, right-click, and select Copy.
Note that you do not actually need to use the project in the Edge Impulse Studio. We just need the API Key.
Paste that API key string in the ei.API_KEY value in the following cell:
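For example (a minimal sketch; replace the placeholder with your own key):

```python
import edgeimpulse as ei

# Paste the API key copied from your project's Dashboard > Keys tab
ei.API_KEY = "ei_0123..."  # placeholder value
```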
To download a model from the Hugging Face hub, we first need to find a model. Head to huggingface.co/models. On the left side, under the Tasks tab, click Image Classification to filter the models, and under the Libraries tab, filter by ONNX (as the Edge Impulse Python SDK easily accepts ONNX models). You should see the resnet-tiny-beans model trained by user fxmarty.
Click on the resnet-tiny-beans entry (or follow this link) to read about the model and view the files. If you click on the Files tab, you can see all of the files available in this particular model.
Set the name of the repo (username/repo-name) and the file we want to download.
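A minimal sketch using the huggingface_hub package (the ONNX filename below is an assumption; use whichever file is listed on the repo's Files tab):

```python
from huggingface_hub import hf_hub_download

hf_repo = "fxmarty/resnet-tiny-beans"  # username/repo-name
hf_file = "model.onnx"                 # assumed filename; check the Files tab

# Download the file from the Hugging Face hub and get its local path
model_path = hf_hub_download(repo_id=hf_repo, filename=hf_file)
print(model_path)
```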
To start, we need to list the possible target devices we can use for profiling. We need to pick from this list.
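Assuming your API key has been set as shown above, a sketch of listing the supported profiling targets:

```python
# List the target devices available for profiling
profile_devices = ei.model.list_profile_devices()
print(profile_devices)
```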
You should see a list printed such as:
A common option is the cortex-m4f-80mhz, as this is a relatively low-power microcontroller family. From there, we can use the Edge Impulse Python SDK to generate a profile for your model to ensure it fits on your target hardware and meets your timing requirements.
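For example, a sketch that profiles the downloaded ONNX model for that target (assuming model_path from the download step):

```python
# Estimate RAM, ROM, and inference time for the chosen target
profile = ei.model.profile(model=model_path, device="cortex-m4f-80mhz")
profile.summary()
```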
Once you are happy with the performance of the model, you can deploy it to a number of possible hardware targets. To see the available hardware targets, run the following:
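A minimal sketch:

```python
# List the deployment targets available for this project
deploy_targets = ei.model.list_deployment_targets()
print(deploy_targets)
```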
You should see a list printed such as:
The most generic target is to download a .zip file containing a C++ library with the inference runtime and your trained model, so we choose 'zip' from the above list. We also need to tell Edge Impulse how we are planning to use the model. In this case, we want to perform classification, so we set the output type to Classification.
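A sketch of the deployment call (the output filename is illustrative; depending on the SDK version, .deploy() returns raw bytes or a BytesIO object):

```python
# Build a C++ library (.zip) for a classification model
deploy_bytes = ei.model.deploy(
    model=model_path,
    model_output_type=ei.model.output_type.Classification(),
    deploy_target="zip",
)

# Write the returned bytes to disk
data = deploy_bytes.getvalue() if hasattr(deploy_bytes, "getvalue") else deploy_bytes
with open("my_model_cpp.zip", "wb") as f:
    f.write(data)
```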
Note that instead of writing the raw bytes to a file, you can also specify an output_directory argument in the .deploy() function. Your deployment file(s) will be downloaded to that directory.
Your model C++ library should be downloaded as the file my_model_cpp.zip in the same directory as this notebook. You are now ready to use your C++ model in your embedded and edge device application! To use the C++ model for local inference, see our documentation here.
While the Edge Impulse Studio is a great interface for guiding you through the process of collecting data and training a model, the Python SDK allows you to programmatically Bring Your Own Model (BYOM), developed and trained on any platform.
With the following tutorials, you will learn how to use the Edge Impulse Python SDK with a number of other machine-learning frameworks and platforms:
In this ML & data engineering section, you will discover useful techniques to train your models, generate synthetic datasets, or perform advanced feature extraction:
In this tutorial, we will explore how to label image data using GPT-4o, a powerful language model developed by OpenAI. GPT-4o is capable of generating accurate and meaningful labels for images, making it a valuable tool for image classification tasks. By leveraging the capabilities of GPT-4o, we can automate the process of labeling image data, saving time and effort in data preprocessing.
We have packaged this innovative method to distill LLM knowledge into a pre-built Transformation block (available for all Enterprise plans).
This pre-built transformation block can be found under the Data sources tab in the Data acquisition view.
The block takes all your unlabeled image files and asks GPT-4o to label them based on your prompt - and we automatically add the reasoning as metadata to your items!
Your prompt should return a single label, e.g.
The GPT-4o model processes images and assigns labels based on the content, filtering out any images that do not meet the quality criteria.
Navigate to the Data acquisition page and add images to your project's dataset. In the video tutorial above, we show how to collect a video recorded directly from a phone, upload it to Edge Impulse and split the video into individual frames.
In the Data sources tab, add the "Label image data using GPT-4o" block:
OpenAI API key: Add your OpenAI API key. This value will be stored as a secret, and won't be shown again.
Prompt: Your prompt should return a single label. For example:
Disable samples w/ label: If a certain label is output, disable the data item - these are excluded from training. Multiple labels are accepted; separate them with a comma.
Max. no. of samples to label: Number of samples to label.
Concurrency: Number of samples to label in parallel.
Auto-convert videos: If set, all videos are automatically split into individual images before labeling.
To edit your configuration, you need to update the JSON-like steps of your block:
Then, run the block to automatically label the frames.
And here is an example of the returned logs:
Use the labeled data to train a machine learning model. See the end-to-end tutorial Adding sight to your sensors.
In the video tutorial, we deployed the trained model to an MCU-based edge device - the Arduino Nicla Vision.
The small model we tested this on performed exceptionally well, identifying toys in various scenes quickly and accurately. By distilling knowledge from the large LLM, we created a specialized, efficient model suitable for edge deployment.
The latest multimodal LLMs are incredibly powerful but too large for many practical applications. At Edge Impulse, we enable the transfer of knowledge from these large models to smaller, specialized models that run efficiently on edge devices.
Our "Label image data using GPT-4o" block is available for enterprise customers, allowing you to experiment with this technology.
For further assistance, visit our forum.
Blog post: Label image data using GPT-4o
TensorFlow is an open source library for training machine learning models. Keras is an open source Python library that makes creating neural networks in TensorFlow much easier. We use these two libraries together to very quickly train a model to identify handwritten digits. From there, we use the Edge Impulse Python SDK library to profile the model to see how inference will perform on a target edge device. Then, we use the SDK again to convert our trained model to a C++ library that can be deployed to an edge hardware platform, such as a microcontroller.
Follow the code below to see how to train a simple machine learning model and deploy it to a C++ library using Edge Impulse.
To learn more about using the Python SDK, please see Edge Impulse Python SDK Overview.
You will need to obtain an API key from an Edge Impulse project. Log into edgeimpulse.com and create a new project. Open the project, navigate to Dashboard and click on the Keys tab to view your API keys. Double-click on the API key to highlight it, right-click, and select Copy.
Note that you do not actually need to use the project in the Edge Impulse Studio. We just need the API Key.
Paste that API key string in the ei.API_KEY value in the following cell:
We want to create a classifier that can uniquely identify handwritten digits. To start, we will use TensorFlow and Keras to train a very simple convolutional neural network (CNN) on the classic MNIST dataset, which consists of handwritten digits from 0 to 9.
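A minimal sketch of such a model (layer sizes and epoch count are illustrative choices, not necessarily the exact architecture used in the original notebook):

```python
import tensorflow as tf
from tensorflow import keras

# Load and normalize MNIST
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0
x_test = x_test[..., None] / 255.0

# Very simple CNN: one conv layer, pooling, then a softmax classifier
model = keras.Sequential([
    keras.layers.Conv2D(8, kernel_size=3, activation="relu", input_shape=(28, 28, 1)),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, validation_data=(x_test, y_test))
```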
To start, we need to list the possible target devices we can use for profiling. We need to pick from this list.
You should see a list printed such as:
A common option is the cortex-m4f-80mhz, as this is a relatively low-power microcontroller family. From there, we can use the Edge Impulse Python SDK to generate a profile for your model to ensure it fits on your target hardware and meets your timing requirements.
Once you are happy with the performance of the model, you can deploy it to a number of possible hardware targets. To see the available hardware targets, run the following:
You should see a list printed such as:
The most generic target is to download a .zip file that holds a C++ library containing the inference runtime and your trained model, so we choose 'zip' from the above list. To do that, we first need to create a Classification object which contains our label strings (and other optional information about the model). These strings will be added to the C++ library metadata so you can access them in your edge application.
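A sketch, assuming model is the Keras model trained above:

```python
# The ten digit labels, in order, become part of the C++ library metadata
labels = [str(i) for i in range(10)]
model_output_type = ei.model.output_type.Classification(labels=labels)

deploy_bytes = ei.model.deploy(
    model=model,
    model_output_type=model_output_type,
    deploy_target="zip",
)
```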
Note that instead of writing the raw bytes to a file, you can also specify an output_directory argument in the .deploy() function. Your deployment file(s) will be downloaded to that directory.
Important! The deployment targets list will change depending on the values provided for model, model_output_type, and model_input_type in the next part. For example, you will not see openmv listed once you upload a model (e.g. using .profile() or .deploy()) if model_input_type is not set to ei.model.input_type.ImageInput(). If you attempt to deploy to an unavailable target, you will receive the error Could not deploy: deploy_target: .... If model_input_type is not provided, it will default to OtherInput. See this page for more information about input types.
Your model C++ library should be downloaded as the file my_model_cpp.zip in the same directory as this notebook. You are now ready to use your C++ model in your embedded and edge device application! To use the C++ model for local inference, see our documentation here.
Amazon SageMaker Studio is an integrated development environment (IDE) that provides a single web-based visual interface where you can access purpose-built tools to perform all machine learning (ML) development steps, from preparing data to building, training, and deploying your ML models, improving data science team productivity by up to 10x. You can quickly upload data, create new notebooks, train and tune models, move back and forth between steps to adjust experiments, collaborate seamlessly within your organization, and deploy models to production without leaving SageMaker Studio.
To learn more about using the Python SDK, please see: Edge Impulse Python SDK Overview.
This guide has been built from the AWS reference project Introduction to SageMaker TensorFlow - Image Classification; please have a look at this AWS documentation page.
Below are the changes made to the original training script and configuration:
The Python 3 (Data Science 3.0) kernel was used.
The dataset has been changed to classify images as car vs unknown. You can download the dataset from this Edge Impulse public project and store it in your S3 bucket.
The dataset has been imported in the Edge Impulse S3 bucket configured when creating the SageMaker Studio domain. Make sure to adapt to your path or use the AWS reference project.
The training instance used is ml.m5.large.
Install dependencies
Below is the structure of our dataset in our S3 bucket
We have used the default bucket created when configuring SageMaker Studio domain:
You can continue with the default model, or choose a different model from the list. Note that this tutorial has been tested with MobileNetV2-based models. A complete list of SageMaker pre-trained models can also be accessed at SageMaker pre-trained Models.
Optional: skip the next cell if you don't want to retrain the model, and uncomment the last line of the cell after it.
You will need to obtain an API key from an Edge Impulse project. Log into edgeimpulse.com and create a new project. Open the project, navigate to Dashboard and click on the Keys tab to view your API keys. Double-click on the API key to highlight it, right-click, and select Copy.
Note that you do not actually need to use the project in the Edge Impulse Studio. We just need the API Key.
Paste that API key string in the ei.API_KEY value in the following cell:
Voila! You now have a C++ library ready to be compiled and integrated into your embedded targets. Feel free to have a look at the Edge Impulse deployment options in the documentation to understand how you can integrate it into your embedded systems.
You can also have a look at the deployment page of your project to test your model in a web browser.
If you want to upload files directly to an Edge Impulse project, we recommend using the Studio uploader or the CLI. However, sometimes you cannot upload your samples directly, as you might need to convert the files to one of the accepted formats or modify the data prior to model training. Edge Impulse offers data augmentation for some types of projects, but you might want to create your own custom augmentation scheme. Or perhaps you want to script and automate the upload process.
The Python SDK offers a set of functions to help you move data into and out of your project. This can be extremely helpful when generating or augmenting your dataset. The following cells demonstrate some of these upload and download functions.
You can find the API documentation for the functions used in this tutorial in the Python SDK API reference.
WARNING: This notebook will add and delete data in your Edge Impulse project, so be careful! We recommend creating a throwaway project when testing this notebook.
Note that you might need to refresh the page with your Edge Impulse project to see the samples appear.
You will need to obtain an API key from an Edge Impulse project. Log into edgeimpulse.com and create a new project. Open the project, navigate to Dashboard and click on the Keys tab to view your API keys. Double-click on the API key to highlight it, right-click, and select Copy.
Note that you do not actually need to use the project in the Edge Impulse Studio. We just need the API Key.
Paste that API key string in the ei.API_KEY value in the following cell:
The following file formats are allowed: .cbor, .json, .csv, .wav, .jpg, .png, .mp4, .avi.
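A sketch of a directory upload (the exact namespace, ei.experimental.data or ei.data, depends on your SDK version; the directory name and label are assumptions):

```python
import edgeimpulse as ei

# Upload every file in dataset/ in a single call
resp = ei.experimental.data.upload_directory(
    directory="dataset",
    category="training",   # or "testing" / "split"
    label="my_label",      # set label=None to derive labels from filenames
)
print(resp)
```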
If you head to the Data acquisition page on your project, you should see images in your dataset.
You can download samples from your Edge Impulse project if you know the sample IDs. You can get sample IDs by calling the ei.data.get_sample_ids() function, which allows you to filter IDs based on filename, category, and label.
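For example, a sketch (the keyword arguments and the fields on the returned entries may differ slightly between SDK versions):

```python
# Look up sample IDs for files whose name matches a filter
infos = ei.experimental.data.get_sample_ids(filename="my_sample_01")
ids = [info.sample_id for info in infos]
print(ids)
```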
Take a look at the files in this directory. You should see the downloaded images. They should match the images in the dataset/ directory, which were the original images that we uploaded.
If you know the ID of the sample you would like to delete, you can call the delete_sample_by_id() function. You can also delete all the samples in your project by calling delete_all_samples().
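A minimal sketch (the sample ID is a placeholder, and the namespace depends on your SDK version):

```python
# Delete a single sample by its ID
ei.experimental.data.delete_sample_by_id(sample_id=12345)  # placeholder ID

# Or wipe the whole project dataset (use with care!)
# ei.experimental.data.delete_all_samples()
```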
Take a look at the data in your project. The samples that we uploaded should be gone.
Important! The annotations file must be named exactly labels.info
If you head to the Data acquisition page on your project, you should see images in your dataset along with the bounding box information.
If you head to the Data acquisition page on your project, you should see your time series data.
The raw data must be encoded in an IO object. We convert the dictionary objects to a BytesIO object, but you can also read in data from .json files.
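A sketch of wrapping one sample in a BytesIO object, using a minimal example of the data acquisition JSON format (field values are illustrative; see the ingestion format documentation for the full set of fields):

```python
import io
import json

sample = {
    "protected": {"ver": "v1", "alg": "none"},
    "signature": "0",  # the signature can simply be set to 0
    "payload": {
        "device_type": "demo",
        "interval_ms": 10,
        "sensors": [{"name": "accX", "units": "m/s2"}],
        "values": [[0.1], [0.2], [0.3]],
    },
}

# Wrap the encoded JSON so it can be uploaded like a file
data = io.BytesIO(json.dumps(sample).encode("utf-8"))
```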
If you head to the Data acquisition page on your project, you should see your time series data.
Important! NumPy arrays must be in the shape (Number of samples, number of data points, number of sensors).
If you are working with image data in NumPy, we recommend saving those images as .png or .jpg files and using upload_directory().
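For example, a sketch of time series data in the required shape (sensor names and labels are illustrative):

```python
import numpy as np

# Two samples of 3-axis accelerometer data, 100 readings each:
# shape = (number of samples, number of data points, number of sensors)
data = np.random.randn(2, 100, 3).astype(np.float32)
labels = ["wave", "idle"]            # one label per sample
sensors = ["accX", "accY", "accZ"]   # one name per sensor

print(data.shape)  # (2, 100, 3)
# This array, along with the labels, sensor names, and sample rate, is what
# the SDK's NumPy upload helper expects; see the API reference for the call.
```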
If you head to the Data acquisition page on your project, you should see your time series data. Note that the sample names are randomly assigned, so we recommend recording the sample IDs when you upload.
Note that several other packages exist that work as drop-in replacements for pandas. You can use these replacements so long as you import them with the name pd. For example, one of:
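For instance (modin is a true drop-in replacement; other alternatives may need minor changes):

```python
import pandas as pd
# import modin.pandas as pd
```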
The first option is to upload one dataframe for each sample (non-time series)
You can also upload one dataframe for each sample (time series). As with previous examples, we'll assume that the sample rate is 10 ms.
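A sketch of one time series sample as a dataframe (column names and values are illustrative); together with a label and a 10 ms sample rate, this is what the SDK's pandas sample-upload helper expects:

```python
import pandas as pd

# One sample: each row is a reading taken every 10 ms, each column a sensor
df = pd.DataFrame({
    "accX": [0.1, 0.2, 0.3, 0.2],
    "accY": [0.0, -0.1, 0.0, 0.1],
    "accZ": [9.8, 9.7, 9.8, 9.9],
})
print(df)
```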
You can upload non-time series data where each sample is a row in the dataframe. Note that you need to provide labels in the rows.
A "wide" dataframe is one where each column represents a value in the time series data, and the rows become individual samples. Note that you need to provide labels in the rows.
A DataFrame can also be divided into "groups" so you can upload multidimensional time series data.
Weights & Biases is an online framework for helping manage machine learning training, data versioning, and experiments. When running experiments for edge-focused ML projects, it can be helpful to see the required memory (RAM and ROM) along with estimated inference times of your model for your target hardware. By viewing these metrics, you can quickly gauge if your model will fit onto your target device!
Follow the code below to see how to train a simple machine learning model with different hyperparameters and log those values to the Weights & Biases dashboard.
To learn more about using the Python SDK, please see: Edge Impulse Python SDK Overview.
You will need to obtain an API key from an Edge Impulse project. Log into edgeimpulse.com and create a new project. Open the project, navigate to Dashboard and click on the Keys tab to view your API keys. Double-click on the API key to highlight it, right-click, and select Copy.
Note that you do not actually need to use the project in the Edge Impulse Studio. We just need the API Key.
Paste that API key string in the ei.API_KEY value in the following cell:
We want to vary the hyperparameters in our model and see how it affects the accuracy and predicted RAM, ROM, and inference time on our target platform. To do that, we construct a function that builds a simple model using Keras, trains the model, and computes the accuracy and loss from our holdout test set. We then use the Edge Impulse Python SDK to generate a profile of our model for our target hardware. We log the hyperparameter (number of nodes in the hidden layer), test loss, test accuracy, estimated RAM, estimated ROM, and estimated inference time (ms) to our Weights and Biases console.
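A condensed sketch of such a function (the project name, architecture, and epoch count are illustrative; the exact attribute names holding the estimated RAM, ROM, and latency in the profile response depend on the SDK version, so here they are only printed via profile.summary() rather than logged):

```python
import tensorflow as tf
import wandb
import edgeimpulse as ei

def train_and_profile(num_nodes, x_train, y_train, x_test, y_test):
    run = wandb.init(project="nodes-sweep", config={"num_nodes": num_nodes})

    # Simple dense network with one hidden layer of num_nodes units
    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(num_nodes, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=5, verbose=0)
    test_loss, test_acc = model.evaluate(x_test, y_test, verbose=0)

    # Estimate RAM, ROM, and inference time on the target hardware
    profile = ei.model.profile(model=model, device="cortex-m4f-80mhz")
    profile.summary()

    wandb.log({"test_loss": test_loss, "test_accuracy": test_acc})
    run.finish()
```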
Now, it's time to run the experiment and log the results in Weights and Biases. Simply call our function and provide a new hyperparameter value for the number of nodes.
Once you are happy with the performance of your model, you can then deploy it to your target hardware. We will assume that 32 nodes in our hidden layer provided the best trade-off of RAM, flash, inference time, and accuracy for our needs. To start, we will retrain the model:
Next, we should evaluate the model on our holdout test set.
From there, we can see the available hardware targets for deployment:
You should see a list printed such as:
The most generic target is the .zip file that holds a C++ library containing our trained model and inference runtime. To pass our labels to the C++ library, we create a Classification object, which contains our label strings.
Note that instead of writing the raw bytes to a file, you can also specify an output_directory argument in the .deploy() function. Your deployment file(s) will be downloaded to that directory.
We have put together the following tutorials to help you get started with synthetic dataset generation:
Synthetic datasets are a collection of data artificially generated rather than being collected from real-world observations or measurements. They are created using algorithms, simulations, or mathematical models to mimic the characteristics and patterns of real data. Synthetic datasets are a valuable tool to generate data for experimentation, testing, and development when obtaining real data is challenging, costly, or undesirable.
You might want to generate synthetic datasets for several reasons:
Cost Efficiency: Creating synthetic data can be more cost-effective and efficient than collecting large volumes of real data, especially in resource-constrained environments.
Data Augmentation: Synthetic datasets allow users to augment their real-world data with variations, which can improve model robustness and performance.
Data Diversity: Synthetic datasets enable the inclusion of uncommon or rare scenarios, enriching model training with a wider range of potential inputs.
Privacy and Security: When dealing with sensitive data, synthetic datasets provide a way to train models without exposing real information, enhancing privacy and security.
You can upload all files in a directory using the Python SDK. Note that you can set the category, label, and metadata for all files with a single call. If you want to use a different label for each file, set label=None in the function call and name your files as <label>.<name>.<ext>. For example, wave.01.csv will have the label wave when uploaded.
For object detection, you can put bounding box information (following the Edge Impulse bounding box label format) in a file named labels.info in that same directory.
The Edge Impulse ingestion service accepts CSV files, which we can use to upload raw data. Note that if you configure a CSV template using the CSV Wizard, then the expected format of the CSV file might change. If you do not configure a CSV template, the ingestion service expects CSV data to be in a particular format.
Another way to upload data is to encode it in JSON format. See the data acquisition format documentation for more information on acceptable key/value pairs. Note that at this time, the signature value can be set to 0.
NumPy is a powerful Python library for working with large arrays and matrices. You can upload NumPy arrays directly into your Edge Impulse project. Note that the arrays are required to be in a particular format, and must be uploaded with required metadata (such as a list of labels and the sample rate).
pandas is a popular Python library for performing data manipulation and analysis. The Edge Impulse library supports a number of ways to upload dataframes. We will go over each format.
To use Weights and Biases, you will need to create an account on wandb.ai and call the wandb.login() function. This will prompt you to log in to your account. Your credentials should be stored, which allows you to use the wandb package in your Python scripts.
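A minimal sketch:

```python
import wandb

# Prompts for your W&B API key on first use and caches the credentials
wandb.login()
```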
We want to create a classifier that can uniquely identify handwritten digits. To start, we will use TensorFlow and Keras to train a very simple convolutional neural network (CNN) on the classic MNIST dataset, which consists of handwritten digits from 0 to 9.
Head to wandb.ai and log in (if you have not already done so). Under My projects on the left, click on the nodes-sweep project. You can visualize the results of your experiments with the various charts that Weights & Biases offers. For example, a parallel coordinates plot lets you quickly visualize the different hyperparameters and metrics (including our new edge profile metrics).
If you would like to deploy your model to your target hardware, the Python SDK can help you with that, too. See our documentation for more details.
Your model C++ library should be downloaded as the file my_model_cpp.zip in the same directory as this notebook. You are now ready to use your C++ model in your embedded and edge device application! To use the C++ model for local inference, see our documentation.
This example comes from the Edge Impulse Linux Inferencing Python SDK, which has been slightly modified to upload the raw data back to Edge Impulse based on the inference results.
To run the example:
Clone this repository:
Install the dependencies:
Grab the API key of the project to which you want to upload the raw data from the inference results:
Paste the new key into the EI_API_KEY variable in the audio-classify-export.py file. Alternatively, load it from your environment variable:
Download your modelfile.eim:
Run the script:
Here are the arguments you can set:
modelfile.eim - path to the model.eim file
yes,no - labels to upload, separated by commas, no spaces
0.6 - low confidence threshold
0.8 - high confidence threshold
<audio_device_ID> - audio device ID (optional)
In a keyword spotting model, it can give the following results:
This notebook explores how we can use generative AI to create datasets which don't exist yet. This can be a good starting point for your project if you have not collected or cannot collect the data required. It is important to note that the limitations of generative AI still apply here: biases can be introduced through your prompts, results can include "hallucinations", and quality control is important.
This example uses the OpenAI API to call the Dall-E image generation tool. It explores both generation and variation, but there are other tools, such as editing, which could also be useful for augmenting an existing dataset.
There is also a video version of this tutorial:
We have wrapped this example into a Transformation Block (Enterprise Feature) to make it even easier to generate images and upload them to your organization. See: https://github.com/edgeimpulse/example-transform-Dall-E-images
Python 3
Pip package manager
Jupyter Notebook: https://jupyter.org/install
pip packages (install with pip install packagename):
openai https://pypi.org/project/openai/
First off, you will need to set up an Edge Impulse account and create your first project.
You will also need to create an API Key for OpenAI: https://platform.openai.com/docs/api-reference/authentication
The API takes in a prompt, a number of images, and a size.
The API also has a variations call which takes in an existing image and creates variations of it. This could also be used to modify existing images.
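A sketch of both calls using the current openai client (older notebooks use openai.Image.create(); the prompt, file path, and sizes are illustrative):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Generate new images from a text prompt
gen = client.images.generate(
    prompt="a photo of a toy robot on a wooden floor",
    n=2,
    size="256x256",
)
print([img.url for img in gen.data])

# Create variations of an existing image to grow the dataset
var = client.images.create_variation(
    image=open("dataset/robot.01.png", "rb"),
    n=2,
    size="256x256",
)
print([img.url for img in var.data])
```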
Here we iterate through a number of images and variations to generate a dataset based on the prompts/labels given.
These files can then be uploaded to a project with these commands (run in a separate terminal window):
(run edge-impulse-uploader --clean if you have used the CLI before to reset the target project)
Now you can use your images to create an image classification model on Edge Impulse.
Why not try some other OpenAI calls? The 'edit' endpoint could be used to take an existing image and place it in different environments, or add different humans, to increase the variety of your dataset. https://platform.openai.com/docs/guides/images/usage
Python 3
Pip package manager
Jupyter Notebook: https://jupyter.org/install
pip packages (install with pip install packagename):
pydub https://pypi.org/project/pydub/
google-cloud-texttospeech https://cloud.google.com/python/docs/reference/texttospeech/latest
requests https://pypi.org/project/requests/
First off, you will need to set up an Edge Impulse account and create your first project. You will also need a Google Cloud account with the Text-to-Speech API enabled: https://cloud.google.com/text-to-speech. The first million characters generated each month are free (WaveNet voices), which should be plenty for most cases, as you'll only need to generate your dataset once. From Google, you will need to download a credentials JSON file and set it to the correct environment variable on your system to allow the Python API to work (https://developers.google.com/workspace/guides/create-credentials#service-account).
First off we need to set our desired keywords and labels:
Then we need to set up the parameters for our speech dataset; all possible combinations will be iterated through:
languages - Choose the text to speech voice languages to use (https://cloud.google.com/text-to-speech/docs/voices)
pitches - Which voice pitches to apply
genders - Which SSML genders to apply
speakingRates - Which speaking speeds to apply
Then provide some other key parameters:
out_length - How long each output sample should be
count - Maximum number of samples to output (if the number of combinations of languages, pitches, etc. is higher than this, the output is restricted)
voice-dir - Where to store the clean samples before noise is added
noise-url - Which noise file to download and apply to your samples
output-folder - The final output location of the noised samples
num-copies - How many different noisy versions of each sample to create
max-noise-level - Maximum noise level to add, in dB
Then we need to check all the output folders are ready
And download the background noise file
Then we can generate a list of all possible parameter combinations based on the input earlier. If you have set num_copies to be smaller than the number of combinations, then these options will be reduced:
Finally, we iterate through all the options generated, call the Google TTS API to generate the desired sample, and apply noise to it, saving locally with metadata:
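A sketch of a single TTS call with the google-cloud-texttospeech client (the keyword, voice parameters, and output path are illustrative; the notebook loops over every combination generated above):

```python
from google.cloud import texttospeech

client = texttospeech.TextToSpeechClient()

# Synthesize one keyword with one combination of voice parameters
response = client.synthesize_speech(
    input=texttospeech.SynthesisInput(text="hello"),
    voice=texttospeech.VoiceSelectionParams(
        language_code="en-US",
        ssml_gender=texttospeech.SsmlVoiceGender.FEMALE,
    ),
    audio_config=texttospeech.AudioConfig(
        audio_encoding=texttospeech.AudioEncoding.LINEAR16,
        pitch=0.0,          # from the pitches list
        speaking_rate=1.0,  # from the speakingRates list
    ),
)

# Save the clean sample before noise is added
with open("out-wav/hello.en-US.wav", "wb") as f:
    f.write(response.audio_content)
```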
The files in ./out-noisy can be uploaded easily using the Edge Impulse CLI tool:
Now you can use your keywords to create a robust keyword detection model in Edge Impulse Studio!
Make use of our pre-built keyword dataset to add noise and 'unknown' words to your model: Keyword Spotting Dataset
Try out both classification models and the transfer learning keyword spotting model to see which works best for your case
This notebook takes you through a basic example of using the physics simulation tool PyBullet to generate an accelerometer dataset representing dropping the Nordic Thingy:53 devkit from different heights. This dataset can be used to train a regression model to predict drop height.
This idea could be used for a wide range of simulatable environments - for example, generating accelerometer datasets for pose estimation or fall detection. The same concept could be applied in an FMEA application for generating strain datasets for structural monitoring.
There is also a video version of this tutorial:
Python 3
Pip package manager
Jupyter Notebook: https://jupyter.org/install
The dependencies can be installed with:
We need to load in a Unified Robot Description Format (URDF) file describing an object with the dimensions and weight of a Nordic Thingy:53. In this case, measuring our device, it is 64x60x23.5 mm and weighs 60 g. The shape is given by a .obj 3D model file.
To generate the required data we will be running PyBullet in headless "DIRECT" mode so we can iterate quickly over the parameter field. If you run the Python file below, you can see how PyBullet simulates the object dropping onto a plane.
First off we need to set up a pybullet physics simulation environment. We load in our object file and a plane for it to drop onto. The plane's dynamics can be adjusted to better represent the real world (in this case we're dropping onto carpet)
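A minimal sketch of that setup (the URDF filename, drop height, and dynamics values are assumptions):

```python
import pybullet as p
import pybullet_data

# Headless physics simulation: no GUI, so we can iterate quickly
p.connect(p.DIRECT)
p.setAdditionalSearchPath(pybullet_data.getDataPath())
p.setGravity(0, 0, -9.81)

# Load the ground plane and the Thingy:53 object description
plane_id = p.loadURDF("plane.urdf")
thingy_id = p.loadURDF("thingy53.urdf", basePosition=[0, 0, 0.5])

# Tweak the plane dynamics to better represent the real surface
p.changeDynamics(plane_id, -1, restitution=0.5, lateralFriction=0.8)
```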
We also need to define the output folder for our simulated accelerometer files
And define the drop parameters
We also need to define the characteristics of the IMU on the real device we are trying to simulate. In this case the Nordic Thingy:53 has a Bosch BMI270 IMU (https://www.bosch-sensortec.com/products/motion-sensors/imus/bmi270/) which is set to a range of +-2g with a resolution of 0.06g. These parameters will be used to restrict the raw acceleration output:
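For example, a sketch of restricting the raw acceleration using those parameters:

```python
import numpy as np

# BMI270 configured for +/-2 g with a 0.06 g step: clip and quantize the
# simulated acceleration so it resembles the real sensor output
ACC_RANGE_G = 2.0
ACC_RES_G = 0.06

def quantize_accel(acc_g):
    acc_g = np.clip(acc_g, -ACC_RANGE_G, ACC_RANGE_G)
    return np.round(acc_g / ACC_RES_G) * ACC_RES_G

print(quantize_accel(np.array([0.01, 1.234, -5.0])))  # [ 0.    1.26 -1.98]
```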
Finally we are going to give the object and plane restitution properties to allow for some bounce. In this case I dropped the real Thingy:53 onto a hardwood table. You can use p.changeDynamics to introduce other factors such as damping and friction.
Here we iterate over a range of heights, randomly changing its start orientation for i number of simulations per height. The acceleration is calculated relative to the orientation of the Thingy:53 object to represent its onboard accelerometer.
Finally we save the metadata file to the output folder. This can be used to tell the edge-impulse-uploader CLI tool the floating point labels for each file.
These files can then be uploaded to a project with these commands (run in a separate terminal window):
(run edge-impulse-uploader --clean if you have used the CLI before to reset the target project)
Now you can use your dataset to train a drop height detection regression model in Edge Impulse Studio!
See if you can edit this project to simulate throwing the object up in the air to predict the maximum height, or add in your own custom object. You could also try to better model the real environment you're dropping the object in, adding air resistance, friction, damping, and material properties for your surface.