While the Edge Impulse Studio is a great interface for guiding you through the process of collecting data and training a model, the edgeimpulse Python SDK allows you to programmatically Bring Your Own Model (BYOM), developed and trained on any platform. See here for the Python SDK API reference documentation.
With the following tutorials, you will learn how to use the Edge Impulse Python SDK with a number of other machine-learning frameworks and platforms:
In this ML & data engineering section, you will discover useful techniques to train your models, generate synthetic datasets, or perform advanced feature extraction:
The following tutorials detail how to work with synthetic datasets in Edge Impulse:
Learn about how to integrate synthetic data models into your Edge Impulse project with the following guide:
Synthetic datasets are collections of data generated artificially rather than collected from real-world observations or measurements. They are created using algorithms, simulations, or mathematical models to mimic the characteristics and patterns of real data. Synthetic datasets are a valuable tool for generating data for experimentation, testing, and development when obtaining real data is challenging, costly, or undesirable.
You might want to generate synthetic datasets for several reasons:
Cost Efficiency: Creating synthetic data can be more cost-effective and efficient than collecting large volumes of real data, especially in resource-constrained environments.
Data Augmentation: Synthetic datasets allow users to augment their real-world data with variations, which can improve model robustness and performance.
Data Diversity: Synthetic datasets enable the inclusion of uncommon or rare scenarios, enriching model training with a wider range of potential inputs.
Privacy and Security: When dealing with sensitive data, synthetic datasets provide a way to train models without exposing real information, enhancing privacy and security.
You can generate synthetic data directly from Edge Impulse using the Synthetic Data tab in the Data acquisition view. This tab provides a user-friendly interface to generate synthetic data for your projects. You can create synthetic datasets using a variety of tools and models.
We have put together the following tutorials to help you get started with synthetic dataset generation:
DALL-E Image Generation Block: Generate image datasets using the DALL·E model.
Whisper Keyword Spotting Generation Block: Generate keyword-spotting datasets using the Whisper model. Ideal for keyword spotting and speech recognition applications.
Eleven Labs Sound Generation Block: Generate sound datasets using the Eleven Labs model. Ideal for generating realistic sound effects for various applications.
Note that you will need an API Key/Access Token from the different providers to run the model used to generate the synthetic data.
If you want to create your own synthetic data block, see Custom synthetic data blocks.
Generate image datasets using Dall·E (Jupyter Notebook and Transformation block source code available).
Generate keyword-spotting datasets (Jupyter Notebook source code available).
Generate physics simulation datasets (Jupyter Notebook source code available).
This example comes from the Edge Impulse Linux Inferencing Python SDK that has been slightly modified to upload the raw data back to Edge Impulse based on the inference results.
To run the example:
Clone this repository:
Install the dependencies:
Grab the API key of the project to which you want to upload the raw data from the inference results:
Paste the new key into the EI_API_KEY variable in the audio-classify-export.py file. Alternatively, load it from your environment variable:
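For example, a minimal sketch of reading the key from an environment variable inside the script (the variable name mirrors EI_API_KEY above):

```python
import os

# Read the Edge Impulse project API key from the environment instead of hard-coding it
EI_API_KEY = os.environ.get("EI_API_KEY")
if not EI_API_KEY:
    raise RuntimeError("Please set the EI_API_KEY environment variable")
```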
Download your modelfile.eim:
Run the script:
Here are the arguments you can set:
modelfile.eim: path to the model .eim file
yes,no: labels to upload, separated by commas, no spaces
0.6: low confidence threshold
0.8: high confidence threshold
<audio_device_ID> (optional): audio device ID
In a keyword spotting model, it can give the following results:
In this tutorial, we will explore how to label image data using GPT-4o, a powerful language model developed by OpenAI. GPT-4o is capable of generating accurate and meaningful labels for images, making it a valuable tool for image classification tasks. By leveraging the capabilities of GPT-4o, we can automate the process of labeling image data, saving time and effort in data preprocessing.
We have packaged this method of distilling LLM knowledge into a pre-built Transformation block, available on all Enterprise plans.
This pre-built transformation block can be found under the Data sources tab in the Data acquisition view.
The block takes all your unlabeled image files and asks GPT-4o to label them based on your prompt - and we automatically add the reasoning as metadata to your items!
Your prompt should return a single label; see the example under the block configuration below.
The GPT-4o model processes images and assigns labels based on the content, filtering out any images that do not meet the quality criteria.
Navigate to the Data acquisition page and add images to your project's dataset. In the video tutorial above, we show how to collect a video recorded directly from a phone, upload it to Edge Impulse and split the video into individual frames.
In the Data sources tab, add the "Label image data using GPT-4o" block:
OpenAI API key: Add your OpenAI API key. This value will be stored as a secret, and won't be shown again.
Prompt: Your prompt should return a single label. For example:
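For instance, a hypothetical prompt along the lines of the toy-detection example used in the video: "Is there a toy in this image? Respond with exactly one word: toy, nothing, or unsure."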
Disable samples w/ label: If a certain label is output, disable the data item - these are excluded from training. Multiple labels are accepted; separate them with a comma.
Max. no. of samples to label: Number of samples to label.
Concurrency: Number of samples to label in parallel.
Auto-convert videos: If set, all videos are automatically split into individual images before labeling.
To edit your configuration, you need to update the JSON-like steps of your block:
Then, run the block to automatically label the frames.
And here is an example of the returned logs:
Use the labeled data to train a machine learning model. See the end-to-end tutorial Adding sight to your sensors.
In the video tutorial, we deployed the trained model to an MCU-based edge device - the Arduino Nicla Vision.
The small model we tested this on performed exceptionally well, identifying toys in various scenes quickly and accurately. By distilling knowledge from the large LLM, we created a specialized, efficient model suitable for edge deployment.
The latest multimodal LLMs are incredibly powerful but too large for many practical applications. At Edge Impulse, we enable the transfer of knowledge from these large models to smaller, specialized models that run efficiently on edge devices.
Our "Label image data using GPT-4o" block is available for enterprise customers, allowing you to experiment with this technology.
For further assistance, visit our forum.
Blog post: Label image data using GPT-4o
Generate audio data using the Eleven Labs integration. This integration allows you to generate realistic sound effects for your projects, such as glass breaking, car engine revving, or other custom sounds. You can customize the sound prompts and generate high-quality audio samples for your datasets.
This integration allows you to expand your datasets with sounds that may be difficult or expensive to record naturally. This approach not only saves time and money but also enhances the accuracy and reliability of the models we deploy on edge devices.
In this tutorial, we focus on a practical application that can be used in a smart security system, or in a factory to detect incidents, such as detecting the sounds of glass breaking.
There is also a video version of this guide:
Only available with Edge Impulse Pro Plan and Enterprise Plan
Navigate to Data Acquisition: Once you're in your project, navigate to the Data Acquisition section, go to Synthetic data and select the ElevenLabs Synthetic Audio Generator data source.
First, get your Eleven Labs API Key. Navigate to the Eleven Labs web interface to get your key and optionally test your prompt.
Here we will be trying to collect a glass-breaking sound or impact.
Prompt: "glass breaking"
Simple prompts are just that: they are simple, one-sided prompts where we try to get the AI to generate a single sound effect. This could be, for example, “person walking on grass” or “glass breaking.” These types of prompts will generate a single type of sound effect with a few variations either in the same generation or in subsequent generations. All in all, they are fairly simple.
There are a few ways to improve these prompts, however, and that is by adding a little more detail. Even if they are simple prompts, they can be made to give better output by improving the prompt itself. For example, something that sometimes works is adding details like “high-quality, professionally recorded footsteps on grass, sound effects foley.” It can require some experimentation to find a good balance between being descriptive and keeping the prompt brief enough for the AI to understand, e.g. “high-quality audio of window glass breaking”.
Label: The label of the generated audio sample.
Prompt influence: Between 0 and 1, this setting ranges from giving the AI more creative freedom in how it interprets the prompt (closer to 0) to telling the AI to follow the exact prompt you’ve given more strictly (closer to 1).
Number of samples: Number of samples to generate
Minimum length (seconds): Minimum length of generated audio samples. Audio samples will be padded with silence to minimum length. It also determines how long your generations should be. Depending on what you set this as, you can get quite different results. For example, if I write “kick drum” and set the length to 11 seconds, I might get a full drum loop with a kick drum in it, but that might not be what I want. On the other hand, if I set the length to 1 second, I might just get a one-shot with a single instance of a kick drum.
Frequency (Hz): Audio sampling frequency. ElevenLabs generates data at 44,100 Hz, so any other value will be resampled.
Upload to category: Data will be uploaded to this category in your project.
Once you've set up your prompt and API key, run the pipeline to generate the sound samples. You can then view the output in the Data Acquisition section.
Enhance Data Quality: Generative AI can create high-quality sound samples that are difficult to record naturally.
Increase Dataset Diversity: Access a wide range of sounds to enrich your training dataset and improve model performance.
Save Time and Resources: Quickly generate the sound samples you need without the hassle of manual recording.
Improve Model Accuracy: High-quality, diverse sound samples can help fill gaps in your dataset and enhance model performance.
By leveraging generative AI for sound generation, you can enhance the quality and diversity of your training datasets, leading to more accurate and reliable edge AI models. This innovative approach saves time and resources while improving the performance of your models in real-world applications. Try out the Eleven Labs block in Edge Impulse today and start creating high-quality sound datasets for your projects.
This notebook takes you through a basic example of using the physics simulation tool PyBullet to generate an accelerometer dataset representing dropping the Nordic Thingy:53 devkit from different heights. This dataset can be used to train a regression model to predict drop height.
This idea could be used for a wide range of simulatable environments - for example, generating accelerometer datasets for pose estimation or fall detection. The same concept could be applied in an FMEA application for generating strain datasets for structural monitoring.
There is also a video version of this tutorial:
Python 3
Pip package manager
The dependencies can be installed with:
We need to load in a Unified Robot Description Format (URDF) file describing an object with the dimensions and weight of a Nordic Thingy:53. In this case, measuring our device gives dimensions of 64x60x23.5 mm and a weight of 60 g. The shape is given by a .obj 3D model file.
To generate the required data we will be running PyBullet in headless "DIRECT" mode so we can iterate quickly over the parameter field. If you run the Python file below you can see how PyBullet simulates the object dropping onto a plane.
First off we need to set up a PyBullet physics simulation environment. We load in our object file and a plane for it to drop onto. The plane's dynamics can be adjusted to better represent the real world (in this case we're dropping onto carpet).
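A minimal sketch of that setup is shown below (the URDF file name for the Thingy:53 object is a placeholder for the model described above):

```python
import pybullet as p
import pybullet_data

# Connect in headless DIRECT mode so we can iterate quickly
p.connect(p.DIRECT)
p.setAdditionalSearchPath(pybullet_data.getDataPath())
p.setGravity(0, 0, -9.81)

# Plane for the object to land on (its dynamics can be tuned to mimic the real surface)
plane_id = p.loadURDF("plane.urdf")

# Object with the Thingy:53 dimensions and weight (placeholder URDF name)
object_id = p.loadURDF("thingy53.urdf", basePosition=[0, 0, 1.0])
```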
We also need to define the output folder for our simulated accelerometer files
And define the drop parameters
We also need to define the characteristics of the IMU on the real device we are trying to simulate. In this case the Nordic Thingy:53 has a Bosch BMI270 IMU (https://www.bosch-sensortec.com/products/motion-sensors/imus/bmi270/) which is set to a range of +-2g with a resolution of 0.06g. These parameters will be used to restrict the raw acceleration output:
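For example, a sketch of how those limits could be applied to the simulated acceleration (function and variable names here are illustrative, not from the original notebook):

```python
import numpy as np

GRAVITY = 9.81
ACC_RANGE_G = 2.0        # BMI270 configured to +/- 2 g
ACC_RESOLUTION_G = 0.06  # quantization step, in g

def restrict_to_imu(raw_acc_ms2):
    """Clip and quantize a raw acceleration vector (m/s^2) to mimic the IMU output."""
    acc_g = np.asarray(raw_acc_ms2) / GRAVITY
    acc_g = np.clip(acc_g, -ACC_RANGE_G, ACC_RANGE_G)
    acc_g = np.round(acc_g / ACC_RESOLUTION_G) * ACC_RESOLUTION_G
    return acc_g * GRAVITY
```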
Finally we are going to give the object and plane restitution properties to allow for some bounce. In this case I dropped the real Thingy:53 onto a hardwood table. You can use p.changeDynamics to introduce other factors such as damping and friction.
Here we iterate over a range of heights, randomly changing the start orientation for a number of simulations per height. The acceleration is calculated relative to the orientation of the Thingy:53 object to represent its onboard accelerometer.
Finally we save the metadata file to the output folder. This can be used to tell the edge-impulse-uploader CLI tool the floating point labels for each file.
These files can then be uploaded to a project with these commands (run in a separate terminal window):
(run edge-impulse-uploader --clean if you have used the CLI before to reset the target project)
Now you can use your dataset to train a drop height detection regression model in Edge Impulse Studio!
See if you can edit this project to simulate throwing the object up in the air to predict the maximum height, or add in your own custom object. You could also try to better model the real environment you're dropping the object in - adding air resistance, friction, damping, and material properties for your surface.
TensorFlow is an open source library for training machine learning models. Keras is an open source Python library that makes creating neural networks in TensorFlow much easier. We use these two libraries together to very quickly train a model to identify handwritten digits. From there, we use the Edge Impulse Python SDK library to profile the model to see how inference will perform on a target edge device. Then, we use the SDK again to convert our trained model to a C++ library that can be deployed to an edge hardware platform, such as a microcontroller.
Follow the code below to see how to train a simple machine learning model and deploy it to a C++ library using Edge Impulse.
To learn more about using the Python SDK, please see the Edge Impulse Python SDK Overview.
You will need to obtain an API key from an Edge Impulse project. Log into edgeimpulse.com and create a new project. Open the project, navigate to Dashboard and click on the Keys tab to view your API keys. Double-click on the API key to highlight it, right-click, and select Copy.
Note that you do not actually need to use the project in the Edge Impulse Studio. We just need the API Key.
Paste that API key string in the ei.API_KEY value in the following cell:
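For example (a sketch; replace the placeholder with your own key, or load it from an environment variable):

```python
import edgeimpulse as ei

# Set the Edge Impulse project API key (placeholder value shown)
ei.API_KEY = "ei_0123..."
```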
To start, we need to list the possible target devices we can use for profiling. We need to pick from this list.
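A sketch of listing those targets with the SDK (assuming the list_profile_devices() helper):

```python
# List the available target devices for profiling
ei.model.list_profile_devices()
```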
You should see a list printed such as:
A common option is the cortex-m4f-80mhz, as this is a relatively low-power microcontroller family. From there, we can use the Edge Impulse Python SDK to generate a profile for your model to ensure it fits on your target hardware and meets your timing requirements.
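A sketch of that profiling step (model is assumed to be the trained Keras model from the earlier cells):

```python
# Estimate RAM, ROM, and inference time on the chosen target
try:
    profile = ei.model.profile(model=model, device="cortex-m4f-80mhz")
    profile.summary()
except Exception as e:
    print(f"Could not profile: {e}")
```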
Once you are happy with the performance of the model, you can deploy it to a number of possible hardware targets. To see the available hardware targets, run the following:
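A sketch, assuming the list_deployment_targets() helper:

```python
# List the available deployment targets
ei.model.list_deployment_targets()
```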
You should see a list printed such as:
The most generic target is to download a .zip file that holds a C++ library containing the inference runtime and your trained model, so we choose 'zip' from the above list. To do that, we first need to create a Classification object which contains our label strings (and other optional information about the model). These strings will be added to the C++ library metadata so you can access them in your edge application.
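A sketch of that deployment step (label strings and the output file name are placeholders; model is the trained Keras model from earlier):

```python
# Labels for the ten MNIST digits
labels = [str(i) for i in range(10)]

# Wrap the labels so they are embedded in the C++ library metadata
model_output_type = ei.model.output_type.Classification(labels=labels)

# Build and download the C++ library deployment as raw bytes
deploy_bytes = ei.model.deploy(
    model=model,
    model_output_type=model_output_type,
    deploy_target="zip",
)

# Write the library to disk
with open("my_model_cpp.zip", "wb") as f:
    f.write(deploy_bytes.getvalue())
```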
Note that instead of writing the raw bytes to a file, you can also specify an output_directory argument in the .deploy() function. Your deployment file(s) will be downloaded to that directory.
Python 3
Pip package manager
Jupyter Notebook: https://jupyter.org/install
pip packages (install with pip install packagename):
pydub https://pypi.org/project/pydub/
google-cloud-texttospeech https://cloud.google.com/python/docs/reference/texttospeech/latest
requests https://pypi.org/project/requests/
First off you will need to set up an Edge Impulse account and create your first project. You will also need a Google Cloud account with the Text to Speech API enabled: https://cloud.google.com/text-to-speech. The first million characters generated each month are free (WaveNet voices), which should be plenty for most cases, as you'll only need to generate your dataset once. From Google you will need to download a credentials JSON file and set it to the correct environment variable on your system to allow the Python API to work (https://developers.google.com/workspace/guides/create-credentials#service-account).
First off we need to set our desired keywords and labels:
Then we need to set up the parameters for our speech dataset; all possible combinations will be iterated through:
languages - Choose the text to speech voice languages to use (https://cloud.google.com/text-to-speech/docs/voices)
pitches - Which voice pitches to apply
genders - Which SSML genders to apply
speakingRates - Which speaking speeds to apply
Then provide some other key parameters:
out_length - How long each output sample should be
count - Maximum number of samples to output (if the number of combinations of languages, pitches, etc. is higher than this, the output is restricted)
voice-dir - Where to store the clean samples before noise is added
noise-url - Which noise file to download and apply to your samples
output-folder - The final output location of the noised samples
num-copies - How many different noisy versions of each sample to create
max-noise-level - Maximum noise level to apply, in dB
Then we need to check all the output folders are ready
And download the background noise file
Then we can generate a list of all possible parameter combinations based on the input earlier. If you have set num_copies to be smaller than the number of combinations then these options will be reduced:
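A sketch of building that combination list (parameter values below are placeholders; trimming to count samples is an assumption based on the description above):

```python
import itertools
import random

# Example parameter values - replace with the ones defined earlier
languages = ["en-US", "en-GB"]
pitches = [-2.0, 0.0, 2.0]
genders = ["FEMALE", "MALE"]
speakingRates = [0.9, 1.0, 1.1]
count = 30  # maximum number of samples to output

# Every combination of the voice parameters
combinations = list(itertools.product(languages, pitches, genders, speakingRates))

# If there are more combinations than samples we want, randomly trim the list
random.shuffle(combinations)
combinations = combinations[:count]
print(f"Generating {len(combinations)} samples")
```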
Finally we iterate though all the options generated, call the Google TTS API to generate the desired sample, and apply noise to it, saving locally with metadata:
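As an illustration, a minimal sketch of a single Google TTS request for one keyword/parameter combination (the pydub noise-mixing and metadata steps are omitted; values are placeholders):

```python
from google.cloud import texttospeech

client = texttospeech.TextToSpeechClient()

keyword = "hello"
language = "en-US"
gender = texttospeech.SsmlVoiceGender.FEMALE
pitch = 0.0
speaking_rate = 1.0

synthesis_input = texttospeech.SynthesisInput(text=keyword)
voice = texttospeech.VoiceSelectionParams(language_code=language, ssml_gender=gender)
audio_config = texttospeech.AudioConfig(
    audio_encoding=texttospeech.AudioEncoding.LINEAR16,
    pitch=pitch,
    speaking_rate=speaking_rate,
    sample_rate_hertz=16000,
)

response = client.synthesize_speech(
    input=synthesis_input, voice=voice, audio_config=audio_config
)

# Save the clean sample; noise is added afterwards with pydub
with open(f"{keyword}.{language}.wav", "wb") as f:
    f.write(response.audio_content)
```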
Now you can use your keywords to create a robust keyword detection model in Edge Impulse Studio!
Try out both classification models and the transfer learning keyword spotting model to see which works best for your case
The EON Tuner is Edge Impulse's automated machine learning (AutoML) tool to help you find the best combination of blocks and hyperparameters for your model and within your hardware constraints. This example will walk you through uploading data, running the EON Tuner, and interpreting the results.
WARNING: This notebook will add and delete data in your Edge Impulse project, so be careful! We recommend creating a throwaway project when testing this notebook.
To start, create a new project in Edge Impulse. Do not add any data to it.
Note that you do not actually need to use the project in the Edge Impulse Studio. We just need the API Key.
Paste that API key string in the ei.API_KEY value in the following cell:
To start, we need to list the possible target devices we can use for profiling. We need to pick from this list.
You should see a list printed such as:
From there, we start the tuner with start_tuner() and wait for completion via check_tuner(). In this example, we configure the tuner to target the cortex-m4f-80mhz device. Since we want to classify the motion, we choose classification for our classification_type and our dataset as motion continuous. We constrain our model to a latency of 100 ms for running the impulse.
NOTE: We set the max trials to 3 here. In a real life situation, you will omit this so the tuner decides the best number of trials.
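A rough sketch of what that call could look like (the module path and parameter names below are assumptions inferred from the description above - check the Python SDK API reference for the exact signature):

```python
# Start the EON Tuner run (argument names are assumptions, not the verified API)
ei.tuner.start_tuner(
    target_device="cortex-m4f-80mhz",
    classification_type="classification",
    dataset_category="motion_continuous",
    target_latency=100,  # ms
)

# Wait for the run to finish and capture its state for later analysis
state = ei.tuner.check_tuner(wait_for_completion=True)
```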
Once the tuner is done, you can print out the results to determine the best combination of blocks and hyperparameters.
To visualize the results of the tuner trials, you can head to the project page on Edge Impulse Studio.
Alternatively, you can access the results programmatically: the configuration settings and output of the EON Tuner are stored in the variable state. You can access the results of the various trials with state.trials. Note that some trials can fail, so it's a good idea to test the status of each trial.
From there, you will want to sort the results based on some metric. In this example, we will sort based on int8 test set accuracy from highest to lowest.
Note: Edge Impulse supports only one learning block per project at this time (excluding anomaly detection blocks). As a result, we will use the first learning block (e.g. learning_blocks[0]) in the list to extract metrics.
Now that we have the sorted results, we can extract the values we care about. We will print out the following metrics along with the impulse configuration (processing/learning block configuration and hyperparameters) of the top-performing trial.
This will help you determine if the impulse can fit on your target hardware and run fast enough for your needs. The impulse configuration can be used to recreate the processing and learning blocks on Edge Impulse. Later, we will set the project impulse based on the trial ID to simply deploy (rather than re-train).
Note: we assume the first learning block has the metrics we care about.
You can optionally use a plotting package like matplotlib to graph the results from the top results to compare the metrics.
We can replace the current impulse with the top performing trial from the EON Tuner. From there, we can deploy it, just like we would any impulse.
You should see a list printed such as:
The most generic target is to download a .zip file that holds a C++ library containing the inference runtime and your trained model, so we choose 'zip' from the above list. To do that, we first need to create a Classification object which contains our label strings (and other optional information about the model). These strings will be added to the C++ library metadata so you can access them in your edge application.
Note that instead of writing the raw bytes to a file, you can also specify an output_directory argument in the .deploy() function. Your deployment file(s) will be downloaded to that directory.
By default, the EON Tuner will make a guess at a search space based on the type of data you uploaded (e.g. using spectral-analysis blocks for feature extraction). As a result, you can run the tuner without needing to construct a search space. However, you may want to define your own search space.
The best way to define a search space is to open your project (after uploading data), head to the EON Tuner page, click Run EON Tuner, and select the Space tab.
The search space is defined in JSON format, so we can just copy that to create a dictionary. This is a good place to start for tuning blocks and hyperparameters.
Note: Functions to get available blocks and search space parameters coming soon
Hugging Face 🤗 offers a suite of tools that assist with various AI applications. Most notably, they provide a hub for people to share their pre-trained models. In this tutorial, we will demonstrate how to download a simple model from the Hugging Face Hub, profile it, and convert it to a C++ library for use in your edge application. This particular model was trained to identify species of bean plants using the beans dataset.
To learn more about using the Python SDK, please see the Edge Impulse Python SDK Overview.
You will need to obtain an API key from an Edge Impulse project. Log into edgeimpulse.com and create a new project. Open the project, navigate to Dashboard and click on the Keys tab to view your API keys. Double-click on the API key to highlight it, right-click, and select Copy.
Note that you do not actually need to use the project in the Edge Impulse Studio. We just need the API Key.
Paste that API key string in the ei.API_KEY value in the following cell:
Set the name of the repo (username/repo-name) and the file we want to download.
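A sketch of that download using the huggingface_hub package (the ONNX file name is an assumption - check the model's file listing):

```python
from huggingface_hub import hf_hub_download

repo_id = "fxmarty/resnet-tiny-beans"
filename = "model.onnx"  # assumed file name; verify on the model page

model_path = hf_hub_download(repo_id=repo_id, filename=filename)
print(model_path)
```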
To start, we need to list the possible target devices we can use for profiling. We need to pick from this list.
You should see a list printed such as:
A common option is the cortex-m4f-80mhz, as this is a relatively low-power microcontroller family. From there, we can use the Edge Impulse Python SDK to generate a profile for your model to ensure it fits on your target hardware and meets your timing requirements.
Once you are happy with the performance of the model, you can deploy it to a number of possible hardware targets. To see the available hardware targets, run the following:
You should see a list printed such as:
The most generic target is to download a .zip file that holds a C++ library containing the inference runtime and your trained model, so we choose 'zip' from the above list. We also need to tell Edge Impulse how we are planning to use the model. In this case, we want to perform classification, so we set the output type to Classification.
Note that instead of writing the raw bytes to a file, you can also specify an output_directory argument in the .deploy() function. Your deployment file(s) will be downloaded to that directory.
Try our FREE Enterprise Trial today.
You will also need an account and API Key.
See for more information
Jupyter Notebook: https://jupyter.org/install
Bullet3: https://github.com/bulletphysics/bullet3
We want to create a classifier that can uniquely identify handwritten digits. To start, we will use TensorFlow and Keras to train a very simple convolutional neural network (CNN) on the classic MNIST dataset, which consists of handwritten digits from 0 to 9.
Important! The deployment targets list will change depending on the values provided for model, model_output_type, and model_input_type in the next part. For example, you will not see openmv listed once you upload a model (e.g. using .profile() or .deploy()) if model_input_type is not set to ei.model.input_type.ImageInput(). If you attempt to deploy to an unavailable target, you will receive the error Could not deploy: deploy_target: .... If model_input_type is not provided, it will default to . See for more information about input types.
Your model C++ library should be downloaded as the file my_model_cpp.zip in the same directory as this notebook. You are now ready to use your C++ model in your embedded and edge device application! To use the C++ model for local inference, see our documentation here.
The files in ./out-noisy can be uploaded easily using the Edge Impulse CLI uploader:
Make use of our pre-built keyword dataset to add noise and 'unknown' words to your model:
You will need to obtain an API key from an Edge Impulse project. Log into edgeimpulse.com and create a new project. Open the project, navigate to Dashboard and click on the Keys tab to view your API keys. Double-click on the API key to highlight it, right-click, and select Copy.
We start by downloading the continuous motion dataset and uploading it to our project.
If you have pandas installed, you can make the previous section much easier by reporting metrics as a DataFrame.
Important! The deployment targets list will change depending on the values provided for model, model_output_type, and model_input_type in the next part. For example, you will not see openmv listed once you upload a model (e.g. using .profile() or .deploy()) if model_input_type is not set to ei.model.input_type.ImageInput(). If you attempt to deploy to an unavailable target, you will receive the error Could not deploy: deploy_target: .... If model_input_type is not provided, it will default to . See for more information about input types.
Your model C++ library should be downloaded as the file my_model_cpp.zip in the same directory as this notebook. You are now ready to use your C++ model in your embedded and edge device application! To use the C++ model for local inference, see our documentation here.
To download a model from the Hugging Face hub, we need to first find a model. Head to the Hugging Face Hub. On the left side, click Image Classification to filter under the Tasks tab and under the Libraries tab, filter by ONNX (as the Edge Impulse Python SDK easily accepts ONNX models). You should see the resnet-tiny-beans model trained by user fxmarty.
Click on the resnet-tiny-beans entry (or follow the link) to read about the model and view the files. If you click on the Files and versions tab, you can see all of the files available in this particular model.
Your model C++ library should be downloaded as the file my_model_cpp.zip in the same directory as this notebook. You are now ready to use your C++ model in your embedded and edge device application! To use the C++ model for local inference, see our documentation here.
Weights & Biases is an online framework for helping manage machine learning training, data versioning, and experiments. When running experiments for edge-focused ML projects, it can be helpful to see the required memory (RAM and ROM) along with estimated inference times of your model for your target hardware. By viewing these metrics, you can quickly gauge if your model will fit onto your target device!
Follow the code below to see how to train a simple machine learning model with different hyperparameters and log those values to the Weights & Biases dashboard.
To learn more about using the Python SDK, please see: Edge Impulse Python SDK Overview
You will need to obtain an API key from an Edge Impulse project. Log into edgeimpulse.com and create a new project. Open the project, navigate to Dashboard and click on the Keys tab to view your API keys. Double-click on the API key to highlight it, right-click, and select Copy.
Note that you do not actually need to use the project in the Edge Impulse Studio. We just need the API Key.
Paste that API key string in the ei.API_KEY value in the following cell:
To use Weights and Biases, you will need to create an account on wandb.ai and call the wandb.login() function. This will prompt you to log in to your account. Your credentials should be stored, which allows you to use the wandb package in your Python scripts.
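For example (a minimal sketch):

```python
import wandb

# Prompts for your API key on first use and caches the credentials locally
wandb.login()
```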
We want to create a classifier that can uniquely identify handwritten digits. To start, we will use TensorFlow and Keras to train a very simple convolutional neural network (CNN) on the classic MNIST dataset, which consists of handwritten digits from 0 to 9.
We want to vary the hyperparameters in our model and see how it affects the accuracy and predicted RAM, ROM, and inference time on our target platform. To do that, we construct a function that builds a simple model using Keras, trains the model, and computes the accuracy and loss from our holdout test set. We then use the Edge Impulse Python SDK to generate a profile of our model for our target hardware. We log the hyperparameter (number of nodes in the hidden layer), test loss, test accuracy, estimated RAM, estimated ROM, and estimated inference time (ms) to our Weights and Biases console.
Now, it's time to run the experiment and log the results in Weights and Biases. Simply call our function and provide a new hyperparameter value for the number of nodes.
Head to wandb.ai and log in (if you have not already done so). Under My projects on the left, click on the nodes-sweep project. You can visualize the results of your experiments with the various charts that Weights & Biases offers. For example, here is a parallel coordinates plot that allows you to quickly visualize the different hyperparameters and metrics (including our new edge profile metrics).
If you would like to deploy your model to your target hardware, the Python SDK can help you with that, too. See our documentation here.
Once you are happy with the performance of your model, you can then deploy it to your target hardware. We will assume that 32 nodes in our hidden layer provided the best trade-off of RAM, flash, inference time, and accuracy for our needs. To start, we will retrain the model:
Next, we should evaluate the model on our holdout test set.
From there, we can see the available hardware targets for deployment:
You should see a list printed such as:
The most generic target is the .zip file that holds a C++ library containing our trained model and inference runtime. To pass our labels to the C++ library, we create a Classification object, which contains our label strings.
Note that instead of writing the raw bytes to a file, you can also specify an output_directory argument in the .deploy() function. Your deployment file(s) will be downloaded to that directory.
Your model C++ library should be downloaded as the file my_model_cpp.zip in the same directory as this notebook. You are now ready to use your C++ model in your embedded and edge device application! To use the C++ model for local inference, see our documentation here.
Amazon SageMaker Studio is an integrated development environment (IDE) that provides a single web-based visual interface where you can access purpose-built tools to perform all machine learning (ML) development steps, from preparing data to building, training, and deploying your ML models, improving data science team productivity by up to 10x. You can quickly upload data, create new notebooks, train and tune models, move back and forth between steps to adjust experiments, collaborate seamlessly within your organization, and deploy models to production without leaving SageMaker Studio.
To learn more about using the Python SDK, please see the Edge Impulse Python SDK Overview.
Below are the changes made to the original training script and configuration:
The Python 3 (Data Science 3.0) kernel was used.
The dataset has been imported in the Edge Impulse S3 bucket configured when creating the SageMaker Studio domain. Make sure to adapt to your path or use the AWS reference project.
The training instance used is ml.m5.large.
Install dependencies
Below is the structure of our dataset in our S3 bucket
We have used the default bucket created when configuring SageMaker Studio domain:
Optional: skip the next cell if you don't want to retrain the model, and uncomment the last line of the cell after it.
Note that you do not actually need to use the project in the Edge Impulse Studio. We just need the API Key.
Paste that API key string in the ei.API_KEY value in the following cell:
You can also have a look at the Deployment page of your project to test your model in a web browser or on your phone.
If you want to upload files directly to an Edge Impulse project, we recommend using the CLI uploader tool. However, sometimes you cannot upload your samples directly, as you might need to convert the files to one of the accepted formats or modify the data prior to model training. Edge Impulse offers data augmentation for some types of projects, but you might want to create your own custom augmentation scheme. Or perhaps you want to generate synthetic data and script the upload process.
The Python SDK offers a set of functions to help you move data into and out of your project. This can be extremely helpful when generating or augmenting your dataset. The following cells demonstrate some of these upload and download functions.
You can find the API documentation for the functions used in this tutorial here.
WARNING: This notebook will add and delete data in your Edge Impulse project, so be careful! We recommend creating a throwaway project when testing this notebook.
Note that you might need to refresh the page with your Edge Impulse project to see the samples appear.
You will need to obtain an API key from an Edge Impulse project. Log into edgeimpulse.com and create a new project. Open the project, navigate to Dashboard and click on the Keys tab to view your API keys. Double-click on the API key to highlight it, right-click, and select Copy.
Note that you do not actually need to use the project in the Edge Impulse Studio. We just need the API Key.
Paste that API key string in the ei.API_KEY value in the following cell:
The following file formats are allowed: .cbor, .json, .csv, .wav, .jpg, .png, .mp4, .avi.
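A sketch of uploading a local folder of samples (the directory name and category are placeholders; argument names are assumptions based on the descriptions in this guide):

```python
import edgeimpulse as ei

ei.API_KEY = "ei_0123..."  # placeholder: your project API key

# Upload every supported file found in the local dataset/ folder
response = ei.data.upload_directory(
    directory="dataset/",
    category="training",
)
print(response)  # inspect the response for any failed uploads
```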
If you head to the Data acquisition page on your project, you should see images in your dataset.
You can download samples from your Edge Impulse project if you know the sample IDs. You can get sample IDs by calling the ei.data.get_sample_ids() function, which allows you to filter IDs based on filename, category, and label.
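For example (a sketch; the filter arguments are assumptions based on the description above):

```python
# List sample IDs in the project, optionally filtering by category, filename, or label
sample_infos = ei.data.get_sample_ids(category="training")
print(sample_infos)

# The returned IDs can then be passed to the SDK's download helper to fetch the raw samples
```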
Take a look at the files in this directory. You should see the downloaded images. They should match the images in the dataset/ directory, which were the original images that we uploaded.
If you know the ID of the sample you would like to delete, you can call the delete_sample_by_id() function. You can also delete all the samples in your project by calling delete_all_samples().
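For instance (a sketch; the sample ID is a placeholder):

```python
# Delete a single sample by its ID
ei.data.delete_sample_by_id(12345)

# Or wipe every sample in the project - use with care!
ei.data.delete_all_samples()
```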
Take a look at the data in your project. The samples that we uploaded should be gone.
Important! The annotations file must be named exactly info.labels
If you head to the Data acquisition page on your project, you should see images in your dataset along with the bounding box information.
If you head to the Data acquisition page on your project, you should see your time series data.
The raw data must be encoded in an IO object. We convert the dictionary objects to a BytesIO object, but you can also read in data from .json files.
If you head to the Data acquisition page on your project, you should see your time series data.
Important! NumPy arrays must be in the shape (Number of samples, number of data points, number of sensors).
If you are working with image data in NumPy, we recommend saving those images as .png or .jpg files and using upload_directory().
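As an illustration of that required shape (the values, labels, and sample rate below are placeholders):

```python
import numpy as np

# 10 samples, 100 readings per sample, 3 sensor axes (e.g. accX, accY, accZ)
values = np.random.rand(10, 100, 3)

labels = ["wave"] * 10              # one label per sample
sensors = ["accX", "accY", "accZ"]  # one name per sensor axis
sample_rate_ms = 10                 # time between readings, in milliseconds

# These arrays and metadata are then passed to the SDK's NumPy upload function
# (see the Python SDK API reference for the exact call)
```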
If you head to the Data acquisition page on your project, you should see your time series data. Note that the sample names are randomly assigned, so we recommend recording the sample IDs when you upload.
Note that several other packages exist that work as drop-in replacements for pandas. You can use these replacements so long as you import them with the name pd. For example, one of:
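For example (Modin is shown here as one such drop-in replacement; use whichever you have installed):

```python
# Use exactly one of these imports
import pandas as pd
# import modin.pandas as pd  # drop-in, pandas-compatible replacement
```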
The first option is to upload one dataframe for each sample (non-time series)
You can also upload one dataframe for each sample (time series). As with previous examples, we'll assume that the sample rate is 10 ms.
You can upload non-time series data where each sample is a row in the dataframe. Note that you need to provide labels in the rows.
A "wide" dataframe is one where each column represents a value in the time series data, and the rows become individual samples. Note that you need to provide labels in the rows.
A DataFrame can also be divided into "groups" so you can upload multidimensional time series data.
This notebook explores how we can use generative AI to create datasets which don't exist yet. This can be a good starting point for your project if you have not collected or cannot collect the data required. It is important to note that the limitations of generative AI still apply here: biases can be introduced through your prompts, results can include "hallucinations", and quality control is important.
This example uses the OpenAI API to call the DALL-E image generation tool. It explores both generation and variation, but there are other tools, such as editing, which could also be useful for augmenting an existing dataset.
There is also a video version of this tutorial:
Python 3
Pip package manager
Jupyter Notebook: https://jupyter.org/install
pip packages (install with pip install packagename):
openai https://pypi.org/project/openai/
First off you will need to set up an Edge Impulse account and create your first project.
You will also need to create an API Key for OpenAI: https://platform.openai.com/docs/api-reference/authentication
The API takes in a prompt, a number of images, and a size.
The API also has a variations call which takes in an existing image and creates variations of it. This could also be used to modify existing images.
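A sketch of both calls using the current openai Python package (the prompt, counts, sizes, and file name are placeholders):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Generate new images from a text prompt
generated = client.images.generate(
    prompt="a photo of a person working in an office",  # placeholder prompt/label
    n=4,
    size="256x256",
)
print([img.url for img in generated.data])

# Create variations of an existing image
with open("dataset/office.0.png", "rb") as f:  # placeholder file name
    variations = client.images.create_variation(image=f, n=4, size="256x256")
print([img.url for img in variations.data])
```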
Here we iterate through a number of images and variations to generate a dataset based on the prompts/labels given.
These files can then be uploaded to a project with these commands (run in a separate terminal window):
(run edge-impulse-uploader --clean if you have used the CLI before to reset the target project)
Now you can use your images to create an image classification model on Edge Impulse.
Why not try some other OpenAI calls? The 'edit' call could be used to take an existing image and translate it into different environments, or add different humans, to increase the variety of your dataset. https://platform.openai.com/docs/guides/images/usage
This guide has been built from the AWS reference project Introduction to SageMaker TensorFlow - Image Classification; please have a look at this tutorial first.
The dataset has been changed to classify images as car vs unknown. You can download the dataset from this Edge Impulse public project and store it in your S3 bucket.
You can continue with the default model, or can choose a different model from the list. Note that this tutorial has been tested with MobileNetv2 based models. A complete list of SageMaker pre-trained models can also be accessed at .
You will need to obtain an API key from an Edge Impulse project. Log into edgeimpulse.com and create a new project. Open the project, navigate to Dashboard and click on the Keys tab to view your API keys. Double-click on the API key to highlight it, right-click, and select Copy.
Voila! You now have a C++ library ready to be compiled and integrated in your embedded targets. Feel free to have a look at the Edge Impulse deployment options in the documentation to understand how you can integrate them into your embedded systems.
You can upload all files in a directory using the Python SDK. Note that you can set the category, label, and metadata for all files with a single call. If you want to use a different label for each file, set label=None in the function call and name your files with <label>.<name>.<ext>. For example, wave.01.csv will have the label wave when uploaded. See for more information.
For object detection, you can put bounding box information (following the ) in a file named info.labels in that same directory.
The Edge Impulse ingestion service accepts CSV files, which we can use to upload raw data. Note that if you configure a CSV template using the CSV Wizard, then the expected format of the CSV file might change. If you do not configure a CSV template, then the ingestion service expects CSV data to be in a particular format. See .
Another way to upload data is to encode it in JSON format. See the data acquisition format documentation for more information on acceptable key/value pairs. Note that at this time, the signature value can be set to 0.
NumPy is a powerful Python library for working with large arrays and matrices. You can upload NumPy arrays directly into your Edge Impulse project. Note that the arrays are required to be in a particular format, and must be uploaded with required metadata (such as a list of labels and the sample rate).
Pandas is a popular Python library for performing data manipulation and analysis. The Edge Impulse library supports a number of ways to upload dataframes. We will go over each format.
We have wrapped this example into a Transformation block (Enterprise feature) to make it even easier to generate images and upload them to your organization. See: