Building a robust machine learning model, especially in the realm of computer vision, is challenging due to the need for extensive datasets and significant computational resources. Transfer learning has emerged as a powerful solution, allowing developers to leverage pre-trained models and adapt them to their specific needs. This guide provides an overview of the various community-created custom learn blocks and their applications.
Prerequisites
An object detection project: see Object detection for details on how to create one.
Tutorials: Want to create your own custom learn block? Check out the tutorial further down this page.
To select a community-created learning block, click Object detection in the menu on the left, then select Choose a different model. For this example we will select YOLOv5, which was created by the community. The details for the selected block are shown here as well; for example, YOLOv5 is a transfer learning model based on Ultralytics YOLOv5 using yolov5n.pt weights, and it supports RGB input at any resolution (square images only).
Below is a detailed table of custom learn blocks created by the community, showcasing their capabilities and potential applications:
Notes:
Ultra Low-end MCU: Devices with very limited memory and processing power, typically used for sensor-driven tasks.
Low-end MCU: More capable than ultra low-end MCUs, but still limited in processing power and memory.
NPU: Specialized for neural network processing; efficient for machine learning tasks.
CPU (MPU): General-purpose processors, capable of handling complex computations and larger models.
GPU: High-performance processing units, ideal for large-scale and compute-intensive machine learning models.
Sensor Applications: Indicates the types of applications each model is typically used for, based on sensor data processing capabilities.
Ensure that your chosen learn block is compatible with your hardware. Some blocks, like YOLOv5, have specific hardware requirements.
Each learn block comes with its own set of limitations. Understanding these is crucial for effective model development.
Align your project requirements with the capabilities of the learn block. For instance, use YOLOv5 for complex object detection tasks and Keras for simpler tasks.
Community blocks are not always integrated by Edge Impulse, which means they are not tested in our CI/CD workflows. As a result, we can only provide limited support on the forum. If you are interested in using them for an enterprise project, please check our pricing page and contact us directly; our solution engineers can work with you on the integration:
YOLOv5 (Community Block)
Log items.
Metrics output issues.
Jetson Nano compatibility issues.
Lack of model size feedback pre-training completion.
Fixed batch size, no modification option.
EfficientNet (Community Block)
Potential compatibility issues with low-resource devices.
Scikit-learn (Community Block)
Potential compatibility issues with low-resource devices.
Custom learn blocks offer a flexible approach to machine learning, enabling you to tailor models to your specific needs. By understanding the capabilities and limitations of each block, you can harness the power of machine learning more effectively in your projects.
Architecture | Description | Compatibility | Applications |
---|---|---|---|
YOLOv5 (Community) | A high-speed, accurate object detection model. | NPU, CPU (MPU), GPU | Advanced object detection, image analysis |
EfficientNet (Community) | A scalable image classification model. | Low-end MCU, NPU, CPU (MPU), GPU | Image classification, facial recognition, scene detection |
Keras (Community) | A versatile tool for classification and regression tasks. | Ultra Low-end MCU, Low-end MCU, NPU, CPU (MPU), GPU | Diverse classification tasks, data analysis |
PyTorch (Community) | Suitable for foundational machine learning tasks. | Ultra Low-end MCU, Low-end MCU, NPU, CPU (MPU), GPU | Pattern recognition, foundational machine learning tasks |
Scikit-learn (Community) | A logistic regression model for classification. | Ultra Low-end MCU, Low-end MCU, NPU, CPU (MPU), GPU | Prototyping, data analysis |
 | Object detection tailored for Renesas platforms. | NPU, CPU (MPU), GPU | Industrial automation, advanced image processing |
YOLOv5 (Community) | Flexible object detection model. | NPU, CPU (MPU), GPU | General object detection, traffic monitoring, retail analytics |
 | Advanced object detection for Texas Instruments hardware. | NPU, CPU (MPU), GPU | Automotive systems, smart city applications |
Want to use a novel ML architecture, or load your own transfer learning models into Edge Impulse? Create a custom learning block! It's easy to bring any training pipeline into the Studio, as long as you can output TFLite or ONNX files. We have end-to-end examples of doing this in Keras, PyTorch and scikit-learn.
If you just want to modify the neural network architecture or loss function, you can also use expert mode directly in the Studio, without having to bring your own model. Go to any ML block, click the three dots, and select Switch to Keras (expert) mode.
This page describes the input and output formats if you want to bring your own architecture, but a good way to start building a custom learning block is by modifying one of the following example repositories:
YOLOv5 - wraps the Ultralytics YOLOv5 repository (trained with PyTorch) to train a custom transfer learning model.
EfficientNet - a Keras implementation of transfer learning with EfficientNet B0.
Keras - a basic multi-layer perceptron in Keras and TensorFlow.
PyTorch - a basic multi-layer perceptron in PyTorch.
Scikit-learn - trains a logistic regression model using scikit-learn, then outputs a TFLite file for inferencing using jax.
In this tutorial, we will explain how to set up a learning block, push it to an Edge Impulse organization, and use it in a project.
A learning block consists of a Docker image that contains one or more scripts. The Docker image is encapsulated in a learning block with additional parameters.
Here is a diagram of how a minimal configuration for a learning block works:
We will walk through creating a custom learning block, pushing it to our organization (enterprise accounts only), and running it in a project. To perform this, we will use the example learning block found in this repository.
To start, create a directory somewhere on your computer. I'll name mine my-custom-learning-block/. You should also create a directory named data/ in that project directory to store a dataset that we will use for testing the block locally. Finally, create a directory named out/ for storing the output model.
We will be working in this directory to create our custom learning block. It will also hold data for testing locally. After working through this tutorial, you should have a directory structure like the following:
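The exact layout depends on the script names you download later in this guide; a rough sketch (with train.py as a placeholder name for the training script) looks like this:

```
my-custom-learning-block/
├── .ei-block-config
├── Dockerfile
├── parameters.json
├── requirements.txt
├── train.py            <- placeholder name for the downloaded training script(s)
├── data/
│   ├── X_split_train.npy
│   ├── Y_split_train.npy
│   ├── X_split_test.npy
│   └── Y_split_test.npy
└── out/
```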
We will explain what each of these files does in the rest of this getting started section.
In your project, you will likely have access to data that you have collected. Note that you will need to convert your raw data into features stored in NumPy format (*.npy). For demonstrating our custom learning block, we will download pre-generated features from a public project: Tutorial: Continuous motion recognition.
From the project's dashboard, download all four NPY files. These contain the features as generated by the processing block (Spectral features), and they are what the learning block expects as inputs.
Move the files to your data/ directory, and rename them to the following:
X_split_train.npy
Y_split_train.npy
X_split_test.npy
Y_split_test.npy
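If you want a quick sanity check that the files downloaded correctly, you can inspect them with NumPy (this is not part of the tutorial scripts; run it from my-custom-learning-block/):

```python
import numpy as np

# Load the features and labels downloaded from the public project
X_train = np.load("data/X_split_train.npy")
Y_train = np.load("data/Y_split_train.npy")

print("X_train:", X_train.shape, X_train.dtype)  # (n_samples, n_features), float32
print("Y_train:", Y_train.shape, Y_train.dtype)  # label info, described under Input format below
```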
You can also run the following commands to download the files directly into your data/ directory:
To initialize your block, the easiest method is to use the Edge Impulse CLI blocks command from within the my-custom-learning-block/ directory: edge-impulse-blocks init. Follow the on-screen prompts to log in to your account, select your organization, and configure your block:
This will create a file named .ei-block-config in your current directory. Feel free to look at the contents of that file to see how the block was configured:
You can also create your learning block within Edge Impulse Studio. Open your organization page. On the side, click Machine learning under Custom blocks. From there, click Add new Machine Learning block and fill out the required information.
Download the following Python scripts and requirements file:
You can also easily download these files with the following commands:
Feel free to look through these scripts to see how Keras is used to construct and train a simple dense neural network. Also, note that you are not required to use Python! You are welcome to use any language or system you wish, so long as it will run in a Docker container.
Important! Pay attention to the inputs (features) and outputs (trained model file) of your script. They must match the expected inputs and outputs of the block. See the Input format and Output format sections for more information.
Next, we need to wrap our training script in a Docker image. To do that, we write a Dockerfile. If you are not familiar with Docker, we recommend working through Docker's getting started guide. See here to learn more about the required Dockerfile components in learning blocks.
Create a new file named Dockerfile (no extension) and copy in the following code:
Note: we are not installing CUDA for this simple example. If you wish to install CUDA in your image to enable GPU-accelerated training (which includes training inside your Edge Impulse project), please refer to the full example here.
Make sure you have Docker installed and running on your computer. Execute the following commands to build and run your image:
You should see your model train for 30 epochs and then be converted to a .tflite file for inference. Your out/ directory should have the following files/folders:
model.tflite
model_quantized_int8_io.tflite
saved_model/
saved_model.zip
The saved_model.zip file is an archive of the saved_model/ directory, which contains your model in the TensorFlow SavedModel format. The model.tflite file is the float32 version of the model, converted to the TensorFlow Lite format. The model_quantized_int8_io.tflite file is the same model quantized to 8 bits, with int8 inputs and outputs.
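For reference, these files can be produced with the standard TensorFlow Lite converter. The snippet below is a minimal sketch of such a conversion (not necessarily identical to what the downloaded training scripts do), assuming a trained Keras model and a slice of the training features as representative data for quantization:

```python
from pathlib import Path
import numpy as np
import tensorflow as tf

def convert_to_tflite(model: tf.keras.Model, X_train: np.ndarray, out_dir: str = "out") -> None:
    out = Path(out_dir)
    out.mkdir(exist_ok=True)

    # Float32 TFLite model (model.tflite)
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    (out / "model.tflite").write_bytes(converter.convert())

    # Int8-quantized TFLite model (model_quantized_int8_io.tflite),
    # using a slice of the training features as representative data
    def representative_dataset():
        for row in X_train[:100]:
            yield [np.expand_dims(row, axis=0).astype(np.float32)]

    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = representative_dataset
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.int8
    converter.inference_output_type = tf.int8
    (out / "model_quantized_int8_io.tflite").write_bytes(converter.convert())
```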
You can expose your block's parameters to the Studio GUI by defining JSON settings in the parameters.json file. Create a file with that exact name and copy in the following:
This will expose the epochs and learning-rate parameters to the Studio interface so that users can make changes in the project. You can learn more about arguments in this section.
Once you have verified operation of your block and configured the parameters, you will want to push it to your Edge Impulse organization. From your project directory, run the following command:
Once that command completes, head to your Organization in the Edge Impulse Studio. Click on Machine learning under Custom blocks. You should find your custom learning block listed there.
You can click on the three dots and select Edit block to view the configuration settings for your block.
Create a project in Studio under your organization (the project must be under the organization for the learning block to show up!). We will demonstrate the custom learning block using the continuous gestures dataset found here. Follow the directions in that guide to upload data to your project.
Add a Spectral Analysis block to your impulse. Click on Add a learning block. Assuming your project is in your organization, you should see your custom learning block as one of the available blocks. Click add to use your custom learning block in your project.
Go to the Spectral features page, click Save parameters, and click Generate features to generate the features required for learning.
Next, go to the My learning block page, where you should see the custom parameters you set (number of training cycles and learning rate). Feel free to change those, and select "Start training." When that is finished, you should have a trained model in your project created by your custom learning block!
You can now continue to model testing and deployment, as you would with any project.
Any built-in block in the Edge Impulse Studio (e.g. classifiers, regression models or FOMO blocks) can be edited locally, and then pushed back as a custom block. This is great if you want to make heavy modifications to these training pipelines, for example to do custom data augmentation. To download a block, go to any ML block in your project, click the three dots, select Edit block locally, and follow the instructions in the README.
Training pipelines in Edge Impulse are built on top of Docker containers, a virtualization technique that lets developers package up an application with all of its dependencies. To train your own model you'll need to wrap all the required packages, your scripts, and (if you use transfer learning) your pre-trained weights into this container. When running in Edge Impulse the container does not have network access, so make sure you don't download dependencies while running (downloading them while building the container is fine).
Important: ENTRYPOINT
It's important to create an ENTRYPOINT at the end of the Dockerfile to specify which file to run.
GPU Support
If you want to have GPU support (only for enterprise customers), you'll need the CUDA packages installed. If you export a learn block from the Studio it will already have the right base packages, so use that Dockerfile as a starting point.
The entrypoint (see above in the Dockerfile) will be called with these parameters:
--data-directory - where you can find the data (see below for the input/output formats).
--out-directory - where to write the TFLite or ONNX files (see below for the input/output formats).
Additionally, you can specify custom arguments (like the learning rate, or whether to use data augmentation) by adding a parameters.json file to your block. This file describes all arguments for your training pipeline, and is used to render custom UI elements for each parameter. For example, this parameters file:
Will be displayed as:
And passes in --learning-rate-1 0.01 --learning-rate-2 0.001 to your script. For more information, and all options, see Adding parameters to custom blocks.
If you do not specify a parameters.json file, there will be two default elements rendered ("Learning rate" and "Number of training cycles"), which will be passed in as:
--learning-rate - learning rate to train with (set by the user in the UI).
--epochs - number of epochs to train for (set by the user in the UI).
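Putting these together, a training script will typically start by parsing the arguments above. A minimal sketch in Python (parameter names as listed; the defaults here are arbitrary):

```python
import argparse

parser = argparse.ArgumentParser(description="Custom learning block")
parser.add_argument("--data-directory", type=str, required=True,
                    help="Directory containing the X_*.npy / Y_*.npy files")
parser.add_argument("--out-directory", type=str, required=True,
                    help="Directory to write the TFLite or ONNX files to")
parser.add_argument("--learning-rate", type=float, default=0.001)
parser.add_argument("--epochs", type=int, default=30)

# parse_known_args() keeps the script working if extra arguments are passed in
args, _ = parser.parse_known_args()
print(f"Training for {args.epochs} epochs at learning rate {args.learning_rate}")
```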
The data directory contains your dataset, after running any DSP blocks, and already split into a train/validation set:
X_split_train.npy
Y_split_train.npy
X_split_test.npy
Y_split_test.npy
The X_*.npy files are float32 Numpy arrays, already in the right shape (e.g. if you're training on 96x96 RGB images this will be of shape (n, 96, 96, 3)). You can typically load these without any modification into your training pipeline (see the notes after this section for caveats).
The Y_*.npy files are either:
1) int32 Numpy arrays, with four columns (label_index, sample_id, sample_slice_start_ms, sample_slice_end_ms).
2) A JSON array in the form of:
[{ "sampleId": 234731, "boundingBoxes": [{ "label": 1, "x": 260, "y": 313, "w": 234, "h": 261 }] } ]
2) is sent if your dataset has bounding boxes; in all other cases 1) is sent.
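For a classification project (case 1 above), loading this data could look like the sketch below; how you map the label_index column onto the labels your training code expects is up to your pipeline:

```python
import os
import numpy as np

data_dir = "data"  # inside the block this is the value of --data-directory

X_train = np.load(os.path.join(data_dir, "X_split_train.npy"))
Y_train = np.load(os.path.join(data_dir, "Y_split_train.npy"))

# First column is label_index; the remaining columns identify the source sample/slice
labels = Y_train[:, 0]
print("Training samples:", X_train.shape[0], "- label indices:", np.unique(labels))
```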
Data format for image projects
For image projects, we automatically normalize data before passing it to the ML block. The X_*.npy values may then be rescaled based on the selected input scaling when building the custom ML block (details in the next section).
To get new data for your project, just run (requires Edge Impulse CLI v1.16 or higher):
This regenerates features (if necessary) and then downloads the updated dataset.
The input features for vision models are a 4D vector of shape (n, WIDTH, HEIGHT, CHANNELS), where the channel data is in RGB format. We support three ways of scaling the input:
Pixels ranging 0..1 - just the raw pixels, without any normalization. Data coming from the Image DSP block is unchanged.
Pixels ranging 0..255 - just the raw pixels, without any normalization. Data coming from the Image DSP block is multiplied by 255.
PyTorch - the default way that inputs are scaled in most torchvision models: the raw pixels (0..1) are normalized per channel using the ImageNet mean and standard deviation.
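As a sketch, that normalization (using the standard ImageNet statistics, in RGB order) looks like this:

```python
import numpy as np

# Standard ImageNet per-channel statistics (RGB order)
IMAGENET_MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
IMAGENET_STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def pytorch_scale(pixels_0_1: np.ndarray) -> np.ndarray:
    """Scale a (n, H, W, 3) array of 0..1 RGB pixels the torchvision way."""
    return (pixels_0_1 - IMAGENET_MEAN) / IMAGENET_STD
```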
The input scaling is applied:
In the input features vector, so the inputs are already scaled correctly and there is no need to re-scale them yourself. If you're converting the input features vector into images before training because your training pipeline requires this, make sure to un-normalize first.
When running inference, both in the Studio and on-device, so again there is no need to re-scale yourself.
You can control the image input scaling when you create the block in the CLI (1.19.1 or higher), or by editing the block in the UI.
If you need data in channels-first (NCHW) mode, you'll need to transpose the input feature vector yourself before training. You can still just write out an NCHW model, as Edge Impulse supports both NHWC and NCHW models.
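A sketch of that transpose with NumPy, using a placeholder 96x96 RGB batch:

```python
import numpy as np

X = np.zeros((8, 96, 96, 3), dtype=np.float32)  # placeholder channels-last batch (n, W, H, C)
X_nchw = np.transpose(X, (0, 3, 1, 2))           # channels-first batch (n, C, W, H)
print(X_nchw.shape)  # (8, 3, 96, 96)
```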
Edge Impulse only supports RGB models. If you have a model that requires BGR input rather than RGB input (e.g. ResNet50), you'll need to transpose the first and last channels.
In Keras you do this by adding a Lambda layer; see the example using ResNet50.
For PyTorch you do this by first converting the trained model to ONNX, then transposing using scc4onnx.
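A sketch of the Keras approach is shown below; the input size, the pooling/dense head, and the layer name are placeholders for illustration, not the linked ResNet50 example:

```python
import tensorflow as tf
from tensorflow.keras import layers

inputs = layers.Input(shape=(96, 96, 3))                          # RGB images from Edge Impulse
bgr = layers.Lambda(lambda x: x[..., ::-1], name="rgb_to_bgr")(inputs)  # reverse channel order

# Feed `bgr` into a backbone that expects BGR input; a plain pooling head
# stands in for the real backbone in this sketch.
x = layers.GlobalAveragePooling2D()(bgr)
outputs = layers.Dense(3, activation="softmax")(x)

model = tf.keras.Model(inputs, outputs)
model.summary()
```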
The training pipeline can output either TFLite or ONNX files:
If you output TFLite files
model.tflite - a TFLite file with float32 inputs and outputs.
model_quantized_int8_io.tflite - a quantized TFLite file with int8 inputs and outputs.
saved_model.zip - a TensorFlow saved model (optional).
At least one of the TFLite files is required.
If you output ONNX files
model.onnx - an ONNX file with float16 or float32 inputs and outputs.
We automatically convert this file to both unquantized and quantized TFLite files after training.
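For a PyTorch pipeline, exporting to model.onnx might look like the sketch below; the placeholder network, input shape, and opset version are illustrative only:

```python
import torch
from torch import nn

# Placeholder network standing in for your trained model
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 96 * 96, 3))
model.eval()

dummy_input = torch.zeros((1, 3, 96, 96), dtype=torch.float32)  # example NCHW input
torch.onnx.export(
    model, dummy_input, "out/model.onnx",
    opset_version=13, input_names=["input"], output_names=["output"],
)
```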
I'm using scikit-learn, I don't have TFLite or ONNX files...
If you have a training pipeline that cannot output TFLite files by default (e.g. scikit-learn), you can use jax to implement the inference function and compile that to TFLite. See our example repository. If there are any TFLite ops in your final model that are not supported by the EON Compiler (so you cannot run on device), please let us know on the forums.
Host your block directly within Edge Impulse with the Edge Impulse CLI:
To edit the block, go to:
Enterprise: go to your organization, Custom blocks > Machine learning.
Developers: click on your photo in the top right corner, then select Custom blocks > Machine learning.
The block is now available from inside any of your Edge Impulse projects. Add it via Create impulse > Add a learning block.
Unfortunately object detection models typically don't have a standard way to go from neural network output layer to bounding boxes. Currently we support the following types of output layers:
MobileNet SSD
Edge Impulse FOMO
YOLOv5 (compatible with Ultralytics YOLOv5 v6)
YOLOv5 for Renesas DRP-AI
YOLOv7
YOLOX
If you have an object detection model with a different output layer then please contact your user success engineer (enterprise) or let us know on the forums (free users) with an example on how to interpret the output, and we can add it.
When training locally you can use the profiling API to get latency, RAM and ROM estimates. This is very useful as you can immediately see whether your model will fit on device. Additionally, you can use this API as part of your experiment tracking (e.g. in Weights & Biases or MLflow) to weed out models that won't fit your latency or memory constraints.
The profiling API expects:
A TFLite file.
A reference device (for latency calculation) - you can get a list of all devices via getProjectInfo in the latencyDevices object.
A reference model (which model is closest to your architecture) - you can choose between gestures-large-f32, gestures-large-i8, image-32-32-mobilenet-f32, image-32-32-mobilenet-i8, image-96-96-mobilenet-f32, image-96-96-mobilenet-i8, image-320-320-mobilenet-ssd-f32, keywords-2d-f32, keywords-2d-i8. Make sure to use the i8 models if you have quantized your model.
You can also use the Python SDK to profile your model easily. See here for an example on how to profile a model created in Keras.
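A rough sketch of such a profiling call with the edgeimpulse Python SDK is shown below; the API key and device identifier are placeholders, and the exact function names may differ between SDK versions, so check the SDK documentation:

```python
import edgeimpulse as ei

ei.API_KEY = "ei_..."  # placeholder: your Edge Impulse API key

# Placeholder device identifier; see the latencyDevices list mentioned above
profile = ei.model.profile(model="out/model_quantized_int8_io.tflite",
                           device="cortex-m4f-80mhz")
print(profile.summary())
```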