The NVIDIA TAO Toolkit, built on TensorFlow and PyTorch, uses the power of transfer learning while simplifying the model training process and optimizing the model for inference throughput on the target platform. The result is an ultra-streamlined workflow: take your own models or pre-trained models, adapt them to your own real or synthetic data, then optimize for inference throughput, all without needing AI expertise or large training datasets.
Edge Impulse offers the following NVIDIA TAO learning blocks: RetinaNet, YOLOv3, YOLOv4, and SSD for object detection, plus an image classification block.
Only available with Edge Impulse Professional and Enterprise Plans
Try our Professional Plan or FREE Enterprise Trial today.
NVIDIA support information
For further updates on official model support by NVIDIA, please see the NVIDIA TAO Toolkit documentation.
To build your first object detection or image classification model using the NVIDIA TAO Toolkit:
Create a new project in Edge Impulse.
Make sure to set your labeling method to 'Bounding boxes (object detection)' or 'One label per data item (image classification)'.
Collect and prepare your dataset as described in the object detection or image classification tutorials. If you prefer to upload images from a script, see the ingestion API sketch after these steps.
Create an Impulse with an Object Detection (Images) or Transfer Learning (Images) block.
Extract your image features.
In your Object Detection (Images) or Transfer Learning (Images) view, select your NVIDIA TAO model:
NVIDIA TAO learning blocks are not automatically recommended where int8 quantization is required.
Under 'NVIDIA TAO...', choose from the available architectures and parameters; in total there are 88 object detection architectures and 15 image classification architectures.
There are pre-trained 3x224x224 backbones from the NVIDIA TAO catalog, and others trained by Edge Impulse on ImageNet.
For image classification, the pre-trained weights only support a 224x224 image resolution, and image width and height must be greater than 32 (see the resizing sketch after these steps).
Click on 'Start training'.
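If you prefer to upload your images from a script rather than through Studio, the sketch below is a minimal example using Python's requests library against the Edge Impulse ingestion API's /api/training/files endpoint. The API key, folder name, and label are placeholders; for object detection projects, labels normally come from bounding-box annotations rather than the x-label header, so check the ingestion API documentation for your exact setup.

```python
import mimetypes
import os
import requests

# Placeholders: replace with your own project API key, image folder, and label.
API_KEY = "ei_..."           # Edge Impulse project API key (Dashboard > Keys)
IMAGE_DIR = "dataset/car"    # local folder containing JPEG/PNG images
LABEL = "car"                # class label (image classification); object
                             # detection labels usually come from bounding boxes

INGESTION_URL = "https://ingestion.edgeimpulse.com/api/training/files"

for filename in sorted(os.listdir(IMAGE_DIR)):
    if not filename.lower().endswith((".jpg", ".jpeg", ".png")):
        continue
    path = os.path.join(IMAGE_DIR, filename)
    mime = mimetypes.guess_type(filename)[0] or "application/octet-stream"
    with open(path, "rb") as f:
        response = requests.post(
            INGESTION_URL,
            headers={"x-api-key": API_KEY, "x-label": LABEL},
            files={"data": (filename, f, mime)},
        )
    response.raise_for_status()
    print(f"Uploaded {filename} ({response.status_code})")
```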
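Studio normally resizes your images for you via the impulse's image block, so an offline pass is optional. The sketch below is a minimal example, using Pillow and hypothetical folder names, of checking the minimum 32-pixel constraint and resizing images to the 224x224 resolution the pre-trained classification backbones expect.

```python
from pathlib import Path
from PIL import Image

SRC_DIR = Path("dataset/raw")      # hypothetical input folder
DST_DIR = Path("dataset/224x224")  # hypothetical output folder
TARGET = (224, 224)                # resolution supported by the pre-trained
                                   # image classification backbones

DST_DIR.mkdir(parents=True, exist_ok=True)

for path in sorted(SRC_DIR.glob("*.jpg")):
    with Image.open(path) as img:
        width, height = img.size
        if width <= 32 or height <= 32:
            # Below the minimum supported input size; flag it instead of converting.
            print(f"Skipping {path.name}: {width}x{height} is too small")
            continue
        img.convert("RGB").resize(TARGET).save(DST_DIR / path.name)
```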
With everything set up, you can now build your machine learning model with these tutorials: