The NVIDIA TAO Toolkit, built on TensorFlow and PyTorch, uses the power of transfer learning while simultaneously simplifying the model training process and optimizing the model for inference throughput on the target platform. The result is an ultra-streamlined workflow: take your own models or pre-trained models, adapt them to your own real or synthetic data, then optimize for inference throughput, all without needing AI expertise or large training datasets.
Only available for enterprise customers
As this integration uses GPU hours for training, it is only available to enterprise customers. View our pricing for more information.

Getting started with NVIDIA TAO Toolkit

Check out NVIDIA's documentation for information on getting started as a first-time user with the TAO Toolkit.

Preliminary steps

Now, clone one of the following GitHub repositories to bring your TAO model into your Edge Impulse enterprise organization and projects:
Then, follow the instructions in the README of the respective repository to integrate the pipeline and run it locally, or push it to your Edge Impulse organization.
The block is now available under any of your projects via Create impulse > Add new learning block.
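The clone-and-push workflow above can be sketched as follows. This is a minimal sketch, assuming the Edge Impulse CLI (which provides `edge-impulse-blocks`) is installed and you are logged in to your organization; the repository URL is a placeholder for whichever example repository you chose:

```shell
# Clone one of the TAO example repositories
# (URL is a placeholder -- substitute the repository you chose above)
git clone https://github.com/<org>/<tao-example-repo>.git
cd <tao-example-repo>

# Initialize the block against your Edge Impulse organization
# (prompts for the organization and block details on first run)
edge-impulse-blocks init

# Push the learning block to your organization so it appears under
# Create impulse > Add new learning block
edge-impulse-blocks push
```

After the push completes, the block is listed alongside the built-in learning blocks in every project in your organization.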

Running other TAO models

To use a different TAO model, you can modify one of the example repositories from the previous step.
  • If the model is available in the 'Image Classification (PyT)' or 'Image Classification (TF1)' applications, you just need to change the specs file.
  • If your model is available in another application, then:
    • Modify the Dockerfile to pull from the right container.
    • Modify dataset-conversion/ to convert your dataset and write out a valid specs file.
    • Modify the training entry point to call the correct TAO runtime commands.
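As a sketch of the Dockerfile change when retargeting the block at a different TAO application: the container path, tag, and file names below are illustrative assumptions, not the exact contents of the example repositories — look up the container image that matches your application and TAO version in NVIDIA's NGC catalog:

```dockerfile
# Pull the TAO container for your target application; this tag is
# illustrative -- check NGC for the image matching your TAO version
FROM nvcr.io/nvidia/tao/tao-toolkit:5.0.0-tf1.15.5

WORKDIR /app

# Copy in the dataset-conversion scripts and the specs file they emit
COPY dataset-conversion/ ./dataset-conversion/
COPY specs/ ./specs/

# Hypothetical entry-point script that calls the TAO runtime commands
# for your chosen application
COPY run.sh ./
ENTRYPOINT ["/bin/bash", "run.sh"]
```

The key change is the `FROM` line: each TAO application family ships in its own container, so the rest of the pipeline only works if the image provides the runtime commands your entry point calls.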

Next steps: building a machine learning model

With everything set up, you can now build your machine learning model with these tutorials: