NVIDIA Train, Adapt, and Optimize (TAO) is a GUI-based, workflow-driven framework that simplifies and accelerates the creation of enterprise AI applications and services. By fine-tuning pretrained models, enterprises can produce domain-specific models in hours rather than months, eliminating the need for large training runs and deep AI expertise.

Apply for Early Access
Adapting AI to complex industrial environments with NVIDIA TAO, Metropolis, and Fleet Command.
What Is NVIDIA TAO?
The foundation of an AI application is a deep learning model that’s tuned and optimized to deliver the right level of accuracy and performance. Building a deep learning model consists of several steps, including collecting large, high-quality datasets, preparing the data, training the model, and finally optimizing it for deployment.
For many enterprises, this is cost-prohibitive: they may lack the data, the deep AI expertise, and the computing infrastructure required to train these complex models.
NVIDIA TAO lowers the barrier to AI by bringing together key NVIDIA technologies, such as pre-trained models from the NGC™ catalog, the Transfer Learning Toolkit (TLT), federated learning with NVIDIA Clara™, and NVIDIA® TensorRT™, and simplifies the creation of AI applications through an intuitive GUI-based workflow.
Additionally, with the integration of NVIDIA Fleet Command™, IT managers can deploy and orchestrate their optimized AI applications.
Fast-Track AI with NVIDIA TAO
NVIDIA TAO simplifies the time-consuming parts of a deep learning workflow, from data preparation to training to optimization, shortening the time to value.
Produce state-of-the-art models in hours by fine-tuning pre-trained models from the NGC catalog across various domains, including vision, speech, recommender systems, and language understanding.
Adapt your models with your data using TLT or collaborate with partners through federated learning and contribute to a global model while preserving data privacy.
Key Features in NVIDIA TAO
Pre-Trained Models from NGC
The NGC catalog offers a diverse set of pre-trained models for a variety of common AI tasks that are optimized for NVIDIA Tensor Core GPUs and can be easily re-trained by updating just a few layers, saving valuable time.
These pre-trained models can easily integrate into AI application frameworks such as Clara for healthcare, Isaac for robotics, Jarvis for conversational AI, Metropolis for smart cities and more.
Pre-trained models in the catalog are accompanied by model credentials that show parameters such as accuracy, training epochs, batch size, and more, giving you the confidence to choose the right model for your use case.

Explore NGC pre-trained models
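The "re-trained by updating just a few layers" idea is ordinary transfer learning: keep a pretrained feature extractor frozen and train only a small task-specific head on your own data. Below is a minimal, framework-free sketch of that pattern; the backbone, dataset, and training loop are hypothetical stand-ins for illustration, not the TAO or NGC APIs.

```python
import math
import random

random.seed(0)

# Hypothetical "pretrained backbone": a fixed, frozen feature extractor.
# In practice this would be an NGC model with its early layers frozen.
def backbone(x):
    return [math.tanh(x), math.tanh(2 * x - 1)]

# Trainable "head": a logistic-regression layer we fine-tune on new data.
w, b = [0.0, 0.0], 0.0

def predict(x):
    f = backbone(x)
    z = sum(wi * fi for wi, fi in zip(w, f)) + b
    return 1 / (1 + math.exp(-z))

# Tiny labeled dataset for the new task: label 1 if x > 0.5.
data = [(x / 10, 1 if x > 5 else 0) for x in range(11)]

# Fine-tune only the head with gradient descent; the backbone never changes.
lr = 0.5
for _ in range(500):
    for x, y in data:
        p = predict(x)
        f = backbone(x)
        g = p - y  # gradient of the log loss w.r.t. the pre-sigmoid output
        w = [wi - lr * g * fi for wi, fi in zip(w, f)]
        b -= lr * g

accuracy = sum((predict(x) > 0.5) == bool(y) for x, y in data) / len(data)
print(f"head-only fine-tuning accuracy: {accuracy:.2f}")
```

Because only the small head is updated, training needs far less data and compute than learning the whole model from scratch, which is the time saving the catalog's re-trainable models provide.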
Transfer Learning Toolkit
The NVIDIA Transfer Learning Toolkit abstracts away AI and deep learning framework complexity, enabling you to build production-quality models from pre-trained models faster, with no coding required.
A toolkit for anyone building AI apps and services, TLT reduces the costs associated with large-scale data collection and labeling, and eliminates the burden of training AI and machine learning models from the ground up.
With TLT, you can use NVIDIA's production-quality pre-trained models and deploy them as is, or apply minimal fine-tuning for various computer vision and conversational AI use cases.

Learn more about Transfer Learning Toolkit
Federated Learning

Federated learning enables you to build generalizable AI models that have learned from diverse data distributed across multiple sites. It improves model performance by allowing you to securely collaborate on, train, and contribute to a global model. With differential privacy, each site shares only partial model weights with the global model and can add random noise to those weights, making the model less vulnerable to model-inversion attacks.
Federated learning is currently available as part of the NVIDIA Clara application framework.

Learn more about federated learning powered by NVIDIA Clara
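The mechanics described above can be sketched in a few lines: each site shares only a subset of its locally trained weights, adds random noise to what it shares, and a server averages the contributions into a global model. This is a toy illustration of the scheme, not the Clara federated learning API; the weight values and sharing parameters are made up.

```python
import random

random.seed(42)

# Hypothetical local model weights at three sites (e.g. after local training).
site_weights = [
    [0.9, 1.1, 0.5, 2.0],
    [1.1, 0.9, 0.7, 1.8],
    [1.0, 1.0, 0.6, 2.2],
]

SHARE_FRACTION = 0.5  # each site shares only half its weights (partial sharing)
NOISE_STD = 0.01      # random noise added to every shared weight

def site_update(weights):
    """Pick a random subset of weight indices, add noise, share only those."""
    n_shared = int(len(weights) * SHARE_FRACTION)
    indices = random.sample(range(len(weights)), n_shared)
    return {i: weights[i] + random.gauss(0, NOISE_STD) for i in indices}

def aggregate(updates, n_weights):
    """Server: average every contribution received for each weight index."""
    global_w = []
    for i in range(n_weights):
        contribs = [u[i] for u in updates if i in u]
        global_w.append(sum(contribs) / len(contribs) if contribs else 0.0)
    return global_w

updates = [site_update(w) for w in site_weights]
global_model = aggregate(updates, 4)
print("global model:", [round(w, 2) for w in global_model])
```

Raw training data never leaves a site; only noisy, partial weight vectors do, which is what limits exposure to model-inversion attacks.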
TensorRT

NVIDIA TAO also leverages NVIDIA TensorRT™, an SDK for high-performance deep learning inference. It includes a deep learning inference optimizer and runtime that deliver low latency and high throughput for deep learning inference applications.
TensorRT is built on CUDA®, NVIDIA's parallel programming model, and enables you to optimize inference by leveraging libraries, development tools, and technologies in CUDA-X™ for artificial intelligence, autonomous machines, high-performance computing, intelligent video analytics, and graphics.

Learn more about TensorRT
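Latency and throughput, the two metrics an inference optimizer trades off, are straightforward to measure for any inference function: latency is the time per request, throughput the samples processed per second. The framework-agnostic sketch below uses a hypothetical stand-in for an inference engine (a timed dummy function), not TensorRT itself.

```python
import time

def infer_batch(batch):
    # Stand-in for an inference engine: a fixed per-call overhead
    # plus a small per-sample cost. Real engines behave similarly.
    time.sleep(0.001 + 0.0001 * len(batch))
    return [x * 2 for x in batch]

def measure(batch_size, n_batches=50):
    samples = list(range(batch_size))
    start = time.perf_counter()
    for _ in range(n_batches):
        infer_batch(samples)
    elapsed = time.perf_counter() - start
    latency_ms = elapsed / n_batches * 1000        # time per request
    throughput = batch_size * n_batches / elapsed  # samples per second
    return latency_ms, throughput

for bs in (1, 8, 32):
    lat, thr = measure(bs)
    print(f"batch={bs:2d}  latency={lat:6.2f} ms  throughput={thr:8.1f} samples/s")
```

Larger batches amortize the per-call overhead and raise throughput, but each request takes longer; tuning that trade-off per model and GPU is the kind of work an inference optimizer automates.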
Fleet Command

NVIDIA TAO offers the ability to deploy and orchestrate AI applications with NVIDIA Fleet Command.
Fleet Command is a hybrid-cloud platform for IT admins to remotely deploy applications, update software over the air, and monitor location health. It combines the benefits of accelerated computing at the edge with the ease of software as a service to deliver resilient AI securely and remotely to your entire network in minutes.

Learn more about Fleet Command
Latest NGC Catalog News
Check out the latest update to the NGC catalog user interface, including a richer, more seamless experience.
Learn about the value of pre-trained models from the NGC catalog with the help of an example.
Apply for exclusive news, updates, and early access to NVIDIA TAO.