AI applications are powered by models. Deep learning models are built on mathematical algorithms and trained using data and human expertise. These models can accurately predict outcomes based on input data such as images, text, or speech.
Building, training, and optimizing these models is both critical and time-intensive. Domain expertise and countless hours of computation are needed to develop production-quality models. This is at odds with how quickly enterprises must operationalize their AI initiatives and reduce their time to market (TTM).
Fine-tuning pretrained models without AI expertise
Fortunately, there is a solution: pretrained models. A pretrained model is one that has already been trained on representative datasets, with its weights and biases tuned in the process; through transfer learning, you can adapt it to a related task. Unlike training a model from scratch, which requires significant time and resources, AI solutions built with pretrained models are delivered as fully operational, ready-to-use AI engines for a variety of use cases.
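The transfer-learning idea can be sketched in a few lines of numpy: a frozen "backbone" stands in for the pretrained weights, and only a small task-specific head is trained on the new data. This is a toy illustration, not the TAO workflow; all weights and data here are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pretrained" backbone: a fixed feature extractor standing in for
# weights learned on a large dataset. It stays frozen during fine-tuning.
W_backbone = rng.normal(size=(8, 4))

def features(x):
    """Frozen backbone: map raw inputs to learned features."""
    return np.tanh(x @ W_backbone)

# New task head: the only part we train (fine-tune).
w_head = np.zeros(4)
b_head = 0.0

# Toy labeled data for the new task.
X = rng.normal(size=(64, 8))
y = (X[:, 0] > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fine-tune only the head with plain gradient descent on logistic loss.
lr = 0.5
for _ in range(200):
    f = features(X)
    p = sigmoid(f @ w_head + b_head)
    grad = p - y  # dLoss/dlogit for logistic loss
    w_head -= lr * (f.T @ grad) / len(X)
    b_head -= lr * grad.mean()

acc = ((sigmoid(features(X) @ w_head + b_head) > 0.5) == y).mean()
print(f"training accuracy after fine-tuning the head: {acc:.2f}")
```

Because the backbone is reused rather than retrained, far less data and compute are needed than when training the whole model from scratch.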
In many cases, however, an out-of-the-box pretrained model will not fit your use case or deliver the accuracy and performance you need. In those instances, you will have to modify or customize the pretrained model to fit your use case.
Customizing pretrained models for different use cases
So how do you customize a pretrained model without spending too much time and effort? You can use NVIDIA TAO, an AI-model-adaptation framework, to simplify your development workflow. The TAO Toolkit, the CLI- and Jupyter-notebook-based component of NVIDIA TAO, makes it easy to fine-tune pretrained models with your own data. No AI expertise is required.
The TAO Toolkit is highly extensible and helps you adapt your model to new environments, augment your data, or add new classes.
Below are three examples, highlighted in the NVIDIA whitepaper, of proven methodologies for speeding up your AI workflow.
- Adapting to different camera types: Say you want to deploy a solution on infrared or thermal cameras. You can start with the PeopleNet model, which has already been trained on millions of images, and fine-tune it with only 2,500 images to achieve an mAP of almost 80%.
- Augmenting a limited dataset: Data collection is time-consuming. With offline or online data augmentation, you can expand your dataset with modified copies of existing samples. Augmentation adds variation and randomness, which helps the model generalize and improves its accuracy on data it has never seen before.
- Adding new classes: Imagine that you have been asked to create an application that detects whether people are wearing helmets while riding their bicycles. With the TAO Toolkit, you can take a model that detects people and add a new "helmet" class to it, then fine-tune it on a dataset that contains labels for both people and helmets.
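The augmentation bullet above can be sketched with a few numpy transforms on a toy image array. This is a minimal illustration assuming images of shape (H, W, C); real pipelines, including the TAO Toolkit's, offer many more transforms.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy "image": a small random RGB array standing in for real data.
image = rng.integers(0, 256, size=(4, 4, 3), dtype=np.uint8)

def augment(img, rng):
    """Return a randomly flipped, rotated, brightness-shifted copy."""
    out = img.copy()
    if rng.random() < 0.5:
        out = out[:, ::-1, :]                  # horizontal flip
    out = np.rot90(out, k=rng.integers(0, 4))  # random 90-degree rotation
    shift = rng.integers(-30, 31)              # brightness jitter
    out = np.clip(out.astype(int) + shift, 0, 255).astype(np.uint8)
    return out

# One source image yields many varied training samples.
augmented = [augment(image, rng) for _ in range(4)]
print(len(augmented), augmented[0].shape)
```

Applied offline, such transforms multiply the size of a small dataset before training; applied online, they present the model with a fresh variation on every epoch.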
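The new-class bullet can likewise be sketched in numpy: extend a classifier's output layer by one row while keeping the already-learned rows. The weights here are hypothetical placeholders, and a real TAO workflow would then fine-tune the whole model on the combined dataset.

```python
import numpy as np

rng = np.random.default_rng(1)
n_features = 16

# Existing head for two classes, e.g. "person" and "bicycle"
# (random placeholders standing in for learned weights).
W_old = rng.normal(size=(2, n_features))
b_old = np.zeros(2)

# Add one near-zero row for the new "helmet" class, keeping the
# existing rows untouched so prior knowledge is preserved.
W_new = np.vstack([W_old, rng.normal(scale=0.01, size=(1, n_features))])
b_new = np.concatenate([b_old, [0.0]])

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# The extended head now scores three classes; fine-tuning on data
# labeled with all three classes would train the new row.
x = rng.normal(size=n_features)
probs = softmax(W_new @ x + b_new)
print(probs.shape)
```

Initializing the new row near zero is a common choice: the extended model starts out behaving almost exactly like the original, and fine-tuning gradually shapes the new class.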
Put it into practice
When you eliminate AI framework complexity, you can focus on what matters: shortening your AI application's TTM. The TAO Toolkit makes it easy for you to train, adapt, and optimize pretrained models, without the need for large training datasets or AI expertise.