
New GPU-Optimized Models and Notebooks Available from TensorFlow Hub, Google AI Hub, and Google Colab


This week at TensorFlow World, Google announced community contributions to TensorFlow Hub, a machine learning model library. NVIDIA was a key participant, contributing models and notebooks to TensorFlow Hub, along with new content for Google AI Hub and Google Colab that incorporates GPU optimizations from NVIDIA CUDA-X AI libraries.

UNet Models and Notebooks for Industrial Quality Inspection

The UNet model is a convolutional auto-encoder for 2D image segmentation used in industrial quality inspection.
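
For readers unfamiliar with the architecture, the sketch below shows a minimal UNet-style encoder-decoder in Keras: convolutions downsample the image into a compact representation, then upsampling layers with skip connections recover a per-pixel defect mask. The input resolution, channel counts, and single-channel output are illustrative assumptions and do not reflect the configuration of NVIDIA's published models.

import tensorflow as tf
from tensorflow.keras import layers

def conv_block(x, filters):
    # Two 3x3 convolutions, the basic building block on both sides of the "U".
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def build_unet(input_shape=(512, 512, 1)):  # grayscale inspection image (assumed size)
    inputs = layers.Input(shape=input_shape)

    # Encoder: downsample while increasing feature depth.
    c1 = conv_block(inputs, 32)
    p1 = layers.MaxPooling2D()(c1)
    c2 = conv_block(p1, 64)
    p2 = layers.MaxPooling2D()(c2)

    # Bottleneck.
    b = conv_block(p2, 128)

    # Decoder: upsample and concatenate encoder features (skip connections).
    u2 = layers.UpSampling2D()(b)
    c3 = conv_block(layers.concatenate([u2, c2]), 64)
    u1 = layers.UpSampling2D()(c3)
    c4 = conv_block(layers.concatenate([u1, c1]), 32)

    # Per-pixel defect probability map.
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(c4)
    return tf.keras.Model(inputs, outputs)

model = build_unet()
model.summary()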

NVIDIA contributed 10 variations of UNet to TensorFlow Hub, each specialized in detecting a different type of defect (e.g., scratches or spots), along with notebooks to try them. NVIDIA also published a UNet notebook to Google AI Hub with TensorFlow-TensorRT integration for optimized inference deployment.
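
As a rough illustration of what the TensorFlow-TensorRT (TF-TRT) step in that notebook does, the sketch below converts an exported SavedModel for FP16 inference. The model paths are hypothetical, and older TensorFlow 2 releases pass the precision through a conversion_params argument rather than precision_mode.

from tensorflow.python.compiler.tensorrt import trt_convert as trt

# Convert a SavedModel so that supported subgraphs run as TensorRT engines.
# "unet_savedmodel" and "unet_tftrt_fp16" are hypothetical paths.
converter = trt.TrtGraphConverterV2(
    input_saved_model_dir="unet_savedmodel",
    precision_mode=trt.TrtPrecisionMode.FP16,  # FP16 maps well onto Tensor Cores
)
converter.convert()                # build the TensorRT-optimized graph
converter.save("unet_tftrt_fp16")  # write the optimized SavedModel for deployment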

Available from: TensorFlow Hub | Google AI Hub

BERT Question Answering Inference with Mixed Precision

Bidirectional Encoder Representations from Transformers (BERT) is a method of pre-training language representations that obtains state-of-the-art results on a wide array of natural language processing (NLP) tasks.

This notebook walks through optimized inference for question answering (QA) tasks with BERT-Large, using mixed precision on NVIDIA Tensor Core GPUs.
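
As background on what mixed precision means here, the sketch below sets the TensorFlow 2 mixed-precision policy and shows its effect on a stand-in layer; it is not the notebook's code, and the layer sizes are arbitrary. Earlier TensorFlow releases expose the same policy under tf.keras.mixed_precision.experimental.

import tensorflow as tf

# Compute in float16 while keeping variables in float32, which lets the
# Transformer matrix multiplies run on Tensor Cores.
tf.keras.mixed_precision.set_global_policy("mixed_float16")

hidden = tf.keras.layers.Dense(1024)   # stands in for a BERT-Large sublayer
qa_head = tf.keras.layers.Dense(2)     # start/end logits for the answer span

x = tf.random.uniform([1, 384, 1024])  # [batch, sequence, hidden] activations
logits = qa_head(hidden(x))

print(logits.dtype)                    # float16 -> half-precision compute
print(hidden.kernel.dtype)             # float32 -> full-precision weights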

Available from: Google AI Hub | Google Colab 

Additional contributions and collaborations from NVIDIA and Google are on the way.

These models and more are also available to try from NGC and NVIDIA Deep Learning Examples on GitHub.