
Training Neural Networks with Tensor Cores

Dusan Stosic, NVIDIA

GTC 2020

Mixed-precision training of deep neural networks speeds up training and reduces memory requirements, allowing the use of larger batch sizes, larger models, or larger inputs. Tensor Cores in the Volta architecture provide an order of magnitude more throughput than FP32. We will first present considerations and techniques for training with reduced precision, and review results from networks across a variety of tasks and model architectures. Then we will discuss real-world training in mixed precision using popular deep learning frameworks. We will conclude with guidelines and recommendations for maximizing training performance with mixed precision.
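One of the central techniques for training with reduced precision is loss scaling. The following is an illustrative sketch (not code from the talk) showing, with NumPy's FP16 type, why small gradients underflow in half precision and how multiplying by a scale factor keeps them representable:

```python
import numpy as np

# FP16 cannot represent magnitudes below ~6e-8, so a tiny gradient
# silently flushes to zero when stored in half precision.
tiny_grad = np.float16(1e-8)
print(tiny_grad)        # 0.0 -- the gradient underflowed

# Scaling the loss (and therefore every gradient) by a constant
# shifts the value back into FP16's representable range.
scale = 1024.0
scaled_grad = np.float16(1e-8 * scale)
print(scaled_grad)      # nonzero -- survives in FP16

# Before the optimizer step, unscale in FP32 to recover the
# original gradient magnitude.
recovered = np.float32(scaled_grad) / scale
print(recovered)        # ~1e-08
```

In framework implementations of automatic mixed precision, this scale factor is typically managed dynamically, increasing when gradients are finite and backing off when overflow is detected.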