DEVELOPER BLOG

Tag: Mixed Precision

AI / Deep Learning

Accelerating AI Training with NVIDIA TF32 Tensor Cores

The NVIDIA Ampere GPU architecture introduced the third generation of Tensor Cores, with the new TensorFloat-32 (TF32) mode for accelerating FP32 convolutions and…
10 MIN READ
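As a quick taste of the topic, recent PyTorch builds expose TF32 as global switches; this minimal sketch uses PyTorch's flag names, which come from the framework rather than from the post itself:

```python
import torch

# TF32 keeps FP32's 8-bit exponent range but rounds the mantissa to
# 10 bits before Tensor Core multiplication, trading a little precision
# for large speedups on Ampere-class GPUs.
torch.backends.cuda.matmul.allow_tf32 = True  # matmuls may run in TF32
torch.backends.cudnn.allow_tf32 = True        # cuDNN convolutions too

a = torch.randn(1024, 1024, device="cuda")
b = torch.randn(1024, 1024, device="cuda")
c = a @ b  # dispatched to a TF32 Tensor Core kernel when eligible
```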
Artificial Intelligence

Video Series: Mixed-Precision Training Techniques Using Tensor Cores for Deep Learning

Neural networks with thousands of layers and millions of neurons demand high performance and faster training times. The complexity and size of neural networks…
5 MIN READ
Accelerated Computing

Using Tensor Cores for Mixed-Precision Scientific Computing

Double-precision floating point (FP64) has been the de facto standard for scientific simulation for several decades. Most numerical methods used in…
9 MIN READ
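The usual recipe behind that headline is iterative refinement: factorize in low precision, then recover FP64 accuracy with cheap FP64 residual corrections. A minimal CPU sketch with NumPy/SciPy, illustrative only since the post targets Tensor Core GPUs:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def refined_solve(A, b, iters=5):
    # Factor once in FP32: the expensive step, and the one a GPU would
    # hand to Tensor Cores. A and b are FP64 inputs.
    lu, piv = lu_factor(A.astype(np.float32))
    x = lu_solve((lu, piv), b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        r = b - A @ x                                   # FP64 residual
        dx = lu_solve((lu, piv), r.astype(np.float32))  # cheap FP32 correction
        x += dx.astype(np.float64)
    return x

A = np.random.rand(256, 256) + 256 * np.eye(256)  # well-conditioned test matrix
b = np.random.rand(256)
x = refined_solve(A, b)
print(np.linalg.norm(A @ x - b))  # residual shrinks toward FP64 round-off
```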
AI / Deep Learning

NVIDIA Apex: Tools for Easy Mixed-Precision Training in PyTorch

Most deep learning frameworks, including PyTorch, train using 32-bit floating point (FP32) arithmetic by default. However, using FP32 for all operations is not…
8 MIN READ
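Apex's amp module, the subject of the post, wraps an existing model and optimizer in a couple of lines. This sketch assumes Apex is installed from the NVIDIA/apex repository and uses the O1 mixed-precision level:

```python
import torch
from apex import amp  # installable from the NVIDIA/apex repository

model = torch.nn.Linear(1024, 1024).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

# O1 patches common ops to run in FP16 where safe, keeping FP32 master weights.
model, optimizer = amp.initialize(model, optimizer, opt_level="O1")

data = torch.randn(32, 1024, device="cuda")    # stand-in training batch
target = torch.randn(32, 1024, device="cuda")

optimizer.zero_grad()
loss = torch.nn.functional.mse_loss(model(data), target)
with amp.scale_loss(loss, optimizer) as scaled_loss:
    scaled_loss.backward()  # dynamic loss scaling guards small FP16 gradients
optimizer.step()
```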
AI / Deep Learning

Mixed Precision Training for NLP and Speech Recognition with OpenSeq2Seq

The success of neural networks thus far has been built on bigger datasets, better theoretical models, and reduced training time. Sequential models…
11 MIN READ
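The recipe such models lean on, FP16 compute with FP32 master weights and loss scaling, can be sketched by hand. The post itself works in TensorFlow via OpenSeq2Seq; this PyTorch toy with an illustrative static scale of 1024 shows one training step:

```python
import torch

SCALE = 1024.0  # static loss scale; real trainers often adjust it dynamically

model = torch.nn.Linear(512, 512).cuda().half()  # FP16 compute copy
# FP32 master weights: the optimizer updates these, not the FP16 copies.
master = [p.detach().clone().float() for p in model.parameters()]
optimizer = torch.optim.SGD(master, lr=1e-3)

x = torch.randn(32, 512, device="cuda", dtype=torch.float16)
y = torch.randn(32, 512, device="cuda", dtype=torch.float16)

loss = torch.nn.functional.mse_loss(model(x), y)
(loss * SCALE).backward()  # scaling lifts tiny gradients above FP16's underflow floor

for p, m in zip(model.parameters(), master):
    m.grad = p.grad.float() / SCALE  # unscale into FP32 before the update
optimizer.step()

with torch.no_grad():  # push the updated FP32 weights back into the FP16 model
    for p, m in zip(model.parameters(), master):
        p.copy_(m)
```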
Artificial Intelligence

Tensor Ops Made Easier in cuDNN

Neural network models have quickly taken advantage of NVIDIA Tensor Cores for deep learning since their introduction in the Tesla V100 GPU last year.
6 MIN READ
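From a framework, the cuDNN details are mostly hidden, but two knobs matter in practice: run in a Tensor Core dtype such as FP16, and, on older cuDNN versions, keep channel counts multiples of 8 so Tensor Core kernels are eligible. A small PyTorch illustration; the alignment note summarizes common guidance rather than this post's exact text:

```python
import torch

torch.backends.cudnn.benchmark = True  # let cuDNN autotune and pick Tensor Core kernels

# Channel counts that are multiples of 8 satisfied the original FP16
# alignment requirement; later cuDNN releases relax it by padding internally.
conv = torch.nn.Conv2d(64, 128, kernel_size=3, padding=1).cuda().half()
x = torch.randn(16, 64, 56, 56, device="cuda", dtype=torch.float16)
y = conv(x)  # runs as a Tensor Core convolution when eligible
```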