
GTC Silicon Valley 2019, ID S9998: Automatic Mixed Precision in PyTorch

Michael Carilli (NVIDIA)
We'll describe NVIDIA's Automatic Mixed Precision (AMP) for PyTorch, a tool that enables mixed precision training for neural networks in just three lines of Python. Mixed precision training combines the memory savings and Tensor Core-accelerated throughput of FP16 (16-bit) arithmetic for compute-intensive operations with traditional FP32 (32-bit) arithmetic for a small set of numerically sensitive operations. In practice, mixed precision delivers end-to-end speedups between 2X and 4X for many bellwether networks. We'll briefly review mixed precision benefits, concepts, and best practices, then walk through implementing AMP in several example models.
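As a rough sketch of the "three lines" pattern, here is what a training step looked like with NVIDIA's Apex amp API as it existed around the time of this talk; the toy model, optimizer, and loss are placeholders, and details such as the opt_level may differ in practice:

    import torch
    from apex import amp

    # Construct the model and optimizer as usual (toy example)
    model = torch.nn.Linear(1024, 1024).cuda()
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

    # Line 1: wrap the model and optimizer for mixed precision.
    # opt_level="O1" runs Tensor Core-friendly ops in FP16 and keeps
    # numerically sensitive ops in FP32.
    model, optimizer = amp.initialize(model, optimizer, opt_level="O1")

    # One training step with dummy data
    data = torch.randn(64, 1024).cuda()
    loss = model(data).pow(2).mean()

    # Lines 2-3: scale the loss to prevent FP16 gradient underflow,
    # then backpropagate through the scaled loss.
    with amp.scale_loss(loss, optimizer) as scaled_loss:
        scaled_loss.backward()
    optimizer.step()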

View the slides (pdf)