DO MORE WITH MIXED PRECISION TRAINING

Get greater GPU acceleration for deep learning models with Tensor Cores

Automatic Mixed Precision for Deep Learning

Deep Neural Network training has traditionally relied on the IEEE single-precision format; with mixed precision, however, you can train with half precision while maintaining the network accuracy achieved with single precision. This technique of using both single- and half-precision representations is referred to as mixed precision.

Benefits of Mixed Precision Training

  • Speeds up math-intensive operations, such as linear and convolution layers, by using Tensor Cores.
  • Speeds up memory-limited operations by accessing half the bytes compared to single precision.
  • Reduces memory requirements for training models, enabling larger models or larger minibatches.
Enabling mixed precision involves two steps: porting the model to use the half-precision data type where appropriate, and using loss scaling to preserve small gradient values.
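
For intuition, here is a minimal sketch of those two steps done by hand in PyTorch, assuming a CUDA device. The fixed loss scale of 128 and the layer sizes are illustrative assumptions; real recipes keep float32 master weights and choose the scale dynamically, and the automatic feature described below handles all of this for you:

    import torch

    # Step 1: port the model and data to half precision (simplified).
    model = torch.nn.Linear(1024, 1024).cuda().half()
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
    loss_scale = 128.0  # Step 2: a fixed loss scale (illustrative value)

    x = torch.randn(64, 1024, device="cuda", dtype=torch.float16)
    target = torch.randn(64, 1024, device="cuda", dtype=torch.float16)

    # Compute the loss in float32, then scale it so that small gradients
    # do not flush to zero in float16 during the backward pass.
    loss = torch.nn.functional.mse_loss(model(x).float(), target.float())
    (loss * loss_scale).backward()
    for p in model.parameters():
        p.grad.div_(loss_scale)  # unscale gradients before the update
    optimizer.step()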

    The automatic mixed precision feature in TensorFlow, PyTorch, and MXNet provides deep learning researchers and engineers with AI training speedups of up to 3X on NVIDIA Volta and Turing GPUs by adding just a few lines of code.

    Using Automatic Mixed Precision for Major Deep Learning Frameworks

    TensorFlow

    The Automatic Mixed Precision feature is available inside the TensorFlow container on the NVIDIA NGC container registry. To enable this feature inside the container, simply set one environment variable:

    export TF_ENABLE_AUTO_MIXED_PRECISION=1

    As an alternative, the environment variable can be set inside the TensorFlow Python script:

    import os
    os.environ['TF_ENABLE_AUTO_MIXED_PRECISION'] = '1'  # set before the graph runs

    Automatic mixed precision applies both of these steps internally in TensorFlow with a single environment variable, and offers finer-grained control when necessary.
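
    As a sketch of how this fits into a training script (assuming the TF 1.x-style graph workflow shipped in the NGC TensorFlow containers; the layer sizes here are illustrative):

    import os
    os.environ['TF_ENABLE_AUTO_MIXED_PRECISION'] = '1'  # before TF builds the graph

    import tensorflow as tf

    x = tf.placeholder(tf.float32, [None, 1024])
    labels = tf.placeholder(tf.float32, [None, 10])
    logits = tf.layers.dense(x, 10)
    loss = tf.losses.softmax_cross_entropy(labels, logits)
    # The graph rewrite casts Tensor Core-friendly ops to float16 and
    # applies loss scaling to the training step automatically.
    train_op = tf.train.MomentumOptimizer(0.01, 0.9).minimize(loss)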

    “TensorFlow developers will greatly benefit from NVIDIA's automatic mixed precision feature. This easy integration enables them to get up to 3X higher performance with mixed precision training on NVIDIA Tensor Core GPUs while maintaining model accuracy.”

    — Rajat Monga, Engineering Director, TensorFlow, Google

    “Automated mixed precision powered by NVIDIA Tensor Core GPUs on Alibaba allows us to instantly speed up AI models nearly 3X. Our researchers appreciated the ease of turning on this feature to instantly accelerate our AI.”

    — Wei Lin, Senior Director at Alibaba Computing Platform, Alibaba


    PyTorch

    The Automatic Mixed Precision feature is available in the Apex repository on GitHub. To enable it, add the following lines of code to your existing training script:

    from apex import amp

    model, optimizer = amp.initialize(model, optimizer)  # call once, after model/optimizer creation

    with amp.scale_loss(loss, optimizer) as scaled_loss:  # scales the loss for backprop
        scaled_loss.backward()
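
    For context, here is a sketch of where those calls sit in a full training loop. It assumes the Apex amp API and a CUDA GPU; opt_level "O1" is the standard mixed precision mode, and names such as loader are illustrative placeholders:

    import torch
    from apex import amp

    model = torch.nn.Linear(1024, 10).cuda()
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
    model, optimizer = amp.initialize(model, optimizer, opt_level="O1")

    for data, target in loader:  # `loader` stands in for your DataLoader
        optimizer.zero_grad()
        output = model(data.cuda())
        loss = torch.nn.functional.cross_entropy(output, target.cuda())
        with amp.scale_loss(loss, optimizer) as scaled_loss:
            scaled_loss.backward()  # backward runs on the scaled loss; amp unscales the gradients
        optimizer.step()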

    MXNet

    We are in the process of building out the automatic mixed precision feature for MXNet; you can find the ongoing work posted on GitHub. To enable the feature, add the following lines of code to your existing training script:

    from mxnet import autograd
    from mxnet.contrib import amp

    amp.init()                 # call before constructing the network
    amp.init_trainer(trainer)  # let amp manage the trainer's loss scaling
    with amp.scale_loss(loss, trainer) as scaled_loss:
        autograd.backward(scaled_loss)
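
    For context, here is a sketch of where those calls sit in a Gluon training loop. This assumes the mxnet.contrib.amp preview API and a GPU context; names such as net and loader are illustrative placeholders:

    import mxnet as mx
    from mxnet import autograd, gluon
    from mxnet.contrib import amp

    amp.init()  # must run before the network is constructed

    ctx = mx.gpu(0)
    net = gluon.nn.Dense(10)
    net.initialize(ctx=ctx)
    loss_fn = gluon.loss.SoftmaxCrossEntropyLoss()
    trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.01})
    amp.init_trainer(trainer)

    for data, label in loader:  # `loader` stands in for your DataLoader
        data, label = data.as_in_context(ctx), label.as_in_context(ctx)
        with autograd.record():
            loss = loss_fn(net(data), label)
        with amp.scale_loss(loss, trainer) as scaled_loss:
            autograd.backward(scaled_loss)
        trainer.step(data.shape[0])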

    Additional Resources