NVIDIA Collective Communications Library (NCCL)

Multi-GPU and multi-node collective communication primitives

The NVIDIA Collective Communications Library (NCCL) implements multi-GPU and multi-node collective communication primitives that are performance-optimized for NVIDIA GPUs. NCCL provides routines such as all-gather, all-reduce, broadcast, reduce, and reduce-scatter that are optimized to achieve high bandwidth over PCIe and NVLink high-speed interconnects.
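
As a rough illustration of the API, the sketch below runs a sum all-reduce across every GPU visible to a single process, following the pattern from the NCCL documentation (ncclCommInitAll plus one ncclAllReduce call per device inside a group). The buffer size, the macro names, and the 8-GPU cap are arbitrary choices for this example, and error handling is reduced to simple aborts.

```c
/* Minimal sketch: one process driving all visible GPUs with ncclAllReduce. */
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>
#include <nccl.h>

#define CHECK_CUDA(cmd) do { cudaError_t e = (cmd); \
  if (e != cudaSuccess) { printf("CUDA error: %s\n", cudaGetErrorString(e)); exit(1); } } while (0)
#define CHECK_NCCL(cmd) do { ncclResult_t r = (cmd); \
  if (r != ncclSuccess) { printf("NCCL error: %s\n", ncclGetErrorString(r)); exit(1); } } while (0)

int main(void) {
  int nGpus = 0;
  CHECK_CUDA(cudaGetDeviceCount(&nGpus));
  if (nGpus > 8) nGpus = 8;                     /* arbitrary cap for this example */

  ncclComm_t comms[8];
  int devs[8];
  float* sendbuff[8];
  float* recvbuff[8];
  cudaStream_t streams[8];
  size_t count = 1 << 20;                       /* 1M floats per GPU */

  for (int i = 0; i < nGpus; ++i) {
    devs[i] = i;
    CHECK_CUDA(cudaSetDevice(i));
    CHECK_CUDA(cudaMalloc((void**)&sendbuff[i], count * sizeof(float)));
    CHECK_CUDA(cudaMalloc((void**)&recvbuff[i], count * sizeof(float)));
    CHECK_CUDA(cudaStreamCreate(&streams[i]));
  }

  /* Create one communicator per GPU, all owned by this process. */
  CHECK_NCCL(ncclCommInitAll(comms, nGpus, devs));

  /* Sum-reduce the send buffers across all GPUs; every GPU receives the result.
   * Calls for multiple devices from one thread must be wrapped in a group. */
  CHECK_NCCL(ncclGroupStart());
  for (int i = 0; i < nGpus; ++i) {
    CHECK_NCCL(ncclAllReduce(sendbuff[i], recvbuff[i], count, ncclFloat, ncclSum,
                             comms[i], streams[i]));
  }
  CHECK_NCCL(ncclGroupEnd());

  for (int i = 0; i < nGpus; ++i) {
    CHECK_CUDA(cudaSetDevice(i));
    CHECK_CUDA(cudaStreamSynchronize(streams[i]));
    CHECK_CUDA(cudaFree(sendbuff[i]));
    CHECK_CUDA(cudaFree(recvbuff[i]));
    ncclCommDestroy(comms[i]);
  }
  return 0;
}
```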


Developers of deep learning frameworks can rely on NCCL's highly optimized, MPI-compatible, and topology-aware routines to take full advantage of all available GPUs within and across multiple nodes. Leading deep learning frameworks such as Caffe, Caffe2, Chainer, MXNet, TensorFlow, and PyTorch have integrated NCCL to accelerate deep learning training on multi-GPU systems.

We strive to bring the best experience to the developer community; as a result, we are open sourcing NCCL 2.3. Open sourcing NCCL 2.3 enables us to have open discussions with the developer community as we continue to build a great product. The NCCL 2.3 source is available on GitHub, and NCCL 2.3 binaries can be downloaded from the NVIDIA Developer Zone.

What’s New in NCCL 2.3

Deep learning frameworks using NCCL 2.3 and later can leverage the new features and the performance of the Volta and Turing architectures to deliver high-performance, efficient multi-node, multi-GPU scaling of deep learning training. NCCL 2.3 highlights include:

  • Improved low-latency algorithms for small message sizes
  • Finer control over when to use GPU Direct P2P and GPU Direct RDMA (see the sketch below)

Read the latest NCCL release notes for a detailed list of new features and enhancements. Learn more about NCCL in the new blog post: Scaling Deep Learning Training with NCCL.
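
As a hedged illustration of what that control looks like: current NCCL documentation exposes GPU Direct behavior through environment variables such as NCCL_P2P_LEVEL and NCCL_NET_GDR_LEVEL, which NCCL reads when a communicator is initialized. The exact variable names and accepted values for the 2.3 release should be confirmed against its release notes; the snippet below simply sets them from the host process before any NCCL call, and they can equally be exported from the shell or a job script.

```c
/* Sketch: restricting GPU Direct P2P and GPU Direct RDMA via environment
 * variables before NCCL initialization. Variable names and values are taken
 * from current NCCL documentation and may differ in older releases. */
#include <stdlib.h>

void configure_gpudirect(void) {
  /* Only use CUDA P2P between GPUs connected through NVLink. */
  setenv("NCCL_P2P_LEVEL", "NVL", 1);
  /* Allow GPU Direct RDMA only when the GPU and the NIC share a PCIe host bridge. */
  setenv("NCCL_NET_GDR_LEVEL", "PHB", 1);
  /* NCCL reads these at communicator creation, so call this before
   * ncclCommInitAll / ncclCommInitRank. */
}
```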

Key Features

  • Support for multi-threaded and multi-process applications (see the multi-process sketch after this list)
  • Faster training of newer and deeper models with aggregated inter-GPU reduction operations
  • Multiple ring formations for high bus utilization
  • Support for InfiniBand verbs, RoCE, and IP socket internode communication
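
For multi-process use (for example, one rank per GPU launched through MPI, matching the MPI-compatible pattern mentioned above), each rank joins a single communicator with ncclGetUniqueId and ncclCommInitRank. The sketch below assumes an MPI launcher; the buffer size and the rank-to-device mapping are simplifications for illustration, and error checking is omitted for brevity.

```c
/* Sketch: one MPI rank per GPU, joined into a single NCCL communicator.
 * Build with mpicc and link against the NCCL and CUDA runtime libraries. */
#include <mpi.h>
#include <cuda_runtime.h>
#include <nccl.h>

int main(int argc, char* argv[]) {
  int rank, nranks, nGpus;
  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &nranks);

  /* Simplification: pick a local GPU from the global rank. */
  cudaGetDeviceCount(&nGpus);
  cudaSetDevice(rank % nGpus);

  /* Rank 0 creates the unique id; all other ranks receive it via MPI. */
  ncclUniqueId id;
  if (rank == 0) ncclGetUniqueId(&id);
  MPI_Bcast(&id, sizeof(id), MPI_BYTE, 0, MPI_COMM_WORLD);

  ncclComm_t comm;
  ncclCommInitRank(&comm, nranks, id, rank);

  /* Allocate device buffers and run a sum all-reduce across all ranks. */
  size_t count = 1 << 20;                 /* 1M floats, arbitrary for the example */
  float *sendbuff, *recvbuff;
  cudaStream_t stream;
  cudaMalloc((void**)&sendbuff, count * sizeof(float));
  cudaMalloc((void**)&recvbuff, count * sizeof(float));
  cudaStreamCreate(&stream);

  ncclAllReduce(sendbuff, recvbuff, count, ncclFloat, ncclSum, comm, stream);
  cudaStreamSynchronize(stream);

  cudaFree(sendbuff);
  cudaFree(recvbuff);
  ncclCommDestroy(comm);
  MPI_Finalize();
  return 0;
}
```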

Additional Resources