NVIDIA Collective Communications Library (NCCL)

Multi-GPU and multi-node collective communication primitives

The NVIDIA Collective Communications Library (NCCL) implements multi-GPU and multi-node collective communication primitives that are performance-optimized for NVIDIA GPUs. NCCL provides routines such as all-gather, all-reduce, broadcast, reduce, and reduce-scatter that are optimized to achieve high bandwidth over PCIe and NVLink high-speed interconnects.
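
As a concrete illustration, here is a minimal sketch of a single-process, multi-GPU all-reduce using the NCCL C API (ncclCommInitAll, ncclAllReduce). The device count and element count are illustrative, and error checking is omitted for brevity.

```c
// Minimal sketch: sum-reduce a float buffer across all GPUs managed by one process.
#include <cuda_runtime.h>
#include <nccl.h>

int main(void) {
  const int nDev = 2;              // illustrative: two GPUs on this node
  const size_t count = 1 << 20;    // elements per GPU

  int devs[2] = {0, 1};
  ncclComm_t comms[2];
  float *sendbuf[2], *recvbuf[2];
  cudaStream_t streams[2];

  // Allocate one send/receive buffer and one stream per GPU.
  for (int i = 0; i < nDev; ++i) {
    cudaSetDevice(devs[i]);
    cudaMalloc((void**)&sendbuf[i], count * sizeof(float));
    cudaMalloc((void**)&recvbuf[i], count * sizeof(float));
    cudaStreamCreate(&streams[i]);
  }

  // Create one communicator per GPU in a single call.
  ncclCommInitAll(comms, nDev, devs);

  // Launch the all-reduce on every GPU; grouping the calls avoids deadlock
  // when one thread drives several devices.
  ncclGroupStart();
  for (int i = 0; i < nDev; ++i)
    ncclAllReduce(sendbuf[i], recvbuf[i], count, ncclFloat, ncclSum,
                  comms[i], streams[i]);
  ncclGroupEnd();

  // Wait for completion, then release resources.
  for (int i = 0; i < nDev; ++i) {
    cudaSetDevice(devs[i]);
    cudaStreamSynchronize(streams[i]);
    cudaFree(sendbuf[i]);
    cudaFree(recvbuf[i]);
    cudaStreamDestroy(streams[i]);
    ncclCommDestroy(comms[i]);
  }
  return 0;
}
```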

Developers of deep learning frameworks can rely on NCCL’s highly optimized, MPI-compatible, and topology-aware routines to take full advantage of all available GPUs within and across multiple nodes. Leading deep learning frameworks such as Caffe, Caffe2, Chainer, MXNet, TensorFlow, and PyTorch have integrated NCCL to accelerate deep learning training on multi-GPU systems.

We strive to bring the best experience to the developer community; as a result, NCCL 2.3 and later versions are open source. This enables us to have open discussions with the developer community as we continue to build a great product. The source code for NCCL is available on GitHub, and NCCL binaries can be downloaded from the NVIDIA Developer Zone.

What’s New in NCCL 2.4

Deep learning frameworks using NCCL 2.4 and later can leverage new features and the performance of the Volta and Turing architectures to deliver high-performance, efficient multi-node, multi-GPU scaling of deep learning training. NCCL 2.4 highlights include:

  • Tree algorithms for fast, large-scale multi-GPU and multi-node deep learning training, reducing latency by up to 180x at scale; see the configuration sketch below. For more information, read the developer blog post Massively Scale Your Deep Learning Training with NCCL 2.4.
  • Support for external network plug-ins such as libfabric.
Read the latest NCCL release notes for a detailed list of new features and enhancements.
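
As a minimal, hedged sketch of how a job might surface which algorithm and transport NCCL picks at run time: NCCL_DEBUG is NCCL's documented logging variable, while NCCL_TREE_THRESHOLD (shown here as an assumption: a 2.4-era tuning knob for the message size, in bytes, under which the tree algorithm is preferred) should be checked against the documentation for your NCCL version. Both are normally exported in the job's launch environment rather than set in code.

```c
// Minimal sketch: enable NCCL diagnostics for this process before any
// communicator is created. These variables are usually exported in the
// launch script; setenv() is used here only to keep the example self-contained.
#include <stdlib.h>

static void configure_nccl_env(void) {
  // Documented NCCL logging variable: prints initialization info, including
  // the algorithm and transport chosen for each communicator.
  setenv("NCCL_DEBUG", "INFO", 1);

  // Assumed 2.4-era tuning knob: message size (bytes) below which the tree
  // algorithm is preferred over rings. Verify against your NCCL version.
  setenv("NCCL_TREE_THRESHOLD", "4194304", 1);
}
```

In this sketch, configure_nccl_env() would be called before ncclCommInitRank or ncclCommInitAll so that NCCL reads the values during communicator initialization.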

Key Features

  • Support for multi-threaded and multi-process applications (see the multi-process sketch after this list).
  • Faster training of newer and deeper models with aggregated inter-GPU reduction operations.
  • Multiple ring formations for high bus utilization.
  • Tree algorithm implementation for large-scale multi-GPU and multi-node training, reducing latency.
  • Support for InfiniBand verbs, libfabric, RoCE, and IP socket internode communication.
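
As a minimal sketch of the multi-process case, assuming one GPU per MPI rank: rank 0 creates a ncclUniqueId, broadcasts it over MPI, and every rank then joins the same communicator with ncclCommInitRank. MPI is just one convenient bootstrap; any out-of-band mechanism that can distribute the id to all ranks would work.

```c
// Minimal sketch: multi-process NCCL bootstrap, one GPU per MPI rank.
// Error handling is omitted; the rank-to-GPU mapping is illustrative.
#include <mpi.h>
#include <cuda_runtime.h>
#include <nccl.h>

int main(int argc, char* argv[]) {
  int rank, nranks;
  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &nranks);

  // Each rank drives the GPU matching its rank (illustrative single-node mapping;
  // multi-node jobs would typically use the node-local rank instead).
  cudaSetDevice(rank);

  // Rank 0 creates a unique id and shares it with all ranks over MPI.
  ncclUniqueId id;
  if (rank == 0) ncclGetUniqueId(&id);
  MPI_Bcast(&id, sizeof(id), MPI_BYTE, 0, MPI_COMM_WORLD);

  // Every rank joins the same NCCL communicator.
  ncclComm_t comm;
  ncclCommInitRank(&comm, nranks, id, rank);

  // Allocate a buffer and sum-reduce it in place across all ranks.
  const size_t count = 1 << 20;
  float* buf;
  cudaStream_t stream;
  cudaMalloc((void**)&buf, count * sizeof(float));
  cudaStreamCreate(&stream);
  ncclAllReduce(buf, buf, count, ncclFloat, ncclSum, comm, stream);
  cudaStreamSynchronize(stream);

  // Clean up.
  cudaFree(buf);
  cudaStreamDestroy(stream);
  ncclCommDestroy(comm);
  MPI_Finalize();
  return 0;
}
```

Built against the CUDA, NCCL, and MPI headers and linked with -lnccl, a sketch like this would typically be launched with mpirun, one process per GPU.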

Additional Resources