NVIDIA Collective Communications Library (NCCL)

Multi-GPU and multi-node collective communication primitives

The NVIDIA Collective Communications Library (NCCL) implements multi-GPU and multi-node collective communication primitives that are performance-optimized for NVIDIA GPUs. NCCL provides routines such as all-gather, all-reduce, broadcast, reduce, and reduce-scatter that are optimized to achieve high bandwidth over PCIe and NVLink high-speed interconnects.
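
As a minimal sketch of how these routines are used (not an official sample; the device list, buffer size, and byte-pattern fill are illustrative assumptions, and error checking is omitted), a single process can drive several GPUs and issue a sum all-reduce like this:

    /* Minimal sketch: one process, two GPUs, float sum all-reduce. */
    #include <stdio.h>
    #include <cuda_runtime.h>
    #include <nccl.h>

    int main(void) {
      const int nDev = 2;               /* assumed number of local GPUs */
      const size_t count = 1 << 20;     /* elements per GPU */
      int devs[2] = {0, 1};

      ncclComm_t comms[2];
      cudaStream_t streams[2];
      float *sendbuff[2], *recvbuff[2];

      /* Allocate device buffers and one stream per GPU. */
      for (int i = 0; i < nDev; ++i) {
        cudaSetDevice(devs[i]);
        cudaMalloc((void**)&sendbuff[i], count * sizeof(float));
        cudaMalloc((void**)&recvbuff[i], count * sizeof(float));
        cudaMemset(sendbuff[i], 1, count * sizeof(float));  /* arbitrary byte pattern */
        cudaStreamCreate(&streams[i]);
      }

      /* One communicator per GPU within this process. */
      ncclCommInitAll(comms, nDev, devs);

      /* Issue the all-reduce on every GPU from a single thread;
       * the group calls let NCCL treat them as one collective launch. */
      ncclGroupStart();
      for (int i = 0; i < nDev; ++i)
        ncclAllReduce(sendbuff[i], recvbuff[i], count, ncclFloat, ncclSum,
                      comms[i], streams[i]);
      ncclGroupEnd();

      /* Wait for completion, then release resources. */
      for (int i = 0; i < nDev; ++i) {
        cudaSetDevice(devs[i]);
        cudaStreamSynchronize(streams[i]);
        cudaFree(sendbuff[i]);
        cudaFree(recvbuff[i]);
        cudaStreamDestroy(streams[i]);
        ncclCommDestroy(comms[i]);
      }
      printf("all-reduce complete\n");
      return 0;
    }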

Developers of deep learning frameworks can rely on NCCL’s highly optimized, MPI-compatible, and topology-aware routines to take full advantage of all available GPUs within and across multiple nodes. Leading deep learning frameworks such as Caffe, Caffe2, Chainer, MXNet, TensorFlow, and PyTorch have integrated NCCL to accelerate deep learning training on multi-GPU systems.
To download earlier NCCL (1.x) versions, please visit the NCCL GitHub page.


DOWNLOAD NCCL

What’s New in NCCL 2.3

Deep learning frameworks using NCCL 2.3 and later can leverage the new features and performance of the Volta and Turing architectures to deliver high-performance, efficient multi-node, multi-GPU scaling of deep learning training. NCCL 2.3 highlights include:

  • Improved low latency algorithms for small message sizes
  • Finer control over when GPU Direct P2P and GPU Direct RDMA are used (see the sketch below)
Read the latest NCCL release notes for a detailed list of new features and enhancements.
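
In practice, this finer control is exposed through NCCL environment variables rather than new API calls. The sketch below assumes the NCCL_P2P_LEVEL and NCCL_NET_GDR_LEVEL variables described in the NCCL documentation; the exact names, accepted values, and the release in which each appeared should be confirmed against the documentation for your NCCL version. The variables must be set before the first communicator is created, since NCCL reads its environment at initialization time:

    /* Sketch (POSIX setenv): limit GPU Direct P2P and GPU Direct RDMA use.
     * Variable names and values are assumptions to be checked against the
     * NCCL documentation for the installed version. */
    #include <stdlib.h>
    #include <nccl.h>

    int main(void) {
      setenv("NCCL_P2P_LEVEL", "2", 1);      /* example value: restrict how far P2P reaches across the PCIe topology */
      setenv("NCCL_NET_GDR_LEVEL", "1", 1);  /* example value: restrict when GPU Direct RDMA is used with the NIC */

      /* Any communicator created afterwards picks these settings up. */
      ncclComm_t comm;
      int dev = 0;
      ncclCommInitAll(&comm, 1, &dev);
      ncclCommDestroy(comm);
      return 0;
    }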

Key Features

  • Support for multi-threaded and multi-process applications (see the multi-process sketch after this list)
  • Faster training of newer and deeper models with aggregated inter-GPU reduction operations
  • Multiple ring formations for high bus utilization
  • Support for InfiniBand verbs, RoCE, and IP socket internode communication
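
For multi-process (and multi-node) use, each process typically creates its own communicator from a shared unique id. A minimal sketch, assuming one GPU per MPI rank, that MPI is used only to broadcast the id, and that the MPI rank maps directly to a local device index (an illustrative simplification), looks like this:

    /* Sketch: multi-process all-reduce, one GPU per MPI rank.
     * Error checking omitted; rank == device index is an assumption. */
    #include <mpi.h>
    #include <cuda_runtime.h>
    #include <nccl.h>

    int main(int argc, char* argv[]) {
      int rank, nranks;
      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &nranks);

      /* Rank 0 creates the NCCL unique id and broadcasts it to all ranks. */
      ncclUniqueId id;
      if (rank == 0) ncclGetUniqueId(&id);
      MPI_Bcast(&id, sizeof(id), MPI_BYTE, 0, MPI_COMM_WORLD);

      /* Each rank drives one GPU. */
      cudaSetDevice(rank);
      const size_t count = 1 << 20;
      float *sendbuff, *recvbuff;
      cudaMalloc((void**)&sendbuff, count * sizeof(float));
      cudaMalloc((void**)&recvbuff, count * sizeof(float));
      cudaStream_t stream;
      cudaStreamCreate(&stream);

      ncclComm_t comm;
      ncclCommInitRank(&comm, nranks, id, rank);

      /* Sum across all ranks, potentially spanning several nodes. */
      ncclAllReduce(sendbuff, recvbuff, count, ncclFloat, ncclSum, comm, stream);
      cudaStreamSynchronize(stream);

      ncclCommDestroy(comm);
      cudaFree(sendbuff);
      cudaFree(recvbuff);
      cudaStreamDestroy(stream);
      MPI_Finalize();
      return 0;
    }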

Learn More