NCCL

May 29, 2023
Turbocharging Generative AI Workloads with NVIDIA Spectrum-X Networking Platform
Large Language Models (LLMs) and AI applications such as ChatGPT and DALL-E have recently seen rapid growth. Thanks to GPUs, CPUs, DPUs, high-speed storage, and...
8 MIN READ

Oct 20, 2020
Accelerating IO in the Modern Data Center: Network IO
This is the second post in the Accelerating IO series, which describes the architecture, components, and benefits of Magnum IO, the IO subsystem of the modern...
19 MIN READ

Feb 04, 2019
Massively Scale Your Deep Learning Training with NCCL 2.4
Imagine using tens of thousands of GPUs to train your neural network. Using multiple GPUs to train neural networks has become quite common with all deep...
8 MIN READ

Sep 26, 2018
Scaling Deep Learning Training with NCCL
NVIDIA Collective Communications Library (NCCL) provides optimized implementations of inter-GPU communication operations, such as allreduce and its variants...
6 MIN READ

Aug 08, 2017
NVIDIA Deep Learning SDK Update for Volta Now Available
At GTC 2017, NVIDIA announced Volta optimized updates to the NVIDIA Deep Learning SDK. Today, we’re making these updates available as free downloads to...
2 MIN READ

Apr 07, 2016
Fast Multi-GPU collectives with NCCL
Today many servers contain 8 or more GPUs. In principle, then, scaling an application from one to many GPUs should provide a tremendous performance boost. But in...
10 MIN READ