GTC Silicon Valley 2019, ID S9656: Distributed Training and Fast Inter-GPU Communication with NCCL
We'll present the latest developments in the NCCL library, which provides optimized inter-GPU communication primitives to make distributed computing easy and universal. Since 2015, NCCL has enabled deep learning and HPC applications to scale to thousands of GPUs. We'll also discuss the state of NCCL integration in deep learning frameworks.