DEVELOPER BLOG

Tag: NVLink

AI / Deep Learning

Optimizing Data Movement in GPU Applications with the NVIDIA Magnum IO Developer Environment

Magnum IO is the collection of IO technologies from NVIDIA and Mellanox that makes up the IO subsystem of the modern data center and enables applications at scale.
AI / Deep Learning

Improving GPU Application Performance with NVIDIA CUDA 11.2 Device Link Time Optimization

CUDA 11.2 introduces link-time optimization (LTO) for device code in GPU-accelerated applications. Device LTO brings the performance…
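
A minimal sketch of how device LTO is enabled, assuming CUDA 11.2 or later and two illustrative translation units: the -dlto flag is passed at both compile and device-link time, so a device call that crosses file boundaries can still be optimized (for example, inlined) at link time.

    // scale.cu -- device function defined in its own translation unit.
    __device__ float scale(float x) { return 2.0f * x; }

    // kernel.cu -- calls scale() across a translation-unit boundary; with
    // separate compilation and no LTO, this call cannot be inlined.
    __device__ float scale(float x);

    __global__ void apply(float *v, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) v[i] = scale(v[i]);
    }

    int main() {
        float *d;
        cudaMalloc(&d, 32 * sizeof(float));
        apply<<<1, 32>>>(d, 32);
        cudaDeviceSynchronize();
        cudaFree(d);
        return 0;
    }

    // Build (CUDA 11.2+): pass -dlto at both compile and device-link time.
    //   nvcc -dc -dlto scale.cu kernel.cu
    //   nvcc -dlto scale.o kernel.o -o app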
HPC

Accelerating NVSHMEM 2.0 Team-Based Collectives Using NCCL

NVSHMEM 2.0 introduces a new API for performing collective operations, based on the Team Management feature of the OpenSHMEM 1.5 specification. A team is a…
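
As a hedged sketch of the team-based API (following the OpenSHMEM 1.5-style signatures that NVSHMEM 2.0 mirrors; exact names and constants should be checked against the NVSHMEM headers), one might split NVSHMEM_TEAM_WORLD into a team of even-numbered PEs and reduce over just that team:

    #include <nvshmem.h>

    int main(void) {
        nvshmem_init();
        int npes = nvshmem_n_pes();

        /* Collective over NVSHMEM_TEAM_WORLD: PEs 0, 2, 4, ... form the team. */
        nvshmem_team_t even_team;
        nvshmem_team_split_strided(NVSHMEM_TEAM_WORLD, 0, 2, (npes + 1) / 2,
                                   NULL, 0, &even_team);

        float *src = (float *) nvshmem_malloc(sizeof(float));
        float *dst = (float *) nvshmem_malloc(sizeof(float));
        float one = 1.0f;
        cudaMemcpy(src, &one, sizeof(float), cudaMemcpyHostToDevice);

        /* Team-scoped sum reduction, called only by members of the team. */
        if (even_team != NVSHMEM_TEAM_INVALID) {
            nvshmem_float_sum_reduce(even_team, dst, src, 1);
            nvshmem_team_destroy(even_team);
        }

        nvshmem_free(src);
        nvshmem_free(dst);
        nvshmem_finalize();
        return 0;
    }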
Data Science

Optimizing Data Transfer Using Lossless Compression with NVIDIA nvcomp

One of the most interesting uses of compression is optimizing communication in GPU applications. GPUs are getting faster every year. For some apps…
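
The break-even reasoning behind that claim can be made concrete with a back-of-envelope model (illustrative numbers only, not the nvcomp API): compression pays off when compressing, sending the smaller payload, and decompressing takes less time than sending the raw bytes over the link.

    #include <cstdio>

    int main() {
        // All figures below are illustrative assumptions.
        const double n      = 1e9;    // payload: 1 GB
        const double link   = 25e9;   // interconnect bandwidth: 25 GB/s
        const double ratio  = 2.0;    // assumed compression ratio
        const double comp   = 100e9;  // assumed GPU compression throughput
        const double decomp = 200e9;  // assumed GPU decompression throughput

        const double t_raw = n / link;  // send uncompressed
        // Compress, send the smaller payload, decompress on the receiver.
        const double t_cmp = n / comp + (n / ratio) / link + n / decomp;

        std::printf("raw: %.1f ms  compressed: %.1f ms\n",
                    t_raw * 1e3, t_cmp * 1e3);
        return 0;
    }

With these assumed numbers the raw transfer takes 40 ms versus 35 ms with compression; a faster link or a lower compression ratio tips the balance the other way.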
AI / Deep Learning

Fast, Flexible Allocation for NVIDIA CUDA with RAPIDS Memory Manager

When I joined the RAPIDS team in 2018, NVIDIA CUDA device memory allocation was a performance problem. RAPIDS cuDF allocates and deallocates memory at high…
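
A minimal sketch of the pattern the post motivates, assuming an RMM release with the pool_memory_resource and device_buffer interfaces (header paths and constructor defaults vary across releases): replace per-allocation cudaMalloc/cudaFree calls with a sub-allocating pool.

    #include <rmm/cuda_stream_view.hpp>
    #include <rmm/device_buffer.hpp>
    #include <rmm/mr/device/cuda_memory_resource.hpp>
    #include <rmm/mr/device/per_device_resource.hpp>
    #include <rmm/mr/device/pool_memory_resource.hpp>

    int main() {
        // Upstream resource that calls cudaMalloc/cudaFree directly.
        rmm::mr::cuda_memory_resource cuda_mr;

        // Pool that grabs a large slab up front and sub-allocates from it,
        // avoiding the synchronization cost of frequent cudaMalloc/cudaFree.
        rmm::mr::pool_memory_resource<rmm::mr::cuda_memory_resource>
            pool_mr{&cuda_mr};

        // Route subsequent RMM allocations on this device through the pool.
        rmm::mr::set_current_device_resource(&pool_mr);

        // RAII device allocation (1 MiB) served from the pool.
        rmm::device_buffer buf{1 << 20, rmm::cuda_stream_default};
        return 0;
    }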
HPC

Accelerating IO in the Modern Data Center: Network IO

This is the second post in the Explaining Magnum IO series, which describes the architecture, components, and benefits of Magnum IO, the IO subsystem of the…