Technical Walkthrough

Optimizing Data Movement in GPU Applications with the NVIDIA Magnum IO Developer Environment

Magnum IO is the collection of IO technologies from NVIDIA and Mellanox that make up the IO subsystem of the modern data center and enable applications at scale.

Improving GPU Application Performance with NVIDIA CUDA 11.2 Device Link Time Optimization

CUDA 11.2 features the powerful link time optimization (LTO) feature for device code in GPU-accelerated applications. Device LTO brings the performance…
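As a quick illustration of how device LTO is enabled at build time (a sketch, assuming a project split across hypothetical files `a.cu` and `b.cu`; flags as documented for CUDA 11.2's nvcc):

```shell
# Separate compilation with device LTO: -dc compiles each unit
# separately, and -dlto stores intermediate LTO IR instead of
# final device code in the object files.
nvcc -dc -dlto -arch=sm_70 a.cu b.cu

# Device link step: passing -dlto again optimizes across the
# device code from both translation units before producing
# the final executable.
nvcc -dlto -arch=sm_70 a.o b.o -o app
```

Without `-dlto`, cross-file device-code optimization (such as inlining a `__device__` function defined in `b.cu` into a kernel in `a.cu`) is not possible in separately compiled code.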

Accelerating NVSHMEM 2.0 Team-Based Collectives Using NCCL

NVSHMEM 2.0 introduces a new API for performing collective operations based on the Team Management feature of the OpenSHMEM 1.5 specification. A team is a…

Optimizing Data Transfer Using Lossless Compression with NVIDIA nvcomp

One of the most interesting applications of compression is optimizing communications in GPU applications. GPUs are getting faster every year. For some apps…
[Image: NVIDIA CEO Jen-Hsun Huang explaining the importance of the RAPIDS launch demo at GTC Europe 2018]

Fast, Flexible Allocation for NVIDIA CUDA with RAPIDS Memory Manager

When I joined the RAPIDS team in 2018, NVIDIA CUDA device memory allocation was a performance problem. RAPIDS cuDF allocates and deallocates memory at high…

Accelerating IO in the Modern Data Center: Network IO

This is the second post in the Accelerating IO series, which describes the architecture, components, and benefits of Magnum IO, the IO subsystem of the modern…