Magnum IO GPUDirect Storage

A Direct Path Between Storage and GPU Memory

As datasets increase in size, the time spent loading data can impact application performance. GPUDirect® Storage creates a direct data path between local or remote storage, such as NVMe or NVMe over Fabrics (NVMe-oF), and GPU memory. By enabling a direct-memory access (DMA) engine near the network adapter or storage, it moves data into or out of GPU memory—without burdening the CPU.

GPUDirect Storage enables a direct data path between storage and GPU memory and avoids extra copies through a bounce buffer in the CPU’s memory.
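
The snippet below is a minimal sketch of how an application might use the cuFile API, which GDS provides, to read a file directly into GPU memory. The file path, transfer size, and the omission of error checking are illustrative assumptions, not part of this release description.

    #define _GNU_SOURCE                     /* for O_DIRECT */
    #include <fcntl.h>
    #include <unistd.h>
    #include <cuda_runtime.h>
    #include "cufile.h"

    int main(void)
    {
        const size_t size = 1 << 20;        /* 1 MiB transfer, illustrative */
        int fd = open("/mnt/nvme/data.bin", O_RDONLY | O_DIRECT);  /* path is illustrative */

        cuFileDriverOpen();                 /* initialize the GDS driver */

        CUfileDescr_t descr = {0};
        descr.handle.fd = fd;
        descr.type = CU_FILE_HANDLE_TYPE_OPAQUE_FD;
        CUfileHandle_t handle;
        cuFileHandleRegister(&handle, &descr);   /* register the file with cuFile */

        void *devPtr = NULL;
        cudaMalloc(&devPtr, size);
        cuFileBufRegister(devPtr, size, 0);      /* pin the GPU buffer for DMA */

        /* Data moves from storage straight into GPU memory, with no CPU bounce buffer. */
        ssize_t n = cuFileRead(handle, devPtr, size, 0 /* file offset */, 0 /* buffer offset */);
        (void)n;                                 /* error checking omitted for brevity */

        cuFileBufDeregister(devPtr);
        cuFileHandleDeregister(handle);
        cudaFree(devPtr);
        cuFileDriverClose();
        close(fd);
        return 0;
    }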

Partner Ecosystem

  • GA: NVIDIA GPUDirect Storage integrated solution in production
  • Beta: Partners actively adopting GDS
  • Emerging: Partners in early stages of GDS integration or qualification
  • Validated HW

Key Features of GA / v1.0

The following features have been added in GA / v1.0:

  • New configuration and environment variables for the cuFile library (see the sketch after this list)
  • Fixed error-handling behavior for Weka retriable and unsupported errors
  • Removed hard dependency on liburcu-bp
  • Added read support for IBM Spectrum Scale
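
As one illustration of configuring cuFile through the environment, the sketch below sets CUFILE_ENV_PATH_JSON to point the library at an alternate cufile.json before opening the driver. The path shown is illustrative, and whether this particular variable is among the newly added ones is not stated here.

    #include <stdlib.h>
    #include "cufile.h"

    int main(void)
    {
        /* Point cuFile at an alternate configuration file before the driver opens.
           Path is illustrative; the default configuration lives at /etc/cufile.json. */
        setenv("CUFILE_ENV_PATH_JSON", "/home/user/my-cufile.json", 1);

        cuFileDriverOpen();      /* the library reads the configuration at open time */
        /* ... cuFile I/O as usual ... */
        cuFileDriverClose();
        return 0;
    }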

Software Download

GPUDirect Storage GA / v1.0 Release

NVIDIA Magnum IO GPUDirect® Storage (GDS) is now part of CUDA.
See https://docs.nvidia.com/gpudirect-storage/index.html for more information.

GDS is currently supported on Linux x86-64 distributions of RHEL 8 and Ubuntu 18.04 and 20.04; it is not supported on Windows. When choosing which CUDA packages to download, select Linux, then x86-64, then the RHEL or Ubuntu distribution, along with the desired packaging format(s).
