Magnum IO GPUDirect Storage
A Direct Path Between Storage and GPU Memory
As datasets increase in size, the time spent loading data can limit application performance. GPUDirect® Storage creates a direct data path between local or remote storage, such as NVMe or NVMe over Fabrics (NVMe-oF), and GPU memory. It enables a direct memory access (DMA) engine near the network adapter or storage to move data into or out of GPU memory without burdening the CPU.
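Applications reach this direct path through the cuFile API that ships with GDS. Below is a minimal sketch of a GPUDirect Storage read in C; the file path, transfer size, and build command are assumptions, and error handling is omitted for brevity.

```c
/*
 * Minimal cuFile read sketch (illustrative, not production code).
 * Assumed build line:
 *   gcc gds_read.c -o gds_read -I/usr/local/cuda/include \
 *       -L/usr/local/cuda/lib64 -lcufile -lcudart
 */
#define _GNU_SOURCE              /* for O_DIRECT */
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>
#include <cuda_runtime.h>
#include <cufile.h>

int main(void) {
    const size_t size = 1 << 20;          /* 1 MiB; placeholder size */

    cuFileDriverOpen();                   /* initialize the GDS driver */

    /* O_DIRECT lets the DMA engine bypass the CPU page cache. */
    int fd = open("/mnt/nvme/data.bin", O_RDONLY | O_DIRECT);

    CUfileDescr_t descr = {0};
    descr.handle.fd = fd;
    descr.type = CU_FILE_HANDLE_TYPE_OPAQUE_FD;
    CUfileHandle_t fh;
    cuFileHandleRegister(&fh, &descr);

    void *devPtr = NULL;
    cudaMalloc(&devPtr, size);
    cuFileBufRegister(devPtr, size, 0);   /* optional: pre-register the buffer */

    /* Data moves from storage straight into GPU memory, no CPU bounce buffer. */
    ssize_t n = cuFileRead(fh, devPtr, size, 0 /* file offset */, 0 /* devPtr offset */);
    printf("read %zd bytes into GPU memory\n", n);

    cuFileBufDeregister(devPtr);
    cudaFree(devPtr);
    cuFileHandleDeregister(fh);
    close(fd);
    cuFileDriverClose();
    return 0;
}
```

Registering the buffer with cuFileBufRegister is optional, but it avoids per-call registration overhead when the same GPU buffer is reused across many reads or writes.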
[Partner logos: GPUDirect Storage integrated solutions in production, and partners actively adopting GDS]
Key Features of GA / v1.0
This release adds the following features:
- New configuration options and environment variables for the cuFile library (a configuration sketch follows this list)
- Fixed the error-handling behavior for Weka retriable and unsupported errors
- Removed the hard dependency on liburcu-bp
- Added read support for IBM Spectrum Scale
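As noted in the first item above, the cuFile library reads its tunables from a JSON configuration file, /etc/cufile.json by default; the CUFILE_ENV_PATH_JSON environment variable can point it to an alternate file (for example, export CUFILE_ENV_PATH_JSON=/path/to/cufile.json). The excerpt below is a minimal sketch: the keys shown are a small, illustrative subset of the documented schema, and the values are placeholders.

```json
{
    "logging": {
        "level": "ERROR"
    },
    "properties": {
        "max_direct_io_size_kb": 16384,
        "allow_compat_mode": false
    }
}
```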
GPUDirect Storage GA / v1.0 Release
NVIDIA Magnum IO GPUDirect® Storage (GDS) is now released as part of the CUDA Toolkit.
See https://docs.nvidia.com/gpudirect-storage/index.html for more information.
- NVIDIA Magnum IO™ SDK
- Read the blog: Optimizing Data Movement in GPU Applications with the NVIDIA Magnum IO Developer Environment
- Read the blog: Accelerating IO in the Modern Data Center: Magnum IO Architecture
- Watch the webinar: NVIDIA GPUDirect Storage: Accelerating the Data Path to the GPU
- NVIDIA-Certified Systems Configuration Guide
- NVIDIA-Certified Systems
- Contact us at firstname.lastname@example.org