Magnum IO GPUDirect Storage
A Direct Path Between Storage and GPU Memory
As datasets increase in size, the time spent loading data can impact application performance. GPUDirect® Storage creates a direct data path between local or remote storage, such as NVMe or NVMe over Fabrics (NVMe-oF), and GPU memory. By enabling a direct-memory access (DMA) engine near the network adapter or storage, it moves data into or out of GPU memory—without burdening the CPU.
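Applications use GDS through the cuFile API in the CUDA toolkit. The following is a minimal sketch of a read straight into GPU memory; the file path and transfer size are placeholders, and error handling is abbreviated. It would be compiled with `nvcc` and linked against `-lcufile`, and requires a GDS-enabled driver and filesystem to run.

```cuda
#include <cstdio>
#include <fcntl.h>
#include <unistd.h>
#include <cuda_runtime.h>
#include "cufile.h"   // cuFile (GPUDirect Storage) API

int main() {
    const size_t size = 16 << 20;            // 16 MiB, illustrative
    const char  *path = "/data/input.bin";   // placeholder input file

    cuFileDriverOpen();                      // open the GDS driver

    int fd = open(path, O_RDONLY | O_DIRECT);  // O_DIRECT is required for GDS
    if (fd < 0) { perror("open"); return 1; }

    // Register the file descriptor with cuFile.
    CUfileDescr_t descr = {};
    descr.handle.fd = fd;
    descr.type = CU_FILE_HANDLE_TYPE_OPAQUE_FD;
    CUfileHandle_t fh;
    cuFileHandleRegister(&fh, &descr);

    // Allocate GPU memory and register it with cuFile for best performance.
    void *devPtr = nullptr;
    cudaMalloc(&devPtr, size);
    cuFileBufRegister(devPtr, size, 0);

    // DMA the file contents directly into GPU memory,
    // bypassing a CPU bounce buffer.
    ssize_t nread = cuFileRead(fh, devPtr, size, /*file_offset=*/0,
                               /*devPtr_offset=*/0);
    printf("read %zd bytes into GPU memory\n", nread);

    // Teardown.
    cuFileBufDeregister(devPtr);
    cudaFree(devPtr);
    cuFileHandleDeregister(fh);
    close(fd);
    cuFileDriverClose();
    return 0;
}
```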
Key Features of v1.6
The following features have been added in v1.6:
- Improved batch API performance
- Implemented a thread pool in the cuFile library to enable parallelism and improve the throughput of large IO requests issued from a single user thread.
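The batch API mentioned above lets an application queue several IO requests with a single submit call instead of one `cuFileRead()` per request. A sketch of that pattern follows; it assumes `fh` is an already-registered cuFile handle and `bufs[]` are GPU buffers already registered with `cuFileBufRegister()`, and the batch size and offsets are illustrative.

```cuda
#include <stdint.h>
#include <stddef.h>
#include "cufile.h"

#define NR_IOS 4  // number of requests in the batch (illustrative)

// Queue NR_IOS reads into pre-registered GPU buffers with one submit call,
// then wait for all completions.
void batch_read(CUfileHandle_t fh, void *bufs[NR_IOS], size_t chunk) {
    CUfileBatchHandle_t batch;
    cuFileBatchIOSetUp(&batch, NR_IOS);

    CUfileIOParams_t io[NR_IOS] = {};
    for (unsigned i = 0; i < NR_IOS; i++) {
        io[i].mode   = CUFILE_BATCH;
        io[i].fh     = fh;
        io[i].opcode = CUFILE_READ;
        io[i].cookie = (void *)(uintptr_t)i;        // tag to match completions
        io[i].u.batch.devPtr_base   = bufs[i];
        io[i].u.batch.devPtr_offset = 0;
        io[i].u.batch.file_offset   = (off_t)(i * chunk);
        io[i].u.batch.size          = chunk;
    }
    cuFileBatchIOSubmit(batch, NR_IOS, io, 0);

    // Poll until every queued request has completed.
    unsigned done = 0;
    CUfileIOEvents_t ev[NR_IOS];
    while (done < NR_IOS) {
        unsigned nr = NR_IOS - done;
        cuFileBatchIOGetStatus(batch, nr, &nr, ev, NULL);
        done += nr;
    }
    cuFileBatchIODestroy(batch);
}
```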
GPUDirect Storage v1.6 Release
NVIDIA Magnum IO GPUDirect® Storage (GDS) is now part of CUDA.
See https://docs.nvidia.com/gpudirect-storage/index.html for more information.
- Read the blog: Accelerating IO in the Modern Data Center: Magnum IO Storage Partnerships
- NVIDIA Magnum IO™ SDK
- Read the blog: Optimizing Data Movement in GPU Applications with the NVIDIA Magnum IO Developer Environment
- Read the blog: Accelerating IO in the Modern Data Center: Magnum IO Architecture
- Watch the webinar: NVIDIA GPUDirect Storage: Accelerating the Data Path to the GPU
- NVIDIA-Certified Systems configuration guide
- NVIDIA-Certified Systems
- Contact us at email@example.com