GPUDirect Storage – Early Access Program Availability

As the computing horsepower of GPUs increases, so does the demand for input/output (I/O) throughput and the need for strong scaling through low-latency, high-bandwidth communication among GPUs.

A GPU-accelerated supercomputer can turn a compute-bound problem into an I/O-bound one. In multi-node GPU clusters, slow single-threaded CPU performance sits in the critical path of data access from local or remote storage devices.

GPUDirect Storage (GDS) is part of NVIDIA’s Magnum IO architecture. Through the cuFile API, it enables a direct data path between GPU memory and local or remote storage devices, driven by the direct memory access (DMA) engines near the NIC or storage device.

GPUDirect Storage moves I/O transfers directly into GPU memory, removing the expensive data-path bottleneck through CPU/system memory and the latency overhead of that extra copy. Because the CPU no longer stages data, its utilization drops and the GPU operates with greater independence.

GDS enables DMA between GPU memory and NVMe storage drives at full line rate, bypassing CPU host memory entirely.
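To make the data path concrete, here is a minimal sketch of a cuFile-based read: a file opened with O_DIRECT is registered with the GDS driver, and cuFileRead() DMAs its contents straight into a GPU buffer with no bounce copy through host memory. The file name, transfer size, and abbreviated error handling are illustrative assumptions, not part of this release.

```cpp
#ifndef _GNU_SOURCE
#define _GNU_SOURCE  // for O_DIRECT
#endif
#include <cstdio>
#include <cstring>
#include <fcntl.h>
#include <unistd.h>
#include <cuda_runtime.h>
#include "cufile.h"

int main() {
    const size_t size = 1 << 20;  // 1 MiB transfer; illustrative value

    // O_DIRECT bypasses the page cache, a prerequisite for the direct path.
    int fd = open("data.bin", O_RDONLY | O_DIRECT);
    if (fd < 0) { perror("open"); return 1; }

    cuFileDriverOpen();  // initialize the GDS driver

    // Import the POSIX fd into cuFile.
    CUfileDescr_t descr;
    memset(&descr, 0, sizeof(descr));
    descr.handle.fd = fd;
    descr.type = CU_FILE_HANDLE_TYPE_OPAQUE_FD;
    CUfileHandle_t handle;
    cuFileHandleRegister(&handle, &descr);

    // The destination buffer lives in GPU memory; registering it lets
    // cuFile pin it for DMA.
    void *dev_buf = nullptr;
    cudaMalloc(&dev_buf, size);
    cuFileBufRegister(dev_buf, size, 0);

    // Storage-to-GPU DMA: no staging copy through system memory.
    ssize_t n = cuFileRead(handle, dev_buf, size,
                           /*file_offset=*/0, /*devPtr_offset=*/0);
    printf("read %zd bytes into GPU memory\n", n);

    cuFileBufDeregister(dev_buf);
    cudaFree(dev_buf);
    cuFileHandleDeregister(handle);
    cuFileDriverClose();
    close(fd);
    return 0;
}
```

With the GDS user-space library installed, something like `nvcc -o gds_read gds_read.cu -lcufile` should build it; exact paths and link flags depend on the installation.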

Today, we are announcing the availability of the GPUDirect Storage early access program (v0.8) with industry partners DDN, WekaIO, and VAST, with more to come.

  • The release enables DDN EXAScaler, WekaFS, and NFS-based VAST filesystems
  • Support for local NVMe and NVMe-oF solutions (with ext4) is available via MOFED 5.1

To learn more about the GDS early access program, click here.
