NVSHMEM™ is a parallel programming interface based on OpenSHMEM that provides efficient and scalable communication for NVIDIA GPU clusters. NVSHMEM creates a global address space for data that spans the memory of multiple GPUs and can be accessed with fine-grained GPU-initiated operations, CPU-initiated operations, and operations on CUDA® streams.
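
As a concrete illustration, below is a minimal sketch in the spirit of NVSHMEM's introductory ring-shift example: every processing element (PE) allocates a buffer on the symmetric heap, and a GPU kernel writes its PE number directly into the neighboring PE's copy of that buffer. The launch configuration and the one-GPU-per-PE device selection are illustrative; a real build needs nvcc with relocatable device code and the NVSHMEM libraries at link time, and the program is typically run with one PE per GPU under a launcher such as nvshmrun or mpirun.

    /* ring_shift.cu: each PE writes its ID into its right neighbor's symmetric
     * buffer with a GPU-initiated put (sketch; error checking omitted). */
    #include <stdio.h>
    #include <cuda_runtime.h>
    #include <nvshmem.h>
    #include <nvshmemx.h>

    __global__ void simple_shift(int *destination) {
        int mype = nvshmem_my_pe();
        int npes = nvshmem_n_pes();
        /* One-sided put of a single int into the next PE's copy of destination. */
        nvshmem_int_p(destination, mype, (mype + 1) % npes);
    }

    int main(void) {
        nvshmem_init();
        /* Bind this PE to the GPU matching its rank on the node. */
        cudaSetDevice(nvshmem_team_my_pe(NVSHMEMX_TEAM_NODE));
        cudaStream_t stream;
        cudaStreamCreate(&stream);

        /* Symmetric allocation: the same buffer exists at the same symmetric
         * address on every PE, forming the partitioned global address space. */
        int *destination = (int *) nvshmem_malloc(sizeof(int));

        simple_shift<<<1, 1, 0, stream>>>(destination);
        nvshmemx_barrier_all_on_stream(stream);   /* complete all puts, in stream order */

        int msg;
        cudaMemcpyAsync(&msg, destination, sizeof(int), cudaMemcpyDeviceToHost, stream);
        cudaStreamSynchronize(stream);
        printf("PE %d received %d\n", nvshmem_my_pe(), msg);

        nvshmem_free(destination);
        nvshmem_finalize();
        return 0;
    }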

Existing communication models, such as the Message Passing Interface (MPI), orchestrate data transfers using the CPU. In contrast, NVSHMEM uses asynchronous, GPU-initiated data transfers, eliminating synchronization overheads between the CPU and the GPU.
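
As a sketch of what GPU-initiated communication looks like (the function and variable names here are illustrative), each thread below produces a result and pushes it straight to a peer PE from inside the kernel; no host-side send, receive, or cudaMemcpy sits on the critical path.

    #include <nvshmem.h>

    /* Each thread computes one element and puts it directly into the peer PE's
     * symmetric buffer; the CPU never touches the transfer. */
    __global__ void compute_and_push(float *remote_out, const float *in,
                                     int n, int peer) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) {
            float v = 2.0f * in[i];                    /* stand-in computation */
            nvshmem_float_p(&remote_out[i], v, peer);  /* GPU-initiated one-sided put */
        }
    }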

Efficient, Strong Scaling

NVSHMEM enables long-running kernels that include both communication and computation, reducing overheads that can limit an application’s performance when strong scaling.
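
For instance, a persistent kernel can keep both the computation and the communication of every iteration on the GPU, as in the hypothetical sketch below (one block per PE; the stencil update, buffer names, and launch shape are illustrative).

    #include <cuda_runtime.h>
    #include <nvshmem.h>
    #include <nvshmemx.h>

    __global__ void iterate_kernel(float *halo, float *local, int n,
                                   int peer, int steps) {
        for (int s = 0; s < steps; ++s) {
            for (int i = threadIdx.x; i < n; i += blockDim.x)
                local[i] += 0.5f * halo[i];              /* stand-in computation */
            __syncthreads();
            if (threadIdx.x == 0) {
                nvshmem_float_put(halo, local, 1, peer); /* push one boundary value */
                nvshmem_barrier_all();                   /* device-side sync; the CPU is not involved */
            }
            __syncthreads();
        }
    }

    /* Kernels that use NVSHMEM synchronization APIs on the device must be
     * launched with the collective launch API. */
    void run_steps(float *halo, float *local, int n, int peer, int steps) {
        void *args[] = { &halo, &local, &n, &peer, &steps };
        nvshmemx_collective_launch((const void *) iterate_kernel,
                                   dim3(1), dim3(256), args, 0, 0);
        cudaDeviceSynchronize();
    }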

Low Overhead

One-sided communication primitives reduce overhead by allowing the initiating process or GPU thread to specify all information required to complete a data transfer. This low-overhead model enables many GPU threads to communicate efficiently.
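
In a one-sided put, the initiator alone names the symmetric destination, the local source, the element count, and the target PE; the target posts no matching receive. The hypothetical sketch below also shows the block-scoped variant, in which all threads of a block cooperate on a single transfer.

    #include <nvshmem.h>
    #include <nvshmemx.h>

    __global__ void push_tile(float *dest, const float *src, size_t nelems, int peer) {
        /* Block-scoped put: the whole thread block cooperates on one transfer,
         * amortizing its cost across many threads. */
        nvshmemx_float_put_block(dest, src, nelems, peer);

        /* A single thread could issue the equivalent thread-scoped put instead:
         *   if (threadIdx.x == 0) nvshmem_float_put(dest, src, nelems, peer);
         */
    }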

Naturally Asynchronous

Asynchronous communications make it easier for programmers to interleave computation and communication, thereby increasing overall application performance.
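
A hypothetical sketch of this overlap (names are illustrative): a non-blocking put is issued, independent computation proceeds while the transfer is in flight, and nvshmem_quiet() is called only when completion is actually required.

    #include <nvshmem.h>

    __global__ void overlap(float *remote_buf, const float *boundary, size_t nelems,
                            int peer, float *interior, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;

        if (i == 0)
            nvshmem_float_put_nbi(remote_buf, boundary, nelems, peer); /* returns immediately */

        if (i < n)
            interior[i] *= 0.5f;   /* independent computation overlaps the transfer */

        if (i == 0)
            nvshmem_quiet();       /* complete the put only when it is actually needed */
    }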

What's New in NVSHMEM 2.1.2

  • Added a new UCX internode communication transport layer.
    Note: UCX is experimental for this release.
  • Added support for the automatic warp-level coalescing of nvshmem_g operations.
  • Added support for put-with-signal operations on CUDA streams.
  • Added support to map the symmetric heap by using the cuMem APIs.
  • Improved the performance of the single-threaded NVSHMEM put/get device API.
  • Added the NVSHMEM_MAX_TEAMS environment variable to specify the maximum number of teams that can be created.
  • Improved the host and on-stream Alltoall performance by using NCCL.
  • Fixed a bug in the compare-and-swap operation that caused several bytes of the compare operand to be lost.
  • Improved support for single-node environments without InfiniBand.
  • Added CPU core affinity to debugging output.
  • Added support for the CUDA 11.3 cudaDeviceFlushGPUDirectRDMAWrites API for consistency.
  • Improved support for the NVIDIA Tools Extension (NVTX) to enable performance analysis through NVIDIA Nsight.
  • Removed the NVSHMEM_IS_P2P_RUN environment variable because the runtime now determines this automatically.
  • Made improvements to NVSHMEM example codes.
  • Added the NVSHMEM_REMOTE_TRANSPORT environment variable to select the networking layer that is used for communication between nodes.
  • Set the maxrregcount to 32 for non-inlined device functions to ensure that calling these NVSHMEM functions does not negatively affect kernel occupancy.

Key Features

  • Combines the memory of multiple GPUs into a partitioned global address space that’s accessed through NVSHMEM APIs
  • Includes a low-overhead, in-kernel communication API for use by GPU threads
  • Includes stream-based and CPU-initiated communication APIs (a stream-ordered sketch follows this list)
  • Supports x86 and POWER9 processors
  • Is interoperable with MPI and other OpenSHMEM implementations
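
As referenced in the feature list above, here is a hedged sketch of the stream-based, CPU-initiated API (buffer names and sizes are illustrative): a put and a barrier are enqueued on a CUDA stream and execute in stream order, without blocking the host thread.

    #include <cuda_runtime.h>
    #include <nvshmem.h>
    #include <nvshmemx.h>

    void exchange_on_stream(void *dest, const void *src, size_t bytes,
                            int peer, cudaStream_t stream) {
        /* ... kernels producing src may already be queued on 'stream' ... */
        nvshmemx_putmem_on_stream(dest, src, bytes, peer, stream); /* stream-ordered put */
        nvshmemx_barrier_all_on_stream(stream);                    /* stream-ordered barrier */
        /* ... kernels consuming the received data can now be queued ... */
    }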

NVSHMEM Advantages

Increase Performance

Convolution is a compute-intensive kernel that’s used in a wide variety of applications, including image processing, machine learning, and scientific computing. Spatial parallelization decomposes the domain into sub-partitions that are distributed over multiple GPUs with nearest-neighbor communications, often referred to as halo exchanges.

In the Livermore Big Artificial Neural Network (LBANN) deep learning framework, spatial-parallel convolution is implemented using several communication methods, including MPI and NVSHMEM. The MPI-based halo exchange uses the standard send and receive primitives, whereas the NVSHMEM-based implementation uses one-sided put, yielding significant performance improvements on Lawrence Livermore National Laboratory’s Sierra supercomputer.
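
A hypothetical halo-exchange sketch in this spirit (the domain decomposition, neighbor PEs, and buffer names are all illustrative): each PE writes its boundary rows directly into its neighbors' halo buffers with one-sided, non-blocking puts, and no receives are posted.

    #include <nvshmem.h>

    __global__ void halo_exchange(float *halo_from_below, float *halo_from_above,
                                  const float *my_top_rows, const float *my_bottom_rows,
                                  size_t halo_elems, int up_pe, int down_pe) {
        if (blockIdx.x == 0 && threadIdx.x == 0) {
            /* My top boundary becomes the "halo from below" on the PE above me. */
            nvshmem_float_put_nbi(halo_from_below, my_top_rows, halo_elems, up_pe);
            /* My bottom boundary becomes the "halo from above" on the PE below me. */
            nvshmem_float_put_nbi(halo_from_above, my_bottom_rows, halo_elems, down_pe);
            nvshmem_quiet();   /* complete both puts from this PE's perspective */
        }
        /* A subsequent barrier or put-with-signal tells each PE that its own
         * halo buffers have been filled by its neighbors. */
    }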

[Figures: Efficient strong-scaling on the Sierra supercomputer and on the NVIDIA DGX SuperPOD]

Accelerate Time to Solution

Reducing the time to solution for high-performance, scientific computing workloads generally requires a strong-scalable application. QUDA is a library for lattice quantum chromodynamics (QCD) on GPUs, and it’s used by the popular MIMD Lattice Computation (MILC) and Chroma codes.

NVSHMEM-enabled QUDA avoids CPU-GPU synchronization for communication, thereby reducing critical-path latencies and significantly improving strong-scaling efficiency.

Watch the GTC 2020 Talk

Simplify Development

The conjugate gradient (CG) method is a popular numerical approach to solving systems of linear equations, and CGSolve is an implementation of this method in the Kokkos programming model. The CGSolve kernel showcases the use of NVSHMEM as a building block for higher-level programming models like Kokkos.

NVSHMEM enables efficient multi-node and multi-GPU execution using Kokkos global array data structures without requiring explicit code for communication between GPUs. As a result, NVSHMEM-enabled Kokkos significantly simplifies development compared to using MPI and CUDA.

[Figure: Productive programming of Kokkos CGSolve]

Ready to start developing with NVSHMEM?

Get Started