NVIDIA cuFFT

NVIDIA cuFFT, a library that provides GPU-accelerated Fast Fourier Transform (FFT) implementations, is used for building applications across disciplines, such as deep learning, computer vision, computational physics, molecular dynamics, quantum chemistry, and seismic and medical imaging.

Available in the CUDA Toolkit

cuFFT

Divide-and-conquer algorithms for computing discrete Fourier transforms. Multi-GPU support for FFT calculations on up to 16 GPUs in a single node.

Available in the HPC SDK

cuFFT

Divide-and-conquer algorithms for computing discrete Fourier transforms. Multi-GPU support for FFT calculations on up to 16 GPUs in a single node.

cuFFTMp

Multi-node support for FFTs in exascale problems.

Available as Standalone

cuFFTDx Device APIs

cuFFT Device Extensions for performing FFT calculations inside a CUDA kernel.


cuFFT

The FFT is a divide-and-conquer algorithm for efficiently computing discrete Fourier transforms of complex or real-valued datasets. It’s one of the most important and widely used numerical algorithms in computational physics and general signal processing. The cuFFT library provides a simple interface for computing FFTs on an NVIDIA GPU, which allows users to quickly leverage the GPU’s floating-point power and parallelism in a highly optimized and tested FFT library.

When calculations are distributed across GPUs, cuFFT supports using up to 16 GPUs connected to a CPU to perform Fourier Transforms through its cuFFTXt APIs. Performance is a function of the bandwidth between the GPUs, the computational ability of the individual GPUs, and the type and number of FFTs to be performed.
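A minimal single-GPU use of the library looks like the following sketch: create a plan, execute it, destroy it. The transform size, batch count, and in-place execution are arbitrary choices for illustration, and error checking is omitted for brevity.

```cuda
#include <cufft.h>
#include <cuda_runtime.h>

int main() {
    const int N = 1024;   // signal length (illustrative)
    const int BATCH = 1;

    // Allocate device memory for the complex signal (in-place transform).
    cufftComplex *d_data;
    cudaMalloc(&d_data, sizeof(cufftComplex) * N * BATCH);

    // Create a 1D single-precision complex-to-complex plan.
    cufftHandle plan;
    cufftPlan1d(&plan, N, CUFFT_C2C, BATCH);

    // Execute forward and inverse transforms in place.
    cufftExecC2C(plan, d_data, d_data, CUFFT_FORWARD);
    cufftExecC2C(plan, d_data, d_data, CUFFT_INVERSE);  // note: cuFFT inverse is unnormalized

    cudaDeviceSynchronize();
    cufftDestroy(plan);
    cudaFree(d_data);
    return 0;
}
```

As in FFTW, the plan is created once and can be reused across many executions, which amortizes the planning cost.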

HPC SDK | CUDA Toolkit
  • 1D, 2D, and 3D transforms of complex and real data types

  • Familiar APIs similar to the advanced interface of the Fastest Fourier Transform in the West (FFTW)

  • Flexible data layouts allowing arbitrary strides between individual elements and array dimensions

  • Streamed asynchronous execution

  • Half-, single-, and double-precision transforms

  • Batch execution

  • In-place and out-of-place transforms

  • Support for up to 16-GPU systems

  • Thread-safe and callable from multiple host threads
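The FFTW-style advanced interface and flexible data layouts come together in `cufftPlanMany`, sketched below for a batch of 1D transforms. The sizes and strides are illustrative; passing null embed arrays selects the basic contiguous layout.

```cuda
#include <cufft.h>
#include <cuda_runtime.h>

int main() {
    // 64 independent 256-point C2C FFTs, stored contiguously (illustrative sizes).
    int n[1] = {256};
    const int batch = 64;

    cufftComplex *d_data;
    cudaMalloc(&d_data, sizeof(cufftComplex) * 256 * batch);

    cufftHandle plan;
    // nullptr for inembed/onembed selects the basic contiguous layout;
    // the stride/dist arguments allow arbitrary strides between elements and batches.
    cufftPlanMany(&plan, 1, n,
                  nullptr, 1, 256,   // input:  stride 1, distance 256 between batches
                  nullptr, 1, 256,   // output: same layout (in place)
                  CUFFT_C2C, batch);

    cufftExecC2C(plan, d_data, d_data, CUFFT_FORWARD);
    cudaDeviceSynchronize();

    cufftDestroy(plan);
    cudaFree(d_data);
    return 0;
}
```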

The cuFFT library is highly optimized for performance on NVIDIA GPUs. The chart below displays the performance boost achieved by moving to newer hardware—with zero code changes.

1D Single-Precision FFT

A line chart displays performance boost achieved by moving to newer hardware with no code changes

The chart below compares the performance of 16 NVIDIA Volta™ GV100 Tensor Core GPUs to the performance of eight NVIDIA Ampere architecture GA100 Tensor Core GPUs for 3D C2C FP32 FFTs.



cuFFTDx Device Extensions

cuFFT Device Extensions (cuFFTDx) enable users to perform FFT calculations inside their CUDA kernels. Fusing numerical operations with the FFT can decrease latency and improve overall application performance.

Download cuFFTDx
  • FFT embeddable into a CUDA kernel

  • High performance, with no unnecessary data movement to and from global memory

  • Customizable with options to adjust selection of FFT routine for different needs (size, precision, batches, etc.)

  • Ability to fuse FFT kernels with other operations, saving global memory trips

  • Compatible with future versions of the CUDA Toolkit

  • Support for Windows
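A kernel-side sketch of cuFFTDx's operator-composition style is shown below. The FFT is described at compile time and executed collectively by a thread block; the size, precision, and SM target are illustrative assumptions, and the load/store steps are elided.

```cuda
#include <cufftdx.hpp>
using namespace cufftdx;

// Describe the FFT at compile time: 128-point, single-precision, C2C forward,
// executed collectively by a thread block, targeting SM 8.0 (assumed target).
using FFT = decltype(Size<128>() + Precision<float>() + Type<fft_type::c2c>()
                     + Direction<fft_direction::forward>()
                     + Block() + SM<800>());

__global__ void fft_kernel(typename FFT::value_type* data) {
    // Each thread holds part of the signal in registers.
    typename FFT::value_type thread_data[FFT::storage_size];

    extern __shared__ __align__(alignof(float4)) char shared_mem[];

    // ... load this thread's elements from data into thread_data ...
    FFT().execute(thread_data, shared_mem);
    // ... store thread_data back, or fuse further element-wise work here,
    //     saving a round trip to global memory ...
}
```

Because the transform runs inside the user's kernel, pre- and post-processing can be fused into the same launch instead of issuing separate kernels.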

The chart below shows how cuFFTDx can provide over a 2X performance boost compared with cuFFT host calls when executing convolution with 1D FFTs.


cuFFTMp Multi-Node Support

The multi-node FFT functionality, available through the cuFFTMp API, enables scientists and engineers to solve distributed 2D and 3D FFTs in exascale problems. The library handles all the communications between machines, allowing users to focus on other aspects of their problems.

HPC SDK
  • 2D and 3D distributed-memory FFTs

  • Slabs (1D) and pencils (2D) data decomposition, with arbitrary block sizes

  • Message Passing Interface (MPI) compatible

  • Low-latency implementation using NVSHMEM, optimized for single-node and multi-node FFTs
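A hedged sketch of the cuFFTMp flow: attach an MPI communicator to the plan, allocate a distributed descriptor, and execute; the library performs all inter-rank communication. The grid size is illustrative and error checking is omitted.

```cuda
#include <mpi.h>
#include <cufftMp.h>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    MPI_Comm comm = MPI_COMM_WORLD;

    const int nx = 256, ny = 256, nz = 256;  // global 3D grid (illustrative)

    cufftHandle plan;
    size_t workspace;
    cufftCreate(&plan);
    // Attach the MPI communicator; cuFFTMp handles all inter-rank exchange.
    cufftMpAttachComm(plan, CUFFT_COMM_MPI, &comm);
    cufftMakePlan3d(plan, nx, ny, nz, CUFFT_C2C, &workspace);

    // Allocate a distributed descriptor; each rank owns a slab of the data.
    cudaLibXtDesc* desc;
    cufftXtMalloc(plan, &desc, CUFFT_XT_FORMAT_INPLACE);

    // ... fill the local slab, then transform in place across all ranks ...
    cufftXtExecDescriptor(plan, desc, desc, CUFFT_FORWARD);

    cufftXtFree(desc);
    cufftDestroy(plan);
    MPI_Finalize();
    return 0;
}
```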

The chart below compares multi-node weak-scaling performance for distributed 3D FFTs by precision, as the problem size and number of GPUs increase. The benchmark was run on the NVIDIA Selene supercomputer. Note that, for FP64 at size 16,384³, the data didn’t fit on the system.

The chart compares multi-node weak scaling performance for distributed 3D FFT by precision

cuFFT LTO EA Preview [Deprecated]

LTO-enabled callbacks are available as a fully supported feature in cuFFT 12.6 update 2 and later versions. The cuFFT LTO Early Access version is deprecated.

This early-access preview of the cuFFT library contains support for the new and enhanced LTO-enabled callback routines for Linux and Windows. LTO-enabled callbacks bring callback support for cuFFT on Windows for the first time. These new and enhanced callbacks offer a significant boost to performance in many use cases.

This preview builds upon nvJitLink, a library introduced in CUDA Toolkit 12.0, to leverage just-in-time link-time optimization (JIT LTO) for callbacks by enabling runtime fusion of user callback code and library kernel code.
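For context, a cuFFT callback is an ordinary device function the library invokes as it loads or stores each element. The sketch below shows a hypothetical load callback attached via the classic `cufftXtSetCallback` path; under the LTO-enabled path, the same device code is instead compiled to LTO-IR (e.g. with `-dlto`) and fused into the library kernel at plan time, removing the offline device-linking step. The scaling logic and `callerInfo` payload are illustrative assumptions.

```cuda
#include <cufft.h>
#include <cufftXt.h>

// A hypothetical load callback that scales each input element as it is read.
__device__ cufftComplex scale_load(void* dataIn, size_t offset,
                                   void* callerInfo, void* sharedPtr) {
    cufftComplex v = ((cufftComplex*)dataIn)[offset];
    float s = *(float*)callerInfo;   // per-plan scale factor (illustrative)
    v.x *= s;
    v.y *= s;
    return v;
}
__device__ cufftCallbackLoadC d_load_ptr = scale_load;

void attach_callback(cufftHandle plan, float* d_scale) {
    // Classic (non-LTO) path: copy the device function pointer to the host,
    // then register it on the plan for complex load callbacks.
    cufftCallbackLoadC h_load_ptr;
    cudaMemcpyFromSymbol(&h_load_ptr, d_load_ptr, sizeof(h_load_ptr));
    cufftXtSetCallback(plan, (void**)&h_load_ptr,
                       CUFFT_CB_LD_COMPLEX, (void**)&d_scale);
}
```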

Download Now
  • Extension to the callback APIs to support LTO callback routines

  • No offline device linking required to use callbacks

  • Flexible data layouts allowing arbitrary strides between individual elements and array dimensions

  • Adds callback support to the dynamic cuFFT library

  • Adds callback support to Windows

  • Compatible with existing callback device code

  • Increased performance versus the non-LTO callback routines for many cases

The chart below compares the performance of running complex-to-complex FFTs with minimal load and store callbacks between cuFFT LTO EA preview and cuFFT in the CUDA Toolkit 11.7 on an NVIDIA A100 Tensor Core 80GB GPU.

The chart compares the performance of complex-to-complex FFTs with load and store callbacks between the cuFFT LTO EA preview and cuFFT in CUDA Toolkit 11.7

Resources



Visit the Forums


Contact Us