The NVIDIA CUDA® Deep Neural Network library (cuDNN) is a GPU-accelerated library of primitives for deep neural networks. cuDNN provides highly tuned implementations for standard routines such as forward and backward convolution, pooling, normalization, and activation layers. cuDNN is part of the NVIDIA Deep Learning SDK.
Deep learning researchers and framework developers worldwide rely on cuDNN for high-performance GPU acceleration. It allows them to focus on training neural networks and developing software applications rather than spending time on low-level GPU performance tuning. cuDNN accelerates widely used deep learning frameworks, including Caffe, TensorFlow, Theano, Torch, and CNTK. See supported frameworks for more details.
cuDNN is freely available to members of the Accelerated Computing Developer Program.
Data scientists and researchers can take advantage of cuDNN by downloading a deep learning framework or NVIDIA DIGITS. DIGITS lets you interactively manage data, perform training on multiple GPUs, and export the best-performing model for deployment without the need to write code.
Visit the What’s New page to explore top features from previous releases of cuDNN.
cuDNN is supported on Windows, Linux, and macOS systems with Kepler, Maxwell, Pascal, Tegra K1, or Tegra X1 GPUs.
"We are amazed by the steady stream of improvements made to the NVIDIA Deep Learning SDK and the speedups they deliver. This new version of the SDK significantly improves our convolution algorithms, and goes so far as to accelerate 3D convolution by a factor of 3x! On top of that, we are excited about their decision to provide tools for other models, such as LSTM, RNN, and GRU, in this new version."
Frédéric Bastien, Team Lead, Software Infrastructure at MILA
Watch the GPU-Accelerated Deep Learning with cuDNN webinar to learn more about cuDNN.