Deep learning algorithms use large amounts of data and the computational power of the GPU to learn information directly from data such as images, signals, and text. Deep learning frameworks offer flexibility in designing and training custom deep neural networks and provide interfaces to common programming languages. For developers, the NVIDIA Deep Learning SDK offers powerful tools and libraries that power deep learning frameworks such as Caffe2, Cognitive Toolkit, MXNet, PyTorch, TensorFlow, and others.
Deep learning frameworks offer building blocks for designing, training, and validating deep neural networks through a high-level programming interface. Widely used deep learning frameworks such as Caffe2, Cognitive Toolkit, MXNet, PyTorch, TensorFlow, and others rely on GPU-accelerated libraries such as cuDNN and NCCL to deliver high-performance, multi-GPU accelerated training.
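As a minimal sketch of what such a high-level interface looks like, the example below assembles and trains a tiny network in PyTorch. The network architecture and toy data here are illustrative; the framework transparently places computation on the GPU when one is available, falling back to the CPU otherwise.

```python
import torch
import torch.nn as nn

# Use a GPU if one is present; otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Build a small feed-forward network from the framework's building blocks.
model = nn.Sequential(
    nn.Linear(4, 8),
    nn.ReLU(),
    nn.Linear(8, 1),
).to(device)

# Toy data: 16 samples with 4 features each (illustrative only).
x = torch.randn(16, 4, device=device)
y = torch.randn(16, 1, device=device)

loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# One training step: forward pass, loss, backward pass, parameter update.
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```

On a CUDA-capable machine, `device` resolves to `cuda` and library calls such as cuDNN-backed layer kernels are invoked under the hood, with no change to the user-facing code.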
To learn more about these popular deep learning frameworks and to get started, visit the Deep Learning Frameworks page.
The NVIDIA Deep Learning SDK provides powerful tools and libraries for designing and deploying GPU-accelerated deep learning applications. It includes libraries for deep learning primitives, inference, video analytics, linear algebra, sparse matrices, and multi-GPU communications.
Kubernetes on NVIDIA GPUs and the GPU Container Runtime enable enterprises to scale training and inference deployments seamlessly to multi-cloud GPU clusters. Developers can package their GPU-accelerated applications along with their dependencies into a single container, deploy it with Kubernetes, and get the best performance on NVIDIA GPUs regardless of the deployment environment.
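As a sketch of what such a deployment looks like, the pod spec below schedules a containerized application onto a node with an NVIDIA GPU by requesting the `nvidia.com/gpu` resource (the pod name and container image are illustrative placeholders, and the cluster is assumed to have the NVIDIA device plugin installed):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-training-pod          # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: trainer
    # Hypothetical image: your GPU-accelerated application packaged
    # together with its dependencies.
    image: example.com/my-training-app:latest
    resources:
      limits:
        nvidia.com/gpu: 1         # request one NVIDIA GPU
```

Because the GPU is requested declaratively, the same spec works on any cluster that exposes NVIDIA GPUs, whether on-premises or in a cloud provider.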
Learn more about containers and orchestrators