AI and HPC Containers
Develop and deploy applications faster with GPU-optimized containers from the NVIDIA NGC™ catalog.
What Are Containers?
A container is a portable unit of software that combines the application and all its dependencies into a single package that’s agnostic to the underlying host OS. It removes the need to build complex environments and simplifies the application development-to-deployment process.
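As a minimal sketch of what this looks like in practice, an NGC container can be pulled and run with Docker in two commands. The image tag below is illustrative (NGC tags follow a YY.MM release scheme); check the catalog for current releases:

```shell
# Pull a GPU-optimized TensorFlow image from the NGC registry
# (tag is an example; see the NGC catalog for the latest release)
docker pull nvcr.io/nvidia/tensorflow:24.01-tf2-py3

# Run it interactively with all GPUs visible inside the container
# (the --gpus flag requires the NVIDIA Container Toolkit on the host)
docker run --gpus all -it --rm nvcr.io/nvidia/tensorflow:24.01-tf2-py3
```

Because the framework, CUDA libraries, and dependencies ship inside the image, the same two commands work on any host with a supported NVIDIA driver and container runtime.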
The NVIDIA NGC catalog contains a host of GPU-optimized containers for deep learning, machine learning, visualization, and high-performance computing (HPC) applications, all tested for performance, security, and scalability.
Benefits of Containers from the NGC Catalog
Built-in libraries and dependencies allow you to easily deploy and run applications. Deploy AI/ML containers to Vertex AI using the quick deploy feature in the NGC catalog.
NVIDIA AI containers like TensorFlow and PyTorch provide performance-optimized monthly releases for faster AI training and inference.
Deploy the containers on multi-GPU/multi-node systems anywhere—in the cloud, on premises, and at the edge—on bare metal, virtual machines (VMs), and Kubernetes.
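As a sketch of the Kubernetes path, a pod can request a GPU and run an NGC image directly. This assumes the NVIDIA device plugin for Kubernetes is installed on the cluster; the pod name and image tag are illustrative:

```shell
# Apply a minimal pod spec that requests one GPU and runs an NGC
# PyTorch image (device plugin required for the nvidia.com/gpu resource)
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: ngc-pytorch-test
spec:
  restartPolicy: Never
  containers:
  - name: pytorch
    image: nvcr.io/nvidia/pytorch:24.01-py3
    command: ["nvidia-smi"]          # sanity-check that the GPU is visible
    resources:
      limits:
        nvidia.com/gpu: 1
EOF
```

The same image reference works unchanged on bare metal, in a VM, or in a managed Kubernetes service, which is what makes the cloud/on-prem/edge portability claim concrete.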
Deploy with Confidence
Containers are scanned for common vulnerabilities and exposures (CVEs), come with security reports, and are backed by optional enterprise support through NVIDIA AI Enterprise.
Optimized for Performance
NVIDIA-built Docker containers are updated monthly, and third-party software is updated regularly, delivering the features needed to extract maximum performance from your existing infrastructure and reduce time to solution.
BERT-Large for Natural Language Processing
BERT-Large leverages mixed precision arithmetic and Tensor Cores on Volta V100 and Ampere A100 GPUs for faster training times while maintaining target accuracy.
BERT-Large training performance with TensorFlow on a single node with 8x V100 (16GB) and 8x A100 (40GB) GPUs. Mixed precision. Batch size: 3 (V100), 24 (A100).
ResNet50 v1.5 for Image Processing
This model is trained with mixed precision using Tensor Cores on Volta, Turing and NVIDIA Ampere GPU architectures for faster training.
ResNet-50 v1.5 performance with TensorFlow on a single node with 8x V100 (16GB) and 8x A100 (40GB) GPUs. Mixed precision. Batch size: 26.
MATLAB for Deep Learning
Continuous development of MATLAB's Deep Learning container improves performance for training and inference.
Windows 10, Intel Xeon E5-2623 @2.4GHz, NVIDIA Titan V 12GB GPUs
The quick deploy feature in the NGC catalog automatically sets up the Vertex AI instance with an optimal configuration, preloads the dependencies, and runs the NGC software, with no need to set up the infrastructure yourself.
Deploy popular DL and ML containers, models, and SDKs directly from the NGC catalog.
Containers for Diverse Workloads
Get started today by selecting from over 80 containerized software applications and SDKs, developed by NVIDIA and our ecosystem of partners.
TensorFlow is an open-source software library for high-performance numerical computation.
PyTorch is a GPU-accelerated tensor computation framework with a Python front end.
NVIDIA Triton Inference Server
NVIDIA Triton™ Inference Server is an open-source inference solution that maximizes utilization of and performance on GPUs.
NVIDIA TensorRT® is a C++ library that facilitates high-performance inference on NVIDIA GPUs.
NVIDIA Clara™ Train for medical imaging is an application framework with over 20 state-of-the-art pre-trained models, transfer learning and federated learning tools, AutoML, and AI-assisted annotation.
DeepStream is the streaming analytics toolkit for AI-based video, audio, and image understanding for multi-sensor processing.
NVIDIA Riva is an application framework for multimodal conversational AI services that delivers real-time performance on GPUs.
Merlin HugeCTR, a component of NVIDIA Merlin™, is a deep neural network training framework designed for recommender systems.
NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems.
GROMACS is a popular molecular dynamics application used to simulate proteins and lipids.
RELION implements an empirical Bayesian approach for analysis of cryogenic electron microscopy (cryo-EM).
NVIDIA HPC SDK
The NVIDIA HPC SDK is a comprehensive suite of compilers, libraries, and tools for building, deploying, and managing HPC applications.
Frequently Asked Questions
- A diverse set of containers spans a multitude of use cases, with built-in libraries and dependencies for easily compiling custom applications.
- They offer faster training with Automatic Mixed Precision (AMP) and minimal code changes.
- They reduce time to solution with the ability to scale up from single-node to multi-node systems.
- They're extremely portable, letting you develop faster by running containers in the cloud, on premises, or at the edge.
Containers from the NGC catalog make it seamless for machine learning engineers and IT teams to deploy to production.
- They are tested on various platforms and architectures, enabling seamless deployment on a wide variety of systems and platforms.
- They can be deployed to run on bare metal, virtual machines (VMs), and Kubernetes, including various architectures such as x86, ARM, and IBM Power.
- They can run easily on various container runtimes such as Docker, Singularity, cri-o, and containerd.
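For example, on shared HPC systems where Docker isn't available, the same NGC images can be pulled and run with Singularity (or its successor, Apptainer). The image tag below is illustrative:

```shell
# Convert an NGC Docker image into a Singularity image file (SIF);
# the tag is an example, not a pinned recommendation
singularity pull tensorflow.sif docker://nvcr.io/nvidia/tensorflow:24.01-tf2-py3

# Run it with NVIDIA GPU support enabled (--nv binds the host driver)
singularity run --nv tensorflow.sif
```

The `--nv` flag maps the host's NVIDIA driver libraries into the container, so no driver is baked into the image itself.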
- The container images are scanned for common vulnerabilities and exposures (CVEs) and are backed by optional enterprise support to troubleshoot issues for NVIDIA-built software.
NGC Catalog Resources
Learn how to use the NGC catalog with these step-by-step instructions.
Read about the latest NGC catalog updates and announcements.
Watch all the top NGC sessions on demand.
Walk through how to use the NGC catalog with these video tutorials.
Accelerate your AI development with containers from the NGC catalog.