NGC CONTAINERS

Develop and deploy applications faster with GPU-optimized containers.

Learn how to use an NVIDIA NGC Jupyter notebook for medical imaging in our upcoming webinar. Register Now

Simplifying AI and HPC Workflows

A container is a portable unit of software that bundles an application and all of its dependencies into a single package that is agnostic to the underlying host OS. This removes the need to build complex environments and simplifies the path from development to deployment.

The NVIDIA® NGC™ catalog contains a host of GPU-optimized containers for deep learning, machine learning, visualization, and high-performance computing (HPC) applications that are tested for performance, security, and scalability.

For Data Scientists, Researchers, and Developers

Develop Faster with Containers

NGC containers allow you to focus on application development instead of building the environment needed to run your applications.

  • Diverse set of containers spanning a multitude of use cases
  • Built-in libraries and dependencies for easy compiling of custom applications
  • Faster training with Automatic Mixed Precision (AMP) and minimal code changes (see the sketch below)
  • Reduced time to solution by scaling up from single-node to multi-node systems
  • Extremely portable, allowing you to develop on the cloud, on premises, or at the edge
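
To illustrate the "minimal code changes" point above, here is a minimal sketch of how automatic mixed precision is commonly enabled in PyTorch inside an NGC container. The model, data, and hyperparameters are placeholders rather than anything shipped in an NGC image.

```python
import torch
from torch import nn

# Placeholder model, loss, and data -- stand-ins for your own training code.
model = nn.Linear(1024, 10).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
scaler = torch.cuda.amp.GradScaler()    # scales the loss to avoid FP16 underflow

inputs = torch.randn(32, 1024, device="cuda")
targets = torch.randint(0, 10, (32,), device="cuda")

for step in range(100):
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():     # runs eligible ops in FP16 on Tensor Cores
        loss = loss_fn(model(inputs), targets)
    scaler.scale(loss).backward()       # backward pass on the scaled loss
    scaler.step(optimizer)              # unscales gradients, then steps the optimizer
    scaler.update()                     # adjusts the loss scale for the next step
```

Only the scaler and the autocast context are AMP-specific; the rest of the training loop is unchanged.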


For Machine Learning Engineers and IT


Seamlessly Deploy to Production

The containers are tested across a range of platforms and architectures, enabling seamless deployment on a wide variety of systems.

  • Flexible to run on bare metal, virtual machines (VMs), and Kubernetes, across architectures such as x86, ARM, and IBM Power
  • Highly versatile with support for various container runtimes such as Docker, Singularity, cri-o, and containerd (see the sketch after this list)
  • Enterprise-ready with containers scanned for common vulnerabilities and exposures (CVEs)
  • Backed by optional enterprise support to troubleshoot issues for NVIDIA-built software
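
As an example of the Docker support noted above, an NGC container can be launched programmatically through the Docker SDK for Python. This is a minimal sketch: the image tag shown is illustrative, so substitute the tag of the container you actually pull from the NGC catalog.

```python
import docker  # Docker SDK for Python (pip install docker)

client = docker.from_env()

# Run a short command inside an NGC container with all available GPUs attached.
# The image tag below is illustrative; use the tag listed in the NGC catalog.
logs = client.containers.run(
    "nvcr.io/nvidia/tensorflow:23.10-tf2-py3",
    command='python -c "import tensorflow as tf; print(tf.__version__)"',
    device_requests=[docker.types.DeviceRequest(count=-1, capabilities=[["gpu"]])],
    remove=True,
)
print(logs.decode())
```

The same image can also be run under Singularity, cri-o, containerd, or Kubernetes, as listed above, which is what keeps the deployment portable.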

Performance-Optimized

NVIDIA-built containers are updated monthly, and third-party software is updated regularly, delivering the features needed to extract maximum performance from your existing infrastructure and reduce time to solution.

BERT-Large for Natural Language Processing

BERT-Large leverages mixed precision arithmetic and Tensor Cores on Volta V100 and Ampere A100 GPUs for faster training times while maintaining target accuracy.

BERT-Large training performance with TensorFlow on a single node with 8x V100 (16GB) and A100 (40GB) GPUs. Mixed precision. Batch size for BERT: 3 (V100), 24 (A100)


Explore BERT-Large for PyTorch
Explore BERT-Large for TensorFlow

ResNet50 v1.5 for Image Processing

This model is trained with mixed precision using Tensor Cores on the Volta, Turing, and NVIDIA Ampere GPU architectures for faster training.

ResNet50 performance with TensorFlow on a single node with 8x V100 (16GB) and A100 (40GB) GPUs. Mixed precision. Batch size for ResNet50: 26


Explore ResNet50 for PyTorch
Explore ResNet50 for TensorFlow

MATLAB for Deep Learning

Continuous development of the MATLAB deep learning container improves performance for training and inference.

Windows 10, Intel Xeon E5-2623 @2.4GHz, NVIDIA Titan V 12GB GPUs


Explore MATLAB


Built by Developers, for Developers

Get started today by selecting from over 80 containerized software applications and SDKs, developed by NVIDIA and our ecosystem of partners.


TensorFlow

TensorFlow is an open-source software library for high-performance numerical computation.

PyTorch

PyTorch is a GPU-accelerated tensor computational framework with a Python front end.

NVIDIA Triton Inference Server

NVIDIA Triton™ Inference Server is an open-source inference-serving solution that maximizes GPU utilization and performance.

NVIDIA TensorRT

NVIDIA TensorRT® is a C++ library that facilitates high-performance inference on NVIDIA GPUs.


NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems.

GROMACS

GROMACS is a popular molecular dynamics application used to simulate proteins and lipids.

RELION

RELION implements an empirical Bayesian approach for the analysis of cryogenic electron microscopy (cryo-EM) data.

NVIDIA HPC SDK

The NVIDIA HPC SDK is a comprehensive suite of compilers, libraries, and tools for building, deploying, and managing HPC applications.


NVIDIA Clara

NVIDIA Clara™ Train for medical imaging is an application framework with over 20 state-of-the-art pre-trained models, transfer learning and federated learning tools, AutoML, and AI-assisted annotation.

DeepStream

DeepStream is the streaming analytics toolkit for AI-based video, audio, and image understanding for multi-sensor processing.

NVIDIA Riva

NVIDIA Riva is an application framework for multimodal conversational AI services that delivers real-time performance on GPUs.

HugeCTR

HugeCTR, a component of NVIDIA Merlin™, is a deep neural network training framework that is capable of distributed training across multiple GPUs and nodes for maximum performance.



NGC Catalog Resources

Developer Blogs

Learn how to use the NGC catalog with these step-by-step instructions.



Explore Technical Blogs

GTC Sessions

Watch all the top NGC sessions on-demand.



Watch GTC Sessions

Webinars

Walk through how to use the NGC catalog with these video tutorials.



Watch Webinars