NVIDIA Virtual Machine Image (VMI)

Develop once and deploy on all major cloud service providers (CSPs).

What is NVIDIA VMI?

A VMI in a cloud instance is akin to the operating system on a laptop. VMIs contain the guest OS, runtimes, libraries, and drivers for CPUs, GPUs, and networking, plus other essential software that developers need to build and deploy their applications on virtual machines.

Explore NVIDIA's VMI Offerings

Maximum Portability

NVIDIA VMIs simplify multi-cloud adoption by providing a standardized software stack. Users can develop on one cloud platform and seamlessly deploy on any cloud.

Higher Productivity

NVIDIA VMIs eliminate the need to manually install and configure complex software packages by providing a comprehensive, ready-to-use AI stack.

Optimized Performance

VMIs are updated every two months with the latest software stack, providing higher performance over time on the same infrastructure. NVIDIA AI software from the NGC catalog runs out-of-the-box.

Enterprise Support

Paid support through NVIDIA AI Enterprise lets developers focus on building their applications while outsourcing operational issues.

Out-of-the-Box Experience

NVIDIA VMIs provide an out-of-the-box experience for containerized NVIDIA AI software, including popular deep learning frameworks like PyTorch and TensorFlow, RAPIDS™ data science libraries, and NVIDIA Triton™ Inference Server.
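
For example, once an instance is launched from a VMI, an NGC framework container can be used right away. The short Python check below is a minimal sketch, run from inside a PyTorch container, that confirms the GPU, driver, and CUDA runtime are visible to the framework (the specific container and GPU shown are illustrative):

  # Quick sanity check from inside a PyTorch NGC container on an NVIDIA VMI.
  import torch

  print(torch.__version__)             # framework build shipped in the container
  print(torch.cuda.is_available())     # True when the VMI's GPU driver and CUDA runtime are wired up
  if torch.cuda.is_available():
      print(torch.cuda.get_device_name(0))   # e.g. an A100 or V100, depending on the instance type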

Optimized for Performance

NVIDIA-built Docker containers are updated monthly, and third-party software is updated regularly, delivering the features needed to extract maximum performance from your existing infrastructure and reduce time to solution.

BERT-Large for Natural Language Processing

BERT-Large leverages mixed precision arithmetic and Tensor Cores on Volta V100 and Ampere A100 GPUs for faster training times while maintaining target accuracy.

BERT-Large training performance with TensorFlow on a single node with 8x V100 (16GB) and 8x A100 (40GB) GPUs. Mixed precision. Batch size for BERT: 3 (V100), 24 (A100).
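
As a rough illustration of the mixed-precision recipe described above, the sketch below uses PyTorch's automatic mixed precision (AMP); the tiny linear model and random batch are stand-ins for BERT-Large and real training data:

  import torch
  from torch.cuda.amp import autocast, GradScaler

  # Toy stand-in for BERT-Large; any model benefits from AMP in the same way.
  model = torch.nn.Linear(1024, 1024).cuda()
  optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
  scaler = GradScaler()

  for _ in range(10):
      x = torch.randn(24, 1024, device="cuda")   # batch of 24, mirroring the A100 batch size above
      optimizer.zero_grad()
      with autocast():                            # eligible ops run in FP16 on Tensor Cores
          loss = model(x).float().pow(2).mean()
      scaler.scale(loss).backward()               # loss scaling guards against FP16 underflow
      scaler.step(optimizer)
      scaler.update()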


Explore BERT-Large for PyTorch
Explore BERT-Large for TensorFlow

ResNet50 v1.5 for Image Processing

This model is trained with mixed precision using Tensor Cores on Volta, Turing and NVIDIA Ampere GPU architectures for faster training.

ResNet50 performance with TensorFlow on a single node with 8x V100 (16GB) and 8x A100 (40GB) GPUs. Mixed precision. Batch size for ResNet50: 26.
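
For TensorFlow, the equivalent mixed-precision setup is a one-line policy change; the sketch below is illustrative only, using the stock Keras ResNet50 and a random batch as stand-ins for the NGC ResNet50 v1.5 model and a real dataset:

  import tensorflow as tf

  # Compute in FP16 on Tensor Cores while keeping variables in FP32;
  # Keras adds loss scaling automatically under this policy.
  tf.keras.mixed_precision.set_global_policy("mixed_float16")

  model = tf.keras.applications.ResNet50(weights=None, input_shape=(224, 224, 3), classes=1000)
  model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

  # Tiny random batch, just to exercise the mixed-precision training path.
  x = tf.random.uniform((8, 224, 224, 3))
  y = tf.random.uniform((8,), maxval=1000, dtype=tf.int32)
  model.fit(x, y, epochs=1)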


Explore ResNet50 for PyTorch
Explore ResNet50 for TensorFlow

MATLAB for Deep Learning

Continuous development of MATLAB's Deep Learning container improves performance for training and inference.

Benchmark system: Windows 10, Intel Xeon E5-2623 @ 2.4GHz, NVIDIA TITAN V 12GB GPUs.


Explore MATLAB

Containers for Diverse Workloads

Get started today by selecting from over 80 containerized software applications and SDKs, developed by NVIDIA and our ecosystem of partners.

AI Containers

TensorFlow

TensorFlow is an open-source software library for high-performance numerical computation.

Explore Container

PyTorch

PyTorch is a GPU-accelerated tensor computational framework with a Python front end.

Explore Container

NVIDIA Triton Inference Server

NVIDIA Triton™ Inference Server is an open-source inference solution that maximizes GPU utilization and performance.
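
For context, a minimal Python client request against a running Triton server might look like the sketch below; the server URL, model name, and tensor names are illustrative and must match your deployment's model configuration:

  import numpy as np
  import tritonclient.http as httpclient

  # Connect to a Triton server over HTTP (default port 8000).
  client = httpclient.InferenceServerClient(url="localhost:8000")

  batch = np.random.rand(1, 3, 224, 224).astype(np.float32)
  inp = httpclient.InferInput("input__0", list(batch.shape), "FP32")
  inp.set_data_from_numpy(batch)

  # "resnet50" and the tensor names are placeholders for whatever model is deployed.
  result = client.infer(model_name="resnet50", inputs=[inp],
                        outputs=[httpclient.InferRequestedOutput("output__0")])
  print(result.as_numpy("output__0").shape)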

Explore Container

NVIDIA TensorRT

NVIDIA TensorRT® is a C++ library that facilitates high-performance inference on NVIDIA GPUs.

Explore Container

Application Frameworks

NVIDIA Clara

NVIDIA Clara™ Train for medical imaging is an application framework with over 20 state-of-the-art pre-trained models, transfer learning and federated learning tools, AutoML, and AI-assisted annotation.

Explore Container

DeepStream

DeepStream is a streaming analytics toolkit for multi-sensor processing and AI-based video, audio, and image understanding.

Explore Container

NVIDIA Riva

NVIDIA Riva is an application framework for multimodal conversational AI services that delivers real-time performance on GPUs.

Explore Container

Merlin Training

Merlin HugeCTR, a component of NVIDIA Merlin™, is a deep neural network training framework designed for recommender systems.

Explore Container

HPC Containers

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems.

Explore Container

GROMACS

GROMACS is a popular molecular dynamics application used to simulate proteins and lipids.

Explore Container

RELION

RELION implements an empirical Bayesian approach for the analysis of cryogenic electron microscopy (cryo-EM) data.

Explore Container

NVIDIA HPC SDK

The NVIDIA HPC SDK is a comprehensive suite of compilers, libraries, and tools for building, deploying, and managing HPC applications.

Explore Container

Frequently Asked Questions

Containers from the NGC catalog are performance-optimized and designed for easy adoption:

  • A diverse set of containers spans a multitude of use cases, with built-in libraries and dependencies for easy compiling of custom applications.
  • They offer faster training with Automatic Mixed Precision (AMP) and minimal code changes.
  • They reduce time to solution with the ability to scale up from single-node to multi-node systems.
  • They are extremely portable, allowing you to develop faster by running containers in the cloud, on premises, or at the edge.

Containers from the NGC catalog make it seamless for machine learning engineers and IT to deploy to production.

  • They are tested on various platforms and architectures, enabling seamless deployment on a wide variety of systems and platforms.
  • They can be deployed to run on bare metal, virtual machines (VMs), and Kubernetes, including various architectures such as x86, ARM, and IBM Power.
  • They can run easily on various container runtimes such as Docker, Singularity, CRI-O, and containerd.
  • The container images are scanned for common vulnerabilities and exposures (CVEs) and are backed by optional enterprise support to troubleshoot issues for NVIDIA-built software.

NGC Catalog Resources

Developer Blogs

Learn how to use the NGC catalog with these step-by-step instructions.



Explore technical blogs

Developer News

Read about the latest NGC catalog updates and announcements.



Read news

GTC Sessions

Watch all the top NGC sessions on demand.



Watch GTC Sessions

Webinars

Walk through how to use the NGC catalog with these video tutorials.



Watch Webinars

Accelerate your AI development with containers from the NGC catalog.

Get Started