
Modulus

Description: NVIDIA Modulus is a toolkit for developing AI-enabled physics-ML applications.
Publisher: NVIDIA
Latest Tag: 24.01
Modified: February 1, 2024
Compressed Size: 14.36 GB
Multinode Support: Yes
Multi-Arch Support: Yes

24.01 (Latest) Security Scan Results

Security scan results for the Linux/arm64 and Linux/amd64 images are available on the NGC catalog page for this container.

What is Modulus?

NVIDIA Modulus is a toolkit for developing AI-enabled physics-ML applications.

With NVIDIA Modulus, we aim to provide researchers and industry specialists with tools that help accelerate the development of physics-ML models for their scientific discipline.

Visit the NVIDIA Modulus website for more information.

Modulus Documentation

Running Modulus Using Docker

If you have Docker 19.03 or later, a typical command to launch the container with an interactive bash terminal is:

docker run --gpus all --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 --runtime nvidia --rm -it nvcr.io/nvidia/modulus/modulus:xx.xx bash

Where xx.xx is the container version. For example, 24.01.
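
If you want files created inside the container to persist on the host, a hedged variant of the same command mounts the current directory into the container (the mount target /workspace/host is an arbitrary placeholder):

docker run --gpus all --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 --runtime nvidia -v ${PWD}:/workspace/host --rm -it nvcr.io/nvidia/modulus/modulus:24.01 bash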

Once inside the container, you can clone the Modulus repositories from GitHub and use the samples and examples provided to get started with Modulus. Refer to the Getting Started Guide for more details.
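
As a concrete sketch of that workflow, the commands below clone the open-source Modulus repository from GitHub and run one of its bundled examples; the example directory and script name are illustrative assumptions, so browse the examples folder and each example's README for the actual entry points:

# Inside the container: clone the Modulus repository and browse the bundled examples.
git clone https://github.com/NVIDIA/modulus.git
cd modulus/examples
ls                                        # pick an example directory, e.g. cfd/darcy_fno (illustrative)
python cfd/darcy_fno/train_fno_darcy.py   # illustrative script name; see the example's README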

Modulus on DGX Cloud (Base Command Platform)

Jobs using the Modulus NGC Container on Base Command Platform clusters can be launched either by using the NGC CLI tool or by using the Base Command Platform Web UI. To use the NGC CLI tool, configure the Base Command Platform user, team, organization, and cluster information using the ngc config command as described here.
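
A minimal sketch of that one-time setup is shown below; ngc config set prompts interactively, and the values in the comments are placeholders:

ngc config set
# Prompts for your NGC API key, CLI output format, org, team, and ACE, e.g.:
#   org:  <your-org>
#   team: <your-team>
#   ace:  <your-ace>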

An example command to launch the container on a single-GPU instance is:

ngc batch run --name "My-1-GPU-Modulus-job" --instance dgxa100.80g.1.norm --commandline "sleep 30" --result /results --image "nvidia/modulus/modulus:24.01"

For details on running Modulus in a multi-GPU/multi-node configuration, refer to this Technical Blog and the Modulus Documentation.
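
As an illustration of scaling within a single node, the sketch below requests an 8-GPU instance and launches training with torchrun inside the container; the instance name follows the pattern of the single-GPU example above, and train.py is a placeholder for your example's training script:

ngc batch run --name "My-8-GPU-Modulus-job" --instance dgxa100.80g.8.norm --commandline "torchrun --nproc_per_node=8 train.py" --result /results --image "nvidia/modulus/modulus:24.01"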

For more details on running on DGX Cloud, please refer to the NVIDIA BCP User Guide.

Modulus on Public Cloud instances (AWS, GCP, and Azure)

Modulus can be used on public cloud instances such as AWS, GCP, and Azure. To run Modulus:

  1. Get your GPU instance on AWS, GCP, or Azure.
  2. Use the NVIDIA GPU-Optimized VMI on the cloud instance. For detailed instructions on setting up the VMI, refer to NGC Certified Public Clouds.
  3. Once the instance spins up, follow the Docker instructions above to load the Modulus container (see the sketch after this list).
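
For instance, on a VM created from the NVIDIA GPU-Optimized VMI (which comes with Docker and the NVIDIA Container Toolkit preinstalled), pulling and starting the 24.01 container looks like the sketch below; the image path assumes the NGC registry (nvcr.io):

docker pull nvcr.io/nvidia/modulus/modulus:24.01
docker run --gpus all --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 --rm -it nvcr.io/nvidia/modulus/modulus:24.01 bash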

For key features, refer to the NVIDIA Modulus Release Notes.

Modulus Forum

Please visit the Modulus Forum for:

  • Latest news and announcements on Modulus
  • Technical support
  • Bug reports
  • Customer success stories

License

By pulling and using the container, you accept the terms and conditions of this SOFTWARE DEVELOPER KITS, SAMPLES AND TOOLS LICENSE AGREEMENT.