NVIDIA Modulus


A Framework for Developing Physics Machine Learning Neural Network Models

NVIDIA Modulus is a neural network framework that blends the power of physics in the form of governing partial differential equations (PDEs) with data to build high-fidelity, parameterized surrogate models with near-real-time latency. Whether you’re looking to get started with AI-driven physics problems or designing digital twin models for complex non-linear, multi-physics systems, NVIDIA Modulus can support your work.

DOWNLOAD NOW FROM NGC

[Demo]: Accelerating Extreme Weather Prediction with FourCastNet

[Demo]: Siemens Energy HRSG Digital Twin Simulation Using NVIDIA Modulus and Omniverse

Benefits

Scalable Performance

Solves larger problems faster by scaling from single-GPU to multi-node implementations.

AI Toolkit

Offers building blocks for developing physics machine learning surrogate models that combine both physics and data. The framework is generalizable to different domains and use cases—from engineering simulations to life sciences and from forward simulations to inverse/data assimilation problems.

Near-Real-Time Inference

Provides a parameterized system representation that solves for multiple scenarios in near real time: train the model once offline, then run inference repeatedly in real time.

Easy to Adopt

Includes APIs for domain experts to work at a higher level of abstraction. Extensible to new applications with detailed reference applications serving as starting points.

Modulus Multi-GPU and Multi-Node Performance

NVIDIA Modulus supports multi-GPU and multi-node scaling using Horovod: training runs as multiple processes, each targeting a single GPU, with collective communication handled by the NVIDIA Collective Communications Library (NCCL) and the Message Passing Interface (MPI).
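
The snippet below is a minimal sketch of the Horovod pattern described above, assuming a plain PyTorch training loop with a placeholder network and loss; it is illustrative only and is not Modulus source code. Each process pins itself to one GPU, gradients are averaged across ranks (NCCL handles the GPU collectives), and weights are broadcast from rank 0 so all ranks start identically.

```python
# Minimal Horovod data-parallel training sketch (illustrative only, not Modulus source).
# Launch with, e.g.:  horovodrun -np 8 python train.py
import torch
import horovod.torch as hvd

hvd.init()                                   # one process per GPU
torch.cuda.set_device(hvd.local_rank())      # pin this process to its local GPU

model = torch.nn.Sequential(                 # placeholder surrogate network
    torch.nn.Linear(3, 256), torch.nn.SiLU(), torch.nn.Linear(256, 1)
).cuda()

# Standard Horovod recipe: scale the learning rate by the number of workers.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3 * hvd.size())

# Wrap the optimizer so gradients are averaged across ranks via allreduce,
# and make sure every rank starts from identical weights and optimizer state.
optimizer = hvd.DistributedOptimizer(optimizer, named_parameters=model.named_parameters())
hvd.broadcast_parameters(model.state_dict(), root_rank=0)
hvd.broadcast_optimizer_state(optimizer, root_rank=0)

for step in range(1000):
    x = torch.rand(4096, 3, device="cuda")   # each rank samples its own batch
    loss = model(x).pow(2).mean()            # stand-in for a physics-informed loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```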

This plot shows the weak scaling performance of Modulus on a field-programmable gate array (FPGA) test problem running on up to 32 NVIDIA V100 Tensor Core GPUs in four NVIDIA DGX™-1 systems. The scaling efficiency from one to 32 GPUs is more than 85 percent. This data was collected using Modulus v. 21.06.

Modulus Weak Scaling Across Multiple GPUs


Features

Modulus is a multi-physics framework that generalizes across multiple configurations through parameterized geometry, enabling rapid design space exploration with a single training run that covers all configurations.

Training Pipeline for PhysicsML

Modulus provides a framework for modeling PDEs together with their boundary conditions, covering the end-to-end pipeline from setting up input tensors from the geometry to training at scale.
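
To make the idea concrete, here is a hedged sketch of such a pipeline for a 1D Poisson problem, u''(x) = -sin(pi x) with u(0) = u(1) = 0, written in plain PyTorch rather than the Modulus API: the loss combines a PDE residual on interior collocation points with a boundary-condition residual.

```python
# Illustrative physics-informed training loop for u''(x) = -sin(pi*x), u(0) = u(1) = 0.
# Conceptual sketch in plain PyTorch, not the Modulus API.
import math
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(1, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(5000):
    # Interior collocation points: enforce the PDE residual via automatic differentiation.
    x = torch.rand(256, 1, requires_grad=True)
    u = net(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    pde_residual = d2u + torch.sin(math.pi * x)   # u'' + sin(pi*x) should be zero

    # Boundary points: enforce u(0) = u(1) = 0.
    xb = torch.tensor([[0.0], [1.0]])
    bc_residual = net(xb)

    loss = pde_residual.pow(2).mean() + bc_residual.pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```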

Explicit Parameterization

Modulus supports explicit parameter specifications, so a surrogate model can be trained across a range of parameter values spanning the design space and then used to infer multiple scenarios simultaneously.
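
One way to picture explicit parameterization is the sketch below (a hypothetical illustration, not the Modulus interface): a design parameter p is fed to the network as an extra input during training, and at inference time a whole sweep of p values is evaluated in a single batched forward pass.

```python
# Conceptual sketch of a parameterized surrogate: the design parameter "p"
# (e.g., an inlet velocity or a geometry dimension) is an extra network input.
# Illustrative only; this is not the Modulus parameterization API.
import torch

surrogate = torch.nn.Sequential(
    torch.nn.Linear(2, 128), torch.nn.Tanh(),   # inputs: (x, p)
    torch.nn.Linear(128, 1),                    # output: field value u(x; p)
)
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)

# Training samples both the spatial coordinate and the design-parameter range.
for step in range(1000):
    x = torch.rand(1024, 1)
    p = torch.rand(1024, 1) * 2.0 + 1.0         # parameter sampled over the range [1, 3]
    u_pred = surrogate(torch.cat([x, p], dim=1))
    u_ref = torch.sin(p * x)                    # stand-in target; a real setup uses physics/data losses
    loss = (u_pred - u_ref).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Inference: evaluate several design points in one batched forward pass.
x_eval = torch.linspace(0, 1, 100).unsqueeze(1).repeat(5, 1)
p_eval = torch.repeat_interleave(torch.tensor([[1.0], [1.5], [2.0], [2.5], [3.0]]), 100, dim=0)
with torch.no_grad():
    u_scenarios = surrogate(torch.cat([x_eval, p_eval], dim=1)).reshape(5, 100)
```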

Novel Neural Network Architectures

It includes curated neural network architectures that are effective for physics-informed machine learning, such as Fourier feature networks, sinusoidal representation networks (SiRENs), Fourier neural operators (FNOs), and adaptive Fourier neural operators (AFNOs).
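
As a generic example of the first of these, a Fourier feature network encodes input coordinates with fixed random sinusoidal frequencies before a standard MLP, which helps capture high-frequency structure in the solution field. The sketch below is illustrative only and is not Modulus's implementation; the frequency scale and layer sizes are arbitrary choices.

```python
# Generic random Fourier feature encoding, as used in Fourier feature networks.
# Illustrative sketch only; Modulus ships its own curated implementations.
import math
import torch


class FourierFeatureNet(torch.nn.Module):
    def __init__(self, in_dim=2, out_dim=1, n_frequencies=64, scale=10.0):
        super().__init__()
        # Fixed random frequency matrix B; inputs are encoded as [sin(2*pi*xB), cos(2*pi*xB)].
        self.register_buffer("B", torch.randn(in_dim, n_frequencies) * scale)
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(2 * n_frequencies, 128), torch.nn.Tanh(),
            torch.nn.Linear(128, out_dim),
        )

    def forward(self, x):
        proj = 2.0 * math.pi * x @ self.B
        features = torch.cat([torch.sin(proj), torch.cos(proj)], dim=-1)
        return self.mlp(features)


model = FourierFeatureNet()
u = model(torch.rand(32, 2))   # predict a field value at 32 (x, y) points
```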

For more details on Modulus (previously known as SimNet), please refer to NVIDIA SimNet: An AI-Accelerated Multi-Physics Simulation Framework. This paper reviews the neural network solver methodology, the Modulus architecture, and the features needed to effectively solve PDEs. It also includes real-world use cases that range from forward, parameterized, multi-physics simulations with turbulence and complex 3D geometries for industrial design optimization to inverse and data assimilation problems that aren’t addressed efficiently by traditional solvers.


DOCUMENTATION

Omniverse Integration

Available in v. 22.03

Modulus is now integrated with NVIDIA Omniverse™ via the Modulus extension that can be used to visualize the outputs of a Modulus-trained model. The extension enables you to import the output results into a visualization pipeline for common output scenarios, such as streamlines and iso-surfaces. It also provides an interface that enables interactive exploration of design variables and parameters for inferring new system behavior and visualizing it in near real time.

What Others Are Saying

“[Modulus]’ clear APIs, clean and easily navigable code, environment, and hardware configurations well handled with dockers, scalability, ease of deployment, and the competent support team made it easy to adopt and has provided some very promising results. This has been great so far, and we look forward to using [Modulus] on problems with much larger dimensions.”


— Cedric Frances, PhD Student, Stanford University


[Using Physics-Informed Deep Learning for Transport in Porous Media]

“[Modulus] is an AI-based physics simulation toolkit that has the potential to unlock amazing capabilities in industrial and scientific simulation.”


— Christopher Lamb, VP of Computing Software, NVIDIA


[The NextPlatform Video]

“We believe that [Modulus] has some unique features like parameterized geometries for multi-physics problems and multi-GPU/multi-node neural network implementation. We are looking forward to incorporating [Modulus] in our research and teaching activities.”


— Professor Hadi Meidani, Civil and Environmental Engineering, University of Illinois at Urbana-Champaign

“The collaboration between Siemens Gamesa and NVIDIA has meant a great step forward in accelerating the computational speed and the deployment speed of our latest algorithms development in such a complex field as computational fluid dynamics.”

— Sergio Dominguez, Siemens Gamesa


[NVIDIA Blog]

“Accelerated computing with AI at data center scale has the potential to deliver millionfold increases in performance to tackle challenges such as mitigating climate change, discovering drugs, and finding new sources of renewable energy. NVIDIA’s AI-enabled framework for scientific digital twins equips researchers to pursue solutions to these massive problems.”


— Ian Buck, VP of Accelerated Computing, NVIDIA


[NVIDIA Press Release]


Download NVIDIA Modulus Today