
Enhancing Digital Twin Models and Simulations with NVIDIA Modulus v22.09

The latest version of NVIDIA Modulus, an AI framework that enables users to create customizable training pipelines for digital twins, climate models, and physics-based modeling and simulation, is now available for download. 

This release of the physics-ML framework, NVIDIA Modulus v22.09, includes key enhancements to increase composition flexibility for neural operator architectures, features to improve training convergence and performance, and most importantly, significant improvements to the user experience and documentation. 

You can download the latest version of the Modulus container from DevZone, NGC, or access the Modulus repo on GitLab.

Neural network architectures

This update extends the Fourier Neural Operator (FNO), physics-informed neural operator (PINO), and DeepONet network architecture implementations to support customization using other built-in networks in Modulus. More specifically, with this update, you can:

  • Achieve better initialization, customization, and generalization across problems with improved FNO, PINO, and DeepONet architectures.
  • Explore new network configurations by pairing the spectral encoder of FNO/PINO with any point-wise network in Modulus, such as SiReNs, Fourier feature networks, or modified Fourier feature networks, as the decoder.
  • Use any network for the branch and trunk nets of DeepONet to experiment with a wide selection of architectures, including physics-informed neural networks (PINNs) in the trunk net and FNO in the branch net (see the sketch after this list).
  • Demonstrate DeepONet improvements with a new DeepONet example for modeling Darcy flow through porous media.
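
The composition pattern itself is easy to see in plain PyTorch. The sketch below is illustrative only and does not use the Modulus API: any two networks can serve as the branch and trunk nets, joined here by a simple dot-product decoder.

```python
import torch
import torch.nn as nn

class DeepONet(nn.Module):
    """Minimal DeepONet: any branch/trunk networks can be plugged in."""
    def __init__(self, branch: nn.Module, trunk: nn.Module):
        super().__init__()
        self.branch = branch  # encodes the sampled input function
        self.trunk = trunk    # encodes the query coordinates

    def forward(self, u_samples, coords):
        b = self.branch(u_samples)                 # (batch, latent)
        t = self.trunk(coords)                     # (batch, latent)
        return (b * t).sum(dim=-1, keepdim=True)   # dot-product decoder

# Any point-wise network can serve as the trunk; here a small tanh MLP.
branch = nn.Sequential(nn.Linear(100, 128), nn.Tanh(), nn.Linear(128, 64))
trunk = nn.Sequential(nn.Linear(2, 128), nn.Tanh(), nn.Linear(128, 64))
model = DeepONet(branch, trunk)

u = torch.randn(16, 100)   # 100 sensor readings of the input function
xy = torch.rand(16, 2)     # query locations
print(model(u, xy).shape)  # torch.Size([16, 1])
```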

Model parallelism has been introduced as a beta feature with model-parallel AFNO. This enables parallelizing the model across multiple GPUs along the channel dimension. This decomposition distributes the FFTs and IFFTs in a highly parallel fashion. The matrix multiplies are partitioned so each GPU holds a different portion of each MLP layer’s weights with appropriate gather, scatter, reductions, and other communication routines implemented for the forward and backward passes.
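To illustrate the partitioning idea (not the Modulus implementation), here is a single-process sketch of Megatron-style channel partitioning: the first layer's weights are split by columns and the second layer's by rows, and summing the partial outputs, which in practice is a torch.distributed all-reduce across GPUs, recovers the unpartitioned result.

```python
import torch

torch.manual_seed(0)
x = torch.randn(8, 64)                        # activations with 64 channels
w1, w2 = torch.randn(64, 256), torch.randn(256, 64)

# Reference: unpartitioned two-layer MLP
ref = torch.relu(x @ w1) @ w2

# Channel-parallel version: column-split w1 and row-split w2 across two "ranks".
# Each rank computes a partial output from its shard; summing the partials
# (an all-reduce in the real multi-GPU setting) recovers the full result.
w1_shards = w1.chunk(2, dim=1)
w2_shards = w2.chunk(2, dim=0)
partials = [torch.relu(x @ a) @ b for a, b in zip(w1_shards, w2_shards)]
out = sum(partials)

print(torch.allclose(ref, out, atol=1e-4))    # True
```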

In addition, support for the self-scalable tanh (Stan) activation function is now available. Stan has been shown to improve convergence and accuracy when training PINN models.
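As a rough sketch of the published Stan formulation, tanh(x) + β·x·tanh(x) with one learnable β per neuron, a drop-in PyTorch module might look like the following (this is not the Modulus implementation):

```python
import torch
import torch.nn as nn

class Stan(nn.Module):
    """Self-scalable tanh: tanh(x) + beta * x * tanh(x), beta learned per neuron."""
    def __init__(self, num_features: int):
        super().__init__()
        self.beta = nn.Parameter(torch.ones(num_features))

    def forward(self, x):
        t = torch.tanh(x)
        return t + self.beta * x * t

net = nn.Sequential(nn.Linear(2, 64), Stan(64), nn.Linear(64, 1))
print(net(torch.rand(8, 2)).shape)  # torch.Size([8, 1])
```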

Finally, support for kernel fusion of the Sigmoid Linear Unit (SiLU) through TorchScript has been added, along with upstream changes to the PyTorch symbolic gradient formula. This is especially useful for problems that require computing higher-order derivatives for physics-informed training, providing up to a 1.4x speedup in such cases.
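The typical pattern that benefits from this fusion is a scripted SiLU network whose second derivatives are needed for a PDE residual. A minimal sketch, using only standard PyTorch calls:

```python
import torch
import torch.nn as nn

# Small SiLU MLP; torch.jit.script lets TorchScript fuse the SiLU kernels,
# including those in the double-backward pass used for PDE residuals.
net = torch.jit.script(nn.Sequential(nn.Linear(1, 64), nn.SiLU(),
                                     nn.Linear(64, 64), nn.SiLU(),
                                     nn.Linear(64, 1)))

x = torch.linspace(0, 1, 128).reshape(-1, 1).requires_grad_(True)
u = net(x)

# First and second derivatives of u with respect to x (e.g., for a u_xx term)
du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
print(d2u.shape)  # torch.Size([128, 1])
```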

Modeling enhancements and training features

Each NVIDIA Modulus release improves how partial differential equations (PDEs) are mapped to neural network models and how well training converges.

New recommended practices in Modulus facilitate scaling and nondimensionalizing PDEs so you can properly scale your system’s units (a small sketch follows this list), including: 

  • Defining a physical quantity with its value and its unit
  • Instantiating a nondimensionalized object to scale the quantity 
  • Tracking the nondimensionalized quantity through the algebraic manipulations
  • Scaling back the nondimensionalized quantity to any target quantity with user-specified units for post-processing purposes
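
As a plain-Python illustration of that workflow (not the Modulus API; the characteristic scales below are assumed for the example):

```python
# Characteristic scales chosen for the system (assumed values for illustration)
L_ref = 0.1      # m    (length scale)
U_ref = 2.0      # m/s  (velocity scale)
nu    = 1.5e-5   # m^2/s (kinematic viscosity of air)

# Nondimensionalize: divide each quantity by its characteristic scale
length_nd   = 0.25 / L_ref          # a 0.25 m dimension becomes 2.5
velocity_nd = 1.0 / U_ref           # a 1.0 m/s inlet becomes 0.5
nu_nd       = nu / (U_ref * L_ref)  # equals 1/Re; carried through the PDE

# ... train on the nondimensional system ...

# Scale back for post-processing in the target units
velocity = velocity_nd * U_ref      # m/s again
print(length_nd, velocity_nd, nu_nd, velocity)
```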

You can now also handle different scales within a system effectively with Selective Equations Term Suppression (SETS). This lets you create different instances of the same PDE and freeze certain terms in each instance, so that the losses for the smaller scales are minimized, improving convergence on stiff PDEs with PINNs. 
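Conceptually, freezing a term means keeping its value in the residual while excluding it from the gradient, which in PyTorch amounts to detaching it. The sketch below illustrates that idea on an assumed Burgers-style residual; it is not the Modulus SETS API.

```python
import torch

def residuals(u_t, u_x, u_xx, u, nu):
    # Full residual of a 1D viscous Burgers-like equation
    full = u_t + u * u_x - nu * u_xx
    # Same PDE with the convection term "frozen" (detached): its gradient no
    # longer dominates, so the loss can also drive the small diffusion term.
    frozen = u_t + (u * u_x).detach() - nu * u_xx
    return full, frozen

# Toy tensors standing in for network-derived derivatives
u, u_t, u_x, u_xx = (torch.randn(32, 1, requires_grad=True) for _ in range(4))
full, frozen = residuals(u_t, u_x, u_xx, u, nu=1e-4)
loss = full.pow(2).mean() + frozen.pow(2).mean()
loss.backward()
```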

In addition, new Modulus APIs, configured in the Hydra configuration YAML file, enable you to terminate training based on convergence criteria such as the total loss, individual loss terms, or another user-specified metric.
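The sketch below shows the general shape of such a convergence-based stopping rule in plain Python; the actual criterion in Modulus is set through Hydra YAML fields, which are not reproduced here.

```python
import math

def train_step(step):
    # Stand-in for one training iteration returning the current total loss
    return math.exp(-step / 300) + 1e-3

patience, min_delta = 100, 1e-4      # stop after 100 steps without improvement
best, stale = float("inf"), 0
for step in range(10_000):
    loss = train_step(step)
    if best - loss > min_delta:      # meaningful improvement: reset the counter
        best, stale = loss, 0
    else:
        stale += 1
    if stale >= patience:
        print(f"stopping at step {step}, loss {loss:.4f}")
        break
```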

The new causal weighting scheme addresses the bias of continuous-time PINNs that violate physical causality in transient problems. By reformulating the residual and initial-condition losses, you can achieve better convergence and accuracy of PINNs for dynamic systems.
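A rough sketch of the causal weighting idea, following the published formulation for continuous-time PINNs (the tolerance eps and the per-time-slab loss layout are assumptions for illustration): later time slabs are down-weighted until the residuals of earlier slabs are already small.

```python
import torch

def causal_weighted_loss(residual_losses: torch.Tensor, eps: float = 1.0):
    """residual_losses: per-time-slab PDE residual losses, ordered in time."""
    # Exclusive prefix sum: accumulated loss of all *earlier* slabs
    cum = torch.cumsum(residual_losses, dim=0) - residual_losses
    # Weight each slab by how well the earlier dynamics are already resolved
    weights = torch.exp(-eps * cum).detach()
    return (weights * residual_losses).mean()

losses = torch.rand(10, requires_grad=True)  # stand-in for per-slab losses
print(causal_weighted_loss(losses))
```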

Modulus training performance, scalability, and usability 

Each NVIDIA Modulus release focuses on improving training performance and scalability. With this latest release, FuncTorch was integrated into Modulus for faster gradient calculations in PINN training. Regular PyTorch autograd uses reverse-mode automatic differentiation and has to calculate Jacobian and Hessian terms row by row in a for loop. FuncTorch removes unnecessary weight-gradient computations and can calculate Jacobians and Hessians more efficiently by combining reverse- and forward-mode automatic differentiation, thereby improving training performance.
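The forward-over-reverse pattern looks roughly like this with the PyTorch 1.13-era functorch API (a standalone sketch, not the Modulus integration):

```python
import torch
from functorch import jacrev, jacfwd, vmap

# A scalar field u(x) represented by a small network
net = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.Tanh(),
                          torch.nn.Linear(64, 1))

def u(x):
    return net(x).squeeze(-1)

x = torch.rand(128, 2)

# Forward-over-reverse: full per-point Hessians without a row-by-row loop
grad = vmap(jacrev(u))(x)           # (128, 2) gradients
hess = vmap(jacfwd(jacrev(u)))(x)   # (128, 2, 2) Hessians
print(grad.shape, hess.shape)
```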

The Modulus v22.09 documentation improvements provide more context and detail about the key concepts of the framework’s workflow to help new users. 

Enhancements have been made to the Modulus Overview with more example-guided workflows for physics-only driven, purely data-driven, and combined physics- and data-driven modeling approaches. Modulus users can now follow improved introductory examples that build up each workflow’s key concepts step by step. 

Get more details about all Modulus functionalities by visiting the Modulus User Guide and the Modulus Configuration page. You can also provide feedback and contributions through the Modulus GitLab repo.

Check out the NVIDIA Deep Learning Institute self-paced course, Introduction to Physics-Informed Machine Learning with Modulus. Join us for these GTC 2022 featured sessions to learn more about NVIDIA Modulus research and breakthroughs.
