NVIDIA SimNet

AI-Accelerated Simulation Toolkit


Simulations are pervasive in science and engineering, but they are computationally expensive and don't easily accommodate measured data from sources such as sensors or cameras. NVIDIA SimNet™ is a simulation toolkit that addresses these challenges by combining AI and physics. Whether you're getting started with AI-driven physics simulations or working on complex nonlinear physics problems, NVIDIA SimNet is your toolkit for solving forward, inverse, and data assimilation problems.





Scalable Performance

Solves larger problems faster with accelerated linear algebra (XLA), automatic mixed precision (AMP), and a multi-GPU/multi-node implementation.

Broad Applicability

Models multiple physics types in forward and inverse simulations with high accuracy and reliable convergence.

Fast Turnaround Time

Provides a parameterized system representation that solves multiple scenarios simultaneously.

Easy to Adopt

Provides application programming interfaces (APIs) for implementing new physics and geometry, along with detailed user guide examples.



SimNet Multi-GPU/Multi-Node Performance



NVIDIA SimNet supports multi-GPU and multi-node scaling using Horovod. This allows for multiple processes, each targeting a single GPU, with collective communication using the NVIDIA Collective Communications Library (NCCL) and MPI.
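The core idea behind this data-parallel pattern can be sketched numerically. In the hypothetical NumPy sketch below (not SimNet's actual training loop), each worker computes gradients on its own shard of the batch, and an allreduce-style average reproduces the gradient of the full batch, so every replica applies the same update; Horovod performs this averaging over NCCL/MPI across GPUs and nodes.

```python
import numpy as np

# Simulated workers in one process; real allreduce runs over NCCL/MPI.
rng = np.random.default_rng(0)
x = rng.normal(size=8)          # full batch of inputs
w = 1.5                         # a single scalar "weight"

# loss = mean((w*x)^2); gradient wrt w = mean(2*w*x^2)
full_grad = np.mean(2.0 * w * x**2)

shards = np.split(x, 4)         # four workers, two samples each
local = [np.mean(2.0 * w * s**2) for s in shards]
allreduced = np.mean(local)     # collective average across workers

print(np.isclose(allreduced, full_grad))   # True: replicas stay in sync
```

Because the averaged gradient equals the full-batch gradient, adding workers grows the effective batch without changing the update rule, which is what makes the weak scaling shown below possible.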

This plot shows the weak scaling performance of SimNet on an FPGA test problem running on up to 32 V100 GPUs across four DGX-1 systems. The scaling efficiency from one to 32 GPUs exceeds 85 percent.

SimNet Weak Scaling Across Multiple GPUs



Features


Novel Neural Network Architecture

SimNet provides a framework to model partial differential equations (PDEs) along with boundary and initial conditions. A key technique for solving flow problems is modeling the mass balance condition as both a hard constraint and a global constraint, which improves accuracy as well as convergence. Additionally, for multi-physics problems spanning multiple domains, training separate networks for the different physics and coupling them at the domain interfaces works well.
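To illustrate the hard-constraint idea with a generic physics-informed-network pattern (this is a minimal sketch, not SimNet's actual API), a boundary condition can be built directly into the model output so that it holds exactly for any network weights, rather than being penalized in the loss:

```python
import numpy as np

# Hypothetical illustration: a Dirichlet condition u(0) = u(1) = 0 is
# satisfied exactly by construction, thanks to the x*(1-x) factor.
def raw_net(x, w=1.0):
    # stand-in for a neural network; any smooth function works here
    return np.sin(w * x)

def u(x):
    return x * (1.0 - x) * raw_net(x)   # hard-constrained ansatz

print(u(0.0), u(1.0))   # both are exactly 0.0, by construction
```

Because the constraint can never be violated, the optimizer spends its capacity fitting the PDE residual in the interior, which is one reason hard constraints tend to improve accuracy and convergence.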

Design Space Exploration

While traditional numerical solvers are designed to solve one configuration at a time, SimNet can work with multiple discrete geometries or a parameterized geometry. The neural networks can be trained on multiple scenarios simultaneously and can evaluate each configuration in real time during inference, allowing the design space to be explored far more efficiently.
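The pattern can be sketched as follows, assuming a surrogate that takes both a spatial coordinate x and a design parameter p (say, a channel dimension) as inputs; `surrogate` below is a hypothetical closed-form stand-in for a trained network, not SimNet code:

```python
import numpy as np

# Hypothetical parameterized surrogate: one model covers a whole family of
# geometries indexed by p, so each candidate is evaluated without re-solving.
def surrogate(x, p):
    return np.tanh(p * x) * (1.0 - x)

x = np.linspace(0.0, 1.0, 5)
for p in (0.5, 1.0, 2.0):          # three design candidates
    field = surrogate(x, p)        # real-time evaluation per configuration
    print(p, field.round(3))
```

A traditional solver would need one full simulation per value of p; here each configuration is a single forward pass, which is what makes interactive design space exploration feasible.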

Optimized for Multi-Physics Problems

SimNet not only solves multi-physics problems more efficiently through parameterized geometries but also expands the scope of traditional simulations beyond currently solvable use cases. For example, a network retains the knowledge gained during training and can later solve the learned scenarios in real time. Similarly, data assimilation and inverse problems that traditional numerical solvers cannot readily handle are easily tackled by neural networks.
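The inverse-problem idea can be shown in miniature: treat an unknown physical coefficient as a trainable parameter and fit it to observations. The sketch below (a generic pattern, not SimNet's API) recovers the decay rate k in du/dt = -k·u from synthetic data by gradient descent on a data-misfit loss:

```python
import numpy as np

# Hypothetical inverse problem: the model u(t) = exp(-k*t) is known, the
# coefficient k is not. Synthetic observations use k_true = 2.0.
t = np.linspace(0.0, 1.0, 50)
k_true = 2.0
u_obs = np.exp(-k_true * t)

k = 0.5                                   # initial guess
lr = 0.5
for _ in range(500):
    u_pred = np.exp(-k * t)
    resid = u_pred - u_obs
    grad = np.mean(2.0 * resid * (-t) * u_pred)   # d(loss)/dk
    k -= lr * grad

print(round(k, 3))   # k approaches k_true = 2.0
```

In a physics-informed network the same mechanism applies, with the unknown coefficient optimized jointly with the network weights against both the PDE residual and the observed data.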





What Others Are Saying


“We believe that SimNet has some unique features like parameterized geometries for multi-physics problems and multi-GPU/multi-node neural network implementation. We are looking forward to incorporating SimNet in our research and teaching activities.”

Professor Hadi Meidani, Civil and Environmental Engineering, University of Illinois at Urbana-Champaign

“SimNet is an AI-based physics simulation toolkit that has the potential to unlock amazing capabilities in industrial and scientific simulation.”

Christopher Lamb, VP of Computing Software, NVIDIA



Please send feedback and comments to the NVIDIA SimNet team.



To request access to NVIDIA SimNet, please join our early access program.

NOTE: Please register using your organization's email address; we are unable to admit members using personal emails. If you're already registered, you can change the email address on your account: while logged in, click your name in the upper-right corner, select ‘Edit profile’, and use the ‘Change email address’ button. Alternatively, log out and create a second account with your organization email address.


Join now



