GTC Silicon Valley 2019, ID S9565: Learning-Based Predictive Models: A New Approach to Integrating Large-Scale Simulations and Experiments
Brian Van Essen (Lawrence Livermore National Laboratory)
Large-scale scientific endeavors often focus on improving predictive capabilities by challenging theory-driven simulations with experimental data. We'll describe our work at LLNL using advances in deep learning, computational workflows, and computer architectures to develop an improved predictive capability: the learned predictive model. We'll discuss the advances in machine learning architectures and methods needed to handle the challenges of ICF science, including rich, multimodal data (images, scalars, time series) and strong nonlinearities. These include advances in the scalability of our deep learning toolkit LBANN, an optimized asynchronous, GPU-aware communication library, and state-of-the-art scientific workflows. We'll also show how the combination of high-performance NVLink and the rich GPU architecture of Sierra enables us to train neural networks efficiently and begin to develop learned predictive models based on a massive data set.
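
To make the multimodal setup concrete, below is a minimal, hypothetical sketch (written in JAX, not LBANN's actual API) of a surrogate model that fuses image, scalar, and time-series inputs into a single prediction. All layer sizes, parameter names, and the toy training batch are illustrative assumptions, not details from the talk.

import jax
import jax.numpy as jnp

def init_params(key, img_dim=64 * 64, scalar_dim=8, series_dim=32, hidden=128, out_dim=4):
    # Randomly initialize one linear encoder per modality plus a fused prediction head.
    k1, k2, k3, k4 = jax.random.split(key, 4)
    scale = 1e-2
    return {
        "img":    scale * jax.random.normal(k1, (img_dim, hidden)),
        "scalar": scale * jax.random.normal(k2, (scalar_dim, hidden)),
        "series": scale * jax.random.normal(k3, (series_dim, hidden)),
        "head":   scale * jax.random.normal(k4, (3 * hidden, out_dim)),
    }

def predict(params, image, scalars, series):
    # Encode each modality separately, concatenate, and map to predicted observables.
    h_img = jax.nn.relu(image.reshape(-1) @ params["img"])
    h_sca = jax.nn.relu(scalars @ params["scalar"])
    h_ser = jax.nn.relu(series @ params["series"])
    fused = jnp.concatenate([h_img, h_sca, h_ser])
    return fused @ params["head"]

def loss(params, batch):
    # Mean-squared error against simulated (or experimental) target quantities.
    preds = jax.vmap(lambda im, sc, se: predict(params, im, sc, se))(
        batch["image"], batch["scalars"], batch["series"])
    return jnp.mean((preds - batch["target"]) ** 2)

# Gradients on one synthetic batch; a production run would shard batches across
# many GPUs and rely on asynchronous, GPU-aware collective communication.
key = jax.random.PRNGKey(0)
batch = {
    "image":   jax.random.normal(key, (16, 64, 64)),
    "scalars": jax.random.normal(key, (16, 8)),
    "series":  jax.random.normal(key, (16, 32)),
    "target":  jax.random.normal(key, (16, 4)),
}
grads = jax.grad(loss)(init_params(key), batch)

The per-modality encoders reflect the abstract's point that ICF data mixes images, scalars, and time series in one model; everything else (optimizer, data pipeline, distributed training) is omitted for brevity.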