GTC Silicon Valley 2019, Session S9553: Demystifying Deep Learning Infrastructure Choices Using MLPerf Benchmark Suite
Note: This video may require joining the NVIDIA Developer Program or logging in.
Lizy John (University of Texas), Ramesh Radhakrishnan (Dell EMC)
We'll describe MLPerf, a new benchmark suite proposed by the deep learning community for machine learning workloads. We'll present a quantitative analysis of an early version (0.5) of the suite, evaluating its performance on Turing, Volta, and Pascal GPUs to demonstrate the impact of NVIDIA GPU architecture across a range of DL applications. We'll also evaluate system-level technologies, comparing NVLink and PCIe topologies on server- and workstation-class platforms, to show how system architecture affects DL training workloads. In addition, we plan to discuss our work characterizing MLPerf benchmark performance with profiling tools (GPU, CPU, memory, and I/O), our study of how hyperparameter tuning (batch size, learning rate, SGD optimizer) affects MLPerf performance, and how to map real-world application use cases onto the MLPerf suite and quantify results for specific DL practitioner use cases.
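For reference, the hyperparameters named above (batch size, learning rate, SGD optimizer settings) are the kind typically exposed in a training loop. The minimal PyTorch sketch below illustrates where those knobs sit; the model, data, and values are illustrative placeholders, not the configuration used in the study.

# Illustrative only: the model, data, and hyperparameter values are placeholders,
# not the configuration evaluated in the MLPerf study described above.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Hyperparameters of the kind swept in such a study (example values assumed here).
batch_size = 256
learning_rate = 0.1
momentum = 0.9

# Synthetic stand-in data so the sketch is self-contained.
x = torch.randn(1024, 32)
y = torch.randint(0, 10, (1024,))
loader = DataLoader(TensorDataset(x, y), batch_size=batch_size, shuffle=True)

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate, momentum=momentum)
loss_fn = nn.CrossEntropyLoss()

# One pass over the synthetic data using the chosen batch size and SGD settings.
for inputs, targets in loader:
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    optimizer.step()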