GTC Silicon Valley 2019: Faster Neural Nets with Hardware-Aware Architecture Learning
Session ID: S9645
Elad Eban (Google)
Academic design of deep neural networks has historically focused on maximizing accuracy at almost any cost. Many practical applications, however, impose real-world constraints such as model size, computational complexity (FLOPs), inference latency, and the performance characteristics of the target hardware. We'll discuss MorphNet, our approach to automating the design of neural nets under constraint-specific and hardware-specific tradeoffs while remaining lightweight and scalable to large datasets. We show how MorphNet can be used to design neural nets that reduce model size, FLOP count, or inference latency at the same accuracy, across domains such as ImageNet, OCR, and AudioSet. Finally, we show how MorphNet produces different architectures when optimizing for P100 versus V100 GPUs.
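For readers new to the approach: MorphNet, as described in the talk and the accompanying paper, alternates a shrinking step (training with a resource-weighted sparsifying regularizer on batch-norm scale factors, so that costly channels must earn their keep) and an expanding step (uniformly re-widening the pruned network to spend any remaining budget). The sketch below is a minimal, illustrative rendering of that idea in plain Python; the class and function names are assumptions for illustration, not the MorphNet library's API, and real per-channel costs depend on both input and output widths rather than a fixed constant.

```python
# Minimal sketch of a MorphNet-style shrink-and-expand loop.
# All names (ConvLayer, resource_aware_penalty, shrink, expand) are
# illustrative assumptions, not the actual MorphNet API.

from dataclasses import dataclass
from typing import List


@dataclass
class ConvLayer:
    name: str
    out_channels: int
    flops_per_channel: float  # resource cost contributed by one output channel (simplified)
    gammas: List[float]       # batch-norm scale factors, one per output channel


def resource_aware_penalty(layers: List[ConvLayer], strength: float) -> float:
    """Sparsifying regularizer added to the training loss: L1 on batch-norm
    gammas, weighted by how much each channel costs in the target resource
    (FLOPs, model size, or latency on a specific GPU)."""
    return strength * sum(
        layer.flops_per_channel * abs(g)
        for layer in layers
        for g in layer.gammas
    )


def shrink(layers: List[ConvLayer], threshold: float = 1e-2) -> List[int]:
    """After training with the penalty, keep only channels whose gamma
    survived; this yields a narrower architecture that fits the budget."""
    return [sum(abs(g) > threshold for g in layer.gammas) for layer in layers]


def expand(widths: List[int], multiplier: float) -> List[int]:
    """Uniformly re-widen the shrunken network to use any leftover budget,
    producing the final learned architecture."""
    return [max(1, round(w * multiplier)) for w in widths]


if __name__ == "__main__":
    # Toy two-layer network; in practice the gammas come from a trained model.
    net = [
        ConvLayer("conv1", 64, flops_per_channel=1.0e6,
                  gammas=[0.5] * 40 + [0.001] * 24),
        ConvLayer("conv2", 128, flops_per_channel=2.5e6,
                  gammas=[0.3] * 100 + [0.002] * 28),
    ]
    print("penalty:", resource_aware_penalty(net, strength=1e-8))
    narrow = shrink(net)         # e.g. [40, 100]
    final = expand(narrow, 1.3)  # e.g. [52, 130]
    print("learned widths:", final)
```

Swapping the per-channel cost term is what makes the method hardware-aware: weighting by measured latency on a P100 versus a V100 steers the regularizer toward different layers, which is why the learned architectures differ across the two platforms.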