GTC Silicon Valley-2019: CNN Inference with cuDNN: Common Pitfalls and Best Practices
Session ID: S9644
You may already use NVIDIA's cuDNN library to accelerate your deep neural network inference, but are you getting the most out of it to truly unleash the tremendous performance of NVIDIA's newest GPU architectures, Volta and Turing? We'll discuss how to avoid the most common pitfalls when porting your CPU-based inference to the GPU, and demonstrate best practices in a step-by-step optimization of an example network. Learn how to deploy your deep neural network inference in the fastest and most memory-efficient way, using cuDNN and Tensor Cores, NVIDIA's revolutionary technology that delivers groundbreaking performance in FP16, INT8, and INT4 inference on Volta and Turing.