GTC 2020: Workstation Inference with TensorRT, cuDNN, and WinML
Chris Hebert, NVIDIA | Stefan Schoenefeld, NVIDIA | Chris Alvarez-Russell, NVIDIA | Sven Middelberg, NVIDIA | Tim Biedert, NVIDIA | Don Brittain, NVIDIA
Our experts have extensive experience moving AI inference models from research into production environments, and they are happy to share those experiences, tools, and techniques with you, including:

- Moving from research to production
- Minimizing device memory usage
- Performance optimization
- Integration with existing code bases

Join us to learn more about the constraints of deploying AI inference models on Windows workstations using a local GPU.