NVIDIA TensorRT Workflows: Importing from Frameworks into Your Inference Solution

Craig Wittenbrink, NVIDIA | Kismat Singh, NVIDIA | Pranav Marathe, NVIDIA | Rajeev Rao, NVIDIA | Dilip Sequeira, NVIDIA

GTC 2020

The TensorRT inference library is most easily used by importing trained models through ONNX. In this session, we cover the fundamentals of the workflow for importing deep learning models with TensorRT's ONNX parser and putting them into production. We'll discuss end-to-end solutions spanning training, export to ONNX, import into TensorRT, and deployment with TensorRT Inference Server.
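As a rough illustration of the parsing workflow the session covers, the sketch below uses the TensorRT Python API (as it looked in the TensorRT 7 era) to import an ONNX model and build an inference engine. The model path is a placeholder, and actually building an engine requires an NVIDIA GPU with TensorRT installed; treat this as an outline of the steps, not a definitive implementation.

```python
# Sketch of the ONNX-import workflow: parse a trained model exported
# to ONNX, then build a TensorRT engine from it. "model.onnx" is a
# placeholder path; running this requires an NVIDIA GPU and TensorRT.
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine_from_onnx(onnx_path):
    builder = trt.Builder(TRT_LOGGER)
    # ONNX models require an explicit-batch network definition.
    flag = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    network = builder.create_network(flag)
    parser = trt.OnnxParser(network, TRT_LOGGER)

    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            # Surface parser diagnostics before giving up.
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise RuntimeError("Failed to parse ONNX model")

    config = builder.create_builder_config()
    config.max_workspace_size = 1 << 30  # 1 GiB of builder scratch space
    return builder.build_engine(network, config)

engine = build_engine_from_onnx("model.onnx")  # placeholder path
```

The resulting engine can be serialized to disk and later loaded by a serving layer such as TensorRT Inference Server, keeping the training framework out of the deployment path.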