How to Speed Up Deep Learning Inference Using TensorRT
Nadeem Mohammad, posted Nov 08 2018
Welcome to this introduction to TensorRT, our platform for deep learning inference. You will learn how to deploy a deep learning application onto a GPU to increase throughput and reduce latency during inference.