Technical Walkthrough

Achieving FP32 Accuracy for INT8 Inference Using Quantization Aware Training with NVIDIA TensorRT

Deep learning is revolutionizing the way that industries are delivering products and services. These services include object detection, classification, and... 17 MIN READ
Technical Walkthrough

Int4 Precision for AI Inference

INT4 Precision Can Bring an Additional 59% Speedup Compared to INT8. If there’s one constant in AI and deep learning, it’s never-ending optimization to wring... 5 MIN READ
Technical Walkthrough

MLPerf Inference: NVIDIA Innovations Bring Leading Performance

New TensorRT 6 Features Combine with Open-Source Plugins to Further Accelerate Inference. Inference is where AI goes to work. Identifying diseases. Answering... 7 MIN READ
Technical Walkthrough

Object Detection on GPUs in 10 Minutes

Object detection remains the primary driver for applications such as autonomous driving and intelligent video analytics. Object detection applications require... 21 MIN READ
Technical Walkthrough

Tips for Optimizing GPU Performance Using Tensor Cores

Our most popular question is "What can I do to get great GPU performance for deep learning?" We’ve recently published a detailed Deep Learning Performance... 13 MIN READ