Technical Walkthrough

Achieving FP32 Accuracy for INT8 Inference Using Quantization Aware Training with NVIDIA TensorRT

TensorRT is an SDK for high-performance deep learning inference. With TensorRT 8.0, you can import models trained using Quantization Aware Training (QAT)… 17 MIN READ
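The core idea behind QAT is inserting "fake quantization" ops during training, so the network learns weights that survive INT8 rounding at inference time. A minimal sketch of symmetric per-tensor fake quantization, in plain NumPy for illustration (this is not the TensorRT or pytorch-quantization API):

```python
import numpy as np

def fake_quantize(x, num_bits=8):
    """Simulate the INT8 quantize -> dequantize round trip that QAT
    inserts into the forward pass during training.

    Symmetric per-tensor scheme: scale floats into the signed integer
    range, round, clip, then scale back. The rounding error introduced
    here is what the network learns to tolerate.
    """
    qmax = 2 ** (num_bits - 1) - 1              # 127 for INT8
    scale = np.abs(x).max() / qmax              # per-tensor amax calibration
    if scale == 0:
        return x
    q = np.clip(np.round(x / scale), -qmax - 1, qmax)
    return q * scale                            # dequantize back to float

w = np.array([0.02, -1.27, 0.635, 1.27])
wq = fake_quantize(w)
# wq is close to w; the small residual is the quantization error
# that training learns to absorb
```

After training with these ops in place, the learned scales can be exported with the model so inference runs in true INT8 without the accuracy drop of post-training quantization.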
Technical Walkthrough

Int4 Precision for AI Inference

INT4 Precision Can Bring an Additional 59% Speedup Compared to INT8. If there’s one constant in AI and deep learning, it’s never-ending optimization to wring… 5 MIN READ
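Part of INT4's appeal is that it halves INT8's storage again: two 4-bit values fit in one byte. A hedged sketch of the packing arithmetic (function names are mine, not a TensorRT API):

```python
def pack_int4(a, b):
    """Pack two signed 4-bit values (each in [-8, 7]) into one byte."""
    assert -8 <= a <= 7 and -8 <= b <= 7
    return ((a & 0xF) << 4) | (b & 0xF)

def unpack_int4(byte):
    """Recover the two signed 4-bit values from a packed byte."""
    def sign_extend(n):                 # map unsigned 0..15 back to -8..7
        return n - 16 if n >= 8 else n
    return sign_extend(byte >> 4), sign_extend(byte & 0xF)

packed = pack_int4(-3, 7)
assert unpack_int4(packed) == (-3, 7)
```

The doubled density means twice as many values move per memory transaction, which is where much of the additional speedup over INT8 comes from.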
Technical Walkthrough

MLPerf Inference: NVIDIA Innovations Bring Leading Performance

New TensorRT 6 Features Combine with Open-Source Plugins to Further Accelerate Inference. Inference is where AI goes to work. Identifying diseases. 7 MIN READ
Technical Walkthrough

Object Detection on GPUs in 10 Minutes

Object detection remains the primary driver for applications such as autonomous driving and intelligent video analytics. Object detection applications require… 21 MIN READ
Technical Walkthrough

Tips for Optimizing GPU Performance Using Tensor Cores

Our most popular question is "What can I do to get great GPU performance for deep learning?" We’ve recently published a detailed Deep Learning Performance Guide… 13 MIN READ
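One widely cited tip from NVIDIA's performance guidance is that Tensor Cores engage most efficiently when GEMM dimensions are multiples of 8 for FP16. A small helper that rounds a layer size up, assuming that rule of thumb:

```python
def pad_to_multiple(dim, multiple=8):
    """Round a layer dimension up to the next multiple (8 for FP16 GEMMs),
    so matrix shapes align with Tensor Core tile sizes."""
    return ((dim + multiple - 1) // multiple) * multiple

assert pad_to_multiple(33708) == 33712   # e.g. pad an awkward vocab size
assert pad_to_multiple(64) == 64         # already aligned; unchanged
```

The padding wastes a few parameters but lets the hardware run the math-heavy path; the right multiple can vary by data type and GPU generation, so check the performance guide for your setup.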