New Update to the NVIDIA Deep Learning SDK Now Helps Accelerate Inference

Sep 13, 2016

The latest update to the NVIDIA Deep Learning SDK includes the NVIDIA TensorRT deep learning inference engine (formerly GIE) and the new NVIDIA DeepStream SDK.

TensorRT delivers high-performance inference for production deployment of deep learning applications. The latest release delivers up to 3x more throughput while using 61% less memory with the new INT8 optimized precision.

The NVIDIA DeepStream SDK simplifies development of high-performance video analytics applications powered by deep learning. Using a high-level C++ API and a high-performance runtime, developers can rapidly integrate advanced video inference capabilities, including optimized precision and GPU-accelerated transcoding, to deliver faster, more responsive AI-powered services such as real-time video categorization.

Learn more about the NVIDIA Deep Learning SDK >
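To see why INT8 precision cuts inference memory, consider that each weight shrinks from a 4-byte FP32 value to a single signed byte plus a shared scale factor. The sketch below is a generic symmetric linear quantization in Python for illustration only; TensorRT's actual INT8 pipeline uses calibration and is not shown here, and the function names are hypothetical.

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric linear quantization of FP32 weights to INT8.

    Illustration only: maps the largest weight magnitude to 127 and
    rounds everything else onto the same scale. TensorRT's
    calibration-based INT8 mode is more sophisticated than this.
    """
    scale = np.max(np.abs(weights)) / 127.0  # one FP32 scale shared by all weights
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate FP32 weights from the INT8 representation."""
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.2, 0.03, 1.27], dtype=np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# INT8 storage is 1 byte per weight vs. 4 bytes for FP32
assert q.nbytes == w.nbytes // 4
```

The round-trip error is bounded by half the scale per weight, which is why well-calibrated INT8 inference can preserve accuracy while sharply reducing memory traffic.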