NVIDIA DeepStream SDK

High-performance deep learning inference for video analytics

NVIDIA DeepStream simplifies development of high-performance video analytics applications powered by deep learning. With a high-level C++ API and a high-performance runtime, developers can rapidly integrate advanced video inference capabilities, including optimized-precision inference and GPU-accelerated transcoding, to deliver faster, more responsive AI-powered services such as real-time video categorization.

For real-time video content analysis, the DeepStream SDK uses TensorRT to deliver fast INT8 inference, significantly reducing latency and making responsive, interactive AI-powered video services cost-efficient and practical. Developers can use the DeepStream SDK to efficiently process and understand video frames at scale in hyperscale data center deployments, meeting even the most demanding throughput requirements.
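As an illustration of the reduced-precision inference that TensorRT provides, the sketch below builds a GoogLeNet engine from Caffe model files and enables FP16 where the GPU supports it. This is not DeepStream SDK code: it follows the older TensorRT C++ samples (API names differ in later releases), the file names and batch size are placeholders, and INT8 additionally requires a calibration step that is omitted here.

    // Illustrative only: build a TensorRT engine for a Caffe GoogLeNet model
    // in reduced precision. Follows the older TensorRT C++ sample API.
    #include <iostream>
    #include "NvInfer.h"
    #include "NvCaffeParser.h"

    using namespace nvinfer1;
    using namespace nvcaffeparser1;

    // Minimal logger required by the TensorRT builder.
    class Logger : public ILogger
    {
        void log(Severity severity, const char* msg) override
        {
            if (severity != Severity::kINFO)
                std::cout << msg << std::endl;
        }
    } gLogger;

    int main()
    {
        IBuilder* builder = createInferBuilder(gLogger);
        INetworkDefinition* network = builder->createNetwork();
        ICaffeParser* parser = createCaffeParser();

        // Placeholder model files: a Caffe prototxt and trained weights.
        const IBlobNameToTensor* blobs = parser->parse(
            "googlenet.prototxt", "googlenet.caffemodel", *network, DataType::kFLOAT);
        network->markOutput(*blobs->find("prob"));

        builder->setMaxBatchSize(4);
        builder->setMaxWorkspaceSize(16 << 20);

        // Optimized precision: FP16 here; INT8 additionally requires a
        // calibrator (setInt8Mode / setInt8Calibrator), omitted for brevity.
        if (builder->platformHasFastFp16())
            builder->setHalf2Mode(true);

        ICudaEngine* engine = builder->buildCudaEngine(*network);
        if (!engine)
        {
            std::cerr << "engine build failed" << std::endl;
            return 1;
        }

        // The engine can now be serialized, or used to create an execution
        // context that runs per-frame inference in the video pipeline.
        engine->destroy();
        parser->destroy();
        network->destroy();
        builder->destroy();
        return 0;
    }

Building the engine once up front and reusing it across streams is what keeps per-frame inference latency low in this kind of pipeline.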

The NVIDIA DeepStream SDK will be available as part of the NVIDIA Deep Learning SDK through an Early Access (EA) program in October. Please apply today to be considered for the EA program; if accepted, you will be notified when the SDK becomes available.

APPLY NOW

Key Features

  • Deploy widely used neural network models such as GoogLeNet and AlexNet for real-time image classification and video categorization
  • Support for common video formats such as H.264, HEVC/H.265, MPEG-2, MPEG-4, VP9, and VC-1
  • Integrate into existing workflows that use FFmpeg, RESTful services, or custom C++ libraries (see the ingest sketch after this list)
  • Deploy neural networks in full (FP32) or optimized precision (INT8, FP16)
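
To make the FFmpeg integration point above concrete, the sketch below shows a minimal ingest loop built on FFmpeg's libavformat/libavcodec C API: it demuxes a file, decodes its video stream, and hands each frame to an inference step. This is not DeepStream code; run_inference is a hypothetical stand-in, exact calls vary slightly across FFmpeg versions, and decoder draining at end-of-file is omitted for brevity.

    // Illustrative FFmpeg ingest loop: demux and decode a video file, then
    // pass each decoded frame to a placeholder inference step.
    extern "C" {
    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>
    }
    #include <cstdio>

    // Hypothetical stand-in for the inference stage of the pipeline.
    static void run_inference(const AVFrame* frame)
    {
        std::printf("decoded frame %dx%d\n", frame->width, frame->height);
    }

    int main(int argc, char** argv)
    {
        if (argc < 2)
        {
            std::fprintf(stderr, "usage: %s <video>\n", argv[0]);
            return 1;
        }

        AVFormatContext* fmt = nullptr;
        if (avformat_open_input(&fmt, argv[1], nullptr, nullptr) < 0) return 1;
        if (avformat_find_stream_info(fmt, nullptr) < 0) return 1;

        // Pick the best video stream and open a decoder for it.
        int vidx = av_find_best_stream(fmt, AVMEDIA_TYPE_VIDEO, -1, -1, nullptr, 0);
        if (vidx < 0) return 1;
        const AVCodec* codec = avcodec_find_decoder(fmt->streams[vidx]->codecpar->codec_id);
        AVCodecContext* ctx = avcodec_alloc_context3(codec);
        avcodec_parameters_to_context(ctx, fmt->streams[vidx]->codecpar);
        if (avcodec_open2(ctx, codec, nullptr) < 0) return 1;

        AVPacket* pkt = av_packet_alloc();
        AVFrame* frame = av_frame_alloc();

        // Demux packets, decode them, and feed each frame to inference.
        while (av_read_frame(fmt, pkt) >= 0)
        {
            if (pkt->stream_index == vidx && avcodec_send_packet(ctx, pkt) == 0)
                while (avcodec_receive_frame(ctx, frame) == 0)
                    run_inference(frame);
            av_packet_unref(pkt);
        }

        av_frame_free(&frame);
        av_packet_free(&pkt);
        avcodec_free_context(&ctx);
        avformat_close_input(&fmt);
        return 0;
    }

In a GPU-accelerated deployment, the software decoder here would typically be replaced by NVIDIA's hardware decoders exposed through FFmpeg, so that frames stay on the GPU for inference.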