DeepStream SDK 5.0.1
  • Integration with Triton Inference Server (previously TensorRT Inference Server) enables developers to deploy a model natively in TensorFlow, TensorFlow-TensorRT, PyTorch, or ONNX in the DeepStream pipeline
  • Smart recording at the edge
  • Python development support with sample apps
  • Build and deploy apps natively on RHEL
  • Secure communication between edge and cloud using SASL/Plain-based authentication and TLS authentication
  • IoT capabilities:
    • DeepStream app control from edge or cloud with bi-directional IoT messaging
    • Dynamic AI model updates on the fly to reduce app downtime
  • Interoperability with models from Transfer Learning Toolkit 2.0
  • Jetson Xavier NX support
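As a rough illustration of the Triton integration above, a DeepStream pipeline can use the nvinferserver plugin (which forwards inference to a model served from a Triton model repository) in place of the usual nvinfer element. This is a sketch only: the input file name, resolution, and config file path below are placeholders, not files shipped with the SDK, and the command requires a machine with DeepStream 5.0+ installed.

```shell
# Decode an H.264 file, batch it with nvstreammux, run inference through
# Triton via nvinferserver, then overlay and render the results.
# All paths here are placeholders -- substitute your own media and config.
gst-launch-1.0 \
  filesrc location=sample_720p.h264 ! h264parse ! nvv4l2decoder ! mux.sink_0 \
  nvstreammux name=mux batch-size=1 width=1280 height=720 ! \
  nvinferserver config-file-path=config_infer_primary_triton.txt ! \
  nvvideoconvert ! nvdsosd ! nveglglessink
```

The nvinferserver configuration file referenced by config-file-path selects the model name and model repository; see the Gst-nvinferserver section of the DeepStream plugin manual for the exact schema.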

Platform: Jetson
  Operating System: Ubuntu 18.04
  Dependencies: CUDA 10.2, cuDNN 8.0.0, TensorRT 7.1.0, JetPack 4.4

Platform: T4 (x86)
  Operating System: Ubuntu 18.04
  Dependencies: CUDA 10.2, cuDNN 7.6.5+, TensorRT 7.0.0, Driver R440+

Getting Started Resources



DeepStream 4.0 applications are fully compatible with DeepStream 5.0. Please read the migration guide for more information.

Python Sample Apps & Bindings

Python bindings are now integrated in the DeepStream SDK.

Visit the DeepStream Python Apps GitHub page for documentation and sample apps.

Check out the DeepStream SDK technical FAQ for commonly asked questions.


Documentation & Forums

Reference Implementations

Blogs & Tutorials

Beginner-Friendly, Free, Self-Paced DLI Online Courses

  • Learn how to build end-to-end intelligent video analytics pipelines using DeepStream and Jetson Nano >> Enroll now
  • Learn how to get started with AI using Jetson Nano >> Enroll now


Additional Resources

Ethical AI

NVIDIA’s platforms and application frameworks enable developers to build a wide array of AI applications. Consider potential algorithmic bias when choosing or creating the models being deployed. Work with the model’s developer to ensure that it meets the requirements for the relevant industry and use case; that the necessary instruction and documentation are provided to understand error rates, confidence intervals, and results; and that the model is being used under the conditions and in the manner intended.