Developer Preview: DeepStream SDK 5.0
Highlights:
  • Integration with Triton Inference Server (formerly TensorRT Inference Server), enabling developers to deploy models natively in TensorFlow, TensorFlow-TensorRT, PyTorch, or ONNX format within the DeepStream pipeline
  • Python development support with sample apps
  • IoT capabilities:
      • DeepStream app control from edge or cloud with bi-directional IoT messaging
      • Dynamic AI model updates on the fly to reduce app downtime
      • Secure communication between edge and cloud using the Kafka message broker over SSL
  • Interoperability with Transfer Learning Toolkit 2.0 (developer preview)
  • Jetson Xavier NX support
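The bi-directional IoT messaging and dynamic model updates listed above imply a cloud-to-edge command channel. A minimal sketch of what such an exchange could look like, assuming a JSON payload delivered by a message broker; the field names, model details, and `handle_cloud_command` helper are illustrative assumptions, not the actual DeepStream message schema:

```python
import json

# Hypothetical cloud-to-edge command payload; the field names and values
# are illustrative, not the actual DeepStream/Kafka message schema.
MODEL_UPDATE_MSG = json.dumps({
    "command": "update-model",
    "sensor_id": "camera_0",
    "model": {
        "name": "example-detector",
        "version": "2.0",
        "engine_path": "/opt/models/example_detector_v2.engine",
    },
})

def handle_cloud_command(raw_msg: str) -> str:
    """Parse a command received from the cloud and dispatch it.

    In a real deployment the message would arrive via a Kafka consumer
    over SSL; only the parse/dispatch step is shown here.
    """
    msg = json.loads(raw_msg)
    if msg.get("command") == "update-model":
        model = msg["model"]
        # A real handler would swap the inference engine here while the
        # pipeline keeps running, which is how downtime is avoided.
        return f"loading {model['name']} v{model['version']}"
    return "ignored"

print(handle_cloud_command(MODEL_UPDATE_MSG))
```

The design point is that the edge app stays subscribed to a broker topic and reacts to commands, rather than being stopped and redeployed for each model change.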

                  Jetson            T4 (x86)
Operating System  Ubuntu 18.04      Ubuntu 18.04
Dependencies      CUDA: 10.2        CUDA: 10.2
                  cuDNN: 8.0.0      cuDNN: 7.6.5+
                  TensorRT: 7.1.0   TensorRT: 7.0.0
                  JetPack: 4.4      Driver: R440+


Getting Started Resources



Downloads



Python Sample Apps & Bindings





Coming Soon

DeepStream SDK 5.0 General Availability (Q3, 2020)



FAQ

  • Check out the frequently asked questions about the DeepStream SDK in the technical FAQ

Documentation & Forums

Sample Apps

Free Self-Paced DLI Online Courses

  • Learn how to build end-to-end intelligent video analytics pipelines using DeepStream and Jetson Nano >> Enroll now
  • Learn how to get started with AI using Jetson Nano >> Enroll now

GitHub Repository

Blogs & Tutorials

Webinars




Additional Resources

Ethical AI

NVIDIA’s platforms and application frameworks enable developers to build a wide array of AI applications. Consider potential algorithmic bias when choosing or creating the models being deployed. Work with the model’s developer to ensure that it meets the requirements for the relevant industry and use case; that the necessary instruction and documentation are provided to understand error rates, confidence intervals, and results; and that the model is being used under the conditions and in the manner intended.