NVIDIA DeepStream SDK

DeepStream’s multi-platform support gives you a faster, easier way to develop vision AI applications and services. You can even deploy them on-premises, on the edge, and in the cloud with the click of a button.

Get Started | Try on LaunchPad

What is NVIDIA DeepStream?

There are billions of cameras and sensors worldwide, capturing an abundance of data that can be used to generate business insights, unlock process efficiencies, and improve revenue streams. Whether it’s at a traffic intersection to reduce vehicle congestion, health and safety monitoring at hospitals, surveying retail aisles for better customer satisfaction, or at a manufacturing facility to detect component defects, every application demands reliable, real-time Intelligent Video Analytics (IVA). NVIDIA’s DeepStream SDK is a complete streaming analytics toolkit based on GStreamer for AI-based multi-sensor processing, video, audio, and image understanding. It’s ideal for vision AI developers, software partners, startups, and OEMs building IVA apps and services. Developers can now create stream processing pipelines that incorporate neural networks and other complex processing tasks such as tracking, video encoding/decoding, and video rendering. DeepStream pipelines enable real-time analytics on video, image, and sensor data.
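
The snippet below is a minimal sketch of such a pipeline, assuming the DeepStream SDK and GStreamer's Python bindings are installed; the element names (nvstreammux, nvinfer, nvvideoconvert, nvdsosd) are standard DeepStream plug-ins, while the input file and inference config path are placeholders to replace with your own.

```python
# Minimal sketch of a DeepStream pipeline, expressed in gst-launch syntax and run
# through GStreamer's Python bindings. The element names are standard DeepStream
# plug-ins; the input file and nvinfer config path are placeholders.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

pipeline = Gst.parse_launch(
    "filesrc location=/path/to/sample_720p.h264 ! "                   # placeholder input
    "h264parse ! nvv4l2decoder ! mux.sink_0 "                         # hardware decode
    "nvstreammux name=mux batch-size=1 width=1280 height=720 ! "      # batch frames
    "nvinfer config-file-path=/path/to/config_infer_primary.txt ! "   # detector (placeholder config)
    "nvvideoconvert ! nvdsosd ! fakesink sync=false"                  # overlay; swap fakesink for a video sink to render
)

pipeline.set_state(Gst.State.PLAYING)
bus = pipeline.get_bus()
# Block until the stream ends or an error is reported, then tear down.
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                       Gst.MessageType.EOS | Gst.MessageType.ERROR)
pipeline.set_state(Gst.State.NULL)
```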

DeepStream is also an integral part of NVIDIA Metropolis, the platform for building end-to-end services and solutions that transform pixel and sensor data to actionable insights.

Key Benefits

Powerful and Flexible SDK

DeepStream SDK is suitable for a wide range of use cases across a broad set of industries.

Multiple Programming Options

Create powerful vision AI applications using C/C++, Python, or Graph Composer’s simple and intuitive UI.

Real-Time Insights

Understand rich and multi-modal real-time sensor data at the edge.

Managed AI Services

Deploy AI services in cloud-native containers and orchestrate them using Kubernetes.

Reduced TCO

Increase stream density by training, adapting, and optimizing models with the TAO Toolkit and deploying them with DeepStream.

Unique Capabilities

Enjoy Seamless Development From Edge to Cloud

Developers can build seamless streaming pipelines for AI-based video, audio, and image analytics using DeepStream. It ships with 30+ hardware-accelerated plug-ins and extensions for pre/post-processing, inference, multi-object tracking, message brokers, and more. DeepStream also offers some of the world's best-performing real-time multi-object trackers.

DeepStream is built for both developers and enterprises and offers extensive support for popular state-of-the-art object detection and segmentation models such as SSD, YOLO, FasterRCNN, and MaskRCNN. You can also integrate custom functions and libraries.

DeepStream introduces new REST APIs for different plug-ins that let you build flexible applications that can be deployed as SaaS while being controlled from an intuitive interface. This means it’s now possible to add or delete streams and modify regions of interest from a simple interface such as a web page.
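
As an illustration of that control pattern only, the sketch below shows how a small Python client might add a stream to a running application; the host, port, endpoint path, and payload fields are assumptions for illustration, so consult the DeepStream REST API reference for the actual routes and schema.

```python
# Hypothetical sketch of driving DeepStream's REST interface from Python.
# The address, endpoint path, and payload fields below are assumptions for
# illustration only; the real routes and schema are defined in the DeepStream
# REST API documentation.
import requests

DS_APP = "http://localhost:9000"          # assumed address of the DeepStream app

new_stream = {
    "camera_id": "cam-42",                # illustrative fields, not the documented schema
    "camera_url": "rtsp://camera.local/stream1",
}

resp = requests.post(f"{DS_APP}/api/v1/stream/add", json=new_stream, timeout=5)
resp.raise_for_status()
print("stream added:", resp.status_code)
```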

Learn more

Get Cloud-Native

The use of cloud-native technologies gives you the flexibility and agility needed for rapid product development and continuous product improvement over time. Organizations can build resilient, manageable applications and deploy them faster.

Developers can use the DeepStream Container Builder tool to build high-performance, cloud-native AI applications with NVIDIA NGC containers. The generated containers are easily deployed at scale and managed with Kubernetes and Helm Charts.



Learn more

Build End-to-End AI Solutions

Speed up overall development efforts and unlock greater real-time performance by building an end-to-end vision AI system with NVIDIA Metropolis. Start with production-quality vision AI models, adapt and optimize them with TAO Toolkit, and deploy using DeepStream.

Get incredible flexibility, from rapid prototyping to full production-level solutions, and choose your inference path. With native integration with NVIDIA Triton™ Inference Server, you can deploy models in native frameworks such as PyTorch and TensorFlow for inference. Or use NVIDIA TensorRT™ for high-throughput inference with multi-GPU, multi-stream, and batching support to achieve the best possible performance.
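
At the pipeline level, the choice of inference path largely comes down to which inference element you instantiate; the sketch below uses the standard nvinfer (TensorRT) and nvinferserver (Triton) plug-ins, with placeholder config file paths.

```python
# Sketch: selecting the inference path in a DeepStream pipeline.
# "nvinfer" runs TensorRT engines directly; "nvinferserver" delegates inference
# to Triton Inference Server so models in native frameworks can be served.
# Both config file paths below are placeholders.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

USE_TRITON = False   # flip to True to route inference through Triton

if USE_TRITON:
    infer = Gst.ElementFactory.make("nvinferserver", "primary-infer")
    infer.set_property("config-file-path", "/path/to/config_infer_triton.txt")
else:
    infer = Gst.ElementFactory.make("nvinfer", "primary-infer")
    infer.set_property("config-file-path", "/path/to/config_infer_tensorrt.txt")

# The element is then linked between nvstreammux and the downstream
# tracking/visualization elements exactly like any other GStreamer element.
```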



Learn more

Access Reference Applications

DeepStream SDK is bundled with 30+ sample applications designed to help users kick-start their development efforts. Most samples are available in C/C++, Python, and Graph Composer versions and run on both NVIDIA Jetson™ and dGPU platforms. Reference applications can be used to learn about the features of the DeepStream plug-ins or as templates and starting points for developing custom vision AI applications.



DeepStream also now offers integration with Basler cameras for industrial inspection and lidar support for a wide range of applications.

Learn more

Work With Graph Composer

Graph Composer gives DeepStream developers a powerful, low-code development option. A simple and intuitive interface makes it easy to create complex processing pipelines and quickly deploy them using Container Builder.

Graph Composer abstracts much of the underlying DeepStream, GStreamer, and platform programming knowledge required to create the latest real-time, multi-stream vision AI applications. Instead of writing code, users interact with an extensive library of components, configuring and connecting them using the drag-and-drop interface.



Learn more

Integrate with Real-Time Systems

Tight scheduling control, custom schedulers, and efficient resource management are must-haves for integrating with deterministic systems such as robotic arms and automated quality-control lines.

With the introduction of the Graph eXecution Format (GXF), it’s easy to integrate control signals that operate in a different time domain from the vision sensors being processed by a DeepStream pipeline.

GXF is a framework built from the ground up to enable new capabilities for DeepStream developers.

Learn more

Production-Ready Solution for Vision AI

DeepStream is available as a part of NVIDIA AI Enterprise, an end-to-end, secure, cloud-native AI software platform optimized to accelerate enterprises to the leading edge of AI.

NVIDIA AI Enterprise delivers validation and integration for NVIDIA AI open-source software, access to AI solution workflows to speed time to production, certifications to deploy AI everywhere, and enterprise-grade support, security, and API stability to mitigate the potential risks of open-source software.

Learn more

Explore Multiple Programming Options

C/C++

Create applications in C/C++, interact directly with GStreamer and DeepStream plug-ins, and use reference applications and templates.




Learn more about C/C++

Python

DeepStream pipelines can be constructed using Gst Python, the GStreamer framework's Python bindings. The source code for the bindings and the Python sample applications is available on GitHub.
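
As a minimal sketch of the typical pattern (mirroring the public deepstream_python_apps samples and assuming the pyds bindings are installed), a pad probe attached downstream of the inference element can walk the detection metadata attached to each frame:

```python
# Sketch of a pad probe that walks DeepStream batch metadata using the pyds bindings,
# following the pattern used in the deepstream_python_apps samples on GitHub.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds

def osd_sink_pad_buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    # Batch metadata is attached upstream by nvstreammux/nvinfer.
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            print(f"frame {frame_meta.frame_num}: class {obj_meta.class_id}, "
                  f"confidence {obj_meta.confidence:.2f}")
            try:
                l_obj = l_obj.next
            except StopIteration:
                break
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK

# Attach the probe to a pad of an element created elsewhere in the pipeline, e.g.:
# osd.get_static_pad("sink").add_probe(Gst.PadProbeType.BUFFER,
#                                      osd_sink_pad_buffer_probe, None)
```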


Learn more about Python

Graph Composer

Graph Composer is a low-code development tool that enhances the DeepStream user experience. With its simple, intuitive UI, you construct processing pipelines using drag-and-drop operations.


Learn More About Graph Composer

Improve Accuracy and Real-Time Performance

DeepStream offers exceptional throughput for a wide variety of object detection, image processing, and instance segmentation AI models. The following table shows end-to-end application performance, from data ingestion, decoding, and image processing through inference, measured with multiple 1080p/30fps streams as input. Note that running on the DLAs on Jetson devices frees up the GPU for other tasks. For performance best practices, watch this video tutorial.


| Application | Models | Tracker | Infer Resolution | Precision | Jetson Orin Nano (GPU) | Jetson Orin NX (GPU) | Jetson Orin NX (DLA1) | Jetson Orin NX (DLA2) | Jetson AGX Orin™ (GPU) | Jetson AGX Orin (DLA1) | Jetson AGX Orin (DLA2) | T4 | A2 | A10 | A30 | A100 | H100 | L40 | L4 | RTX (A6000)* |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| People Detect | PeopleNet-ResNet34 (Version 2.6) | No Tracker | 960x544 | INT8 | 111 | 136 | 68 | 68 | 456 | 140 | 140 | 418 | 235 | 1001 | 1518 | 2610 | 3565 | 2013 | 834 | 1458 |
| People Detect | PeopleNet-ResNet34 (Version 2.6) | NvDCF (Accuracy) | 960x544 | INT8 | 51 | 79 | 60 | 60 | 234 | 133 | 133 | 262 | 150 | 639 | 837 | 1358 | 1917 | 1369 | 481 | 863 |
| People Detect | PeopleNet-ResNet34 (Version 2.6) | NvDCF (Performance) | 960x544 | INT8 | 100 | 127 | 68 | 68 | 416 | 140 | 140 | 410 | 230 | 1002 | 1448 | 2425 | 3260 | 1959 | 805 | 1409 |
| License Plate Recognition | TrafficCamNet + LPDNet + LPRNet | NvDCF | 960x544 / 640x480 / 96x48 | INT8 / INT8 / FP16 | 102 | 164 | - | - | 369 | - | - | 451 | 294 | 1144 | 1330 | 2209 | 2698 | 2230 | 777 | 1519 |
| 3D Body Pose Estimation | PeopleNet-ResNet34 + BodyPose3D | NvDCF | 960x544 / 192x256 | INT8 / FP16 | 22 | 31 | - | - | 64 | - | - | 86 | 60 | 139 | 170 | 146 | 214 | 153 | 177 | 155 |
| Action Recognition | ActionRecognitionNet (3DConv) | No Tracker | 224x224x3x32 | FP16 | 24 | 36 | - | - | 109 | - | - | 124 | 76 | 354 | 401 | 713 | 962 | 827 | 253 | 496 |

*RTX GPU performance is reported only for flagship product(s). All SKUs support DeepStream.


The DeepStream SDK lets you apply AI to streaming video while simultaneously optimizing video decode/encode, image scaling and conversion, and edge-to-cloud connectivity for complete end-to-end performance.


To learn more about the performance using DeepStream, check the documentation.

Read Customer Stories

OneCup AI

OneCup AI’s computer vision system tracks and classifies animal activity using NVIDIA pretrained models, TAO Toolkit, and DeepStream SDK, significantly reducing their development time from months to weeks.


Learn more about OneCup AI

KoiReader

KoiReader developed an AI-powered machine vision solution using NVIDIA developer tools including DeepStream SDK to help PepsiCo achieve precision and efficiency in dynamic distribution environments.



Learn more about KoiReader

Trifork

Trifork jumpstarted their AI model development with NVIDIA DeepStream SDK, pretrained models, and TAO Toolkit to develop their AI-based baggage tracking solution for airports.



Learn more about Trifork

General FAQ

DeepStream is a closed-source SDK. Note that the sources for all reference applications and several plug-ins are available.

The DeepStream SDK can be used to build end-to-end AI-powered applications to analyze video and sensor data. Some popular use cases are retail analytics, parking management, managing logistics, optical inspection, robotics, and sports analytics.

Yes, that’s now possible with the integration of the Triton Inference Server. Also, with DeepStream 6.1.1, applications can communicate with independent/remote instances of Triton Inference Server using gRPC.

DeepStream supports several popular networks out of the box. For instance, DeepStream supports MaskRCNN. DeepStream also ships with examples for running popular YOLO models, FasterRCNN, SSD, and RetinaNet.

Yes, DeepStream 6.0 or later supports the NVIDIA Ampere architecture.

Yes, audio is supported with DeepStream SDK 6.1.1. To get started, download the software and review the reference audio and Automatic Speech Recognition (ASR) applications. Learn more by reading the ASR DeepStream Plugin.

Build high-performance vision AI apps and services using DeepStream SDK.

Get Started