NVIDIA DeepStream SDK
Rapidly develop and deploy Vision AI applications and services. DeepStream is a multi-platform, scalable SDK with TLS-encrypted communication that can be deployed on premises, at the edge, and in the cloud.
There are billions of cameras and sensors worldwide, capturing an abundance of data that can be used to generate business insights, unlock process efficiencies, and improve revenue streams. Whether it’s at a traffic intersection to reduce vehicle congestion, health and safety monitoring at hospitals, surveying retail aisles for better customer satisfaction, sports analytics, or a manufacturing facility detecting component defects, every application demands reliable, real-time Intelligent Video Analytics (IVA).
Powerful & Flexible SDK
A unified SDK suitable for a multitude of use cases across a broad set of industries.
Create powerful Vision AI applications using Graph Composer’s simple and intuitive UI.
Key benefits: real-time insights, managed AI services, and reduced TCO.
NVIDIA’s DeepStream SDK delivers a complete streaming analytics toolkit for AI-based multi-sensor processing, video, audio and image understanding.
DeepStream is for vision AI developers, software partners, startups and OEMs building IVA apps and services.
Achieving Higher Accuracy & Real-Time Performance Using DeepStream
DeepStream offers exceptional throughput for a wide variety of object detection, image classification, and instance segmentation AI models. To reduce development effort and increase throughput, developers can use highly accurate pre-trained models from the TAO Toolkit and deploy them with DeepStream. The following table shows end-to-end application performance, from data ingestion and decoding through image processing to inference, with multiple 1080p/30fps streams as input. Note that running on the DLAs of Jetson Xavier NX and Jetson AGX Xavier frees up the GPU for other tasks.
* FP16 inference on Jetson Nano and TX2
With the DeepStream SDK you can apply AI to streaming video while simultaneously optimizing video decode/encode, image scaling and conversion, and edge-to-cloud connectivity for complete end-to-end performance. This plot summarizes the stream density achieved at 1080p/30 FPS across various NVIDIA products. You can learn more about DeepStream performance in the documentation.
For performance best practices, watch this video tutorial.
Numbers generated using the DeepStream reference app
Why Use DeepStream SDK?
Developers can build seamless streaming pipelines for AI-based video, audio, and image analytics using DeepStream. DeepStream brings development flexibility by giving developers the option to develop in C/C++, Python, or with low-code graphical programming in Graph Composer. DeepStream ships with various hardware-accelerated plugins and extensions.
DeepStream is built for both developers and enterprises and offers extensive AI model support for popular object detection and segmentation models, including state-of-the-art SSD, YOLO, FasterRCNN, and MaskRCNN. You can also integrate custom functions and libraries into DeepStream.
DeepStream offers flexibility from rapid prototyping to full production-level solutions, and it lets you choose your inference path. With native integration with NVIDIA Triton Inference Server, you can deploy models in native frameworks such as PyTorch and TensorFlow for inference. Using NVIDIA TensorRT for high-throughput inference, with options for multi-GPU, multi-stream, and batching support, you can achieve the best possible performance.
In addition to supporting native inference, DeepStream applications can communicate with independent/remote instances of Triton Inference Server using gRPC, allowing the implementation of distributed inference solutions.
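As a sketch of what pointing DeepStream at a remote Triton instance can look like, the Gst-nvinferserver plugin takes a prototxt configuration; the model name, endpoint, and batch size below are placeholders, not values from this page:

```
infer_config {
  unique_id: 1
  gpu_ids: [0]
  max_batch_size: 4            # illustrative batch size
  backend {
    triton {
      model_name: "my_detector"   # placeholder model in the Triton repository
      version: -1                 # -1 selects the latest model version
      grpc {
        url: "triton-host:8001"   # placeholder remote Triton gRPC endpoint
      }
    }
  }
  preprocess {
    network_format: IMAGE_FORMAT_RGB
    tensor_order: TENSOR_ORDER_LINEAR
  }
}
```

With a `grpc` block in place of a local model repository, the DeepStream pipeline stays on the edge device while inference runs on the remote Triton server.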
New to DeepStream 6.0: Low-Code Programming with Graph Composer
With Graph Composer, DeepStream developers now have a powerful low-code graphical programming option. A simple and intuitive interface makes it easy to create complex processing pipelines and quickly deploy them using Container Builder.
Graph Composer abstracts much of the underlying DeepStream, GStreamer, and platform programming knowledge required for creating designs that address the latest requirements in real-time, multi-stream Vision AI applications.
Instead of writing code, the user interacts with a library of extensions, configuring and connecting them using the drag-and-drop interface. Users can use NVIDIA’s repository of optimized extensions for different hardware platforms or create their own.
Securely Manage Apps & Services
For a real-world IVA app/service deployment, remote management and control of applications is critical. The DeepStream SDK can run in any cloud and at the edge. It handles IoT requirements such as effective bi-directional messaging between edge and cloud, security, smart recording, and over-the-air AI model updates.
- With bi-directional messaging between edge and cloud, you gain greater control for use cases such as remotely triggering event recording, changing operating parameters and app configurations, or requesting system logs.
- The smart record feature in the DeepStream app lets you save valuable disk space on the edge with selective recording that enables faster searchability. You can use cloud-to-edge messaging to quickly trigger recording from the cloud.
- Seamless over-the-air (OTA) updates of the entire app or individual AI models from any cloud registry continuously improve accuracy with zero downtime.
- For secure IoT device communication, DeepStream provides two-way TLS authentication based on SSL certificates and encrypted communication based on public-key authentication.
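Smart recording and cloud messaging are both enabled through the deepstream-app configuration file. A minimal sketch, with camera URI, broker address, and paths as placeholders:

```ini
[source0]
enable=1
type=4                          # RTSP source
uri=rtsp://camera-host/stream1  # placeholder camera URI
smart-record=1                  # record only when triggered by a cloud message
smart-rec-dir-path=/tmp/recordings
smart-rec-cache=20              # seconds of video cached ahead of the trigger

[sink1]
enable=1
type=6                          # message broker sink
msg-broker-proto-lib=/opt/nvidia/deepstream/deepstream/lib/libnvds_kafka_proto.so
msg-broker-conn-str=broker-host;9092   # placeholder Kafka broker
topic=ds-events
```

With `smart-record=1`, a record-start message arriving from the cloud triggers recording, and the cache setting preserves the seconds of video leading up to the event.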
DeepStream offers an IoT integration interface with Redis, Kafka, MQTT, and AMQP and turnkey integration with AWS IoT and Microsoft Azure IoT.
You can build high-performance, cloud-native DeepStream applications with NVIDIA NGC containers. With DeepStream, you can deploy at scale and manage containerized apps with Kubernetes and Helm charts.
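A containerized app typically starts from the DeepStream image on NGC. The sketch below assumes a hypothetical application binary and config; the image tag is an example and should match the DeepStream release you are targeting:

```
# Base image from NVIDIA NGC; pick the tag for your DeepStream release
FROM nvcr.io/nvidia/deepstream:6.0-samples

# Copy a hypothetical application and its configuration into the image
COPY my_ds_app /opt/app/my_ds_app
COPY app_config.txt /opt/app/app_config.txt

WORKDIR /opt/app
ENTRYPOINT ["./my_ds_app", "-c", "app_config.txt"]
```

The resulting image can then be deployed and scaled with Kubernetes, with a Helm chart supplying per-site configuration.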
Powerful End-to-End AI Solutions
Speed up overall development efforts and unlock greater real-time performance by building an end-to-end vision AI system: train production-quality vision AI models with the NVIDIA TAO Toolkit and deploy them at the edge using DeepStream. DeepStream offers turnkey integration of several detection and segmentation models trained with TAO Toolkit, including SSD, MaskRCNN, YOLOv3, RetinaNet, and more.
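Deploying a TAO-trained model comes down to pointing a Gst-nvinfer configuration at the exported `.etlt` file. A minimal sketch, where the model path, export key, and class count are placeholders:

```ini
[property]
gpu-id=0
# TAO-exported model and the key used when exporting it (placeholders)
tlt-encoded-model=/opt/models/yolov3_resnet18.etlt
tlt-model-key=my_export_key
labelfile-path=/opt/models/labels.txt
int8-calib-file=/opt/models/calibration.bin
batch-size=4
network-mode=1              # 0=FP32, 1=INT8, 2=FP16
num-detected-classes=4
interval=0                  # run inference on every frame
gie-unique-id=1
```

On first run, DeepStream builds a TensorRT engine from the encoded model for the target GPU, so no manual conversion step is needed.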
DeepStream SDK is bundled with 30+ sample applications designed to help users kick-start their development efforts. Most samples are available in C/C++, Python, and Graph versions and run on both NVIDIA Jetson and dGPU platforms. Reference applications can be used to learn about the features of the DeepStream plugins or as templates and starting points for developing custom Vision AI applications.
DeepStream SDK Plug-ins
- H.264 and H.265 video decoding
- Stream aggregation and batching
- TensorRT-based inferencing for detection, classification and segmentation
- Object tracking reference implementation
- On-screen display API for highlighting objects and text overlay
- Frame rendering from multi-source into a 2D grid array
- Accelerated X11/EGL-based rendering
- Filtering based on Region of Interest (ROI)
- JPEG decoding
- Scaling, format conversion, and rotation
- Dewarping for 360-degree camera input
- Metadata generation and encoding
- Messaging to cloud
- Audio/Video Template Plug-In
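Many of these plugins map directly to groups in a deepstream-app configuration file, so a pipeline can be assembled without code. A minimal sketch, with all values illustrative:

```ini
[tiled-display]
enable=1
rows=2
columns=2                      # render four streams in a 2D grid

[streammux]
batch-size=4                   # aggregate four sources into one batch
width=1920
height=1080

[primary-gie]
enable=1
config-file=infer_config.txt   # placeholder nvinfer configuration file

[osd]
enable=1                       # on-screen bounding boxes and labels

[sink0]
enable=1
type=2                         # accelerated EGL-based rendering
```

Each group enables one stage of the pipeline: stream batching, TensorRT inference, on-screen display, and tiled rendering, in that order.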
Improving operational efficiency and reducing loss are key issues facing many retailers. Today’s large supermarkets have numerous in-store cameras, which can be used to mitigate these problems, but real-time video processing of so many streams can be a challenge. By leveraging NVIDIA T4 GPUs, DeepStream and TensorRT, Malong’s state-of-the-art Intelligent Video Analytics (IVA) solution achieves 3X higher throughput with industry-leading accuracy to help their retail customers significantly improve their business performance.
Extracting actionable insights from a sea of data created by the world’s billions of cameras and sensors is a huge task, and maintaining a connection from these devices to the cloud for processing may be overly expensive or infeasible due to security, regulatory, or bandwidth restrictions. Microsoft Azure IoT Edge deploys applications and services built using DeepStream to edge devices, allowing organizations to process data locally to trigger alerts and take actions automatically and to upload to the cloud when needed. Combining Azure IoT Edge, NVIDIA DeepStream and Azure IoT Central brings device management, monitoring and custom business logic to millions of edge devices for real-time insights and easy deployment.
As a leader in fulfillment and logistics management, SF Technology needed to track goods and vehicles across tens of thousands of locations. Every site requires detailed analytics around fleet management, loading times, and other operational activities. Using DeepStream and NVIDIA GPUs, they were able to increase the efficiency of AI Argus, an intelligent video analytics product that brings smarter video insights and can process 32 video streams simultaneously. The company is also looking at using next-generation GPUs, which are expected to increase the number of video streams processed.
We are bringing AI and machine learning to the trade sector with a fleet of real-time analytics based products that help businesses secure the cash point area and carefully supervise store entry/exit to prevent loss of goods. By switching to a DeepStream-based solution running on Jetson Nano, we achieved 5X stream density increasing the platform efficiency, reducing hardware and installation costs.
DeepStream is a closed-source SDK; however, the source for all reference applications and several plugins is available.
The DeepStream SDK can be used to build end-to-end AI-powered applications that analyze video and sensor data. Some popular use cases are retail analytics, parking management, logistics, robotics, optical inspection, and operations management.
Can models trained in native frameworks such as TensorFlow or PyTorch be deployed? Yes, that is now possible with the integration of the Triton Inference Server. Also, with DeepStream 6.0, applications can communicate with independent/remote instances of Triton Inference Server using gRPC.
To learn more about deploying TAO Toolkit models with DeepStream, click here.
DeepStream supports several popular networks out of the box, such as YOLO, FasterRCNN, SSD, RetinaNet, and MaskRCNN.
Is DeepStream supported on NVIDIA Ampere architecture GPUs? Yes, DeepStream 5.1 is supported on Ampere GPUs.
Latest Product News
Performance Optimization Video Tutorial
Learn how to optimize your DeepStream application using NVIDIA T4 or Jetson platforms for maximum performance.
Lexmark slashes AI Design Cycles by 25%
Lexmark uses pre-trained models, the TAO Toolkit, and DeepStream to reduce its AI design cycle by 25%.
INEX Revolutionizes Toll Road Systems
INEX leverages pre-trained models, the TAO Toolkit, and DeepStream to reduce the development time and cost of toll road systems.
Feature Explainer Blog
Dive deeper and learn how DeepStream 5.0’s powerful features can help you build your next AI app.
Upcoming Webinar
An NVIDIA expert will show you how to build state-of-the-art Vision AI apps in no time.
Free Online DLI Course
Learn how to use Jetson Nano and DeepStream to extract meaningful insights using IVA.