What is NVIDIA DeepStream?
There are billions of cameras and sensors worldwide, capturing an abundance of data that can be used to generate business insights, unlock process efficiencies, and improve revenue streams. Whether it’s at a traffic intersection to reduce vehicle congestion, health and safety monitoring at hospitals, surveying retail aisles for better customer satisfaction, or at a manufacturing facility to detect component defects, every application demands reliable, real-time Intelligent Video Analytics (IVA).

NVIDIA’s DeepStream SDK is a complete streaming analytics toolkit based on GStreamer for AI-based multi-sensor processing, video, audio, and image understanding. It’s ideal for vision AI developers, software partners, startups, and OEMs building IVA apps and services. Developers can create stream processing pipelines that incorporate neural networks and other complex processing tasks such as tracking, video encoding/decoding, and video rendering. DeepStream pipelines enable real-time analytics on video, image, and sensor data.
Powerful and Flexible SDK
DeepStream SDK is suitable for a wide range of use cases across a broad set of industries.
Multiple Programming Options
Create powerful vision AI applications using C/C++, Python, or Graph Composer’s simple and intuitive UI.
Understand rich and multi-modal real-time sensor data at the edge.
Managed AI Services
Deploy AI services in cloud native containers and orchestrate them using Kubernetes.
Increase stream density by training, adapting, and optimizing models with TAO toolkit and deploying models with DeepStream.
Enjoy Seamless Development From Edge to Cloud
Developers can build seamless streaming pipelines for AI-based video, audio, and image analytics using DeepStream. It ships with 30+ hardware-accelerated plug-ins and extensions to optimize pre/post processing, inference, multi-object tracking, message brokers, and more. DeepStream also offers some of the world's best performing real-time multi-object trackers.
DeepStream is built for both developers and enterprises and offers extensive support for popular state-of-the-art object detection and segmentation models such as SSD, YOLO, FasterRCNN, and MaskRCNN. You can also integrate custom functions and libraries.
DeepStream introduces new REST-APIs for different plug-ins that let you create flexible applications that can be deployed as SaaS while being controlled from an intuitive interface. This means it’s now possible to add/delete streams and modify “regions-of-interest” using a simple interface such as a web page.
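To illustrate the idea, the sketch below builds the request for adding a stream at runtime. The endpoint path and payload fields are assumptions modeled on a typical stream-management REST API, not the authoritative DeepStream schema; consult the SDK documentation for the exact routes and body format.

```python
import json

def add_stream_request(camera_id, rtsp_url):
    """Return (endpoint, JSON body) for a hypothetical add-stream call."""
    endpoint = "/api/v1/stream/add"  # assumed path, for illustration only
    body = json.dumps({
        "key": "sensor",  # assumed field names
        "value": {"camera_id": camera_id, "camera_url": rtsp_url},
    })
    return endpoint, body

endpoint, body = add_stream_request("cam-01", "rtsp://192.0.2.10/stream1")
print(endpoint)
print(body)
```

A web front end would POST this body to the DeepStream application's REST server; a matching delete call would remove the stream without restarting the pipeline.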
The use of cloud-native technologies gives you the flexibility and agility needed for rapid product development and continuous product improvement over time. Organizations now have the ability to build applications that are resilient and manageable, thereby enabling faster deployments of applications.
Developers can use the DeepStream Container Builder tool to build high-performance, cloud-native AI applications with NVIDIA NGC containers. The generated containers are easily deployed at scale and managed with Kubernetes and Helm Charts.
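As a rough sketch of what such a deployment looks like, the fragment below is a minimal Kubernetes Deployment for a containerized DeepStream app. The image tag and resource names are placeholders, and scheduling a GPU assumes the NVIDIA device plugin is installed on the cluster.

```yaml
# Illustrative only: image tag and names are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deepstream-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: deepstream-app
  template:
    metadata:
      labels:
        app: deepstream-app
    spec:
      containers:
        - name: deepstream
          image: nvcr.io/nvidia/deepstream:<tag>   # pulled from NGC
          resources:
            limits:
              nvidia.com/gpu: 1   # requires the NVIDIA device plugin
```

In practice, Container Builder produces the image and a Helm chart templates manifests like this one, so the same application can be rolled out and upgraded across a fleet.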
Build End-to-End AI Solutions
Speed up overall development efforts and unlock greater real-time performance by building an end-to-end vision AI system with NVIDIA Metropolis. Start with production-quality vision AI models, adapt and optimize them with TAO Toolkit, and deploy using DeepStream.
Get incredible flexibility, from rapid prototyping to full production-level solutions, and choose your inference path. With native integration to NVIDIA Triton™ Inference Server, you can deploy models in native frameworks such as PyTorch and TensorFlow for inference. Or use NVIDIA TensorRT™ for high-throughput inference with multi-GPU, multi-stream, and batching support to achieve the best possible performance.
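For the TensorRT path, inference is driven by the Gst-nvinfer plug-in's configuration file. The fragment below shows the general shape of such a file; the engine file name and class count are placeholders for your own model, so treat this as a sketch rather than a complete configuration.

```
# Sketch of a Gst-nvinfer [property] group; file names are placeholders.
[property]
gpu-id=0
model-engine-file=model_b1_gpu0_fp16.engine
batch-size=1
# network-mode: 0=FP32, 1=INT8, 2=FP16
network-mode=2
num-detected-classes=4
# interval: number of batches to skip between inferences (0 = every batch)
interval=0
```

Swapping this plug-in for Gst-nvinferserver routes the same pipeline through Triton instead, without changing the rest of the application.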
Access Reference Applications
DeepStream SDK is bundled with 30+ sample applications designed to help users kick-start their development efforts. Most samples are available in C/C++, Python, and Graph Composer versions and run on both NVIDIA Jetson™ and dGPU platforms. Reference applications can be used to learn about the features of the DeepStream plug-ins or as templates and starting points for developing custom vision AI applications.
DeepStream also now offers integration with Basler cameras for industrial inspection and lidar support for a wide range of applications.
Work With Graph Composer
Graph Composer gives DeepStream developers a powerful, low-code development option. A simple and intuitive interface makes it easy to create complex processing pipelines and quickly deploy them using Container Builder.
Graph Composer abstracts much of the underlying DeepStream, GStreamer, and platform programming knowledge required to create the latest real-time, multi-stream vision AI applications. Instead of writing code, users interact with an extensive library of components, configuring and connecting them using the drag-and-drop interface.
Accelerate DeepStream Apps With NVIDIA AI Enterprise
NVIDIA AI Enterprise is an end-to-end, secure, cloud-native suite of AI software. It delivers key benefits including validation and integration for NVIDIA AI open-source software, and access to AI solution workflows to accelerate time to production.
Enterprise support is included with NVIDIA AI Enterprise to help you develop your applications powered by DeepStream and manage the lifecycle of AI applications with global enterprise support. This helps ensure that your business-critical projects stay on track.
Explore Multiple Programming Options
DeepStream pipelines can be constructed using Gst Python, the GStreamer framework's Python bindings. The source code for the bindings and the Python sample applications is available on GitHub.
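As a minimal sketch, the snippet below assembles a gst-launch-style pipeline description and, when GStreamer is available, hands it to `Gst.parse_launch()`. The element chain mirrors the standard DeepStream samples (decode, batch, infer, track, draw, render); the URI and config file path are placeholders, and actually running the pipeline requires a DeepStream installation.

```python
def deepstream_pipeline_desc(uri, infer_config):
    """Return a gst-launch-style description for Gst.parse_launch()."""
    return (
        f"uridecodebin uri={uri} ! m.sink_0 "
        "nvstreammux name=m batch-size=1 width=1280 height=720 ! "
        f"nvinfer config-file-path={infer_config} ! "
        "nvtracker ! nvvideoconvert ! nvdsosd ! nveglglessink"
    )

desc = deepstream_pipeline_desc("file:///tmp/sample.mp4",
                                "config_infer_primary.txt")
try:
    # Running the pipeline needs PyGObject plus the DeepStream plug-ins.
    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst
    Gst.init(None)
    pipeline = Gst.parse_launch(desc)
    # pipeline.set_state(Gst.State.PLAYING)  # start playback on a real install
except Exception:
    print("GStreamer/DeepStream not available; description only:")
    print(desc)
```

The same description can be tested from a shell with `gst-launch-1.0`, which is a convenient way to validate an element chain before wiring it up in Python.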
Learn more about Python
Graph Composer is a low-code development tool that enhances the DeepStream user experience. Using a simple, intuitive UI, processing pipelines are constructed with drag-and-drop operations.
Learn More About Graph Composer
Improve Accuracy and Real-Time Performance
DeepStream offers exceptional throughput for a wide variety of object detection, image processing, and instance segmentation AI models. The following table shows the end-to-end application performance, from data ingestion, decoding, and image processing to inference, with multiple 1080p/30fps streams as input. Note that on Jetson devices, running inference on the DLAs frees up the GPU for other tasks. For performance best practices, watch this video tutorial.
RTX GPU performance is reported only for the flagship product(s). All SKUs support DeepStream.
The DeepStream SDK lets you apply AI to streaming video while simultaneously optimizing video decode/encode, image scaling and conversion, and edge-to-cloud connectivity for complete end-to-end performance.
Read Customer Stories
OneCup AI’s computer vision system tracks and classifies animal activity using NVIDIA pretrained models, TAO Toolkit, and DeepStream SDK, significantly reducing their development time from months to weeks.
Learn more about OneCup AI
Fingermark used NVIDIA TAO Toolkit and DeepStream SDKs to accelerate the development of their industrial vision AI solutions that improve worker safety.
Learn more about Fingermark
Trifork jumpstarted their AI model development with NVIDIA DeepStream SDK, pretrained models, and TAO Toolkit to develop their AI-based baggage tracking solution for airports.
Learn more about Trifork
Frequently Asked Questions

Is DeepStream open source?
DeepStream is a closed-source SDK. Note that the sources for all reference applications and several plug-ins are available.
What can I build with the DeepStream SDK?
The DeepStream SDK can be used to build end-to-end AI-powered applications that analyze video and sensor data. Some popular use cases are retail analytics, parking management, logistics management, optical inspection, robotics, and sports analytics.
Can I deploy models in native frameworks such as TensorFlow and PyTorch?
Yes, that’s now possible with the integration of the Triton Inference Server. Also, with DeepStream 6.1.1, applications can communicate with independent/remote instances of Triton Inference Server using gRPC.
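As a rough illustration, a remote Triton instance is selected in the Gst-nvinferserver configuration rather than in application code. The fragment below sketches that idea; the model name, URL, and exact field layout are assumptions here, so check the nvinferserver documentation for the authoritative schema.

```
# Sketch of a Gst-nvinferserver config pointing at a remote Triton server.
# Model name and URL are placeholders.
infer_config {
  unique_id: 1
  gpu_ids: [0]
  backend {
    triton {
      model_name: "my_detector"
      version: -1
      grpc {
        url: "triton-host:8001"   # remote Triton gRPC endpoint
      }
    }
  }
}
```

Pointing `url` at a different host is all that's needed to move inference off the edge device and onto a shared Triton deployment.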
Which models does DeepStream support out of the box?
DeepStream supports several popular networks out of the box. For instance, DeepStream supports MaskRCNN, and it ships with examples for running the popular YOLO models, FasterRCNN, SSD, and RetinaNet.
Does DeepStream support the NVIDIA Ampere architecture?
Yes, DeepStream 6.0 and later support the Ampere architecture.
Build high-performance vision AI apps and services using DeepStream SDK.