NVIDIA DRIVE™ OS is the reference operating system and associated software stack designed specifically for developing and deploying autonomous vehicle applications on DRIVE AGX-based hardware. NVIDIA DRIVE OS delivers a safe and secure execution environment for safety-critical applications, with services such as secure boot, security services, a firewall, and over-the-air (OTA) updates.

This foundational software stack for autonomous vehicles (AV) consists of an embedded real-time operating system (RTOS), NVIDIA Hypervisor, NVIDIA® CUDA® libraries, NVIDIA TensorRT™ and other components optimized to provide direct access to DRIVE AGX hardware acceleration engines.

The NVIDIA DRIVE™ OS Software Development Kit (SDK) consists of all required software, libraries, and tools to build, debug, profile, and deploy autonomous vehicle applications across the CPU, GPU, and other DRIVE AGX hardware acceleration engines. These development tools provide optimized workflows for parallel computing and deep learning development.

To maximize productivity, the NVIDIA DRIVE OS SDK leverages industry-standard tools, technologies, and APIs to provide a familiar, high-productivity development environment.

DRIVE OS QNX for Safety leverages BlackBerry QNX OS for Safety to satisfy Functional Safety (FuSa) requirements.

Learn More about NVIDIA DRIVE Safety



CUDA

CUDA® is a parallel computing platform and programming model developed by NVIDIA for general-purpose computing on GPUs. CUDA supports aarch64 (the 64-bit execution state introduced in the Armv8-A architecture), and with parallelizable compute workloads, developers can achieve dramatic speedups on NVIDIA DRIVE AGX.


TensorRT

NVIDIA TensorRT™ is a platform for high-performance deep learning inference. It includes a hardware-aware deep learning inference optimizer and runtime that delivers low latency and high throughput for deep learning inference applications. On NVIDIA DRIVE AGX, TensorRT targets the specialized deep learning accelerators (DLAs) on the Xavier SoC.

Autonomous vehicles are complex, requiring fast and accurate perception of their surroundings to make decisions in real time. This capability calls for high-performance AI compute to handle the wide range of tasks required for autonomous driving, including multi-stage sensor data processing, obstacle detection, and traffic sign and lane recognition.

The combination of CUDA and TensorRT enables DNNs running on the vehicle to process sensor inputs at high speed. This makes it possible for vehicles to process data from a variety of sensors in real time for Level 4 and Level 5 autonomous driving capability, which requires no supervision from a human driver.


NvMedia

NvMedia is a set of highly optimized APIs providing direct access to hardware-accelerated compute engines and sensors, including encoders/decoders, sensor input processing, image manipulation, and more.

NvMedia provides the following key benefits:
  • Capture of up to 16 camera sensors
  • Up to two processed outputs from the Xavier ISP hardware, plus raw captures
  • Proprietary auto exposure (AE) and auto white balance (AWB) algorithms optimized for NVIDIA-recommended camera modules
  • Heterogeneous sensor configuration within the same quad/trio/duo SERDES


NvStreams

NvStreams is a highly efficient API that provides access to high-speed data transports, enabling the complex processing pipelines required for autonomous vehicles.

NvStreams provides the following key benefits:
  • Efficient sharing of buffers between engines and their respective APIs
  • Synchronization and data flow control
  • Data dependencies: input and output buffer availability
  • Control dependencies: task ordering, non-interference, and time synchronization
  • Intra-thread, inter-thread, inter-process, and inter-VM operation

Developer Tools

Nsight™ Systems

This system-wide performance analysis tool is designed to visualize application algorithms, identify the largest opportunities for optimization, and tune applications to scale efficiently across the CPUs and GPUs of the DRIVE platform.

Nsight Graphics

A standalone developer tool that lets you debug, profile and export frames built with Direct3D (11, 12, DXR), Vulkan (1.1, NV Vulkan Ray Tracing Extension), OpenGL, OpenVR and the Oculus SDK.

Nsight Eclipse Edition

The Nsight IDE can be used to develop CUDA applications and create a homogeneous development environment for heterogeneous platforms. It enables users to seamlessly debug CPU and CUDA code, profile CUDA kernels, and efficiently refactor the code to take advantage of the GPU.

Nsight Compute

Nsight Compute is an interactive kernel profiler for CUDA applications that provides detailed performance metrics and API debugging via a user interface and command line tool. In addition, its baseline feature lets users compare profiling results against a saved baseline within the tool.


CUDA-GDB

CUDA-GDB provides a console-based debugging interface that can be used from the command line on the local system, or on a remote system over Telnet or SSH. It delivers a seamless experience for simultaneously debugging the CPU and GPU portions of an application.


CUDA-MEMCHECK

CUDA-MEMCHECK detects the source and cause of memory access errors in GPU code, making them quick to locate. It also reports runtime execution errors, identifying situations that could otherwise surface only as an "unspecified launch failure" when the application runs.


CUPTI

The NVIDIA CUDA Profiler Tools Interface (CUPTI) is a dynamic library that enables the creation of profiling and tracing tools that target CUDA applications. CUPTI provides a set of APIs targeted at ISVs creating profilers and other performance optimization tools.

Developing Software for DRIVE OS

DRIVE OS provides numerous paths for development, including desktop and embedded Linux, as well as embedded QNX. The DRIVE OS Linux and DRIVE OS QNX SDKs are used primarily for development and prototyping with the suite of development tools listed above. DRIVE OS QNX for Safety provides a safety-certified production platform leveraging BlackBerry QNX OS for Safety. Please see NVIDIA DRIVE Safety to learn more.

The following diagram demonstrates the DRIVE OS development flow:

Additional Resources: