NVIDIA DRIVE - Software
DRIVE Software Stack
DRIVE OS
NVIDIA DRIVE™ OS is a foundational software stack consisting of an embedded real-time OS (RTOS), a hypervisor, NVIDIA® CUDA® libraries, NVIDIA TensorRT™, and other modules that give you access to the hardware engines. DRIVE OS offers a safe and secure execution environment with services such as secure boot, security services, a firewall, and over-the-air updates. Plus, it offers a real-time environment with an RTOS and a hypervisor for Quality of Service (QoS). The RTOS, AUTOSAR, and hypervisor are ASIL-D components.
Features:
- Multiple guest operating systems
- 64-bit user space and runtime libraries
- NvMedia APIs for hardware-accelerated multimedia and camera input processing
- CUDA parallel computing platform
- Graphics APIs: OpenGL, OpenGL ES, EGL with EGLStream extensions
- Deep learning: TensorRT, cuDNN
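As a minimal sketch of the CUDA platform listed above, a kernel source file is compiled with nvcc, the CUDA compiler driver. The file name vector_add.cu and the target architecture are assumptions; the guard skips the build where no CUDA toolkit is installed.

```shell
# Hypothetical CUDA source file; adjust the name for your project.
SRC=vector_add.cu
if command -v nvcc >/dev/null 2>&1 && [ -f "$SRC" ]; then
  # -arch selects the target GPU architecture; pick the one for your platform.
  nvcc -O2 -arch=sm_70 -o vector_add "$SRC"
  ./vector_add
else
  echo "CUDA toolkit or $SRC not present; skipping build"
fi
```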
DriveWorks
NVIDIA DriveWorks SDK provides reference applications, tools, and a comprehensive library of modules that take advantage of the computing power of the DRIVE AGX platform.
Details:
- DRIVE Core: Acquire and process sensor data and interface with the vehicle.
- DRIVE Calibration: Measure and correct sensor calibration parameters and model the vehicle’s motion.
- DRIVE Networks: Use deep neural networks (DNNs) to detect obstacles, drivable paths, and conditions that require the vehicle to stop or slow.
DRIVE AV
DRIVE AV provides modules for perception, mapping, and planning that use the DriveWorks SDK.
Details:
- DRIVE Perception: Detect, track, and estimate distances using DNNs and sensor data for obstacle, path, and wait perception.
- DRIVE Mapping: Create and update HD maps and localize the vehicle to a map.
- DRIVE Planning: Plan and control the vehicle’s motion, including path, lane, and behavioral planning.
DRIVE IX
DRIVE IX provides algorithms to visualize the vehicle’s surroundings, perform AI-based driver monitoring, and deliver in-cabin assistance.
NVIDIA Developer Tools
Deep Learning Libraries
TensorRT
This high-performance neural network inference engine is built for production deployment of deep learning applications. Use TensorRT to optimize, validate, and deploy a trained neural network for inference in hyperscale data centers or on embedded and automotive platforms.
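A common entry point to TensorRT is the trtexec command-line tool that ships with it. The sketch below, assuming a hypothetical model.onnx file, builds a serialized engine from an ONNX model; the guard skips the step where trtexec or the model is unavailable.

```shell
# Hypothetical ONNX model file; trtexec ships with the TensorRT package.
MODEL=model.onnx
if command -v trtexec >/dev/null 2>&1 && [ -f "$MODEL" ]; then
  # Build an FP16 engine and serialize it for later inference-only deployment.
  trtexec --onnx="$MODEL" --saveEngine=model.plan --fp16
else
  echo "trtexec or $MODEL not available; skipping"
fi
```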
cuDNN
NVIDIA CUDA Deep Neural Network library (cuDNN) is a GPU-accelerated library of primitives for deep neural networks. cuDNN provides highly tuned implementations for standard routines such as forward and backward convolution, pooling, normalization, and activation layers.
Software Tools
Nsight™ Systems
This system-wide performance analysis tool is designed to visualize application algorithms, identify the largest opportunities for optimization, and tune applications to scale efficiently across the CPUs and GPUs of the DRIVE platform.
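Nsight Systems also has a command-line front end, nsys, that records a timeline for later inspection in the GUI. The application name below is a placeholder, and the step is skipped where nsys is not installed.

```shell
# Hypothetical application binary to profile.
APP=./my_cuda_app
if command -v nsys >/dev/null 2>&1 && [ -x "$APP" ]; then
  # Collect a system-wide timeline of CUDA and OS runtime activity.
  nsys profile --trace=cuda,osrt -o report "$APP"
else
  echo "nsys or $APP not available; skipping"
fi
```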
Nsight Graphics
This is a standalone developer tool that lets you debug, profile, and export frames built with Direct3D (11, 12, DXR), Vulkan (1.1, NV Vulkan Ray Tracing Extension), OpenGL, OpenVR, and the Oculus SDK.
Nsight Eclipse Edition
Use the Nsight IDE to develop CUDA applications and create a homogeneous development environment for heterogeneous platforms. Seamlessly debug CPU and CUDA code, profile CUDA kernels, and efficiently refactor the code to take advantage of the GPU.
Nsight Compute
Nsight Compute is an interactive kernel profiler for CUDA applications that provides detailed performance metrics and API debugging via a user interface and command line tool. In addition, its baseline feature allows users to compare results within the tool.
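The command-line tool mentioned above (ncu in recent releases) can capture a report that later serves as a baseline in the GUI. The binary name is a placeholder, and the step is guarded in case ncu is absent.

```shell
# Hypothetical application binary whose kernels will be profiled.
APP=./my_cuda_app
if command -v ncu >/dev/null 2>&1 && [ -x "$APP" ]; then
  # Collect the full metric set for every kernel and save a report
  # that can be opened in the Nsight Compute UI and compared as a baseline.
  ncu --set full -o kernel_profile "$APP"
else
  echo "ncu or $APP not available; skipping"
fi
```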
CUDA GDB
CUDA-GDB provides a console-based debugging interface for use from the command line on the local system or a remote system with Telnet or SSH access. It delivers a seamless experience for simultaneously debugging the CPU and GPU portions of the application.
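A minimal non-interactive session might look like the sketch below; the binary and kernel names are hypothetical, and the run is skipped where cuda-gdb is not installed.

```shell
# Hypothetical application binary containing a kernel named my_kernel.
APP=./my_cuda_app
if command -v cuda-gdb >/dev/null 2>&1 && [ -x "$APP" ]; then
  # Batch mode: set a breakpoint on the kernel, run, and print
  # the active CUDA threads when the breakpoint is hit.
  cuda-gdb --batch \
    -ex "break my_kernel" \
    -ex run \
    -ex "info cuda threads" \
    "$APP"
else
  echo "cuda-gdb or $APP not available; skipping"
fi
```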
CUDA-MEMCHECK
CUDA-MEMCHECK detects the source and cause of memory access errors in GPU code so they can be located quickly, and it reports runtime execution errors that may otherwise surface only as an “unspecified launch failure” while the application is running.
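A typical invocation wraps the application binary, as in this sketch; the binary name is a placeholder and the step is skipped where the tool is missing.

```shell
# Hypothetical application binary to check.
APP=./my_cuda_app
if command -v cuda-memcheck >/dev/null 2>&1 && [ -x "$APP" ]; then
  # Default memcheck tool: reports out-of-bounds and misaligned accesses;
  # --leak-check full additionally reports unfreed device allocations at exit.
  cuda-memcheck --leak-check full "$APP"
else
  echo "cuda-memcheck or $APP not available; skipping"
fi
```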
CUPTI
The NVIDIA CUDA Profiler Tools Interface (CUPTI) is a dynamic library that enables the creation of profiling and tracing tools that target CUDA applications. CUPTI provides a set of APIs targeted at ISVs creating profilers and other performance optimization tools.
CUDA nvprof
Profile your CUDA application with this command line profiling tool to quickly collect CUDA kernel performance data and hardware performance counters.
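A quick nvprof session can be sketched as follows; the binary name is a placeholder, and the run is guarded in case nvprof is not on the system.

```shell
# Hypothetical application binary to profile.
APP=./my_cuda_app
if command -v nvprof >/dev/null 2>&1 && [ -x "$APP" ]; then
  # Summary mode plus a hardware performance counter
  # (achieved occupancy) reported per kernel.
  nvprof --metrics achieved_occupancy "$APP"
else
  echo "nvprof or $APP not available; skipping"
fi
```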