NVIDIA DRIVE - Software
NVIDIA DRIVE™ OS is a foundational software stack consisting of an embedded Real-Time OS (RTOS), hypervisor, NVIDIA® CUDA® libraries, NVIDIA TensorRT™, and other modules that give you access to the hardware engines. DRIVE OS offers a safe and secure execution environment with services such as secure boot, security services, a firewall, and over-the-air updates. It also offers a real-time environment with an RTOS and a hypervisor for Quality of Service (QoS). The RTOS, AUTOSAR, and hypervisor are ASIL-D components.
- Multiple guest operating systems.
- 64-bit user space and runtime libraries.
- NvMedia APIs for hardware-accelerated multimedia and camera input processing.
- CUDA parallel computing platform.
- Graphics APIs: OpenGL, OpenGL ES, EGL with EGLStream extensions.
- Deep learning libraries: TensorRT, cuDNN.
NVIDIA® DriveWorks SDK enables developers to implement AV solutions by providing a comprehensive library of modules, developer tools, and reference applications that take advantage of the computing power of the NVIDIA DRIVE™ platform. It is designed to achieve the full throughput limits of the computer, enabling real-time self-driving applications.
- Efficient utilization of the many processors inside the NVIDIA DRIVE™ platform.
- Optimization of data communication formats between hardware engines.
- Minimization of data copies.
- Implementation and utilization of the most efficient algorithms.
NVIDIA DRIVE™ AV provides perception, mapping, and planning modules built on the DriveWorks SDK:
- DRIVE Perception: Detect, track, and estimate distances using DNNs and sensor data for obstacle, path, and wait perception.
- DRIVE Mapping: Create and update HD maps and localize the vehicle to a map.
- DRIVE Planning: Plan and control the vehicle’s motion, including path, lane, and behavioral planning.
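To make the planning layer concrete, here is a toy, cost-based lane-selection sketch in the spirit of behavioral planning. This is not the DRIVE Planning implementation; the function, its parameters, and the cost model are all hypothetical illustrations.

```python
# Toy illustration of cost-based lane selection, loosely in the spirit of a
# behavioral planner. NOT the DRIVE Planning algorithm; all names and the
# cost model are hypothetical.

def choose_lane(current_lane, num_lanes, obstacle_gaps, lane_change_penalty=5.0):
    """Pick the lane with the lowest cost.

    obstacle_gaps[i] is the free distance (m) to the nearest obstacle ahead
    in lane i; a larger gap is better. Changing lanes adds a fixed penalty
    so the planner doesn't weave for marginal gains.
    """
    def cost(lane):
        c = 1000.0 / (obstacle_gaps[lane] + 1.0)  # prefer more free space
        if lane != current_lane:
            c += lane_change_penalty              # discourage churn
        return c

    return min(range(num_lanes), key=cost)

# Staying put: the adjacent lanes are only slightly more open.
print(choose_lane(1, 3, [40.0, 35.0, 42.0]))  # -> 1
# Changing: lane 2 is far more open than the blocked current lane.
print(choose_lane(0, 3, [8.0, 12.0, 60.0]))   # -> 2
```

A real behavioral planner weighs many more terms (traffic rules, comfort, prediction of other agents), but the structure — enumerate candidate maneuvers, score them, pick the minimum-cost one — is the same.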
NVIDIA DRIVE™ IX is an open software platform that provides the full cabin-interior sensing capabilities needed to enable innovative AI cockpit solutions. DRIVE IX provides APIs and DNNs to realize features such as advanced driver monitoring, occupant monitoring, AR/VR visualization, and natural-language interaction between the vehicle and its occupants.
NVIDIA Developer Tools
Deep Learning Libraries
NVIDIA TensorRT™ is a high-performance neural network inference engine for production deployment of deep learning applications. Use TensorRT to optimize, validate, and deploy a trained neural network for inference to hyperscale data centers, embedded platforms, or automotive product platforms.
NVIDIA CUDA Deep Neural Network library (cuDNN) is a GPU-accelerated library of primitives for deep neural networks. cuDNN provides highly tuned implementations for standard routines such as forward and backward convolution, pooling, normalization, and activation layers.
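To show what these primitives compute, here is a minimal CPU reference in pure Python of three of the routines named above: forward convolution, a ReLU activation, and 2x2 max pooling. This is an illustration only; cuDNN itself is a C library operating on GPU tensors, and these loops are written for clarity, not performance.

```python
# CPU reference implementations of primitive layers that cuDNN provides
# highly tuned GPU versions of. Pure Python for clarity, not performance.

def conv2d(image, kernel):
    """'Valid' 2D cross-correlation: the forward convolution primitive."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    return [[sum(image[y + i][x + j] * kernel[i][j]
                 for i in range(kh) for j in range(kw))
             for x in range(iw - kw + 1)]
            for y in range(ih - kh + 1)]

def relu(fmap):
    """Element-wise ReLU activation layer."""
    return [[max(0.0, v) for v in row] for row in fmap]

def max_pool2x2(fmap):
    """2x2 max-pooling layer with stride 2."""
    return [[max(fmap[2*y][2*x],     fmap[2*y][2*x + 1],
                 fmap[2*y + 1][2*x], fmap[2*y + 1][2*x + 1])
             for x in range(len(fmap[0]) // 2)]
            for y in range(len(fmap) // 2)]

image = [[1.0, 2.0, 0.0, 1.0, 3.0],
         [0.0, 1.0, 2.0, 1.0, 0.0],
         [2.0, 0.0, 1.0, 3.0, 1.0],
         [1.0, 1.0, 0.0, 2.0, 2.0],
         [0.0, 2.0, 1.0, 0.0, 1.0]]
edge = [[1.0, 0.0], [0.0, -1.0]]  # tiny 2x2 example kernel

out = max_pool2x2(relu(conv2d(image, edge)))
print(out)  # -> [[0.0, 1.0], [1.0, 1.0]]
```

On the GPU, cuDNN fuses and tiles these operations across thousands of threads; the point here is only the math each layer performs.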
NVIDIA Nsight™ Systems is a system-wide performance analysis tool designed to visualize application algorithms, identify the largest opportunities for optimization, and tune applications to scale efficiently across the CPUs and GPUs on the DRIVE platform.
NVIDIA Nsight™ Graphics is a standalone developer tool that lets you debug, profile, and export frames built with Direct3D (11, 12, DXR), Vulkan (1.1, NV Vulkan Ray Tracing Extension), OpenGL, OpenVR, and the Oculus SDK.
Nsight Eclipse Edition
Use the Nsight IDE to develop CUDA applications and create a homogeneous development environment for heterogeneous platforms. Seamlessly debug CPU and CUDA code, profile CUDA kernels, and efficiently refactor the code to take advantage of the GPU.
Nsight Compute is an interactive kernel profiler for CUDA applications that provides detailed performance metrics and API debugging via a user interface and command line tool. In addition, its baseline feature allows users to compare results within the tool.
CUDA-GDB provides a console-based debugging interface for use from the command line on the local system or on a remote system with Telnet or SSH access. It delivers a seamless experience for simultaneously debugging both the CPU and GPU portions of an application.
CUDA-MEMCHECK detects the source and cause of memory access errors in GPU code so they can be located quickly, and it reports runtime execution errors to identify situations that might otherwise surface only as an "unspecified launch failure" when the application runs.
The NVIDIA CUDA Profiler Tools Interface (CUPTI) is a dynamic library that enables the creation of profiling and tracing tools that target CUDA applications. CUPTI provides a set of APIs targeted at ISVs creating profilers and other performance optimization tools.
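CUPTI itself is a C API built around subscribing to enter/exit callbacks and activity records. As a language-agnostic sketch of that callback-driven tracing pattern — the kind of tool CUPTI lets ISVs build — here is a minimal Python analogy; the class and function names are hypothetical and nothing below calls CUPTI.

```python
# Minimal sketch of the callback-driven tracing pattern that CUPTI enables.
# CUPTI is a C API; this Python analogy only illustrates the structure:
# a subscriber is called back on entry and exit of instrumented calls and
# accumulates timing records, like a tiny profiler.
import time
from collections import defaultdict

class Tracer:
    """A toy 'profiler' that subscribes to enter/exit events around calls."""
    def __init__(self):
        self.totals = defaultdict(float)  # accumulated seconds per function
        self.counts = defaultdict(int)    # invocation counts per function

    def instrument(self, fn):
        """Wrap fn so the tracer is called back on entry and exit."""
        def wrapper(*args, **kwargs):
            start = time.perf_counter()        # 'enter' callback
            try:
                return fn(*args, **kwargs)
            finally:                            # 'exit' callback
                self.totals[fn.__name__] += time.perf_counter() - start
                self.counts[fn.__name__] += 1
        return wrapper

tracer = Tracer()

@tracer.instrument
def kernel_launch(n):                  # stand-in for a traced API call
    return sum(i * i for i in range(n))

kernel_launch(10_000)
kernel_launch(20_000)
print(tracer.counts["kernel_launch"])  # -> 2
```

A real CUPTI-based profiler registers such callbacks for CUDA runtime and driver API calls and additionally collects asynchronous activity records (kernel timings, memcpy sizes) from the GPU.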
Profile your CUDA application with the nvprof command-line profiling tool to quickly collect CUDA kernel performance data and hardware performance counters.