What is DriveOS SDK?

The DriveOS SDK is a comprehensive development toolkit that enables developers to create, test, and deploy applications directly on the DriveOS platform. Equipped with a range of libraries, APIs, and tools, the SDK lets developers build sophisticated applications that leverage NVIDIA’s GPU-accelerated architecture and deep learning capabilities, including the integration of large language models (LLMs). These LLMs can assist with tasks such as natural language processing for in-vehicle assistants, real-time data annotation, and even complex multi-modal interpretation of data from different sensors, enabling vehicles to better understand and respond to their environment. The SDK’s modular approach allows developers to selectively integrate LLM capabilities according to their application’s specific needs.

For developers, DriveOS offers several key advantages: high-performance computing, comprehensive safety features, and support for an expansive ecosystem of tools, LLMs among them. This LLM compatibility enables applications that respond dynamically to spoken commands, provide real-time contextual awareness, and even generate system insights through natural language, facilitating more natural interactions between humans and autonomous vehicles. DriveOS also scales smoothly, supporting deployment across a range of vehicle models and configurations. With DriveOS, developers can concentrate on innovating and enhancing autonomous vehicle technology, building on a stable, high-performance foundation equipped to handle the future of intelligent mobility.

Additional Platform Components

In addition to the Foundation services and the DriveOS-specific components, the following components are available separately for customizing platform development:

CUDA

CUDA® is a parallel computing platform and programming model developed by NVIDIA for general computing on graphics processing units (GPUs). With CUDA, developers can dramatically accelerate computing applications by harnessing the power of GPUs.

Consult the CUDA Samples provided as an educational resource.

Consult the CUDA Computing Platform Development Guide for general-purpose computing development.
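To illustrate the model CUDA is built around, here is a minimal pure-Python sketch of the thread-per-element idiom: each GPU thread computes one output element, selected by its index. The `saxpy_kernel` and `launch` names below are illustrative stand-ins, not part of the CUDA API, and the sequential loop merely models what the GPU executes in parallel.

```python
def saxpy_kernel(i, a, x, y, out):
    # In CUDA C++, `i` would come from blockIdx/threadIdx;
    # each thread writes exactly one element of `out`.
    out[i] = a * x[i] + y[i]

def launch(kernel, n, *args):
    # Stand-in for a CUDA grid launch: on a GPU, these
    # iterations run concurrently across thousands of threads.
    for i in range(n):
        kernel(i, *args)

x = [1.0, 2.0, 3.0, 4.0]
y = [10.0, 20.0, 30.0, 40.0]
out = [0.0] * 4
launch(saxpy_kernel, 4, 2.0, x, y, out)
print(out)  # [12.0, 24.0, 36.0, 48.0]
```

Because each output element is independent, the same code maps directly onto a real CUDA kernel, where the loop disappears and the index comes from the thread’s position in the grid.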

cuDNN

The NVIDIA CUDA® Deep Neural Network library (cuDNN) is a GPU-accelerated library of primitives for deep neural networks. cuDNN provides highly tuned implementations for standard routines such as forward and backward convolution, pooling, normalization, and activation layers. cuDNN is part of the NVIDIA Deep Learning SDK.

Consult the cuDNN Deep Neural Network Library of primitives for deep neural network development.
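As a point of reference for what these primitives compute, the following NumPy sketch implements a single-channel forward convolution (cross-correlation, as is conventional in deep learning) followed by a ReLU activation. cuDNN supplies highly tuned GPU implementations of routines like these and their backward passes; the function names here are illustrative, not cuDNN API calls.

```python
import numpy as np

def conv2d_forward(x, w):
    # Reference forward "convolution" (cross-correlation),
    # single channel, stride 1, no padding.
    H, W = x.shape
    kh, kw = w.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return out

def relu(x):
    # Activation primitive: elementwise max(0, x).
    return np.maximum(x, 0.0)

x = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 9.]])
w = np.array([[2., 0.],
              [0., -1.]])
print(relu(conv2d_forward(x, w)))
```

The naive loop above is quadratic in the image size per output element; cuDNN’s value lies in replacing it with algorithm choices (implicit GEMM, FFT, Winograd) tuned per GPU architecture.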

TensorRT

NVIDIA TensorRT™ is a high-performance deep learning inference optimizer and runtime that delivers low-latency, high-throughput inference for deep learning applications.

Consult the TensorRT Documentation for deep learning development.
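One example of the kind of graph optimization an inference optimizer performs is folding a batch-normalization layer into the preceding affine layer, eliminating an operation at inference time. The NumPy sketch below is a hypothetical illustration of that rewrite, not TensorRT code.

```python
import numpy as np

def fold_batchnorm(w, b, gamma, beta, mean, var, eps=1e-5):
    # Fold y = gamma * (w @ x + b - mean) / sqrt(var + eps) + beta
    # into a single affine layer y = w_f @ x + b_f, removing one
    # runtime op -- the kind of graph rewrite inference optimizers
    # such as TensorRT apply before deployment.
    scale = gamma / np.sqrt(var + eps)
    w_f = w * scale[:, None]
    b_f = (b - mean) * scale + beta
    return w_f, b_f

rng = np.random.default_rng(0)
w = rng.normal(size=(3, 4)); b = rng.normal(size=3)
gamma = rng.normal(size=3); beta = rng.normal(size=3)
mean = rng.normal(size=3); var = rng.random(3) + 0.5
x = rng.normal(size=4)

# Unfused: affine layer followed by batch norm.
y_ref = gamma * ((w @ x + b) - mean) / np.sqrt(var + 1e-5) + beta
# Fused: one matmul and one add, same result.
w_f, b_f = fold_batchnorm(w, b, gamma, beta, mean, var)
y_fused = w_f @ x + b_f
print(np.allclose(y_ref, y_fused))  # True
```

Fusions like this are valid only at inference time, when the batch-norm statistics are frozen, which is why they belong in a deployment-stage optimizer rather than a training framework.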