NVIDIA TensorRT for DRIVE OS - Last updated November 15, 2023


NVIDIA TensorRT 8.6.12 Release Notes for DRIVE OS
NVIDIA TensorRT is a C++ library that facilitates high-performance inference on NVIDIA GPUs. It is designed to work in conjunction with deep learning frameworks that are commonly used for training. TensorRT focuses specifically on running an already trained network quickly and efficiently on a GPU to produce a result, a process also known as inferencing. These release notes describe the key features, software enhancements and improvements, and known issues for the TensorRT 8.6.12 product package.

Inference Library

API Reference
This is the API Reference documentation for the NVIDIA DRIVE OS 6.0.9 TensorRT 8.6.12 library. This reference covers both the standard TensorRT release and the safe TensorRT release, and provides information on individual functions, classes, and methods.
NVIDIA TensorRT 8.6.12 Developer Guide for DRIVE OS
This NVIDIA TensorRT 8.6.12 Developer Guide for DRIVE OS demonstrates how to use the C++ and Python APIs to implement the most common deep learning layers. It shows how you can take an existing model built with a deep learning framework and build a TensorRT engine using the provided parsers. The Developer Guide also provides step-by-step instructions for common user tasks, such as creating a TensorRT network definition, invoking the TensorRT builder, serializing and deserializing engines, and feeding the engine with data to perform inference, all while using either the C++ or Python API.
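As an illustration of the workflow the Developer Guide walks through, the sketch below builds a serialized engine from an ONNX model with the TensorRT Python API. It is a minimal sketch, not part of these release notes: the file paths "model.onnx" and "model.engine" are placeholder names, and a real application would also allocate device buffers and run inference through an execution context.

```python
import tensorrt as trt

# Logger shared by the builder, parser, and runtime.
logger = trt.Logger(trt.Logger.WARNING)

# Create a network definition; TensorRT 8.x requires the explicit-batch flag
# when importing ONNX models.
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)

# Populate the network with the provided ONNX parser.
parser = trt.OnnxParser(network, logger)
with open("model.onnx", "rb") as f:  # placeholder model path
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("ONNX parse failed")

# Invoke the builder to produce a serialized engine (1 GiB workspace limit).
config = builder.create_builder_config()
config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 1 << 30)
serialized = builder.build_serialized_network(network, config)

# Serialize to disk; a later process can deserialize and run inference.
with open("model.engine", "wb") as f:
    f.write(serialized)

runtime = trt.Runtime(logger)
engine = runtime.deserialize_cuda_engine(serialized)
```

Deserializing a previously saved engine at startup, rather than rebuilding it, is the usual deployment pattern because engine builds can take minutes while deserialization takes milliseconds.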