How to Optimize Self-Driving DNNs with TensorRT

In autonomous vehicle development, performance is one of the most important areas of evaluation for ensuring the highest level of safety.

High-performance, energy-efficient compute enables developers to balance the complexity, accuracy, and resource consumption of the deep neural networks (DNNs) that run in the vehicle. Getting the most out of hardware computing power requires optimized software.

NVIDIA TensorRT is an SDK for high-performance deep learning inference. It includes a deep learning inference optimizer and runtime that deliver low latency and high throughput for inference applications such as autonomous driving.

You can register for our upcoming webinar on Feb. 3 to learn how to use TensorRT to optimize autonomous driving DNNs for robust vehicle development.

Manage Massive Workloads

DNN-based workloads in autonomous driving are incredibly complex, requiring a variety of computation-intensive layer operations just to perform computer vision tasks.

Managing these operations requires optimized compute performance. However, the theoretical peak performance of the hardware doesn't always translate into performance that software can actually achieve. TensorRT ensures developers can tackle these massive workloads without leaving any performance on the table.

By performing optimization at every stage of processing, from tooling to DNN ingestion to inference, TensorRT ensures the most efficient operations possible.

The SDK is also seamless to use, allowing developers to toggle different settings depending on the platform. For example, lower precision (FP16 or INT8) can be used to achieve higher compute throughput and lower memory bandwidth on Tensor Cores. In addition, workloads can be shifted from the GPU to the deep learning accelerator (DLA).
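
To make these toggles concrete, here is a minimal sketch using the TensorRT Python API (names follow the TensorRT 8.x API) of what a builder configuration with reduced precision and DLA offload might look like. The DLA settings only apply on platforms that have DLA cores, such as NVIDIA DRIVE or Jetson hardware.

```python
import tensorrt as trt

# Minimal sketch of builder-configuration toggles, assuming TensorRT 8.x.
logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
config = builder.create_builder_config()

# Allow reduced precision: TensorRT may run layers in FP16 where it helps.
config.set_flag(trt.BuilderFlag.FP16)
# INT8 would additionally require a calibrator or per-tensor dynamic ranges:
# config.set_flag(trt.BuilderFlag.INT8)

# Shift supported layers to the DLA on platforms that have one,
# falling back to the GPU for layers the DLA can't run.
if builder.num_DLA_cores > 0:
    config.default_device_type = trt.DeviceType.DLA
    config.DLA_core = 0
    config.set_flag(trt.BuilderFlag.GPU_FALLBACK)
```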

Master the Model Backbone

This webinar will show TensorRT in action for AV development, tackling one of the most compute-intensive portions of the inference pipeline: the model backbone.

Many developers use off-the-shelf model backbones (for example, ResNets or EfficientNets) to get started on solving computer vision tasks such as object detection or semantic segmentation. However, these backbones aren’t always performance-optimized, creating bottlenecks down the line. TensorRT addresses these problems by optimizing trained neural networks to generate deployment-ready inference engines that maximize GPU inference performance and power efficiency.
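
As an illustration of that flow, here is a hedged sketch with the TensorRT Python API: parsing an ONNX export of a trained backbone (the file name backbone.onnx is a placeholder) and building a serialized engine ready for deployment. API names again follow TensorRT 8.x.

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
# Explicit-batch networks are required for ONNX models.
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

# "backbone.onnx" is a placeholder for a trained, exported backbone.
with open("backbone.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("Failed to parse the ONNX backbone")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # reduced precision, as above

# Build and serialize a deployment-ready inference engine.
engine_bytes = builder.build_serialized_network(network, config)
with open("backbone.engine", "wb") as f:
    f.write(engine_bytes)
```

At inference time, the serialized engine is simply deserialized by the TensorRT runtime and executed, so the optimization cost is paid once at build time rather than on every run.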

Learn from NVIDIA experts how to leverage these tools in autonomous vehicle development. Register today for the Feb. 3 webinar, plus catch up on past TensorRT and DriveWorks webinars.
