
5 Cool Automotive Sessions at GTC 2019

NVIDIA GTC is the premier event for AI-driven automotive innovation. See the latest deep learning breakthroughs that are revolutionizing the transportation industry, from the DRIVE AutoPilot Level 2+ solution and cutting-edge simulation to open and flexible self-driving software.

Watch the video for a preview of some of the sessions you can attend.

Autonomous Parking on NVIDIA DRIVE


The average American spends 17 hours every year searching for parking. This session will detail how Volvo Cars is using computer vision and deep learning to help drivers find parking spots more efficiently. We'll cover how we're working with NVIDIA to deliver an autonomous parking system built on the NVIDIA DRIVE AGX Xavier platform.

Beyond Supervised Driving

Toyota Research Institute

The Toyota Research Institute is going beyond supervised learning for automated driving and exploring problems that affect research and development of long-term, large-scale autonomous robots. These problems include unsupervised domain adaptation, self-supervised learning, and robustness to edge cases. This session will dive into robotics systems, especially end-to-end vs. modular design and human-robot interaction. It will also include some of TRI’s related research directions, especially those around world-scale cloud robotics.

Fast Neural Network Inference with TensorRT on Autonomous Vehicles


Autonomous driving systems use various neural network models that require extremely accurate and efficient computation on GPUs. This session will outline how Zoox employs two strategies to improve the inference performance (i.e., latency) of trained neural network models without loss of accuracy: (1) inference with NVIDIA TensorRT, and (2) inference with lower precision (i.e., FP16 and INT8). We will share the lessons we've learned about neural network deployment with TensorRT and our current conversion workflow for working around its limitations.
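To make the lower-precision idea concrete, here is a minimal NumPy sketch of symmetric INT8 post-training quantization — an illustration of the general technique only, not Zoox's workflow or TensorRT's actual implementation (TensorRT handles calibration and kernel selection internally):

```python
import numpy as np

def quantize_int8(x):
    """Symmetric per-tensor quantization: map the largest magnitude to 127."""
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float tensor from the INT8 values."""
    return q.astype(np.float32) * scale

# Quantize a random weight tensor and measure the round-trip error.
weights = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_int8(weights)
recovered = dequantize(q, scale)
max_err = np.abs(weights - recovered).max()  # bounded by scale / 2
```

The storage drops 4x versus FP32, and the rounding error per element stays within half a quantization step — which is why well-calibrated INT8 inference can match full-precision accuracy in practice.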

The Road to 1,000 Meters: New Perception System Breakthrough


A large truck driving on an interstate at 65 miles per hour requires about 100 meters to execute an emergency stop, so perception range is critical for safety. This session will outline how TuSimple achieved an unprecedented autonomous truck perception range of 1,000 meters, more than three times what's possible with current lidar systems. We'll talk about the challenges of achieving this benchmark and why a new standard must be set for autonomous truck perception-system range.
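The stopping-distance figure above can be sanity-checked with constant-deceleration kinematics, d = v² / 2a. The short sketch below works backward from the stated numbers to the implied deceleration; it is a rough physics check, not TuSimple's data:

```python
# Constant-deceleration check: stopping distance d = v^2 / (2 * a),
# so the implied deceleration is a = v^2 / (2 * d).
v_mph = 65
v_ms = v_mph * 0.44704          # mph -> m/s, about 29.1 m/s
stop_dist_m = 100.0             # emergency-stop distance cited above
decel = v_ms ** 2 / (2 * stop_dist_m)  # implied deceleration, ~4.2 m/s^2
```

A deceleration of roughly 4.2 m/s² (about 0.43 g) is consistent with hard braking for a loaded heavy truck, which supports the claim that 100 meters is near the physical minimum and why a kilometer of perception range buys meaningful reaction time.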

The Grocery Run of the Future: How Camera-First Self-Driving Technology Is Changing Retail


Learn how AutoX's camera-first self-driving technology is changing grocery shopping with autonomous food delivery. This session will introduce AutoX's autonomous vehicle technology, which combines deep learning, robotics, and computer vision. We will explain how we use cameras, sensor fusion, large-scale high-definition 3D mapping, and simulation to build ultra-robust autonomous vehicle software and hardware. We have combined our proprietary camera-first technology with our in-house sensor fusion technology to recreate the way humans see and drive.

See all GTC content >>
