
DRIVE Labs: How Multi-View LidarNet Presents Rich Perspective for Self-Driving Cars

Editor’s note: This is the latest post in our NVIDIA DRIVE Labs series, which takes an engineering-focused look at individual autonomous vehicle challenges and how NVIDIA DRIVE addresses them. Catch up on all of our automotive posts here.

By Neda Cvijetic

Lidar can give autonomous vehicles laser focus. By bouncing laser signals off the surrounding environment, these sensors can enable a self-driving car to construct a detailed and accurate 3D picture of what’s around it.

However, traditional methods for processing lidar data pose significant challenges. These include a limited ability to detect and classify many types of objects, scenes, and weather conditions, as well as shortfalls in performance and robustness.

In this DRIVE Labs episode, we introduce our multi-view LidarNet deep neural network, which uses multiple perspectives, or views, of the scene around the car to overcome the traditional limitations of lidar-based processing.

AI, in the form of DNN-based approaches, has become the go-to solution for addressing traditional lidar perception challenges.

One AI method uses lidar DNNs that perform top-down or “bird’s eye view” (BEV) object detection on lidar point cloud data. A virtual camera is positioned at some height above the scene, similar to a bird flying overhead, and the 3D coordinates of each data point are reprojected into that virtual camera view via orthogonal projection.
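To make the orthogonal projection concrete, here is a minimal sketch in Python/NumPy, not NVIDIA’s implementation: it flattens an (N, 3) point cloud into a top-down grid by dropping the height axis for indexing and keeping it as the cell value. The function name, grid extents, resolution, and max-height encoding are all illustrative assumptions.

```python
import numpy as np

def lidar_to_bev(points, x_range=(-50.0, 50.0), y_range=(-50.0, 50.0),
                 resolution=0.25):
    """Orthographically project an (N, 3) lidar point cloud onto a
    top-down (bird's eye view) grid.

    Each cell stores the maximum point height that falls into it, one
    simple BEV encoding (an assumption here, not the LidarNet design).
    Ranges are in meters; resolution is meters per cell.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]

    # Keep only points inside the grid extent.
    mask = (x >= x_range[0]) & (x < x_range[1]) & \
           (y >= y_range[0]) & (y < y_range[1])
    x, y, z = x[mask], y[mask], z[mask]

    # Orthogonal projection: drop z for the cell index, keep it as the value.
    cols = ((x - x_range[0]) / resolution).astype(np.int32)
    rows = ((y - y_range[0]) / resolution).astype(np.int32)

    n_cols = int((x_range[1] - x_range[0]) / resolution)
    n_rows = int((y_range[1] - y_range[0]) / resolution)
    bev = np.full((n_rows, n_cols), -np.inf, dtype=np.float32)

    # Record the maximum height per cell, then zero out empty cells.
    np.maximum.at(bev, (rows, cols), z)
    bev[np.isinf(bev)] = 0.0
    return bev

# Usage: a random cloud standing in for a real lidar sweep.
cloud = np.random.uniform(-50, 50, size=(100_000, 3))
bev_image = lidar_to_bev(cloud)
print(bev_image.shape)  # (400, 400)
```

The resulting 2D grid can be treated like an image, which is what makes BEV detection attractive: standard image-style convolutional detectors can be applied to it directly.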

Read the full post, Laser Focused: How Multi-View LidarNet Presents Rich Perspective for Self-Driving Cars, on the NVIDIA Blog.
