GTC Silicon Valley-2019 (Session S9317): Sparse-to-Dense: Self-supervised Depth Completion from LiDAR and Monocular Camera
Fangchang Ma (Massachusetts Institute of Technology)
We'll present our research on self-supervised depth completion: the technique of predicting a dense depth image from only sparse depth measurements (e.g., from LiDAR), which has applications in robotics and autonomous driving. To address the depth completion problem, we develop a deep regression model that learns the mapping from a sparse depth map and a color image to a dense depth image. Our model was the winning approach in the 2018 KITTI depth completion competition. Beyond that work, we propose a self-supervised training framework for the depth completion network that requires only a sequence of color and sparse depth images, without any dense ground-truth depth labels, which are difficult to obtain. Our experiments demonstrate that the self-supervised framework outperforms a number of existing solutions trained with semi-dense ground-truth annotations.
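For illustration only, below is a minimal sketch of the kind of deep regression model described above: a small convolutional encoder-decoder that takes a color image plus a sparse depth map and regresses a dense depth map. The layer sizes, names, and architecture here are assumptions for demonstration, not the network presented in the talk.

```python
# Illustrative sketch only: a tiny RGB + sparse-depth -> dense-depth regression network.
# The actual architecture from the talk differs; shapes and layers here are assumptions.
import torch
import torch.nn as nn


class TinyDepthCompletionNet(nn.Module):
    """Maps a 4-channel input (RGB + sparse depth) to a 1-channel dense depth map."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(inplace=True),
            # ReLU on the output keeps predicted depth non-negative.
            nn.ConvTranspose2d(32, 1, kernel_size=4, stride=2, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, rgb, sparse_depth):
        # Concatenate color and sparse depth along the channel dimension.
        x = torch.cat([rgb, sparse_depth], dim=1)
        return self.decoder(self.encoder(x))


if __name__ == "__main__":
    net = TinyDepthCompletionNet()
    rgb = torch.rand(1, 3, 352, 1216)        # KITTI-sized color image
    sparse = torch.zeros(1, 1, 352, 1216)    # mostly-empty projected LiDAR depth
    sparse[:, :, ::11, ::7] = 20.0           # fake sparse measurements (meters)
    dense = net(rgb, sparse)
    print(dense.shape)                       # torch.Size([1, 1, 352, 1216])
```

In practice, a regression loss would compare the prediction against available depth (and, in the self-supervised setting described above, additional signals derived from the image sequence itself rather than dense ground-truth labels).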