Researchers from the recently expanded Ford Research and Innovation Center in Palo Alto, California, developed a new approach that estimates a moving vehicle's position within a lane in real time with sub-centimeter accuracy. To achieve this level of precision, the researchers trained a deep neural network, aptly named DeepLanes, to process input images from two laterally mounted, down-facing cameras, each recording at an average of 100 frames per second.
The team trained their neural network on an NVIDIA DIGITS DevBox with the cuDNN-accelerated Caffe deep learning framework.
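For readers curious how such a network might be expressed in Caffe, the sketch below uses pycaffe's NetSpec API to define a small convolutional network that treats lane-marker localization as classification over discretized lateral positions in a down-facing camera image. The layer sizes, the LMDB path, and the number of position classes are illustrative assumptions for this sketch, not the published DeepLanes architecture.

```python
# Minimal pycaffe sketch of a lane-position classifier (illustrative only;
# layer sizes and the number of position classes are assumptions, not the
# published DeepLanes architecture).
import caffe
from caffe import layers as L, params as P

def lane_position_net(lmdb_path, batch_size=64, num_positions=101):
    """Small CNN that classifies which discretized lateral position
    (or 'no marker') the lane marking occupies in a down-facing image."""
    n = caffe.NetSpec()
    # Training data: cropped camera images with integer position labels.
    n.data, n.label = L.Data(batch_size=batch_size,
                             backend=P.Data.LMDB,
                             source=lmdb_path,
                             transform_param=dict(scale=1.0 / 255),
                             ntop=2)
    # Two conv/pool stages followed by fully connected layers.
    n.conv1 = L.Convolution(n.data, kernel_size=5, num_output=32,
                            weight_filler=dict(type='xavier'))
    n.pool1 = L.Pooling(n.conv1, kernel_size=2, stride=2, pool=P.Pooling.MAX)
    n.relu1 = L.ReLU(n.pool1, in_place=True)
    n.conv2 = L.Convolution(n.relu1, kernel_size=5, num_output=64,
                            weight_filler=dict(type='xavier'))
    n.pool2 = L.Pooling(n.conv2, kernel_size=2, stride=2, pool=P.Pooling.MAX)
    n.relu2 = L.ReLU(n.pool2, in_place=True)
    n.fc1 = L.InnerProduct(n.relu2, num_output=256,
                           weight_filler=dict(type='xavier'))
    n.relu3 = L.ReLU(n.fc1, in_place=True)
    # One output per candidate marker position, plus one "no marker" class.
    n.score = L.InnerProduct(n.relu3, num_output=num_positions + 1,
                             weight_filler=dict(type='xavier'))
    n.loss = L.SoftmaxWithLoss(n.score, n.label)
    return n.to_proto()

if __name__ == '__main__':
    # Write the generated prototxt; training itself would be launched
    # through DIGITS or `caffe train` with a matching solver file.
    with open('lane_net_train.prototxt', 'w') as f:
        f.write(str(lane_position_net('train_lmdb')))
```

Framing lane-marker localization as classification over discretized positions keeps the network simple and fast enough for the 100 frames/s camera stream, since a single forward pass produces the estimate directly.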
“Our unified framework approach is a simple, end-to-end solution that does not depend on tedious pre-processing, post-processing or hand-crafted features,” says the team of researchers. But it was only after a thorough evaluation of the results that they could proudly claim, “we are able to estimate the lane position in 99% of the cases with less than five pixel error”.
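The accuracy figure quoted above corresponds to a simple threshold metric: the fraction of frames whose predicted lane-marker position falls within five pixels of the labeled position. A minimal sketch of that computation follows; the array names and example values are hypothetical.

```python
import numpy as np

def within_threshold_accuracy(predicted, ground_truth, threshold_px=5):
    """Fraction of frames where the predicted lane-marker position
    deviates from the label by less than `threshold_px` pixels."""
    predicted = np.asarray(predicted, dtype=np.float64)
    ground_truth = np.asarray(ground_truth, dtype=np.float64)
    errors = np.abs(predicted - ground_truth)
    return float(np.mean(errors < threshold_px))

# Example: three of four predictions land within five pixels -> 0.75
print(within_threshold_accuracy([120, 118, 240, 300], [122, 125, 241, 299]))
```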
In the coming years, the team expects that its speedy and scalable DeepLanes technique can be applied to a variety of other automotive functions as well, from improved real-time navigation systems to fully automated driving features.
Read more >>
Ford Using Deep Learning for Lane Detection
Jun 28, 2016

Related resources
- GTC session: Improving Road Safety with AI-Based Stereo Camera Object Detection (Spring 2023)
- GTC session: Scaling Autonomous Vehicle Simulation with NVIDIA DRIVE Sim and Omniverse (Spring 2023)
- SDK: DRIVE Constellation
- Webinar: Integrating DNN Inference into Autonomous Vehicle Applications with NVIDIA DriveWorks SDK (EMEA & APAC)
- Webinar: Integrating DNN Inference into Autonomous Vehicle Applications with NVIDIA DriveWorks SDK (NA & EMEA)
- Webinar: Optimizing DNN Inference using CUDA and TensorRT on NVIDIA DRIVE AGX