Through a project called Brain4Cars, Stanford and Cornell researchers released a new architecture of Recurrent Neural Networks (RNNs) with Long Short-Term Memory (LSTM) units that predicts driving maneuvers several seconds in advance, enabling assistive cars to alert drivers before they make a dangerous maneuver. Maneuver anticipation complements existing Advanced Driver Assistance Systems (ADAS) by giving drivers more time to react to road situations and can thereby prevent many accidents.
Using a Tesla K40 GPU, the researchers trained their deep learning architecture in a sequence-to-sequence prediction manner, so that it explicitly learns to predict the future given only a partial temporal context. They also introduce a novel loss layer for anticipation which prevents over-fitting and encourages early anticipation. They use their architecture to anticipate driving maneuvers several seconds before they happen on a natural driving dataset of 1,180 miles, with context for maneuver anticipation coming from multiple sensors installed on the vehicle. The approach shows significant improvement over the state-of-the-art in maneuver anticipation, increasing precision from 77.4% to 90.5% and recall from 71.2% to 87.4%.
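The idea of a loss layer that encourages early anticipation can be illustrated with an exponentially time-weighted cross-entropy: per-frame prediction errors are weighted lightly early in the sequence and heavily near the maneuver. The following is a minimal pure-Python sketch of that idea, not the paper's exact formulation; the function name, the `scale` parameter, and the specific weighting `exp(-scale * (T - t))` are illustrative assumptions.

```python
import math

def anticipation_loss(frame_probs, true_class, scale=0.1):
    """Exponentially time-weighted negative log-likelihood (illustrative sketch).

    frame_probs: list over time steps; entry t is the list of class
        probabilities the RNN emits after seeing frames 1..t.
    true_class: index of the maneuver that eventually occurs.
    scale: hypothetical knob controlling how quickly the weight decays
        for frames far from the maneuver.

    The weight exp(-scale * (T - t)) is small for early frames and
    approaches 1 as t nears the maneuver at time T, so the model is
    tolerated for being unsure early but fully penalized for late
    mistakes -- rewarding predictions that become confident as early
    as the evidence allows without over-fitting the first frames.
    """
    T = len(frame_probs)
    loss = 0.0
    for t, probs in enumerate(frame_probs, start=1):
        weight = math.exp(-scale * (T - t))
        loss += -weight * math.log(probs[true_class])
    return loss
```

Under this weighting, misclassifying an early frame costs less than misclassifying a frame just before the maneuver, which is what pushes the network toward early, increasingly confident anticipation.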
Read the entire research paper >>
Related resources
- GTC session: Automotive Design Fueled by Generative AI
- GTC session: Accelerating the New Era of Autonomous Vehicles With Generative AI
- GTC session: Fueling the Future: How GM Motorsports Accelerates High Speed Racing with AI Physics
- SDK: cuVSLAM
- SDK: DRIVE Constellation
- Webinar: Accelerate AV Development with DGX Cloud and NVIDIA AI Enterprise