
Autonomous AI Outraces Gran Turismo World Champs

Gran Turismo (GT) Sport competitors are facing a new, AI-supercharged contender thanks to the latest collaborative effort from Sony AI, Sony Interactive Entertainment (SIE), and Polyphony Digital Inc., the developers behind GT Sport. 

The autonomous AI racing agent, known as Gran Turismo Sophy (GT Sophy), recently beat the world’s best drivers in GT Sport. Published in Nature, the work introduces a novel deep reinforcement-learning platform used to create GT Sophy and could spur new AI-powered experiences for players across the globe.

“Sony’s purpose is to ‘fill the world with emotion, through the power of creativity and technology,’ and Gran Turismo Sophy is a perfect embodiment of this,” Kenichiro Yoshida, Chairman, President, and CEO of Sony Group Corporation, said in a press release.

“This group collaboration in which we have built a game AI for gamers is truly unique to Sony as a creative entertainment company. It signals a significant leap in the advancement of AI while also offering enhanced experiences to GT fans around the world.”

Smart gaming

AI is not new to gaming. In 2017, the AlphaZero program from DeepMind made news when it learned to play and conquer chess, shogi (Japanese chess), and Go using deep reinforcement learning (deep RL).

An offshoot of machine learning, deep RL in basic terms uses a computational agent that learns to make decisions by trial and error as it works to solve a problem. With deep learning folded into the algorithm, the agent can learn from very large amounts of data and choose actions that reach its goal efficiently.
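As a rough illustration of that trial-and-error loop, the sketch below runs tabular Q-learning on a toy five-state environment. Everything here (the environment, the states, and the rewards) is a hypothetical stand-in, not anything from GT Sophy.

```python
# A minimal sketch of trial-and-error reinforcement learning:
# tabular Q-learning on a toy corridor environment (hypothetical).
import random

N_STATES, N_ACTIONS = 5, 2
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

# Q-table: estimated return for each (state, action) pair.
Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def step(state, action):
    """Toy environment: action 1 moves toward the goal state."""
    next_state = min(state + 1, N_STATES - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward

for episode in range(1000):
    state = 0
    while state != N_STATES - 1:
        # Trial and error: mostly exploit what we know, sometimes explore.
        if random.random() < EPSILON:
            action = random.randrange(N_ACTIONS)
        else:
            action = max(range(N_ACTIONS), key=lambda a: Q[state][a])
        next_state, reward = step(state, action)
        # Nudge the estimate toward reward plus discounted future value.
        Q[state][action] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][action])
        state = next_state
```

Deep RL scales up the same idea by replacing the table with a neural network that generalizes across states.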

The AlphaZero program used an algorithm in which an untrained neural network played millions of games against itself, adjusting its play based on the outcomes.
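The self-play pattern can be sketched concretely on a much smaller game. Below, one policy plays both sides of Nim (take 1 to 3 sticks; whoever takes the last stick wins) and updates its value estimates from each game's outcome. This is only an illustration of the pattern: AlphaZero's actual training also pairs self-play with Monte Carlo tree search and a deep network.

```python
# Self-play on a toy game: one policy plays both sides of Nim and
# learns from wins and losses (illustrative only, not AlphaZero).
import random

PILE, ALPHA, EPSILON = 21, 0.1, 0.2
Q = {}  # (sticks_remaining, take) -> estimated value for the player to move

def choose(sticks):
    moves = [m for m in (1, 2, 3) if m <= sticks]
    if random.random() < EPSILON:
        return random.choice(moves)
    return max(moves, key=lambda m: Q.get((sticks, m), 0.0))

for game in range(20000):
    sticks, history = PILE, []
    while sticks > 0:
        move = choose(sticks)
        history.append((sticks, move))
        sticks -= move
    # The player who took the last stick won; alternate rewards backward.
    reward = 1.0
    for state, move in reversed(history):
        old = Q.get((state, move), 0.0)
        Q[state, move] = old + ALPHA * (reward - old)
        reward = -reward  # the previous turn belonged to the opponent
```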

Racing AI, however, poses more complicated inference needs, with innumerable variables from different cars, tracks, drivers, weather, and opponents. As one of the most realistic driving simulators, GT Sport uses authentic race car and track dimensions, reproducing racing environments and accounting for factors such as air resistance and tire friction.
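For a sense of the physics at play, the back-of-the-envelope sketch below applies two standard formulas: quadratic aerodynamic drag and friction-limited cornering speed. The coefficients are generic textbook placeholders, not GT Sport's internal values.

```python
# Two effects a realistic racing simulator must model, in rough form.
# All constants are generic physics placeholders (assumptions).
RHO = 1.225            # air density, kg/m^3
CD, AREA = 0.35, 2.0   # drag coefficient and frontal area (assumed)
MU, G = 1.4, 9.81      # racing-tire friction coefficient and gravity

def drag_force(v):
    """Aerodynamic drag: F = 0.5 * rho * Cd * A * v^2 (newtons)."""
    return 0.5 * RHO * CD * AREA * v ** 2

def max_corner_speed(radius):
    """Friction-limited cornering speed: v = sqrt(mu * g * r) (m/s)."""
    return (MU * G * radius) ** 0.5

print(drag_force(80))        # drag at 80 m/s (about 288 km/h)
print(max_corner_speed(50))  # top speed through a 50 m radius corner
```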

Reinforcing good behavior

To create a racing agent capable of adjusting to real-time factors, the team trained GT Sophy on three specific skills: race car control, racing tactics, and racing etiquette. According to the project’s website, the newly developed deep RL algorithm uses the latest reinforcement-learning techniques to train the racing agent with rewards or penalties based on its actions.
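A hypothetical sketch of that reward-and-penalty idea might reward forward progress along the track while penalizing wall contact, collisions with other cars, and cutting the course. The terms and weights below are invented for illustration; the Nature paper details GT Sophy's actual reward components.

```python
# Illustrative reward shaping for a racing agent (all weights assumed,
# not GT Sophy's actual reward function).
def race_reward(progress_m, hit_wall, hit_car, off_course):
    reward = 1.0 * progress_m   # meters gained along the track
    if hit_wall:
        reward -= 5.0           # car-control penalty
    if hit_car:
        reward -= 10.0          # etiquette penalty for causing contact
    if off_course:
        reward -= 3.0           # penalty for cutting the course
    return reward

print(race_reward(12.0, hit_wall=False, hit_car=True, off_course=False))
```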

“One of the advantages of using deep RL to develop a racing agent is that it eliminates the need for engineers to program how and when to execute the skills needed to win the race—as long as it is exposed to the right conditions, the agent learns to do the right thing by trial and error,” the researchers write in the study.

The team custom-built a web-based Distributed, Asynchronous Rollouts and Training (DART) platform to train GT Sophy on PlayStation 4 consoles using SIE’s worldwide cloud infrastructure. Researchers then used DART to collect training data and evaluate versions of the agent.
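In the same spirit, the sketch below shows the general distributed, asynchronous rollouts-and-training pattern: worker threads stand in for the consoles that generate gameplay, and a trainer consumes their experience from a shared queue. The names and details are illustrative assumptions, not DART internals.

```python
# A minimal sketch of asynchronous rollouts and training (assumed
# structure, not the DART platform itself).
import queue
import random
import threading

experience = queue.Queue(maxsize=1000)

def rollout_worker(worker_id, steps):
    """Stands in for one console streaming (state, action, reward) tuples."""
    for _ in range(steps):
        transition = (random.random(), random.randrange(4), random.random())
        experience.put(transition)

def trainer(total_updates, batch_size=32):
    """Asynchronously pulls batches; a real trainer would take a gradient step."""
    for update in range(total_updates):
        batch = [experience.get() for _ in range(batch_size)]
        mean_reward = sum(t[2] for t in batch) / batch_size
        if update % 25 == 0:
            print(f"update {update}: mean batch reward {mean_reward:.3f}")

# Eight workers produce exactly as many transitions as the trainer consumes.
workers = [threading.Thread(target=rollout_worker, args=(i, 400)) for i in range(8)]
for w in workers:
    w.start()
trainer(total_updates=100)  # 100 updates x 32 samples = 3,200 transitions
for w in workers:
    w.join()
```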

Using this system, the researchers specify an experiment, run it automatically, and view the data in a web browser. Each experiment uses a single trainer on a compute node running the cuDNN-accelerated TensorFlow deep learning framework with either an NVIDIA V100 GPU or half of an NVIDIA A100 GPU, coupled with around eight vCPUs and 55 GiB of memory.
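An experiment specification in that spirit might look something like the following. The field names are invented for illustration; only the resource figures echo the numbers above.

```python
# Hypothetical experiment specification (field names are assumptions;
# resource figures come from the article).
experiment = {
    "name": "gt_sophy_experiment_01",
    "trainer": {
        "framework": "tensorflow",  # cuDNN-accelerated TensorFlow
        "gpu": "V100",              # or half of an A100
        "vcpus": 8,
        "memory_gib": 55,
    },
    "rollouts": {
        "platform": "ps4",          # consoles on SIE's cloud infrastructure
    },
}
print(experiment["trainer"]["gpu"])
```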

“The system allows Sony AI’s research team to seamlessly run hundreds of simultaneous experiments while they explore techniques that would take GT Sophy to the next level,” according to the project’s website.

Supercharged GT Sophy 

In 2021, four of the world’s best GT Sport drivers competed against GT Sophy in two separate events. These competitions featured three racecourses and four GT Sophy agents, each in its own car. In its debut, GT Sophy excelled in timed trials but didn’t perform as well when challenging racers on the same track.

The team made improvements based on the results of the first race, upgrading the training regime, increasing network size, adjusting features and rewards, and enhancing the opponents. 

The result was a racing agent that could pass a human driver around a sharp corner, handle crowded starts, make slingshot passes out of the slipstream, and execute defensive maneuvers. The agent did all this while abiding by the subtle sportsmanship considerations human drivers understand and practice. It also bested top human drivers in timed trials and in an FIA-Certified Gran Turismo championship series.

The paper reports that GT Sophy learns to get around a track in just a few hours. In about 2 days, it can beat roughly 95% of human players. Give it 10 to 12 days, about 45,000 driving hours, and GT Sophy equals or exceeds the top drivers in the world.
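Those figures imply substantial parallelism: a quick sanity check shows that accumulating roughly 45,000 driving hours in 10 to 12 wall-clock days requires on the order of 150 to 190 game instances running at once.

```python
# Rough sanity check of the training scale reported above.
driving_hours = 45_000
for days in (10, 12):
    parallel = driving_hours / (days * 24)  # sim hours per wall-clock hour
    print(f"{days} days -> ~{parallel:.0f} parallel game instances")
```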

Beyond its racing prowess, GT Sophy aims to make GT Sport more enjoyable, competitive, and educational. Some of the experts who competed against GT Sophy reported learning new approaches to turns and driving techniques.

The researchers also see the potential for deep RL to improve real-world applications of systems such as collaborative robotics, drones, or autonomous vehicles. 

The approximate Python code is available in the supplementary information section of the study.

Read the paper in Nature. >>
