For the first time, a computer has beaten a human professional at the game of Go — an ancient board game that has long been viewed as one of the greatest challenges for Artificial Intelligence.
Google DeepMind’s GPU-accelerated AlphaGo program beat Fan Hui, the European Go champion, five times out of five in tournament conditions.
Demis Hassabis, who leads DeepMind, noted in a recent article that DeepMind’s deep learning system performs well on a single computer equipped with multiple GPU accelerators, but for the match against Fan Hui, the researchers used a larger, distributed network of computers spanning about 170 GPUs. This distributed system both trained AlphaGo and played the actual match, drawing on the results of that training.
The team confirmed they will use the same setup when they take on the Go world champion in South Korea.
Rémi Coulom, the French researcher behind what was previously the world’s strongest artificially intelligent Go player, has spent the past decade trying to build a system capable of beating the world’s best human players — and now, he believes, that system has arrived. “I’m busy buying some GPUs,” he says.
Google AI Algorithm Masters Ancient Game of Go
Jan 28, 2016
