The Tokyo Institute of Technology announced it will use NVIDIA’s accelerated computing platform to build Japan’s fastest AI supercomputer.
TSUBAME3.0 is expected to deliver more than twice the performance of its predecessor, TSUBAME2.5, and will be equipped with Pascal-based Tesla P100 GPUs. The system will excel at AI computation, with an expected output of more than 47 PFLOPS of AI horsepower. When operated concurrently with TSUBAME2.5, it is expected to deliver 64.3 PFLOPS, making it Japan’s highest-performing AI supercomputer. It would rank among the world’s 10 fastest systems according to the latest TOP500 list.
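Taken together, the two figures above imply how much of the combined 64.3 PFLOPS comes from the older machine; a minimal arithmetic sketch (using only the numbers stated in this article, with the TSUBAME2.5 share derived by subtraction rather than quoted from NVIDIA):

```python
# AI performance figures cited in the article, in PFLOPS
tsubame3_ai = 47.0     # TSUBAME3.0 alone (lower bound, "more than 47")
combined_ai = 64.3     # TSUBAME3.0 and TSUBAME2.5 operated concurrently

# Implied contribution of TSUBAME2.5 when run alongside TSUBAME3.0
tsubame25_implied = round(combined_ai - tsubame3_ai, 1)
print(tsubame25_implied)  # roughly 17.3 PFLOPS attributable to TSUBAME2.5
```

Since 47 PFLOPS is a lower bound, the derived 17.3 PFLOPS share for TSUBAME2.5 is correspondingly an upper bound.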
Tokyo Tech’s Satoshi Matsuoka, a professor of computer science who is building the system, said, “NVIDIA’s broad AI ecosystem, including thousands of deep learning and inference applications, will enable Tokyo Tech to begin training TSUBAME3.0 immediately to help us more quickly solve some of the world’s once unsolvable problems.”
Tokyo Tech Building Fastest AI Supercomputer With NVIDIA Technology
Feb 17, 2017

Related resources
- GTC session: How to Design an AI Supercomputer for Fast Distributed Training, and its Use Cases (Spring 2023)
- GTC session: Next Generation AI Enabled Edge Systems Delivering Unparalleled Performance (Presented by Supermicro) (Spring 2023)
- GTC session: Advances in Accelerated Computing for AI and Scientific Computing (Spring 2023)
- Webinar: How the New NVIDIA Metropolis Program Will Supercharge Your Business
- Webinar: NVIDIA Inception Israel Startups Webinar
- Webinar: S22704 - More Powerful, Secure AI at the Edge with NVIDIA EGX