The Tokyo Institute of Technology (Tokyo Tech) announced that it will use NVIDIA’s accelerated computing platform to build Japan’s fastest AI supercomputer.
The new system, TSUBAME3.0, is expected to deliver more than twice the performance of its predecessor, TSUBAME2.5, and will be equipped with Pascal-based Tesla P100 GPUs. The supercomputer is designed to excel at AI computation, with an expected peak of more than 47 PFLOPS of AI horsepower. When operated concurrently with TSUBAME2.5, it is expected to deliver 64.3 PFLOPS, making it Japan’s highest-performing AI supercomputer. That combined figure would rank among the world’s 10 fastest systems on the latest TOP500 list.
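The combined figure follows from simple addition of the two systems’ AI (reduced-precision) peaks. The sketch below works that arithmetic; the per-system values of 47.2 and 17.1 PFLOPS are assumptions inferred from the article’s “more than 47 PFLOPS” and 64.3 PFLOPS figures, not officially stated breakdowns.

```python
# Hedged sketch: reconstructing the combined AI peak from the announced numbers.
# The exact per-system values are assumptions consistent with the article.
tsubame3_pflops = 47.2    # assumed TSUBAME3.0 AI peak ("more than 47 PFLOPS")
tsubame25_pflops = 17.1   # implied TSUBAME2.5 contribution (64.3 - 47.2)

combined = tsubame3_pflops + tsubame25_pflops
print(f"Combined AI peak: {combined:.1f} PFLOPS")  # ~64.3 PFLOPS, as reported
```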
Tokyo Tech’s Satoshi Matsuoka, a professor of computer science who is building the system, said, “NVIDIA’s broad AI ecosystem, including thousands of deep learning and inference applications, will enable Tokyo Tech to begin training TSUBAME3.0 immediately to help us more quickly solve some of the world’s once unsolvable problems.”