After surprising some of the top AI researchers with the first Tesla V100 GPU accelerators last month at CVPR, NVIDIA struck again by handing out 15 more to researchers at the International Conference on Machine Learning (ICML) in Sydney.
Given out at a meetup for participants in our NVIDIA AI Labs (NVAIL) program at ICML, and signed by NVIDIA founder and CEO Jensen Huang, the V100s are the world’s most powerful GPUs, offering more than 100 teraflops of deep learning performance.
“We are going to melt this with our algorithms, then we are going to melt the world,” said the University of Washington’s Pedro Domingos.
Recipients of the V100s at the meetup included representatives from Carnegie Mellon University, the Chinese Academy of Sciences (CAS), IDSIA – Swiss AI Lab, Massachusetts Institute of Technology, MPI Tübingen, Montreal Institute for Learning Algorithms, National Taiwan University, Oxford University, Peking University, Stanford University, Tsinghua University, University of California, Berkeley, University of Tokyo, University of Toronto, and University of Washington.
“We are very much reliant on NVIDIA technology,” said Aaron Courville, of the Montreal Institute for Learning Algorithms. “More GPUs is always a very good thing, and very important for us.”
Another surprise at the meetup was the launch of the NVIDIA Pioneering Research Awards, a program celebrating the acceptance of NVAIL partners’ research papers at conferences such as ICML. Each award recipient received a plaque featuring the first page of their paper. Inaugural winners include:
- Carnegie Mellon University: Improved Variational Autoencoders for Text Modeling using Dilated Convolutions
- IDSIA/Istituto Dalle Molle di Studi sull’Intelligenza Artificiale: Recurrent Highway Networks
- Massachusetts Institute of Technology: Coresets for Vector Summarization with Applications to Network Graphs
- Montreal Institute for Learning Algorithms: A Closer Look at Memorization in Deep Networks
- Tsinghua University: Identify the Nash Equilibrium in Static Games with Random Payoffs
- University of California, Berkeley: Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
- University of Tokyo: Asymmetric Tri-training for Unsupervised Domain Adaptation
- University of Toronto: Deep Spectral Clustering Learning