In this week’s edition of the Developer Top 5, we revisit the top stories of the week, from GPU-accelerated weather forecasts, Stephen Curry, and robots to new GPU availability on Google Cloud.
Watch via the link below.
5 – New GPU-accelerated Weather Forecasting System Dramatically Improves Accuracy
At CES in Las Vegas, Nevada, The Weather Company, an IBM subsidiary, announced a new GPU-accelerated global weather forecasting system that uses crowdsourced data to deliver hourly weather updates worldwide.
Read more>
4 – AI Generates Images of a Finished Meal Using Only a Written Recipe
In computer vision, generating an image from a long piece of text is a difficult problem. To help accelerate research in this field, a team from Tel Aviv University in Israel developed a deep learning-based system that can automatically generate a picture of a finished meal from a simple text-based recipe.
Read more>
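To give a rough sense of the general idea of text-conditioned image generation (this is not the Tel Aviv team’s architecture), the following toy PyTorch sketch maps a recipe embedding plus noise to an RGB image tensor. All names, layer sizes, and dimensions here are illustrative assumptions; a real system would use a trained text encoder and a much larger generator.

```python
# Toy sketch of text-conditioned image generation (illustrative only).
import torch
import torch.nn as nn

class RecipeToImageGenerator(nn.Module):
    def __init__(self, text_dim=256, noise_dim=64):
        super().__init__()
        # Fuse the recipe embedding with random noise, then upsample to a 64x64 RGB image.
        self.net = nn.Sequential(
            nn.Linear(text_dim + noise_dim, 128 * 8 * 8),
            nn.ReLU(),
            nn.Unflatten(1, (128, 8, 8)),
            nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),  # 8x8 -> 16x16
            nn.ReLU(),
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),   # 16x16 -> 32x32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, kernel_size=4, stride=2, padding=1),    # 32x32 -> 64x64
            nn.Tanh(),
        )

    def forward(self, recipe_embedding, noise):
        return self.net(torch.cat([recipe_embedding, noise], dim=1))

# Example usage with a placeholder recipe embedding; in practice this vector
# would come from a text encoder run over the recipe.
generator = RecipeToImageGenerator()
recipe_embedding = torch.randn(1, 256)
noise = torch.randn(1, 64)
image = generator(recipe_embedding, noise)
print(image.shape)  # torch.Size([1, 3, 64, 64])
```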
3 – Transforming Paintings and Photos Into Animations With AI
Researchers from the University of Washington and Facebook recently published a paper describing a deep learning-based system that can transform still images and paintings into animations. The algorithm, called Photo Wake-Up, uses a convolutional neural network to animate a person or character in 3D from a single still image.
Read more>
2 – NVIDIA Opens Robotics Research Lab in Seattle
NVIDIA is opening a new robotics research lab in Seattle, near the University of Washington campus. The lab is led by Dieter Fox, senior director of robotics research at NVIDIA and a professor in the UW Paul G. Allen School of Computer Science and Engineering.
Read more>
1 – Google Cloud Makes NVIDIA GPUs Available for the First Time in Brazil, India, Tokyo and Singapore
“The T4 joins our NVIDIA K80, P4, P100, and V100 GPU offerings, providing customers with a wide selection of hardware-accelerated compute options,” said Chris Kleban, Product Manager at Google Cloud. “The T4 is the best GPU in our product portfolio for running inference workloads. Its high-performance characteristics for FP16, INT8, and INT4 allow you to run high-scale inference with flexible accuracy/performance tradeoffs that are not available on any other accelerator.”
Read more>
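To make the FP16 point in the quote concrete, here is a minimal, hypothetical sketch of half-precision inference in PyTorch on a CUDA GPU such as the T4. The model, batch size, and input data are placeholders chosen for the example, not part of Google Cloud’s announcement.

```python
# Minimal sketch: FP16 inference on an NVIDIA GPU (such as a T4).
# Assumes PyTorch and torchvision are installed and a CUDA device is available.
import torch
import torchvision.models as models

device = torch.device("cuda")

# Load a pretrained model and cast its weights to half precision (FP16).
model = models.resnet50(pretrained=True).to(device).half().eval()

# A dummy batch of images; a real workload would feed preprocessed data.
batch = torch.randn(8, 3, 224, 224, device=device, dtype=torch.float16)

with torch.no_grad():
    logits = model(batch)            # FP16 forward pass runs on the GPU's Tensor Cores
    predictions = logits.argmax(dim=1)

print(predictions.cpu().tolist())
```

Lower-precision INT8 inference typically requires an additional quantization and calibration step, for example with NVIDIA TensorRT, which is beyond the scope of this sketch.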