Supercomputer Helps Researchers Understand How Jupiter Evolved

Research, Astronomy & Astrophysics, Cluster / Supercomputing, cu, Government / National Labs, Higher Education / Academia, Tesla

Nadeem Mohammad, posted Nov 02 2016

Researchers from ETH Zürich and the Universities of Zürich and Bern simulated different scenarios using the computing power of the GPU-accelerated Swiss National Supercomputing Centre (CSCS) to find out exactly how young giant planets form and evolve. “We pushed our simulations to the limits in terms of the complexity of the physics added to the […]

Read more

Intelligent Micro-Machines Swarm Out of Sci-Fi, Into GTCs

Deep Learning, Embedded Computing, Artificial Intelligence, Events, GTC

Lan Malin, posted Nov 01 2016

Robots that count cattle from the sky. Smart cameras that fly themselves. Plug-and-play, GPU-powered brains that turn drones into robots that are as at home in your house as they are in the open skies. This isn’t science fiction. It’s the robots on display at this year’s lineup of global GPU Technology Conferences. From Europe to Asia, […]

Read more

AI-Powered Crib Cam Monitors Your Baby

News, Research, Cloud, DIGITS, Image Recognition, Internet / Communications, Internet of Things, Tesla

Nadeem Mohammad, posted Oct 31 2016

BabbyCam is a new deep learning baby monitor that recognizes your baby, monitors their emotions and alerts you if their face is covered. As a new parent himself, the camera’s developer was searching for a solution that could identify whether the infant was on its stomach, one of the […]

Read more

Automated Analysis of Disaster Damage

Research, cuDNN, GeForce, Higher Education / Academia, Image Recognition

Nadeem Mohammad, posted Oct 28 2016

Researchers from Purdue University are using deep learning to dramatically reduce the time it takes engineers to assess damage to buildings after disasters. Engineers need to quickly document the damage to buildings, bridges and pipelines after a disaster. “These teams of engineers take a lot of photos, perhaps 10,000 images per day, and these […]

Read more

AI-Powered ‘Nightmare Machine’ Generates Horrifying Images

Research, cuDNN, GeForce, Higher Education / Academia, Image Recognition, Machine Learning & Artificial Intelligence, Media & Entertainment

Nadeem Mohammad, posted Oct 27 2016

MIT researchers developed an algorithm trained to generate horrifying images in an attempt to find the scariest faces and locations possible, and then relied on humans to judge which approach makes the freakiest images. Using TITAN X GPUs and cuDNN to train their deep learning models, the researchers used the infamous style transfer technique and […]

Read more