NVIDIA VRWorks support for Unity

GameWorks, VRWorks, VR, Ansel

David Coombes, posted Nov 02 2016

Today NVIDIA and Unity Technologies announced a pathway for developers who want to use VRWorks to accelerate rendering in VR applications built with the Unity game engine. VR applications require stereo rendering at 90 fps to give users a smooth experience, which demands a great deal of GPU performance. VRWorks unlocks that performance so developers can concentrate on making great content.

Read more

Supercomputer Helps Understand How Jupiter Evolved

Research, Astronomy & Astrophysics, Cluster / Supercomputing, CUDA, Government / National Labs, Higher Education / Academia, Tesla

Nadeem Mohammad, posted Nov 02 2016

Researchers from ETH Zürich and the Universities of Zürich and Bern ran simulations of different scenarios on the GPU-accelerated Swiss National Supercomputing Centre (CSCS) to find out exactly how young giant planets form and evolve. “We pushed our simulations to the limits in terms of the complexity of the physics added to the…

Read more

AI-Powered Crib Cam Monitors Your Baby

News, Research, Cloud, DIGITS, Image Recognition, Internet / Communications, Internet of Things, Tesla

Nadeem Mohammad, posted Oct 31 2016

BabbyCam is a new deep learning baby monitor that recognizes your baby, monitors their emotions, and alerts you if their face is covered. A new parent himself, the camera’s developer was searching for a solution that could identify whether the infant was on its stomach, one of the…

Read more

Automated Analysis of Disaster Damage

Research, cuDNN, GeForce, Higher Education / Academia, Image Recognition

Nadeem Mohammad, posted Oct 28 2016

Researchers from Purdue University are using deep learning to dramatically reduce the time it takes engineers to assess damage to buildings after disasters. Engineers need to quickly document damage to buildings, bridges, and pipelines after a disaster. “These teams of engineers take a lot of photos, perhaps 10,000 images per day, and these…

Read more

AI-Powered ‘Nightmare Machine’ Generates Horrifying Images

Research, cuDNN, GeForce, Higher Education / Academia, Image Recognition, Machine Learning & Artificial Intelligence, Media & Entertainment

Nadeem Mohammad, posted Oct 27 2016

MIT researchers developed an algorithm trained to generate horrifying images in an attempt to find the scariest faces and locations possible, then relied on humans to judge which approach produces the freakiest images. Using TITAN X GPUs and cuDNN to train their deep learning models, the researchers applied the well-known style transfer technique and…

Read more