Artificial Intelligence Software Easily Generates Digital Art

Research, CUDA, cuDNN, GeForce, Higher Education / Academia, Image Recognition, Machine Learning & Artificial Intelligence, Media & Entertainment

Nadeem Mohammad, posted Sep 15 2016

Researchers from Adobe and the University of California, Berkeley developed software that automatically generates images inspired by the color and shape of a digital brushstroke. The software uses deep neural networks to learn the features of landscapes and architecture, like the appearance of grass or blue skies. Drawing a dark-colored, upside-down V triggers the AI to …

Read more

Rover Trained on GPUs Wins $750k at NASA’s Autonomous Robotics Challenge

News, Research, CUDA, cuDNN, Higher Education / Academia, Image Recognition, Machine Learning & Artificial Intelligence, Robotics, Tesla

Nadeem Mohammad, posted Sep 14 2016

The team from West Virginia University took home the largest prize awarded in the five-year-long NASA Sample Return Robot Challenge. The challenge began in 2012 with more than 50 teams; to qualify for the final level, each team's autonomous robot had to return a single sample within 30 minutes. Using CUDA, and a …

Read more

Using Virtual Reality at the IBM Watson Image Recognition Hackathon

News, Research, Gaming, GeForce, Image Recognition, Machine Learning & Artificial Intelligence, Media & Entertainment, Signal / Audio Processing, Virtual Reality

Nadeem Mohammad, posted Sep 13 2016

Five teams of developers gathered at the Silicon Valley Virtual Reality (SVVR) headquarters in California last month to learn about new features of IBM Watson's Visual Recognition service, such as the ability to train and retrain custom classes on top of the stock API, which open the service up to new and interesting use cases …

Read more

New Update to the NVIDIA Deep Learning SDK Now Helps Accelerate Inference

Features, News, Research, Automotive, Autonomous, Embedded, Image Recognition, Machine Learning & Artificial Intelligence, Signal / Audio Processing

Nadeem Mohammad, posted Sep 13 2016

The latest update to the NVIDIA Deep Learning SDK includes the NVIDIA TensorRT deep learning inference engine (formerly GIE) and the new NVIDIA DeepStream SDK. TensorRT delivers high-performance inference for production deployment of deep learning applications. The latest release delivers up to 3x more throughput, using 61% less memory, with new INT8-optimized …

Read more
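The memory and throughput gains above come from running inference in 8-bit integers instead of 32-bit floats. As a rough illustration of the general idea (this is a minimal NumPy sketch of symmetric per-tensor INT8 quantization, not NVIDIA's actual TensorRT calibration code, and the function names are hypothetical):

```python
import numpy as np

def quantize_int8(x):
    """Map float32 values onto int8 with a single symmetric scale factor."""
    max_abs = np.abs(x).max()
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float32 values from the int8 codes."""
    return q.astype(np.float32) * scale

x = np.array([-1.5, -0.2, 0.0, 0.7, 1.5], dtype=np.float32)
q, s = quantize_int8(x)
x_hat = dequantize(q, s)
# int8 storage is 4x smaller than float32; x_hat approximates x to within
# one quantization step (the scale s)
```

Storing activations and weights as int8 cuts memory traffic by 4x versus float32, and integer math runs at higher throughput on supporting GPUs, which is where speedups of the kind quoted above come from.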