AI Helps Amputee Play Piano for First Time Since 2012

Artificial Intelligence, cuDNN, GeForce, Healthcare & Life Sciences, Higher Education/Academia, Media & Entertainment

Nadeem Mohammad, posted Dec 13 2017

Researchers at the Georgia Institute of Technology developed an ultrasonic sensor using GPUs and deep learning that allows amputees to control individual fingers on a prosthetic hand.

Read more

Fast INT8 Inference for Autonomous Vehicles with TensorRT 3

Artificial Intelligence, Autonomous Vehicles, Automotive, Machine Learning & Artificial Intelligence, TensorRT

Nadeem Mohammad, posted Dec 12 2017

Autonomous driving demands safety and a high-performance computing solution to process sensor data with extreme accuracy.

Read more

Algorithm Successfully Diagnoses Pneumonia at Radiologist-Level Accuracy

Artificial Intelligence, cuDNN, GeForce, Healthcare & Life Sciences, Higher Education/Academia, Machine Learning & Artificial Intelligence

Nadeem Mohammad, posted Dec 12 2017

A team of Stanford researchers developed a deep learning-based algorithm that evaluates chest X-rays for signs of disease.

Read more

NVIDIA TITAN V Transforms the PC into AI Supercomputer

Artificial Intelligence, Features, CUDA, cuDNN, GeForce, Higher Education/Academia, Machine Learning & Artificial Intelligence

Nadeem Mohammad, posted Dec 08 2017

NVIDIA introduced TITAN V, the world’s most powerful GPU for the PC, driven by the world’s most advanced GPU architecture, NVIDIA Volta.

Read more

CUTLASS: Fast Linear Algebra in CUDA C++

Artificial Intelligence, cuDNN, Machine Learning & Artificial Intelligence

Nadeem Mohammad, posted Dec 07 2017

Matrix multiplication is a key computation within many scientific applications, particularly those in deep learning. Many operations in modern deep neural networks are either defined as matrix multiplications or can be cast as such.
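The claim that many deep-network operations reduce to matrix multiplication can be illustrated with a minimal NumPy sketch (shapes and names here are illustrative, not from the post): a fully connected layer is a single GEMM plus a broadcast bias add.

```python
import numpy as np

# Illustrative shapes: a batch of 4 inputs, 8 features in, 3 features out.
batch, in_features, out_features = 4, 8, 3

rng = np.random.default_rng(0)
x = rng.standard_normal((batch, in_features))        # input activations
w = rng.standard_normal((in_features, out_features)) # layer weights
b = rng.standard_normal(out_features)                # bias

# The whole layer is one matrix multiplication (GEMM) plus a bias add.
y = x @ w + b
print(y.shape)  # (4, 3)
```

Convolutions can be cast the same way (e.g. via im2col), which is why a fast GEMM like CUTLASS accelerates so much of a network's runtime.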

Read more

NVIDIA SDK Updated With New Releases of TensorRT, CUDA, and More

Accelerated Computing, Artificial Intelligence, Features, Robotics, Cloud, CUDA, cuDNN, Higher Education/Academia, Machine Learning & Artificial Intelligence, TensorRT, Tesla

Nadeem Mohammad, posted Dec 06 2017

At NIPS 2017, NVIDIA announced new software releases for deep learning and HPC developers. The latest SDK updates include new capabilities and performance optimizations for TensorRT and the CUDA Toolkit, as well as the new CUTLASS library.

Read more

RESTful Inference with the TensorRT Container and NVIDIA GPU Cloud

Artificial Intelligence, Features, Cloud, Data Center, Machine Learning & Artificial Intelligence, TensorRT

Nadeem Mohammad, posted Dec 05 2017

Once you have built, trained, tweaked, and tuned your deep learning model, you need an inference solution to deploy to a data center or to the cloud, one that delivers the maximum possible performance.

Read more

NVIDIA at NIPS 2017

Artificial Intelligence, Features, Cloud, GeForce, Machine Learning & Artificial Intelligence

Nadeem Mohammad, posted Dec 04 2017

NVIDIA is headed to NIPS (Neural Information Processing Systems) and we can’t wait to show you our latest AI innovations.

Read more

NVIDIA Deep Learning Inference Platform Performance Study

Artificial Intelligence, Cloud, Cluster/Supercomputing, CUDA, Machine Learning & Artificial Intelligence, TensorRT, Tesla

Nadeem Mohammad, posted Dec 04 2017

The NVIDIA deep learning platform spans from the data center to the network’s edge.

Read more

TensorRT 3: Faster TensorFlow Inference and Volta Support

Artificial Intelligence, Image Recognition, Machine Learning & Artificial Intelligence, Tesla

Nadeem Mohammad, posted Dec 04 2017

NVIDIA TensorRT™ is a high-performance deep learning inference optimizer and runtime that delivers low latency, high-throughput inference for deep learning applications.

Read more