Recursive Neural Networks with PyTorch

Features, Deep Learning, LSTM, Natural Language Processing, Python, PyTorch, Torch

Nadeem Mohammad, posted Apr 09 2017

From Siri to Google Translate, deep neural networks have enabled breakthroughs in machine understanding of natural language. Most of these models treat language as a flat sequence of words or characters, and use a kind of model called a recurrent neural network (RNN) to process this sequence. But many linguists think that language is best […]
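To make the "flat sequence" idea concrete, here is a minimal sketch (not code from the post) of a recurrent model reading a sentence word by word in PyTorch; the vocabulary size, dimensions, and variable names are assumptions for illustration only:

```python
# Hypothetical sketch: an LSTM reads a sentence as a flat sequence of word
# embeddings, the sequential baseline the post contrasts with tree-structured
# (recursive) models.
import torch
import torch.nn as nn

vocab_size, embed_dim, hidden_dim = 1000, 50, 100   # assumed toy sizes

embed = nn.Embedding(vocab_size, embed_dim)
lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)

tokens = torch.randint(0, vocab_size, (1, 7))        # one sentence of 7 word ids
outputs, (h_n, c_n) = lstm(embed(tokens))            # one hidden state per word
sentence_vector = h_n[-1]                            # final state summarizes the sequence
print(sentence_vector.shape)                         # torch.Size([1, 100])
```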

Read more

Deep Learning Predicts the Look of Cells

Features, Research, cuDNN, GeForce, Healthcare & Life Sciences, Image Recognition, Machine Learning & Artificial Intelligence, Medical Imaging

Nadeem Mohammad, posted Apr 07 2017

The Allen Institute for Cell Science launched the Allen Cell Explorer, a one-of-a-kind online portal of 3D cell images produced using deep learning. The website combines large-scale 3D imaging data, the first application of deep learning to create predictive models of cell organization, and a growing suite of powerful tools. “This is the […]

Read more

Mastering StarCraft with AI

Research, CUDA, cuDNN, GeForce, Higher Education / Academia, Machine Learning & Artificial Intelligence, Media & Entertainment

Nadeem Mohammad, posted Apr 05 2017

Researchers from Alibaba and University College London developed a deep learning-based system that learned how to execute a number of strategies for the popular real-time strategy game StarCraft. Using CUDA, TITAN X and GTX 1080 GPUs, and cuDNN with the TensorFlow deep learning framework, the large-scale multiagent system used reinforcement learning to learn strategies employed […]

Read more

NVIDIA DGX-1: The Fastest Deep Learning System

Features, Deep Learning, DGX-1, NVLink, Pascal, Tesla P100

Nadeem Mohammad, posted Apr 05 2017

One year ago today, NVIDIA announced the NVIDIA® DGX-1™, an integrated system for deep learning. DGX-1 features eight Tesla P100 GPU accelerators connected through NVLink, the NVIDIA high-performance GPU interconnect, in a hybrid cube-mesh network. Together with dual-socket Intel Xeon CPUs and four 100 Gb InfiniBand network interface cards, DGX-1 […]

Read more

Get the Best Performance for Your Neural Networks with TensorRT

Research, Automotive, Embedded, GeForce, Healthcare & Life Sciences, Image Recognition, Machine Learning & Artificial Intelligence, Tesla

Nadeem Mohammad, posted Apr 03 2017

NVIDIA TensorRT is a high-performance deep learning inference library for production environments. Power efficiency and speed of response are two key metrics for deployed deep learning applications because they directly affect the user experience and the cost of the service provided. TensorRT automatically optimizes trained neural networks for run-time performance, delivering up to 16x higher energy […]
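As a rough illustration of the optimize-then-deploy workflow the excerpt describes, the sketch below builds a reduced-precision inference engine offline. It assumes the modern TensorRT Python API and a hypothetical ONNX model file ("model.onnx"); the API available at the time of the post differed, so treat this as a sketch rather than the post's method:

```python
# Rough sketch (modern TensorRT Python API assumed; file names are hypothetical):
# parse a trained network, let TensorRT optimize it with FP16 enabled, and save
# the serialized engine for low-latency production inference.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("model.onnx", "rb") as f:              # hypothetical trained network
    if not parser.parse(f.read()):
        raise RuntimeError(parser.get_error(0))

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)            # reduced precision for speed and efficiency
engine_bytes = builder.build_serialized_network(network, config)

with open("model.plan", "wb") as f:              # deployable, optimized engine
    f.write(engine_bytes)
```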

Read more