RESTful Inference with the TensorRT Container and NVIDIA GPU Cloud

Features, Containers, Docker, Inference, NVIDIA GPU Cloud, REST, TensorRT

Nadeem Mohammad, posted Dec 05 2017

You’ve built, trained, tweaked, and tuned your model. Finally, you have a Caffe, ONNX, or TensorFlow model that meets your requirements.

Read more
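
The post goes on to describe serving such a model over HTTP from the TensorRT container on NVIDIA GPU Cloud. As a rough illustration of that workflow, here is a minimal Python sketch of a client calling a REST inference endpoint; the host, path, and JSON payload schema are hypothetical placeholders for illustration, not the container's documented API.

# Minimal sketch of a REST inference client. The endpoint URL and the
# JSON request/response schema are assumptions, not a documented API.
import base64
import requests

def classify(image_path, url="http://localhost:8000/v1/classify"):
    # Encode the image as base64 so it can travel in a JSON body.
    with open(image_path, "rb") as f:
        payload = {"image": base64.b64encode(f.read()).decode("ascii")}
    resp = requests.post(url, json=payload, timeout=10)
    resp.raise_for_status()
    # Assume the server answers with JSON, e.g. class labels and scores.
    return resp.json()

if __name__ == "__main__":
    print(classify("cat.jpg"))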

NVIDIA at NIPS 2017

Artificial Intelligence, Features, Cloud, GeForce, Machine Learning & Artificial Intelligence

Nadeem Mohammad, posted Dec 04 2017

NVIDIA is headed to NIPS (Neural Information Processing Systems) and we can’t wait to show you our latest AI innovations.

Read more

NVIDIA Deep Learning Inference Platform Performance Study

Artificial Intelligence, Cloud, Cluster/Supercomputing, CUDA, Machine Learning & Artificial Intelligence, TensorRT, Tesla

Nadeem Mohammad, posted Dec 04 2017

The NVIDIA deep learning platform spans from the data center to the network’s edge.

Read more

TensorRT 3: Faster TensorFlow Inference and Volta Support

Artificial Intelligence, Image Recognition, Machine Learning & Artificial Intelligence, Tesla

Nadeem Mohammad, posted Dec 04 2017

NVIDIA TensorRT™ is a high-performance deep learning inference optimizer and runtime that delivers low-latency, high-throughput inference for deep learning applications.

Read more
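
For a sense of what "optimizer and runtime" means in practice, below is a minimal sketch of building an optimized TensorRT engine from an ONNX model. It uses the TensorRT 8.x Python API, which differs from the TensorRT 3 API this 2017 post covers, so treat the exact calls as illustrative rather than a drop-in recipe.

# Minimal sketch: parse an ONNX model and build a serialized TensorRT
# engine with FP16 enabled. Written against the TensorRT 8.x Python API,
# which has changed since the TensorRT 3 release described in the post.
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path):
    builder = trt.Builder(TRT_LOGGER)
    # The ONNX parser requires an explicit-batch network definition.
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, TRT_LOGGER)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError(parser.get_error(0))
    config = builder.create_builder_config()
    config.set_flag(trt.BuilderFlag.FP16)  # allow reduced-precision kernels
    # Returns a serialized engine that can be written to disk and reloaded.
    return builder.build_serialized_network(network, config)

if __name__ == "__main__":
    with open("model.engine", "wb") as f:
        f.write(build_engine("model.onnx"))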
