RESTful Inference with the TensorRT Container and NVIDIA GPU Cloud

Artificial Intelligence, Features, Cloud, Data Center, Machine Learning & Artificial Intelligence, TensorRT

Nadeem Mohammad, posted Dec 05 2017

Once you have built, trained, tweaked, and tuned your deep learning model, you need an inference solution to deploy to a data center or to the cloud, and you need to get the maximum possible performance from it.

Read more

RESTful Inference with the TensorRT Container and NVIDIA GPU Cloud

Features, Containers, Docker, Inference, NVIDIA GPU Cloud, REST, TensorRT

Nadeem Mohammad, posted Dec 05 2017

You’ve built, trained, tweaked and tuned your model. Finally, you have a Caffe, ONNX or TensorFlow model that meets your requirements.

Read more
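As a minimal sketch of what a client call to such a RESTful inference service might look like (the endpoint URL and JSON payload schema here are illustrative assumptions, not the API described in the article):

```python
import base64
import json
from urllib.request import Request

# Hypothetical endpoint -- the actual service in the article may
# expose a different URL and schema.
ENDPOINT = "http://localhost:8000/v1/classify"

def build_inference_request(image_bytes: bytes) -> Request:
    """Package raw image bytes as a JSON POST request for the
    (assumed) inference endpoint."""
    payload = json.dumps({
        # Base64-encode the binary image so it survives JSON transport.
        "image": base64.b64encode(image_bytes).decode("ascii"),
    }).encode("utf-8")
    return Request(
        ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Build (but do not send) a request for some placeholder image data.
req = build_inference_request(b"\x89PNG fake image data")
```

Sending the request (e.g. with `urllib.request.urlopen(req)`) would return the model's predictions from the containerized TensorRT server, assuming a service shaped like the one above is listening.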

NVIDIA at NIPS 2017

Artificial Intelligence, Features, Cloud, GeForce, Machine Learning & Artificial Intelligence

Nadeem Mohammad, posted Dec 04 2017

NVIDIA is headed to NIPS (Neural Information Processing Systems) and we can’t wait to show you our latest AI innovations.

Read more

NVIDIA Deep Learning Inference Platform Performance Study

Artificial Intelligence, Cloud, Cluster/Supercomputing, CUDA, Machine Learning & Artificial Intelligence, TensorRT, Tesla

Nadeem Mohammad, posted Dec 04 2017

The NVIDIA deep learning platform spans from the data center to the network’s edge.

Read more

TensorRT 3: Faster TensorFlow Inference and Volta Support

Artificial Intelligence, Image Recognition, Machine Learning & Artificial Intelligence, Tesla

Nadeem Mohammad, posted Dec 04 2017

NVIDIA TensorRT™ is a high-performance deep learning inference optimizer and runtime that delivers low-latency, high-throughput inference for deep learning applications.

Read more