Fast INT8 Inference for Autonomous Vehicles with TensorRT 3

Artificial Intelligence, Autonomous Vehicles, DP4A, Inference, Mixed Precision, TensorRT

Nadeem Mohammad, posted Dec 11 2017

Autonomous driving demands safety and a high-performance computing solution to process sensor data with extreme accuracy.


RESTful Inference with the TensorRT Container and NVIDIA GPU Cloud

Artificial Intelligence, Containers, Docker, Inference, NVIDIA GPU Cloud, REST, TensorRT

Nadeem Mohammad, posted Dec 05 2017

You’ve built, trained, tweaked and tuned your model. Finally, you have a Caffe, ONNX or TensorFlow model that meets your requirements.


TensorRT 3: Faster TensorFlow Inference and Volta Support

Artificial Intelligence, Deep Learning, Inference, TensorFlow, TensorRT, Volta

Nadeem Mohammad, posted Dec 04 2017

NVIDIA TensorRT™ is a high-performance deep learning inference optimizer and runtime that delivers low-latency, high-throughput inference for deep learning applications.
