Fast INT8 Inference for Autonomous Vehicles with TensorRT 3

Artificial Intelligence, Autonomous Vehicles, DP4A, Inference, Mixed Precision, TensorRT

Nadeem Mohammad, posted Dec 11 2017

Autonomous driving demands safety and a high-performance computing solution that processes sensor data with extreme accuracy.


RESTful Inference with the TensorRT Container and NVIDIA GPU Cloud

Artificial Intelligence, Containers, Docker, Inference, NVIDIA GPU Cloud, REST, TensorRT

Nadeem Mohammad, posted Dec 05 2017

You’ve built, trained, tweaked, and tuned your model. Finally, you have a Caffe, ONNX, or TensorFlow model that meets your requirements.
