RESTful Inference with the TensorRT Container and NVIDIA GPU Cloud
Artificial Intelligence, Containers, Docker, Inference, NVIDIA GPU Cloud, REST, TensorRT
Nadeem Mohammad, posted Dec 05 2017
You’ve built, trained, tweaked, and tuned your model. Finally, you have a Caffe, ONNX, or TensorFlow model that meets your requirements.