NVIDIA TensorRT Inference Server Boosts Deep Learning Inference
Artificial Intelligence, Cloud Services, Containers, Deep Learning, Featured, GPU, machine learning and AI, NGC, TensorRT, TensorRT Inference Server
Nadeem Mohammad, posted Sep 12 2018
You’ve built, trained, tweaked, and tuned your model, and you finally have a TensorRT, TensorFlow, or ONNX model that meets your requirements. Now you need an inference solution that you can deploy to the data center or to the cloud.