NVIDIA TensorRT is a high-performance deep learning inference optimizer and runtime that delivers low latency and high throughput. TensorRT can import trained models from all major deep learning frameworks to create highly efficient inference engines that can be incorporated into larger applications and services. This video demonstrates how to configure a simple Recurrent Neural Network (RNN) in TensorRT.
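
To illustrate the import-and-build workflow described above, here is a minimal sketch using the TensorRT Python API (assuming a TensorRT 8.x installation); the file name rnn_model.onnx is a placeholder for a small recurrent model exported from your training framework, not something provided by the video.

```python
import tensorrt as trt

# Build a TensorRT engine from a trained model exported to ONNX.
TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

builder = trt.Builder(TRT_LOGGER)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)
parser = trt.OnnxParser(network, TRT_LOGGER)

# "rnn_model.onnx" is a hypothetical file containing a simple RNN
# exported from another framework.
with open("rnn_model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("Failed to parse the ONNX model")

config = builder.create_builder_config()
# Allow up to 1 GiB of workspace memory during optimization.
config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 1 << 30)

# Serialize the optimized engine so it can be deployed inside a larger
# application or service.
serialized_engine = builder.build_serialized_network(network, config)
with open("rnn_model.engine", "wb") as f:
    f.write(serialized_engine)
```

At runtime, the serialized engine can be deserialized with a trt.Runtime and executed with an execution context, which is the deployment pattern the video walks through in more detail.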
