At GTC Japan, NVIDIA announced the latest version of TensorRT, its high-performance deep learning inference optimizer and runtime. Today we are releasing the TensorRT 5 Release Candidate. TensorRT 5 supports the new Turing architecture, adds new optimizations and INT8 APIs, and achieves up to 40x faster inference over CPU-only platforms. This latest version also dramatically speeds up …

The post TensorRT 5 RC Now Available appeared first on NVIDIA Developer News Center.