GTC 2020: FastSpeech and Its Acceleration of Training and Inference on GPU
Dabi Ahn, NVIDIA
We'll focus on the concept of FastSpeech and how it can be accelerated during inference. FastSpeech is a state-of-the-art text-to-speech model developed by Microsoft Research Asia and accepted at NeurIPS 2019. It achieves much faster inference than Tacotron 2. Fast inference is one of the most important requirements in industry, because conversational AI applications, including AI speakers, require low latency in a production setting. You'll need basic knowledge of deep learning and TTS.
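FastSpeech's speed advantage over Tacotron 2 comes from generating all mel-spectrogram frames in parallel rather than one frame at a time. A key piece is the length regulator, which repeats each phoneme-level hidden state according to a predicted duration so the full frame-level sequence exists before decoding. The sketch below is a toy illustration under our own simplified assumptions; the function name, shapes, and values are illustrative, not the paper's exact API.

```python
# Toy sketch of FastSpeech's length-regulation idea (illustrative only):
# each phoneme-level hidden state is repeated for its predicted number
# of mel frames, yielding a frame-level sequence that a non-autoregressive
# decoder can then process in parallel.

def length_regulate(hidden_states, durations):
    """Expand phoneme-level states to frame level.

    hidden_states: list of per-phoneme feature vectors
    durations: predicted number of mel frames for each phoneme
    """
    frames = []
    for state, d in zip(hidden_states, durations):
        frames.extend([state] * d)  # repeat the state for d frames
    return frames

# Three phonemes with hypothetical 2-dim features and predicted durations.
phoneme_states = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]
predicted_durations = [2, 1, 3]

mel_decoder_input = length_regulate(phoneme_states, predicted_durations)
print(len(mel_decoder_input))  # → 6 frames, produced without a sequential decoding loop
```

Because the frame-level sequence is known up front, there is no per-frame autoregressive dependency of the kind Tacotron 2 has, which is what makes batched, parallel GPU inference possible.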