Case Study of Deploying Text-to-Speech Services on GPU
Peter Huang, NVIDIA
Based on collaboration with customers, we'll walk through the key phases of deploying text-to-speech services: use-case survey, model selection, data preparation, model training, and, most importantly, optimizing model inference on Tesla GPUs. After introducing the background, the related models, and tricks for training them, we'll take a deep dive into TensorRT-based acceleration of Tacotron and WaveGlow, then touch on methods for accelerating other vocoders, such as WaveRNN, and on BERT's potential use in TTS.
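For readers unfamiliar with the pipeline the talk describes, a minimal sketch of the two-stage neural TTS flow may help: an acoustic model (e.g. Tacotron) maps text to a mel spectrogram, and a vocoder (e.g. WaveGlow) maps the mel spectrogram to waveform samples. The stub functions, frame counts, and hop length below are illustrative assumptions, not the speaker's implementation; in a real deployment each stage would be an optimized GPU engine (e.g. exported to TensorRT).

```python
# Hypothetical sketch of a two-stage neural TTS pipeline.
# Stage 1 (acoustic model, e.g. Tacotron): text -> mel spectrogram.
# Stage 2 (vocoder, e.g. WaveGlow): mel spectrogram -> waveform.
# All numbers and model bodies here are stand-ins for illustration only.

import math

def text_to_mel(text, n_mels=80, frames_per_char=5):
    """Stub acoustic model: emits a block of mel frames per input character."""
    n_frames = len(text) * frames_per_char
    # Each frame is a vector of n_mels filterbank energies (dummy values).
    return [[math.sin(f + m) for m in range(n_mels)] for f in range(n_frames)]

def mel_to_audio(mel, hop_length=256):
    """Stub vocoder: hop_length waveform samples per mel frame."""
    return [0.0] * (len(mel) * hop_length)

def synthesize(text):
    mel = text_to_mel(text)      # stage 1: text -> mel spectrogram
    audio = mel_to_audio(mel)    # stage 2: mel spectrogram -> waveform
    return mel, audio

mel, audio = synthesize("hello")
print(len(mel), len(audio))  # 25 mel frames -> 6400 samples
```

The point of the sketch is the hand-off: the vocoder is usually the inference bottleneck because it generates hundreds of samples per mel frame, which is why the talk focuses on accelerating WaveGlow and WaveRNN.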