As the world becomes more digital, conversational AI is increasingly used for automation, and it has been shown to improve customer experience and efficiency across a wide range of industries and applications.
Learn how to quickly build and deploy production-quality conversational AI applications with real-time transcription and natural language processing capabilities. You’ll integrate NVIDIA Riva automatic speech recognition (ASR) and named entity recognition (NER) models with a web-based application to produce transcriptions of audio input with the relevant text highlighted.
You can then customize the NER model using the NVIDIA TAO Toolkit to provide different targeted highlights for the application. Finally, explore production-level deployment, performance, and scaling considerations for Riva services using Helm charts and Kubernetes clusters.
Riva provides a complete, GPU-accelerated software stack that makes it easy for developers to quickly create, deploy, and run end-to-end, real-time conversational AI applications unique to a company and its customers.
The Riva framework includes pretrained conversational AI models, tools, and optimized services for speech, vision, and natural language understanding tasks. With Riva, you can create customized language-based AI services for intelligent virtual assistants, virtual customer service agents, real-time transcription, multiuser diarization, chatbots, and much more.
Get hands-on training and learn:
- How to deploy and enable pretrained ASR and NER models on Riva for a conversational AI application.
- How to fine-tune and deploy domain-specific models with TAO Toolkit.
- How to deploy a production-level conversational AI application with a Helm Chart for scaling in Kubernetes clusters.
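As a rough illustration of the last step, Riva services are typically deployed to a Kubernetes cluster from the Riva Helm chart hosted on NGC. The sketch below assumes that workflow; the chart version, NGC API key, and release/deployment names are placeholders, and the exact chart options vary by Riva release:

```shell
# Fetch the Riva API Helm chart from NGC (placeholder version; an NGC API key is required).
helm fetch https://helm.ngc.nvidia.com/nvidia/riva/charts/riva-api-<version>.tgz \
    --username='$oauthtoken' --password=<NGC_API_KEY> --untar

# Install the chart into the cluster; the chart's values.yaml controls which
# ASR and NLP models are downloaded and served.
helm install riva-api riva-api

# Scale the service by increasing the replica count of the Riva deployment
# (deployment name shown is a placeholder).
kubectl scale deployment riva-api --replicas=2
```

The workshop covers how to choose these values for a production deployment, including GPU resource requests and model selection.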
- Thursday, Oct. 28, 9 a.m.–5 p.m. PDT, UTC-7
- Wednesday, Nov. 24, 9 a.m.–5 p.m. CET/EMEA, UTC+1
Space is limited; register now >>
Interested in bringing this training to your organization? Get in touch with a DLI training advisor.