NVIDIA Avatar Cloud Engine (ACE) is a cloud API that provides a suite of real-time AI solutions for building and deploying intelligent game characters, interactive avatars, and digital humans in applications at scale.
Apply for Early Access
The Key Benefits of ACE
Simple to Build - Highly Customizable
Enterprises, service providers, and developers can customize and fine-tune AI models and microservices without specialized expertise, equipment, or labor-intensive manual workflows. These ACE modules can be easily integrated into your offering of choice to bring realistic avatars to life.
Secure and Consistent Results
ACE offers AI models and microservices such as Audio2Face, NVIDIA Live Portrait, NVIDIA Voice Font, NVIDIA Riva, and NVIDIA NeMo, all trained on safe and secure data. These solutions work seamlessly with third-party applications such as Unreal Engine 5 and enable avatars to be accurate, appropriate, on-topic, and secure no matter the user’s input.
Flexible Deployment Options
Build, configure, and deploy your avatar application across any engine, in any public or private cloud, or on Windows PCs. Whether you have real-time or offline requirements, ACE gives developers the option to handle customization and inference in the cloud, on the PC, or through a mix of both.
ACE End-to-End Development Suite
ACE is a cloud API that consists of customizable microservices based on NVIDIA’s Unified Compute Framework, full-stack AI platform, and NVIDIA RTX™ technology. These components, along with domain-specific AI workflows and low-code tooling, make up an end-to-end avatar development suite. ACE brings the benefits of Universal Scene Description (OpenUSD) to avatar workflows.
Animation AI
Bring avatars to life with realistic 2D and 3D character animation based on video or audio input.
- 3D animation AI uses Audio2Face generative AI to bring characters to life in Unreal Engine and other rendering tools, creating realistic facial animations from just an audio file. Animation Graph adds realistic body, head, and eye movements.
- 2D animation AI, called NVIDIA Live Portrait, enables easy animation of 2D portraits or stylized human faces using live video feeds.
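The core idea behind audio-driven facial animation is mapping features extracted from the audio signal to animation controls such as blendshape weights. The sketch below is a deliberately simplified toy illustration of that mapping (driving a single hypothetical `jawOpen` blendshape from loudness), not the Audio2Face API, whose models infer full facial poses far beyond amplitude tracking.

```python
import math

def amplitude_envelope(samples, frame_size=800):
    """Split raw audio samples into frames and compute RMS amplitude per frame."""
    frames = [samples[i:i + frame_size] for i in range(0, len(samples), frame_size)]
    return [math.sqrt(sum(s * s for s in f) / len(f)) for f in frames if f]

def envelope_to_blendshapes(envelope):
    """Map each frame's amplitude to a toy 'jawOpen' blendshape weight in [0, 1]."""
    peak = max(envelope)
    return [{"jawOpen": round(a / peak, 3)} for a in envelope]

# One second of a 220 Hz tone at 16 kHz with a linear fade-in, as stand-in audio.
samples = [math.sin(2 * math.pi * 220 * t / 16000) * (t / 16000)
           for t in range(16000)]
frames = envelope_to_blendshapes(amplitude_envelope(samples))
```

With a 16 kHz input and 800-sample frames, this yields 20 animation frames per second of audio, each carrying a normalized weight a renderer could apply to a character rig.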
Speech and Translation AI
Create deeper user engagement with multilingual avatars that can hear, understand, and speak multiple languages in real time by leveraging NVIDIA Riva microservices and APIs.
- Automatic Speech Recognition (ASR) delivers high transcription accuracy in 14 languages, including English and Chinese.
- Text-to-speech (TTS) synthesizes engaging speech from raw transcripts without any additional information.
- Neural Machine Translation (NMT) provides up to 31 bidirectional and unidirectional English translation pairs across text-to-text, speech-to-text, and speech-to-speech modes.
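A multilingual avatar chains these three services: ASR produces a transcript, NMT translates it, and TTS voices the result. The sketch below shows only that data flow with invented local stubs; the real Riva microservices are accessed over the network, and the stub return values here are placeholders, not Riva behavior.

```python
# Hypothetical local stand-ins for Riva's ASR, NMT, and TTS microservices.
def transcribe(audio: bytes, language: str) -> str:
    """ASR stub: audio bytes in, transcript out."""
    return {b"\x00\x01hello": "hello"}.get(audio, "")

def translate(text: str, source: str, target: str) -> str:
    """NMT stub: transcript in one language, text in another."""
    return {("hello", "en", "es"): "hola"}.get((text, source, target), text)

def synthesize(text: str, language: str) -> bytes:
    """TTS stub: text in, audio out (faked as UTF-8 bytes)."""
    return text.encode("utf-8")

def speech_to_speech(audio: bytes, source: str, target: str) -> bytes:
    """Chain ASR -> NMT -> TTS: the shape of a multilingual avatar's voice loop."""
    transcript = transcribe(audio, source)
    translated = translate(transcript, source, target)
    return synthesize(translated, target)
```

Each stage is a separate microservice in ACE, so any one of them can be swapped or scaled independently without changing the pipeline's overall shape.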
Language AI
Develop fully autonomous, intelligent avatars using pretrained generative AI language models.
- NVIDIA NeMo provides foundation language models and model customization tools so you can further tune the models for specific use cases such as gaming.
- NeMo Guardrails adds programmable rules that assist in building accurate, appropriate, on-topic and secure avatars.
- NeMo SteerLM is a fine-tuning technique that allows you to control responses during inference.
- ACE Agent delivers a world-class dialog experience, providing dialog management and connectivity between AI microservices.
Universal Scene Description
OpenUSD is an open and extensible framework for describing, composing, simulating, and collaborating within 3D worlds.
Unified Compute Framework
ACE is built on NVIDIA’s Unified Compute Framework (UCF), a low-code framework for developing cloud-native, real-time, and multimodal AI applications. This enables developers to seamlessly integrate NVIDIA’s avatar technologies into their applications without the need for low-level domain or platform knowledge.
Learn More About UCF
Consult the UCF Resources
NVIDIA ACE Agent (Previously NVIDIA Bot Maker)
NVIDIA ACE Agent is a microservice that provides connectivity across ACE modules, enabling intelligent avatars. This world-class dialog management system is customizable with your domain-specific datasets, supports NeMo Guardrails, and integrates efficiently into your pipeline.
AI Avatar Workflows
Developers can leverage ACE to build their own avatar solutions from the ground up, or use NVIDIA’s suite of domain-specific AI workflows for next-generation customer engagement, marketing and branding content generation, live communications, intelligent AI service agents, and more.
NVIDIA Kairos demonstrates intelligent non-playable characters (NPCs) capable of natural language interactions. Developers of middleware, tools, and games can use NVIDIA ACE to build and deploy customized speech, conversation, and animation AI models in their games through NVIDIA NeMo, Riva, and Audio2Face. NVIDIA has partnered with companies like Convai and Eleven Labs to help bring generative AI based NPCs into Unreal Engine 5.
NVIDIA Tokkio is a virtual assistant AI workflow built with ACE, bringing AI-powered virtual assistance to retail stores, restaurants, and many other industry applications. It comes to life using NVIDIA AI models and technologies such as NVIDIA Metropolis, Riva, and NeMo. Since it runs on NVIDIA’s Unified Compute Framework (UCF), NVIDIA Tokkio can scale out from the cloud and go wherever customers need helpful avatars with responsive, natural behavior.
Collaboration with global audiences improves dramatically when you can speak their language. NVIDIA Maxine integrates Riva’s real-time translation and text-to-speech with real-time Live Portrait photo animation and eye contact to enable better communication and understanding.
Stay up to Date on NVIDIA Avatar News
NVIDIA Avatar Sessions On-Demand