NVIDIA ACE for Games
NVIDIA ACE is a suite of digital human technologies for middleware and game developers that powers knowledgeable, actionable and conversational game characters using generative AI. ACE provides ready-to-integrate cloud and on-device AI models for each aspect of digital humans—from speech to intelligence to animation.

Key Benefits
Game Ready AI Models
ACE offers a curated suite of AI models—from speech, vision and intelligence to realistic animation and behavior—built to enhance game assistants, actors and agents.
Optimized for On-Device Inference
AI models fine-tuned and optimized for gaming hardware provide high accuracy and low latency within a small memory footprint.
Inference Alongside Graphics
NVIDIA In-Game Inferencing (NVIGI) plugins schedule AI inference for different models and inference backends across complex graphics workloads to maximize performance and the user experience.
Partner Experiences Powered by AI
NVIDIA ACE is being used by industry-leading game developers and ISVs to build autonomous game characters that inhabit living, breathing worlds and AI assistants that provide tips and guidance to gamers and creators.
Autonomous Agents
KRAFTON’s inZOI features Smart Zois, AI-driven agents that plan, act and reflect on their decisions for unique character dynamics.
Autonomous Companions
KRAFTON’s PUBG introduces Co-Player Characters (CPC), AI-driven allies that communicate with natural language and act autonomously like a human teammate.
Autonomous Enemies
Wemade Next’s MIR5 introduces AI-powered bosses that continuously learn from previous player tactics to adapt and provide unique fights every run.
Conversational Game Characters
Dead Meat is a first-of-its-kind murder mystery interrogation game where players can ask the suspect absolutely anything using their own words.
AI Assistants
Streamlabs and Inworld AI introduce an intelligent streaming assistant that serves as a producer, technical assistant and 3D sidekick.
Get Started with NVIDIA ACE
The NVIDIA In-Game Inferencing (NVIGI) SDK offers a streamlined, high-performance path to integrate locally run AI models into games and applications via in-process (C++) execution and CUDA in Graphics. NVIGI supports all major inference backends across different hardware accelerators (GPU, NPU, CPU), so developers can take advantage of the full range of available system resources on a user’s PC.
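To make the backend fallback idea concrete, here is a minimal, hypothetical sketch of selecting an inference backend by preference order across the accelerators a user's PC actually exposes. This is not the NVIGI API; all names here are illustrative assumptions.

```python
# Hypothetical sketch (not the NVIGI API): pick the first preferred
# inference backend that the system reports as available.

from dataclasses import dataclass

@dataclass
class Backend:
    name: str        # e.g. "cuda", "directml", "npu", "cpu"
    available: bool  # whether this accelerator exists on the system

def select_backend(preferred_order, detected):
    """Return the first preferred backend that is available on this machine."""
    table = {b.name: b for b in detected}
    for name in preferred_order:
        backend = table.get(name)
        if backend and backend.available:
            return backend.name
    raise RuntimeError("no usable inference backend found")

# Example: a machine with a CUDA-capable GPU but no NPU.
detected = [
    Backend("cuda", True),
    Backend("npu", False),
    Backend("cpu", True),
]
print(select_backend(["cuda", "npu", "cpu"], detected))  # -> cuda
```

The real SDK schedules these workloads alongside rendering; the sketch only shows the preference-with-fallback pattern that lets one build target run on GPU, NPU, or CPU.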
Download NVIGI SDK
Documentation
Compatibility Matrix
Archive
NVIDIA® Riva ASR
Takes an audio stream as input and returns a text transcript in real time. It’s NVIDIA GPU-accelerated for maximum performance and accuracy.
Whisper ASR
Takes an audio stream as input and returns a text transcript in real time. It’s compatible with NVIDIA GPUs and any CPUs.
Riva TTS
Takes text as input and converts it into natural, expressive voices in multiple languages in real time. Built for agentic workflows and compatible with NVIDIA GPUs and any CPUs.
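The three speech components above slot into a single dialogue loop: ASR turns the player's voice into text, a language model produces the character's reply, and TTS speaks it. The sketch below shows that loop shape only; every function body is a stand-in stub, not a Riva or Whisper API call.

```python
# Illustrative only: the ASR -> language model -> TTS turn that the speech
# components above slot into. All bodies are stand-in stubs.

def transcribe(audio_chunk: bytes) -> str:
    # Stand-in for a streaming ASR call (e.g. Riva ASR or Whisper).
    return "where is the blacksmith"

def respond(transcript: str) -> str:
    # Stand-in for a small language model generating the character's reply.
    return f"You asked: '{transcript}'. The blacksmith is by the north gate."

def synthesize(text: str) -> bytes:
    # Stand-in for TTS; a real engine returns audio samples, not UTF-8 bytes.
    return text.encode("utf-8")

def dialogue_turn(audio_chunk: bytes) -> bytes:
    transcript = transcribe(audio_chunk)
    reply = respond(transcript)
    return synthesize(reply)

print(dialogue_turn(b"\x00\x01").decode("utf-8"))
```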
Mistral-Nemo-Minitron Family
Agentic small language models that enable better role-play, retrieval-augmented generation (RAG) and function calling capabilities. They come in 8B, 4B and 2B parameter models to fit your VRAM and performance requirements. The on-device models run on NVIDIA GPUs and any CPU.
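Function calling here means the model emits a structured tool call that game code parses and dispatches. A minimal sketch of that pattern follows; the tool names and the JSON shape are illustrative assumptions, not the Minitron models' actual output format.

```python
# Hypothetical sketch of the function-calling pattern: the language model
# emits a JSON tool call, and the game dispatches it to real game logic.

import json

def give_item(character: str, item: str) -> str:
    return f"{character} received {item}"

def set_waypoint(x: int, y: int) -> str:
    return f"waypoint set at ({x}, {y})"

TOOLS = {"give_item": give_item, "set_waypoint": set_waypoint}

def dispatch(model_output: str) -> str:
    """Parse a model-emitted tool call and invoke the matching game function."""
    call = json.loads(model_output)
    fn = TOOLS[call["name"]]
    return fn(**call["arguments"])

# A reply the model might emit after the player asks for a potion
# (the JSON schema here is an assumption for illustration):
reply = '{"name": "give_item", "arguments": {"character": "player", "item": "healing potion"}}'
print(dispatch(reply))  # -> player received healing potion
```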
Llama3.2-3B-Instruct
Agentic small language model that enables better role-play, retrieval-augmented generation (RAG) and function calling capabilities. This model works across any GPU architecture that supports ONNX Runtime and DirectML.
Nemovision-4B-Instruct
Agentic vision-language model that combines visual understanding of on-screen elements and actions with reasoning for better context-aware responses. The on-device models run on NVIDIA GPUs and any CPU.
Audio2Face-3D SDK
Use AI to convert streaming audio to facial blendshapes for real-time lip-syncing and facial animations on-device or in the cloud. The SDK contains C++ and Python source code through the MIT license.
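To illustrate what "audio to blendshapes" means, here is a deliberately toy mapping from per-frame audio energy to a single jaw-open blendshape weight. This is not the Audio2Face-3D algorithm, which infers a full set of facial blendshapes with a neural network; the frame size and gain below are arbitrary assumptions.

```python
# Toy illustration (not the Audio2Face-3D algorithm): drive a single
# "jawOpen" blendshape weight from per-frame audio energy.

import math

def frame_energy(samples):
    """Root-mean-square energy of one frame of PCM samples in [-1, 1]."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def jaw_open_weights(audio, frame_size=4):
    """Map each audio frame's energy to a 0..1 blendshape weight."""
    weights = []
    for i in range(0, len(audio) - frame_size + 1, frame_size):
        e = frame_energy(audio[i:i + frame_size])
        weights.append(min(1.0, e * 2.0))  # crude gain, clamped to 1.0
    return weights

silence = [0.0] * 8
loud = [0.8, -0.8] * 4
print(jaw_open_weights(silence + loud))  # -> [0.0, 0.0, 1.0, 1.0]
```

A per-frame weight stream like this is what a renderer consumes to animate a face rig in real time; the SDK produces the full blendshape set at production quality.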
Audio2Face-3D Models
Audio2Face-3D regression (2.3) and diffusion (3.0) models that generate lip-sync. Open weights in ONNX-TRT format available through the NVIDIA Open Model License.
Download Audio2Face 3.0 Unreal Engine Models
Download Audio2Face 2.3 Unreal Engine Models
Download Audio2Face-3D 3.0 Open Source Models
Audio2Emotion-3D Models
Audio2Emotion production (2.2) and experimental (3.0) models to infer emotional state from audio. Open weights in ONNX-TRT format available through a custom license.
Download Audio2Emotion 3.0 Models
Audio2Face-3D Plugins
The Audio2Face-3D plugin for Unreal Engine 5, alongside a configuration sample, enhances your MetaHuman experience. The Autodesk Maya ACE plugin generates high-quality, audio-driven facial animation offline. Both plugins are available under the MIT license.
Download Unreal Engine Gaming Sample
Download Unreal Engine 5.6 Plugin
Download Unreal Engine 5.5 Plugin
Audio2Face-3D Training
The Audio2Face-3D training framework allows developers to create Audio2Face-3D models with their own data. Source code is available in Python through the Apache license. Leverage audio files, blendshape data, animated geometry caches, geometry files and transform files to get started with the training framework. The sample data is available through a custom license for evaluation only.
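The core idea behind training on paired audio and blendshape data can be shown with a toy stand-in: fit a map from an audio feature to a target blendshape weight by gradient descent. The real framework trains neural networks on full animation data; this linear model and the sample values are assumptions for illustration only.

```python
# Minimal sketch of the training idea: fit a linear map from an audio
# loudness feature to an observed jaw-open blendshape weight by gradient
# descent on mean squared error. A toy stand-in, not the real framework.

def train(features, targets, lr=0.3, epochs=1000):
    w, b = 0.0, 0.0
    n = len(features)
    for _ in range(epochs):
        grad_w = grad_b = 0.0
        for x, y in zip(features, targets):
            err = (w * x + b) - y          # prediction error on one pair
            grad_w += 2 * err * x / n      # d(MSE)/dw contribution
            grad_b += 2 * err / n          # d(MSE)/db contribution
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Paired data: audio loudness feature -> observed blendshape weight,
# sampled from roughly y = 0.8 * x (illustrative values).
features = [0.0, 0.25, 0.5, 0.75, 1.0]
targets = [0.0, 0.2, 0.4, 0.6, 0.8]
w, b = train(features, targets)
print(round(w, 2), round(b, 2))
```

The framework's paired inputs (audio plus blendshape or geometry caches) play the role of `features` and `targets` here, at vastly larger scale and with a neural network in place of the linear map.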
Documentation
More Resources
On-Demand Sessions
Ethical AI
NVIDIA believes Trustworthy AI is a shared responsibility, and we have established policies and practices to enable development for a wide array of AI applications. When downloading or using these models in accordance with our terms of service, developers should work with their supporting model team to ensure the model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
For more detailed information on ethical considerations for this model, please see the Model Card++ Explainability, Bias, Safety & Security, and Privacy Subcards. Please report security vulnerabilities or NVIDIA AI Concerns here.
Ready to try NVIDIA ACE?