A new GPU-based facial reenactment technique tracks the expression of a source actor and transfers it to a target actor in real time, effectively letting one person control another's on-screen expressions. The project is a collaboration between researchers at Stanford University, the Max Planck Institute for Informatics, and the University of Erlangen-Nuremberg.
The novelty of the approach lies in transferring and photorealistically re-rendering facial deformations and detail into the target video, so that the newly synthesized expressions are virtually indistinguishable from a real recording.
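At its core, this kind of expression transfer can be thought of as tracking a compact set of expression parameters on the source actor and re-applying them to the target's face model. Below is a minimal, illustrative sketch using a linear blendshape model; the names (`neutral`, `blendshapes`, `weights`) and the 1D toy data are assumptions for illustration, not details from the paper.

```python
# Illustrative sketch: expression transfer with a linear blendshape model.
# face = neutral + sum_i(weight_i * blendshape_i)
# All names and values here are hypothetical, not from the paper.

def synthesize(neutral, blendshapes, weights):
    """Reconstruct face vertices from a neutral pose plus weighted deformations."""
    face = list(neutral)
    for w, shape in zip(weights, blendshapes):
        for j, d in enumerate(shape):
            face[j] += w * d
    return face

def transfer_expression(source_weights, target_neutral, target_blendshapes):
    """Apply the source actor's tracked expression weights to the target's model."""
    return synthesize(target_neutral, target_blendshapes, source_weights)

# Toy example: a two-vertex face with a single "smile" blendshape.
src_weights = [0.8]                  # tracked from the source actor
tgt_neutral = [0.0, 1.0]             # target's resting face
tgt_shapes = [[0.5, -0.5]]           # target's "smile" deformation
print(transfer_expression(src_weights, tgt_neutral, tgt_shapes))
# [0.4, 0.6]
```

In the actual system, the weights would come from real-time tracking of the source video, and the synthesized geometry would then be photorealistically re-rendered into the target video.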
The video demo runs on a setup with a GeForce GTX 980 GPU and is definitely worth watching – it's only a matter of time before Disney adopts this technology!
You can read more about the project in their paper titled “Real-time Expression Transfer for Facial Reenactment.”
Real-time Facial Expression Transfer
Oct 20, 2015
