Real-time Facial Expression Transfer
Oct 20, 2015

A new GPU-based facial reenactment technique tracks the expressions of a source actor and transfers them to a target actor in real time, effectively letting you control another person's facial expressions. The project is a collaboration between researchers from Stanford University, the Max Planck Institute for Informatics, and the University of Erlangen-Nuremberg.
The novelty of the approach lies in transferring and photorealistically re-rendering facial deformations and detail into the target video, so that the newly synthesized expressions are virtually indistinguishable from real video.
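At its core, the method fits a parametric face model to both actors and swaps only the expression coefficients while keeping the target's identity, then re-renders the deformed target. The snippet below is a minimal NumPy sketch of that coefficient-transfer idea; the model dimensions, variable names, and random stand-in data are illustrative assumptions, not the authors' actual code.

```python
import numpy as np

# Hypothetical model dimensions (not from the paper):
N_VERTS = 5000          # number of mesh vertices
N_ID, N_EXPR = 80, 76   # sizes of identity / expression bases

rng = np.random.default_rng(0)
mean_shape = rng.standard_normal(3 * N_VERTS)            # average face
id_basis   = rng.standard_normal((3 * N_VERTS, N_ID))    # identity basis
expr_basis = rng.standard_normal((3 * N_VERTS, N_EXPR))  # expression basis

def synthesize(alpha, delta):
    """Reconstruct a face mesh from identity (alpha) and expression
    (delta) coefficients: mean + E_id @ alpha + E_expr @ delta."""
    return mean_shape + id_basis @ alpha + expr_basis @ delta

# Per-frame tracking would estimate these coefficients by fitting the
# model to the live video input; random values stand in for them here.
alpha_source = rng.standard_normal(N_ID)
delta_source = rng.standard_normal(N_EXPR)
alpha_target = rng.standard_normal(N_ID)
delta_target = rng.standard_normal(N_EXPR)

# The core transfer step: keep the target's identity coefficients but
# drive the mesh with the source actor's expression coefficients.
reenacted_target = synthesize(alpha_target, delta_source)
print(reenacted_target.shape)  # (15000,) -- one deformed target mesh
```

In the real system this deformed mesh is then photorealistically re-rendered and composited into the target video every frame, which is where the GPU does the heavy lifting.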
The video demo, captured on a setup with a GeForce GTX 980 GPU, is definitely worth watching; it's only a matter of time before a studio like Disney adopts this technology!
You can read more about the project in their paper titled “Real-time Expression Transfer for Facial Reenactment.”