Share Your Science: Real-Time Facial Reenactment of YouTube Videos
Apr 06, 2016

Matthias Niessner of Stanford University shares how his team of researchers is using TITAN X GPUs and CUDA to manipulate YouTube videos with real-time facial reenactment that works with any commodity webcam.
The project, called 'Face2Face', tracks the facial expressions of both the source and the target video using a dense photometric consistency measure. Reenactment is then achieved by fast and efficient deformation transfer between source and target. The mouth interior that best matches the re-targeted expression is retrieved from the target sequence and warped to produce an accurate fit. Finally, their approach re-renders the synthesized target face on top of the corresponding video stream such that it blends seamlessly with the real-world illumination.
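To make the pipeline above concrete, here is a minimal sketch in Python/NumPy of two of its ingredients: the dense photometric consistency energy used for tracking, and expression transfer in a blendshape subspace. Everything in this sketch (the function names, the array shapes, and the blendshape parameterization) is an illustrative assumption, not the authors' actual code; the research paper cited below gives the real formulation.

```python
# Illustrative sketch only, assuming a parametric face model whose
# expression is a linear combination of blendshapes. Not the authors' code.
import numpy as np

def photometric_energy(rendered, frame, mask):
    """Dense photometric consistency: sum of squared per-pixel color
    differences between a synthetic rendering of the face model and the
    observed video frame, over the pixels the rendered face covers.

    rendered: (H, W, 3) float array, synthetic rendering of the face model
    frame:    (H, W, 3) float array, observed video frame
    mask:     (H, W)    bool array, face-region pixels
    """
    diff = rendered[mask] - frame[mask]
    return float(np.sum(diff ** 2))

def transfer_expression(target_neutral, blendshapes, source_coeffs):
    """Expression transfer in a blendshape subspace: apply the expression
    coefficients tracked from the source actor to the target actor's model.

    target_neutral: (V, 3)    target actor's neutral-pose vertices
    blendshapes:    (K, V, 3) shared expression basis (hypothetical)
    source_coeffs:  (K,)      expression coefficients tracked from the source
    """
    # Weighted sum over the K blendshapes yields per-vertex offsets (V, 3).
    offsets = np.tensordot(source_coeffs, blendshapes, axes=1)
    return target_neutral + offsets
```

In the actual system, an energy of this kind is minimized over pose, identity, and expression parameters in real time, which is where the TITAN X GPUs and CUDA come in.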
For more details, read the research paper ‘Face2Face: Real-time Face Capture and Reenactment of RGB Videos’.
Share your GPU-accelerated science with us at http://nvda.ly/Vpjxr and with the world on #ShareYourScience.
Watch more scientists and researchers share how accelerated computing is benefiting their work at http://nvda.ly/X7WpH.
