Researchers from Korea University, Clova AI Research (NAVER), The College of New Jersey, and Hong Kong University of Science & Technology developed a generative adversarial network (GAN)-based approach that transforms the facial expressions of still images.
Using an NVIDIA Tesla GPU and the cuDNN-accelerated PyTorch deep learning framework, the team trained their models on the CelebFaces Attributes (CelebA) dataset and the Radboud Faces Database (RaFD), which includes a variety of facial expressions. Their framework, named StarGAN, performs multi-domain image-to-image translation on the CelebA dataset by transferring the knowledge it learned from the RaFD dataset. In other words, it can take an input image of a neutral celebrity face and synthesize facial expressions such as angry, happy, and fearful.
The researchers claim this work is the first to successfully perform multi-domain image translation across different datasets.
The framework can also transfer facial attributes to an input image, automatically aging a face or changing hair color.
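The key idea behind this kind of multi-domain translation is that a single generator is conditioned on a target domain label (e.g. an expression or hair color), rather than training one network per domain pair. A common way to do this is to spatially replicate a one-hot label and concatenate it with the image channels before feeding the result to the generator. The sketch below illustrates that conditioning step only; it is a simplified NumPy illustration, not StarGAN's actual implementation, and the function name and domain indices are hypothetical.

```python
import numpy as np

def condition_on_domain(image, domain_index, num_domains):
    """Concatenate a spatially replicated one-hot domain label with the
    image channels (a common conditioning scheme for multi-domain GANs).

    image: array of shape (C, H, W)
    domain_index: target domain (hypothetical: 0=angry, 1=happy, 2=fearful)
    num_domains: total number of target domains
    """
    c, h, w = image.shape
    # One-hot encode the target domain.
    label = np.zeros(num_domains, dtype=image.dtype)
    label[domain_index] = 1.0
    # Replicate each label entry across the full spatial grid.
    label_maps = np.broadcast_to(label[:, None, None], (num_domains, h, w))
    # Stack image channels and label maps: shape (C + num_domains, H, W).
    return np.concatenate([image, label_maps], axis=0)

# Example: a 3-channel 128x128 input conditioned on domain 1 ("happy").
x = np.random.uniform(-1, 1, size=(3, 128, 128)).astype(np.float32)
g_input = condition_on_domain(x, domain_index=1, num_domains=3)
print(g_input.shape)  # (6, 128, 128)
```

Because the domain label is just extra input channels, the same generator weights can serve every domain, which is what lets one model handle many attribute translations at once.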
Turning Frowns Into Smiles with Artificial Intelligence
Nov 29, 2017

Related resources
- GTC session: Detecting Skin Diseases using AI (Spring 2023)
- GTC session: The Indefinable Moods of Artificial Intelligence (Spring 2023)
- GTC session: Unlocking AI to Build the Metaverse (Spring 2023)
- Webinar: Building Smart Hospitals to Fight COVID-19
- Webinar: Simplify and Accelerate AI Model Development with PyTorch Lightning, NVIDIA NGC, and AWS
- Webinar: Startups4COVID: Testing, Treating, Tracking - Together