Researchers from The University of Nottingham and Kingston University developed a deep learning-based method that automatically converts two-dimensional images of faces into 3D models.
“3D face reconstruction is a fundamental Computer Vision problem of extraordinary difficulty,” the researchers note in their paper. “Current systems often assume the availability of multiple facial images (sometimes from the same subject) as input, and must address a number of methodological challenges such as establishing dense correspondences across large facial poses, expressions, and non-uniform illumination. In general these methods require complex and inefficient pipelines for model building and fitting.”
Using CUDA, TITAN X GPUs and the cuDNN-accelerated Torch deep learning framework, the researchers trained their convolutional neural network on thousands of 2D facial images paired with 3D meshes. The trained model can then take a single image of a face, infer the likely 3D geometry of that face, and generate a corresponding 3D model.
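The excerpt doesn't spell out how a 3D mesh becomes a training target for a convolutional network. One common approach for this kind of single-image reconstruction is to rasterize the mesh into a voxel occupancy grid that the network regresses directly. The sketch below (illustrative only; the `voxelize` helper, the 32³ grid size, and the toy sphere "mesh" are assumptions, not details from the researchers' pipeline) shows the idea in NumPy:

```python
import numpy as np

def voxelize(vertices, grid=32):
    """Rasterize 3D mesh vertices (normalized to the unit cube) into a
    binary occupancy volume -- one common training target for networks
    that regress 3D shape from a single 2D image."""
    vol = np.zeros((grid, grid, grid), dtype=np.uint8)
    # Map each vertex to its voxel index and mark that voxel occupied.
    idx = np.clip((vertices * grid).astype(int), 0, grid - 1)
    vol[idx[:, 0], idx[:, 1], idx[:, 2]] = 1
    return vol

# Toy "mesh": random points on a sphere centered in the unit cube.
rng = np.random.default_rng(0)
pts = rng.normal(size=(1000, 3))
pts = 0.5 + 0.4 * pts / np.linalg.norm(pts, axis=1, keepdims=True)
vol = voxelize(pts)
print(vol.shape)  # (32, 32, 32)
```

At inference time, a network trained against such volumes outputs an occupancy grid from a single photo, which can then be converted back into a surface mesh for display.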
The researchers created an online demo where you can try the method with your own selfie.
Read more >