Algorithm Achieves Better Accuracy Than Humans at Reading Lips
Apr 26, 2016

Researchers at the University of East Anglia in the UK have developed an algorithm that can interpret mouthed words more accurately than human lip readers.
Using Tesla K80 GPUs, the researchers trained a deep learning model to recognize the mouth shapes that correspond to particular sounds as they are spoken, with no audio input at all.
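To make the idea concrete, here is a minimal sketch of that kind of setup: a small convolutional network that maps cropped mouth-region video frames to sound classes using only pixels, no audio. This is not the UEA model; the network layout, the 64x64 grayscale crops, the class count, and the single training step are illustrative assumptions, written in PyTorch.

```python
# Illustrative sketch only (not the UEA system): classify mouth-region
# video frames into sound classes from visual input alone.
import torch
import torch.nn as nn

NUM_CLASSES = 12  # assumed number of visually distinct mouth-shape classes

class MouthShapeNet(nn.Module):
    def __init__(self, num_classes: int = NUM_CLASSES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: batch of grayscale 64x64 mouth crops, shape (N, 1, 64, 64)
        h = self.features(x)
        return self.classifier(h.flatten(start_dim=1))

# Train on a GPU when one is available (the article mentions Tesla K80s).
device = "cuda" if torch.cuda.is_available() else "cpu"
model = MouthShapeNet().to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One training step on random stand-in data: video frames only, no audio.
frames = torch.randn(8, 1, 64, 64, device=device)
labels = torch.randint(0, NUM_CLASSES, (8,), device=device)
optimizer.zero_grad()
loss = loss_fn(model(frames), labels)
loss.backward()
optimizer.step()
```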
“We’re looking at visual cues and saying how do they vary? We know they vary for different people. How are they using them? What’s the differences? And can we actually use that knowledge in this particular training method for our model? And we can,” says Dr. Helen Bear, who created the visual speech recognition system as part of her PhD along with Prof. Richard Harvey of UEA’s School of Computing Sciences.
According to Dr. Bear, the core challenge is that people produce more distinct sounds than visually distinguishable mouth shapes, so several sounds map onto confusingly similar lip movements, such as ‘/p/’, ‘/b/’, and ‘/m/’, all of which typically trip up human lip readers. UEA’s visual speech model distinguishes between these visually similar lip shapes more accurately.
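The many-to-one relationship is easy to see with a toy phoneme-to-lip-shape table. The grouping below is a textbook-style illustration, not the mapping used in the UEA work, and the class names are made up for this example.

```python
# Illustrative only: several phonemes collapse onto the same lip shape,
# which is why they look alike to a lip reader even though they sound different.
PHONEME_TO_LIP_SHAPE = {
    "/p/": "bilabial",    "/b/": "bilabial",    "/m/": "bilabial",    # lips pressed together
    "/f/": "labiodental", "/v/": "labiodental",                       # teeth on lower lip
    "/t/": "alveolar",    "/d/": "alveolar",    "/n/": "alveolar",
}

def confusable(phoneme_a: str, phoneme_b: str) -> bool:
    """Two phonemes are visually confusable if they share a lip-shape class."""
    return PHONEME_TO_LIP_SHAPE[phoneme_a] == PHONEME_TO_LIP_SHAPE[phoneme_b]

print(confusable("/p/", "/b/"))   # True:  look alike, sound different
print(confusable("/p/", "/f/"))   # False: distinct lip shapes
```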
This technology may one day help people with hearing and speech impairments, generate audio for silent security-camera footage, or improve poor audio quality in mobile video calls.
Read more >>