Virtual Agent Understands Your Social Cues
Nov 03, 2016

A researcher from Carnegie Mellon University developed S.A.R.A. (Socially Aware Robot Assistant), a virtual agent that not only comprehends what you say, but also understands facial expressions and head movements.
The team used CUDA, GTX 1080 GPUs, and cuDNN with TensorFlow to train the deep learning models; S.A.R.A. replies differently when she detects a smile than when someone frowns or offers a response that doesn't comply with social norms.
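The post doesn't include code, but as a rough sketch of what training one such model could look like, here's a minimal TensorFlow binary classifier; the feature dimensionality, layer sizes, and synthetic data are illustrative assumptions, not details of S.A.R.A.'s actual models. On a machine with a CUDA-capable GPU, TensorFlow dispatches these operations to cuDNN-accelerated kernels automatically.

```python
import numpy as np
import tensorflow as tf

# Toy stand-in data (hypothetical): a 64-dim feature vector per user turn,
# labeled 1 if a given conversational strategy is present, 0 otherwise.
rng = np.random.default_rng(0)
features = rng.normal(size=(1024, 64)).astype("float32")
labels = rng.integers(0, 2, size=(1024, 1)).astype("float32")

# One binary recognizer; S.A.R.A. reportedly uses five, one per strategy.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(64,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(strategy present)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(features, labels, epochs=3, batch_size=32, verbose=0)
```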
“She looks at your behavior and her behavior,” says Justine Cassell, director of human-computer interaction at Carnegie Mellon University and director of the project, “and calculates on the basis of the intersection of that behavior. Which no one has ever done before.”
The S.A.R.A. project consists of three elements never before used, says Cassell: conversational strategy classifiers, a rapport estimator, and a social reasoner. “The conversational strategy classifiers are five separate recognizers that can classify any one of five conversational strategies with over 80 percent accuracy.”
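Putting those three pieces together, here's a schematic Python sketch of how the classifiers, rapport estimator, and social reasoner could feed one another; every name, strategy label, and scoring rule below is a hypothetical stand-in for illustration, not the CMU implementation.

```python
from typing import Dict

# Illustrative strategy labels (hypothetical, not S.A.R.A.'s actual taxonomy).
STRATEGIES = ["self_disclosure", "shared_experience", "praise",
              "norm_violation", "back_channel"]

def classify_strategies(detector_scores: Dict[str, float]) -> Dict[str, bool]:
    """Stand-in for the five separate recognizers, one per strategy."""
    return {s: detector_scores.get(s, 0.0) > 0.5 for s in STRATEGIES}

def estimate_rapport(strategies: Dict[str, bool], smiling: bool) -> float:
    """Fuse verbal strategies with nonverbal cues (e.g., a detected smile)
    into a single rapport score in [0, 1]."""
    score = 0.5 + 0.1 * sum(strategies.values()) + (0.2 if smiling else -0.1)
    return max(0.0, min(1.0, score))

def social_reasoner(rapport: float) -> str:
    """Pick a response strategy conditioned on estimated rapport."""
    if rapport > 0.7:
        return "praise"            # high rapport: reciprocate warmth
    if rapport < 0.3:
        return "self_disclosure"   # low rapport: build common ground
    return "back_channel"          # otherwise: keep the user talking

# Toy turn: the user praises the agent while smiling.
strategies = classify_strategies({"praise": 0.9})
print(social_reasoner(estimate_rapport(strategies, smiling=True)))  # -> praise
```

The point of the sketch is the data flow Cassell describes: verbal strategy detections and nonverbal cues are combined into a rapport estimate, and the agent's own conversational strategy is chosen from that estimate rather than from the words alone.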
Read more >

Related resources
- GTC session: Bringing Enterprise Chatbots to Life With AI-Powered Digital Humans
- GTC session: Address Complex/Logical Tasks With Conversational AI: Multi-Agent, Multi-Turn Framework From Scratch
- GTC session: Agentic Architecture and Digital Human Capabilities for Complex Customer Support
- NGC Containers: ACE Agent Sample Frontend
- Webinar: Vision for All: Unlocking Video Analytics With AI Agents
- Webinar: How Telcos Transform Customer Experiences with Conversational AI