Virtual Agent Understands Your Social Cues

Nov 03, 2016

A researcher from Carnegie Mellon University developed S.A.R.A. (Socially Aware Robot Assistant), a virtual agent that not only comprehends what you say but also understands your facial expressions and head movements.
Using CUDA, GTX 1080 GPUs, and cuDNN with TensorFlow to train its deep learning models, S.A.R.A. replies differently when she detects a smile than when she sees someone frowning or offering a response that doesn’t comply with social norms.
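The post doesn’t include S.A.R.A.’s training code, but a minimal sketch can illustrate the stack it names. The toy expression classifier below is an assumption for illustration only (the architecture, 48x48 input size, and smile/frown labels are not S.A.R.A.’s actual design, and it uses TensorFlow’s current Keras API rather than the 2016-era one); on a machine with a CUDA-capable GPU such as a GTX 1080 and cuDNN installed, TensorFlow runs the convolution ops on the GPU automatically.

```python
# Hypothetical sketch: a small CNN for facial-expression cues (smile vs. frown).
# Architecture, input size, and labels are illustrative assumptions.
import tensorflow as tf

# Confirm CUDA/cuDNN are visible; on a GTX 1080 this lists one GPU device.
print(tf.config.list_physical_devices("GPU"))

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(48, 48, 1)),         # grayscale face crops (assumed size)
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),   # e.g., smile vs. frown
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(face_images, labels, epochs=10)  # requires a labeled face dataset
```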
“She looks at your behavior and her behavior,” says Justine Cassell, director of human-computer interaction at Carnegie Mellon University and director of the project, “and calculates on the basis of the intersection of that behavior. Which no one has ever done before.”
The S.A.R.A. project consists of three elements never before used, says Cassell: conversational strategy classifiers, a rapport estimator, and a social reasoner. “The conversational strategy classifiers are five separate recognizers that can classify any one of five conversational strategies with over 80 percent accuracy.”
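Cassell doesn’t detail how the recognizers are built, but “five separate recognizers” suggests a bank of independent binary detectors, one per strategy, rather than a single five-way classifier. The sketch below illustrates that shape only; the strategy names, text features, scikit-learn pipeline, and toy data are all assumptions for illustration.

```python
# Hypothetical sketch: one independent binary recognizer per conversational
# strategy. Strategy names and toy training data are illustrative assumptions;
# S.A.R.A.'s actual classifiers are not published in this article.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def train_recognizer(utterances, labels):
    """Fit a binary detector for one strategy on (text, 0/1) pairs."""
    return make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2)),
        LogisticRegression(max_iter=1000),
    ).fit(utterances, labels)

# Toy labeled data per strategy (assumed): one positive, one negative example.
toy_data = {
    "self_disclosure": (["I love hiking on weekends.",
                         "What time does the session start?"], [1, 0]),
    "praise":          (["That was a really insightful question!",
                         "Turn left at the next hallway."], [1, 0]),
}

recognizers = {name: train_recognizer(texts, labels)
               for name, (texts, labels) in toy_data.items()}

def detect_strategies(utterance):
    """Return every strategy whose recognizer fires on the utterance."""
    return [name for name, clf in recognizers.items()
            if clf.predict([utterance])[0] == 1]

print(detect_strategies("I really enjoy hiking when I have free time."))
```

One appeal of separate binary recognizers is that a single utterance can carry more than one strategy at once, which a mutually exclusive five-way classifier could not express.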