First Emotionally Intelligent Speaker Trained on GPUs
Apr 12, 2016

Created by researchers at the Hong Kong University of Science and Technology, the MoodBox speaker is billed as the first high-quality wireless speaker that senses human emotions.

Using NVIDIA Tesla GPUs and deep learning, the speaker runs a cutting-edge sensory recognition technology called “Emi”. Emi collects and analyzes audio signals and music lyrics to retrieve songs from a library of millions by genre, style, mood, and artist. Emi not only converses, but also suggests appropriate music, adjusts the lighting to match the music, reports weather conditions, and offers wake-up calls.

“We are bringing the latest R&D in speech, music and emotion recognition technology to people’s lives,” explains creator and emotional intelligence pioneer Pascale Fung, PhD. “When you speak to MoodBox, the predictive engine delineates your emotional state from your tone of voice and the content of your speech.”

With less than two weeks remaining in its Indiegogo campaign, the team has already surpassed its $40,000 funding goal.
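The article does not describe Emi's internals, but the pipeline it outlines (infer a mood from the listener's voice, then retrieve matching songs) can be sketched in a few lines. This is a minimal, hypothetical illustration: the `Song` fields, the `energy`/`valence` features, and the rule-based `infer_mood` stand in for the neural models a GPU-trained system like Emi would actually use.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Song:
    title: str
    artist: str
    genre: str
    mood: str  # label a trained model would assign from audio and lyrics

# Tiny stand-in catalog; MoodBox indexes millions of songs.
CATALOG: List[Song] = [
    Song("Sunrise", "A. Artist", "pop", "happy"),
    Song("Grey Rain", "B. Band", "indie", "sad"),
    Song("Deep Focus", "C. Composer", "ambient", "calm"),
]

def infer_mood(energy: float, valence: float) -> str:
    """Map two toy acoustic features to a mood label.

    A real system would use a neural network trained on GPUs;
    this rule-based stand-in only illustrates the interface.
    """
    if valence >= 0.5:
        return "happy" if energy >= 0.5 else "calm"
    return "angry" if energy >= 0.5 else "sad"

def recommend(catalog: List[Song], mood: str,
              genre: Optional[str] = None) -> List[Song]:
    """Return songs matching the inferred mood (and optional genre)."""
    return [s for s in catalog
            if s.mood == mood and (genre is None or s.genre == genre)]

# Example: a low-energy, low-valence voice sample maps to "sad".
print([s.title for s in recommend(CATALOG, infer_mood(0.2, 0.3))])
# → ['Grey Rain']
```

In practice the hand-written thresholds would be replaced by learned classifiers over speech prosody and lyric content, and the list filter by a search index over the full catalog.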