NVIDIA’s Top 5 AI Stories of the Week: 4/22

Every week, we highlight NVIDIA’s Top 5 AI stories.

In this week’s edition, we cover a new deep learning-based algorithm from OpenAI that can automatically generate original music.

Plus, research that could improve Alexa’s speech recognition model by 15%.

Watch below:

5 – AI Model Can Recommend the Optimal Workout

Planning a workout that is specific to a user’s needs can be challenging. To help deliver more personalized workout recommendations, University of California, San Diego researchers developed a deep learning-based system that better estimates a runner’s heart rate during a workout and recommends a suitable route. The work has the potential to help fitness tracking companies and mobile app developers enhance their apps and devices.
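
The UCSD paper itself isn’t reproduced here, but the core modeling task, predicting a heart-rate sequence from workout features, can be sketched with a small recurrent network. Everything below (the feature set, the class name `HeartRateEstimator`, and the layer sizes) is an illustrative assumption, not the authors’ architecture:

```python
import torch
import torch.nn as nn

class HeartRateEstimator(nn.Module):
    """Toy sequence model: workout features in, heart rate per time step out."""
    def __init__(self, n_features=4, hidden_size=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):
        # x: (batch, time, features), e.g. speed, altitude, grade, elapsed time
        out, _ = self.lstm(x)
        return self.head(out).squeeze(-1)  # (batch, time) predicted heart rate

# One simulated 60-step workout with 4 sensor readings per step.
model = HeartRateEstimator()
workout = torch.randn(1, 60, 4)
print(model(workout).shape)  # torch.Size([1, 60])
```

A recommender could then score candidate routes by how closely their predicted heart-rate profile matches a runner’s target zone, which is one plausible way to turn heart-rate prediction into route recommendation.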

Read more >

4 – AI Helps Improve the Shopping Experience

New York City-based startup TheTake, a member of the NVIDIA Inception program, recently unveiled a new deep learning-based algorithm that can automatically decode what a celebrity, athlete, or other public figure is wearing in a video in near real time.

Read more >

3 – OpenAI Releases MuseNet: AI Algorithm That Can Generate Music

Trying to generate music like Mozart, Beethoven, or perhaps Lady Gaga? AI research organization OpenAI just released a demo of a new deep learning algorithm that can automatically generate original music using many different instruments and styles.

“We’ve created MuseNet, a deep neural network that can generate 4-minute musical compositions with 10 different instruments, and can combine styles from country to Mozart to the Beatles,” the organization wrote in a blog post.

Read more >

2 – AI Research Could Help Improve Alexa’s Speech Recognition Model by 15%

Researchers from Johns Hopkins University and Amazon published a new paper describing how they trained a deep learning system that can help Alexa ignore speech not intended for her, improving the speech recognition model by 15%.

“Voice-controlled household devices, like Amazon Echo or Google Home, face the problem of performing speech recognition of device-directed speech in the presence of interfering background speech,” the researchers stated in their paper.
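
In other words, the system must first decide whether an utterance is addressed to the device at all. One generic way to sketch that idea is a binary classifier that gates the recognizer; the class name, feature shapes, and the `run_asr` callable below are hypothetical stand-ins, not the Amazon/Johns Hopkins architecture:

```python
import torch
import torch.nn as nn

class DeviceDirectedClassifier(nn.Module):
    """Toy binary classifier: is this utterance addressed to the device?"""
    def __init__(self, n_mels=40, hidden_size=128):
        super().__init__()
        self.encoder = nn.LSTM(n_mels, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, features):
        # features: (batch, frames, n_mels) log-mel filterbank frames
        _, (h_n, _) = self.encoder(features)
        return torch.sigmoid(self.head(h_n[-1]))  # P(device-directed)

def transcribe_if_directed(features, classifier, run_asr, threshold=0.5):
    """Send audio to the recognizer only when it looks device-directed."""
    if classifier(features).item() >= threshold:
        return run_asr(features)
    return None  # treat as interfering background speech and ignore it

# Example: gate one 200-frame utterance with a placeholder recognizer.
clf = DeviceDirectedClassifier()
utterance = torch.randn(1, 200, 40)
print(transcribe_if_directed(utterance, clf, run_asr=lambda f: "<transcript>"))
```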

Read more >

1 – AI Can Interpret and Translate American Sign Language Sentences

According to the World Health Organization (WHO), there are an estimated 360 million people worldwide with disabling hearing loss. To help with sign language translation, researchers from Michigan State University developed a deep learning-based system that can automatically interpret individual signs of American Sign Language (ASL), as well as translate full ASL sentences without requiring users to pause after each sign. The work has the potential to help translate some of the 300 sign languages in use globally.
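
Recognizing full sentences without pauses is a continuous sequence-labeling problem, commonly handled with a CTC-style objective that aligns signs to video frames automatically. The sketch below illustrates that general approach; the feature dimensions, vocabulary size, and model structure are assumptions, not the Michigan State system:

```python
import torch
import torch.nn as nn

class ContinuousSignRecognizer(nn.Module):
    """Toy model mapping per-frame video features to sign (gloss) probabilities."""
    def __init__(self, feat_dim=512, hidden_size=256, vocab_size=1000):
        super().__init__()
        self.encoder = nn.LSTM(feat_dim, hidden_size,
                               batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden_size, vocab_size + 1)  # +1 = CTC blank

    def forward(self, frames):
        # frames: (batch, time, feat_dim) features from a video backbone
        out, _ = self.encoder(frames)
        return self.head(out).log_softmax(-1)

# CTC aligns the sign sequence to frames, so no pauses between signs are needed.
model = ContinuousSignRecognizer()
frames = torch.randn(2, 120, 512)          # 2 clips, 120 frames each
log_probs = model(frames).transpose(0, 1)  # (time, batch, classes) for CTCLoss
targets = torch.randint(1, 1001, (2, 8))   # 8 signs per sentence
loss = nn.CTCLoss(blank=0)(log_probs, targets,
                           input_lengths=torch.full((2,), 120),
                           target_lengths=torch.full((2,), 8))
```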

Read more >
