Researchers from the University of Toronto developed an AI system that creates and then sings a Christmas song by analyzing the visual components of an uploaded image.
Using CUDA, Tesla K40 GPUs, and cuDNN, the researchers trained their neural network on 100 hours of online music. Once trained, the program can take a musical scale and a melodic profile and produce a simple 120-beats-per-minute melody — it then adds chords and drums.
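To make the idea concrete, here is a minimal sketch of generating a fixed-tempo melody from a scale and a melodic profile. This is not the researchers' model — the profile encoding (+1 = step up, -1 = step down, 0 = repeat) and the note names are invented for illustration:

```python
def generate_melody(scale, profile, n_notes=16, bpm=120):
    """Walk up and down a scale following a melodic profile,
    assigning each note one beat at the given tempo."""
    seconds_per_beat = 60.0 / bpm  # 0.5 s per note at 120 BPM
    idx = 0
    melody = []
    # Repeat the profile until we have enough steps, then truncate.
    steps = (profile * (n_notes // len(profile) + 1))[:n_notes]
    for step in steps:
        # Clamp the scale index so the melody stays within the scale.
        idx = max(0, min(len(scale) - 1, idx + step))
        melody.append((scale[idx], seconds_per_beat))
    return melody

c_major = ["C4", "D4", "E4", "F4", "G4", "A4", "B4", "C5"]
tune = generate_melody(c_major, profile=[1, 1, -1, 0], n_notes=8)
```

A real system would learn the profile and note durations from data rather than hard-coding them, and would layer chords and percussion on top of the melody line.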
The next step was to train their 'Neural Karaoke' program on a collection of pictures and their captions to learn how specific words can be linked to visual patterns and objects — once fed an image, the program can compile relevant lyrics and sing them.
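The image-to-lyrics step can be pictured, in very simplified form, as ranking caption words by how strongly they associate with the objects detected in an image. The association scores and labels below are invented for illustration, not taken from the paper:

```python
# Hypothetical word-image associations learned from image captions:
# each visual label maps to caption words with an association score.
associations = {
    "snow": {"white": 0.9, "winter": 0.8, "tree": 0.2},
    "tree": {"lights": 0.7, "green": 0.6, "star": 0.5},
}

def lyric_words(detected_labels, top_k=2):
    """Rank candidate lyric words by their total association
    with the visual labels detected in an image."""
    scores = {}
    for label in detected_labels:
        for word, s in associations.get(label, {}).items():
            scores[word] = scores.get(word, 0.0) + s
    ranked = sorted(scores.items(), key=lambda kv: -kv[1])
    return [word for word, _ in ranked[:top_k]]

words = lyric_words(["snow", "tree"])
```

The actual system learns these associations with a neural network and composes full lyric lines, but the retrieval intuition — image content steering word choice — is the same.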
Video: "Neural Story Singing Christmas" from Hang Chu on Vimeo.
“We are used to thinking about AI for robotics and things like that. The question now is what can AI do for us?” said Raquel Urtasun, an associate professor of machine learning and computer vision in the University of Toronto's computer science department. “You can imagine having an AI channel on Pandora or Spotify that generates music, or takes people’s pictures and sings about them,” adds her colleague, Sanja Fidler. “It’s about what can deep learning do these days to make life more fun?”
Read more >