Trying to generate music like Mozart, Beethoven, or perhaps Lady Gaga? AI research organization OpenAI just released a demo of a new deep learning algorithm that can automatically generate original music using many different instruments and styles.
“We’ve created MuseNet, a deep neural network that can generate 4-minute musical compositions with 10 different instruments, and can combine styles from country to Mozart to the Beatles,” the organization wrote in a blog post.
Using NVIDIA Tesla V100 GPUs with the cuDNN-accelerated TensorFlow deep learning framework, the OpenAI team trained their algorithm on hundreds of thousands of MIDI files. The algorithm was taught to discover patterns of harmony, rhythm, and style in the training dataset.
The OpenAI team says MuseNet uses the recompute trick and optimized kernels of Sparse Transformer — a deep neural network (DNN) architecture that predicts the next element in a sequence — to train their 72-layer network.
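MuseNet's actual model is a large transformer, but the core idea of next-element prediction over a token sequence can be illustrated with a much simpler stand-in. The sketch below is a hypothetical bigram counter over made-up MIDI-like note tokens — not OpenAI's method or data — showing what "predict what's next in a sequence" means in practice:

```python
from collections import Counter, defaultdict

def train_bigram(tokens):
    """Count, for each token, which tokens follow it in the sequence."""
    follows = defaultdict(Counter)
    for cur, nxt in zip(tokens, tokens[1:]):
        follows[cur][nxt] += 1
    return follows

def predict_next(follows, token):
    """Return the most frequent successor of `token`, or None if unseen."""
    counts = follows.get(token)
    return counts.most_common(1)[0][0] if counts else None

# Hypothetical, simplified note-event tokens (real MIDI tokenizations also
# encode timing, velocity, and instrument).
sequence = ["C4", "E4", "G4", "C4", "E4", "G4", "C5"]
model = train_bigram(sequence)
print(predict_next(model, "E4"))  # prints "G4": it always follows "E4" here
```

A transformer like MuseNet's replaces these raw counts with learned attention over a long context window, which is what lets it pick up harmony, rhythm, and style rather than just the immediately preceding note.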
“We collected training data for MuseNet from many different sources. ClassicalArchives and BitMidi donated their large collections of MIDI files for this project, and we also found several collections online, including jazz, pop, African, Indian, and Arabic styles. Additionally, we used the MAESTRO dataset.”
In the interactive demo, which uses NVIDIA Tesla V100 GPUs for inference, users can interact with the music generated by the algorithm, applying different instruments and sounds to generate an entirely new track.
Watch MuseNet Concert from OpenAI on www.twitch.tv
Using the demo, you can take Lady Gaga’s ‘Poker Face’ as inspiration and change the tokens or chord progressions. From there you can swap the instruments and adjust the style of the track to make it sound like Mozart, The Beatles, or Journey.
OpenAI says the demo will be available through May 12; at that point, the team will decide what direction the project takes based on user feedback.