What do you really think of music composed by the latest technological craze we all call artificial intelligence (AI)?
OpenAI, a non-profit artificial intelligence company backed by Tesla founder Elon Musk and LinkedIn founder Reid Hoffman, has recently launched MuseNet — a new online tool that uses AI to generate songs in as many as 15 different styles, mixing classical composers like Beethoven with contemporary artists like Lady Gaga, jazz, or even video game music.
The company made headlines earlier this year when it produced an AI that successfully beat a world champion e-sports team at Dota 2. In another project, its AI was capable of writing fiction and news stories so convincingly that the company declined to release the research publicly for fear the system could be misused.
MuseNet is built on a deep neural network trained on a dataset of MIDI files gathered from online sources, covering genres such as Arabic, pop, jazz, African, and Indian styles of music. According to OpenAI's researchers, the model can pay attention over a span of up to four minutes, allowing it to track a song's melody, rhythm, and chords across a wide context and predict the next note in a sequence.
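The core idea of predicting the next note from what came before can be illustrated with a toy example. MuseNet itself uses a large transformer over MIDI events; the bigram model below is only a conceptual sketch (all function names and the example melody are illustrative, not OpenAI's code), showing how transition statistics learned from a sequence can extend a melody one note at a time.

```python
from collections import Counter, defaultdict

def train_bigram(notes):
    """Count note-to-note transitions in a training sequence."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(notes, notes[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, note):
    """Return the most frequently observed note following `note`."""
    if note not in counts:
        return None
    return counts[note].most_common(1)[0][0]

def generate(counts, start, length):
    """Greedily extend a melody from a starting note."""
    seq = [start]
    for _ in range(length - 1):
        nxt = predict_next(counts, seq[-1])
        if nxt is None:
            break
        seq.append(nxt)
    return seq

# A short fragment encoded as MIDI note numbers (a C-major motif).
melody = [60, 62, 64, 62, 60, 62, 64, 65, 64, 62, 60]
model = train_bigram(melody)
print(generate(model, 60, 5))
```

A real system replaces the bigram counts with a neural network that conditions on minutes of context rather than a single previous note, which is what lets MuseNet maintain coherent melodies, rhythms, and chord progressions.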
When we tried having it write the ending of Lady Gaga's Poker Face in the style of a jazz performance, it gradually veered further and further away from the original.
Although OpenAI isn't the first company to research AI-generated music, this new direction in music composition is already posing complicated copyright challenges from the very start. Music is a creative output, and an AI algorithm may be the last element to which creative originality can be comfortably attributed. While OpenAI is able to create music using an AI, it hasn't been able to create originality — a staple of songwriting and composition.
MuseNet is now available to try out on OpenAI's website. A basic mode lets you select a style or composer and, optionally, the opening of a well-known piece of music. The advanced option lets you communicate with the model directly. The prototype will be available until 12 May.