
The AI Music of the Future

When we think about artificial intelligence, the first thing that pops into our minds is usually a very logical system that can solve all sorts of mathematical problems, or maybe even understand human language. But what about more creative domains such as music? Could artificial intelligence create it one day?

If this idea sounds crazy to you, you ain't heard nothing yet...

An integral part of our lives

Ever since the dawn of culture, music has been part of our lives. The oldest evidence of a prehistoric musical instrument is a 42,000-year-old flute made of bird bone and mammoth ivory, but some researchers believe that music-like singing existed as early as 150,000 to 250,000 years ago.

Over the years, a long series of musical innovations appeared and changed the world of music: from the creation of new instruments to the appearance of unique composing techniques, the boundaries of music have constantly expanded. Well, at least up until recent times.

Today's music

Do you ever get the feeling that many of today's pop hits sound very similar to each other? Many others feel this way too. On 20 July 2011, the band "Axis of Awesome" released a music video called "Four Chords", in which they demonstrated how they could play 73 popular pop songs using only four chords.

Similar chords are not the only symptom of today's musical simplification. In 2016, the musician Patrick Metzger coined the term "Millennial Whoop": a melodic pattern that alternates between the fifth and third notes of a major scale, typically sung on the "wa" and "oh" syllables. Metzger noticed that almost every leading pop singer of our time has used this specific melodic pattern in at least one of their songs.
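For readers who think in code, the pattern is simple enough to sketch. The snippet below is an illustrative toy, not taken from any real analysis tool; the function name, the hard-coded C major scale and the syllable pairing are assumptions made for the example. It simply alternates the fifth and third scale degrees:

```python
# Toy sketch of the "Millennial Whoop": alternating the fifth and
# third degrees of a major scale. In C major these are G and E,
# typically sung on the syllables "wa" and "oh".

C_MAJOR = ["C", "D", "E", "F", "G", "A", "B"]

def millennial_whoop(scale, repeats=4):
    """Return a list of (note, syllable) pairs for the whoop pattern."""
    fifth, third = scale[4], scale[2]   # scale degrees 5 and 3 (0-indexed)
    return [(fifth, "wa"), (third, "oh")] * repeats

print(millennial_whoop(C_MAJOR))
# alternating ('G', 'wa') and ('E', 'oh') pairs
```

Swap in any other major scale list and the same two scale degrees produce the corresponding whoop.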

In 2012, a study conducted by the Spanish National Research Council scientifically confirmed these feelings. The researchers analyzed the diversity of note combinations in a data set of 464,411 music recordings made between 1955 and 2010. The study found that the diversity of transitions between note combinations has consistently diminished over the last 50 years. It also showed that not only are songs' melodies more similar to each other than they used to be, but the palette of instrument sounds has grown narrower as well.

The music industry

To better understand the reasons for this musical deterioration, it’s important to have a clear perspective on the music industry in general. A recent analysis by Goldman Sachs predicted that the music industry will grow to nearly $41 billion by 2030, with most of it ($34 billion) expected to come from streaming services like YouTube and Spotify.

Streaming technology may seem like a great modern way to monetize music, but it also brings with it fierce competition for audience attention, which dramatically increases the costs music producers must bear. According to the IFPI (International Federation of the Phonographic Industry), it can now cost up to $2 million to break an artist in a major recorded music market - including recording, music video production, tour support and promotional costs.

With these costs, music producers simply can’t afford to take any risks. Their obvious solution is to play it safe and recapture working formulas. This eventually leads to less musical innovation and more similarity between new pop songs. But this business situation has another interesting outcome: it’s the perfect substrate for AI-generated music.

The rise of AI music

Surprisingly, the idea of AI-generated music is actually quite old. In fact, back in 1843, none other than Ada Lovelace, the world's first programmer, suggested that “The engine might compose elaborate and scientific pieces of music of any degree of complexity or extent.”

In 1957, the world's first machine-made music was created by an AI system developed by Lejaren Hiller and Leonard Isaacson. The AI used rules of music theory to compose a new piece which sounded, well – decent…

About two years ago, a Google AI system called Magenta created a 90-second piano melody based on four notes. This short melody was far from a professional composition and felt more like the output of a small child taking its first steps with music editing software, but since Google was behind it, it was enough to create a big buzz around the idea of AI composing.

Google may be the most famous player in this new field, but it’s not the only one and certainly not the most advanced. Aiva Technologies is a startup that specializes in composing genuine classical music strictly based on artificial intelligence. Aiva has released a full album called Genesis, as well as many single tracks, and has become the first AI to ever officially acquire the worldwide status of “Composer”, having been registered with SACEM, the authors’ rights society of France and Luxembourg.

Another interesting startup in the field of AI composing is Amper Music. It's currently developing a cloud-based platform that composes, performs and produces AI-generated music. American singer Taryn Southern recently released “Break Free”, a release fully composed by Amper Music’s AI.

How does it work?

Many people don’t realize that there is a lot of mathematical logic in music. For instance, the same 12 notes used to produce music today were derived about 2,500 years ago by the Greek philosopher Pythagoras, through a sequence of division operations. Every musical melody is ultimately a set of mathematical patterns that can be measured and evaluated, and what an AI composer essentially does is identify all sorts of patterns in existing melodies and then borrow and manipulate them to create new ones.
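Pythagoras' ratio-based derivation can be sketched in a few lines of code. The following is a simplified illustration of Pythagorean tuning (the function name and the 440 Hz starting frequency are arbitrary choices for the example): repeatedly multiplying a frequency by 3/2 (a perfect fifth) and folding the result back into a single octave yields 12 distinct pitches:

```python
# Sketch of Pythagorean tuning: stack perfect fifths (ratio 3:2)
# and fold each result back into the base octave. Twelve steps of
# this process give the familiar 12-note chromatic collection.

def pythagorean_scale(base_freq=440.0, steps=12):
    """Return `steps` frequencies within one octave of `base_freq`."""
    freqs = []
    f = base_freq
    for _ in range(steps):
        freqs.append(f)
        f *= 3 / 2                    # go up a perfect fifth
        while f >= base_freq * 2:
            f /= 2                    # fold back into the base octave
    return sorted(freqs)

print([round(f, 1) for f in pythagorean_scale()])
```

All 12 resulting frequencies fall between 440 Hz and 880 Hz, i.e. within one octave, which is why the same note names repeat as pitch rises.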

This concept of “borrowing” is not only a machine-related restriction. In his TED Talk, Kirby Ferguson, creator of Everything is a Remix, mentions that many well-known groundbreaking artists and inventors, from Bob Dylan to Steve Jobs, have used this technique in their work. So if it’s okay for a human to do it – what’s the problem if a machine does it? In fact, artificial intelligence can outdo humans at this technique by better disguising the borrowed piece through sophisticated manipulations.

Which is better?

An interesting experiment performed in 2016 by the Sony Computer Science Laboratories in Paris tested people's ability to identify AI-generated music: 1,600 people were asked to listen to two distinct harmonizations of the same melody – one by Bach and one by an AI called “DeepBach”. More than half of the listeners attributed the DeepBach-generated harmonies to Bach himself, while 75% of listeners correctly identified Bach’s own music. It’s worth mentioning that a quarter of the participants were professional musicians or music students.

While it’s becoming harder and harder to distinguish between human and AI-generated music, the question of which of them is better will probably remain open forever. After all, when it comes to music, it all comes down to personal taste. One advantage AI does have is the ability to analyze people's personal taste and adjust songs to suit it, thereby reducing the risk of failure. Will this ability prevail over the human "creative spark”? Only time will tell.

AI-human collaboration

Drew Silverstein, one of Amper’s founders, explained in an interview with TechCrunch that Amper was designed specifically to work in collaboration with human musicians: “One of our core beliefs as a company is that the future of music is going to be created in the collaboration between humans and AI. We want that collaborative experience to propel the creative process forward.”

The collaboration between man and machine could mean that a human simply comes up with the general idea for a melody, and the machine completes it by adjusting the notes, setting the musical scale, adding relevant musical instruments and eventually combining everything into one holistic musical composition. A preliminary example of this model can be found in an iOS app called Humtap, which allows users to hum a melody or tap a beat and easily turn it into a song with the help of an AI algorithm.

Human-machine collaboration could also be achieved via mind-reading technology. Scientists at Austria's Graz University of Technology (TU Graz) have recently developed a system that allows users to write music using only the power of their mind. By reading the user’s EEG brain waves, the system can identify a melody that the user is thinking of and transform it into musical notes. In the future, AI could take this preliminary melody and transform it into a comprehensive song.

In conclusion

Whether you like it or not, artificial intelligence is becoming more and more present in every aspect of our lives – including music. While there is no doubt that in the future there will be AI-generated music that we'll like, the question remains whether someday there will be music that an AI system will like. But that is a whole different discussion…
