You’ve probably heard the following argument for AI before: artificial intelligence takes over the unattractive routine tasks that humans do not want to perform, and in return people can spend their time on creative pursuits, such as composing music or making art. Right? Think again.
Since the end of the seventies, researchers have developed algorithms that can compose remarkable music. One of the pioneers was David Cope, professor of music at the University of California, Santa Cruz (USA). Cope developed EMI (Experiments in Musical Intelligence), one of the first programs to compose original music in three steps:

Deconstruction (analysis) of earlier compositions
Processing the collected data to identify recurring musical patterns
and, finally, the composition of new songs based on the previously identified patterns (a simplified sketch of this pipeline follows the list)
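To make the three steps concrete, here is a minimal sketch in Python. It is not Cope's actual implementation; it simply deconstructs a toy corpus of melodies into note-to-note transitions (steps 1 and 2) and recombines them into a new melody (step 3) using a first-order Markov chain. The note names, corpus, and function names are illustrative assumptions.

```python
import random
from collections import defaultdict

# Steps 1 + 2: deconstruct existing melodies into note-to-note transitions
# and aggregate them into a simple pattern table (a first-order Markov chain).
def learn_patterns(melodies):
    transitions = defaultdict(list)
    for melody in melodies:
        for current_note, next_note in zip(melody, melody[1:]):
            transitions[current_note].append(next_note)
    return transitions

# Step 3: compose a new melody by recombining the learned transitions.
def compose(transitions, start_note, length=16):
    melody = [start_note]
    for _ in range(length - 1):
        candidates = transitions.get(melody[-1])
        if not candidates:  # dead end: fall back to any known note
            candidates = list(transitions.keys())
        melody.append(random.choice(candidates))
    return melody

# Toy corpus of melodies written as note names (purely illustrative).
corpus = [
    ["C", "E", "G", "E", "C", "D", "E", "C"],
    ["G", "E", "C", "D", "E", "F", "G", "C"],
]
patterns = learn_patterns(corpus)
print(compose(patterns, start_note="C"))
```

The real EMI worked with far richer musical representations, but the basic recipe is the same: analyze, extract patterns, recombine.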
In this way, EMI composed over 11,000 original pieces, many of which resemble the works of famous composers such as Bach, Mozart, and Beethoven. Many of these compositions are available on YouTube, Spotify, and other digital platforms.
So music is another field (like photography, data analysis, and transportation) in which algorithms now dominate a process that was once reserved for humans.
Algorithms compose songs in various ways, for example by analyzing the most popular songs of a particular genre, album, artist, or an artist’s entire career.
However, there is no guarantee that listeners will find artificially composed songs appealing. We still need human feedback to tell the algorithm whether a song is good or not. The more ratings an algorithm receives, the better it becomes, because it learns not to repeat patterns that listeners find unpleasant. And apart from electronic music, songs composed by AI still need to be recorded by people, which ensures a lasting human influence on the artificial compositions.
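As a hedged illustration of how such a feedback loop could work (no specific product necessarily does exactly this): extend the transition table from the sketch above with sampling weights, and let listener ratings shrink the weight of transitions that appear in songs people dislike.

```python
import random
from collections import defaultdict

# Hypothetical feedback loop: each note transition carries a weight, and
# listener ratings nudge the weights so disliked patterns are sampled less often.
weights = defaultdict(lambda: 1.0)  # (current_note, next_note) -> sampling weight

def compose_weighted(transitions, start_note, length=16):
    melody = [start_note]
    for _ in range(length - 1):
        candidates = transitions.get(melody[-1]) or list(transitions.keys())
        w = [weights[(melody[-1], c)] for c in candidates]
        melody.append(random.choices(candidates, weights=w, k=1)[0])
    return melody

def apply_feedback(melody, rating):
    """rating in [-1, 1]: negative ratings shrink the weights of the
    transitions used in this melody, positive ratings grow them."""
    for a, b in zip(melody, melody[1:]):
        weights[(a, b)] = max(0.05, weights[(a, b)] * (1.0 + 0.5 * rating))

# Usage (assuming learn_patterns and corpus from the earlier sketch):
# patterns = learn_patterns(corpus)
# song = compose_weighted(patterns, "C")
# apply_feedback(song, rating=-1.0)  # listener disliked it
```

Over many rating rounds, patterns that listeners repeatedly reject become progressively less likely to reappear, which is the intuition behind "the more people rate, the better the algorithm becomes."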
There are currently a tremendous number of players investing in AI-composed music, such as AIVA (a start-up from Luxembourg creating emotional compositions for soundtracks and commercials), Flow Machines (Sony Computer Science Laboratories), Humtap, IBM Watson Music, Jukedeck, Chordpunch, Amper Music, Magenta (Google Brain), Brain.fm, Melodrive, Popgun, and The Echo Nest.
All of these companies are fully dedicated to developing and improving algorithms that compose music artificially for different contexts, and they are achieving remarkable results. Given the exponential development of the technology, artificially composed music will only get better.
In the studies that I conducted with my students at the IUBH University of Applied Sciences (Bad Honnef, Germany) for Musicstats.org, the results are quite clear.
Overall, respondents have a negative perception of AI-composed music, especially in contexts where the music itself is the focus (such as singer-songwriter performances, acoustic music, and bands).
In contexts where music plays a subordinate role (for example in commercials, as soundtracks for videos, or in public spaces), the acceptance of AI compositions is much greater.