Does Music Have a Future in the Era of AI?
Today, thanks to neural networks, creating music and even producing music videos can be done effortlessly on a smartphone. With the emergence of Artificial Intelligence (AI) capable of autonomously generating musical pieces, the craft of music production, already cheapened over the past 20-30 years, has been devalued even further. Many prominent figures in show business criticize the practice of making music with AI, and that is understandable: anyone can hurt an artist's feelings, but when a couple of microchips do it, it stings twice as much.
On the other hand, trendsetters of modern pop like The Beatles have used AI in crafting new songs, turning neural networks into a full-fledged creative instrument. Let's try to discern where art ends and technology begins, and whether music created without human involvement has a future.
The story behind the song "Now And Then" marked the start of a new era in sound recording. Back in the mid-90s, John Lennon's widow, Yoko Ono, gave Paul McCartney several tapes of song drafts her husband had recorded on a home tape recorder. Two of these drafts, finished by The Beatles with the help of producer Jeff Lynne, made it onto the monumental multi-disc release, The Beatles' "Anthology," unveiled in 1995.
The third track, "Now And Then," a sentimental song very much in the spirit of the late Lennon, was shelved because of the recording's high noise level. In 1995, no technology existed that could separate the noise from the recording and bring the draft up to a quality acceptable for an album release. Such tools emerged only in the 21st century, through groundbreaking advances in AI.
The story reads like a science fiction plot from the 70s, the decade when "Now And Then" was written: descendants resurrect the voice of a long-dead man using the computer technologies of the future.
Here, indeed, the future has arrived.
In a Tight Spot
As previously discussed, the crisis in modern pop music is directly linked to the widespread use of computer technologies in creating and producing musical content. If computer technologies offer ready-made solutions, why reinvent the wheel? The Beatles, by contrast, had to invent that wheel themselves, forced to record their masterpieces on primitive four-track tape recorders.
An apt analogy is the use of CGI in filmmaking. When visual effects were still crafted by hand, with models, makeup, and optical illusions, each effects artist sought to astonish the audience with something unprecedented. Computer technologies, by contrast, offer ready-made algorithms.
Hence, we witness a slew of blockbusters at the box office with similar visual effects. Similarly, on the radio, there's a stream of songs that hardly differ from one another in composition, arrangement, or performance style. The situation with music is even more dire than in cinema, as technologies for creating audio content at home are much more accessible.
Audio editors work with libraries of sounds containing an immense number of ready-made musical fragments to create music tracks. Artificial Intelligence systems similarly generate musical material based on already recorded music.
Ironically, one of the first such tracks was the composition "Daddy's Car," created in 2016 by the neural network Flow Machines "inspired by" The Beatles.
The algorithm analyzed roughly 45 songs from the band's catalog and composed a new melody in the style of the "Liverpool Four". The joke, as they say, landed well. But about seven years later, it was no longer a laughing matter: this spring, as reported in the news, social media buzzed over the track Heart On My Sleeve (https://www.youtube.com/watch?v=7HZ2ie2ErFI), also created by a neural network, but one of a fundamentally new generation.
Billboard dubbed this song the "hottest event in the music industry". No wonder: the track couldn't be distinguished from compositions by Drake and The Weeknd, whose voices were used by AI to create "Heart On My Sleeve". Bloggers even joked that the neural network did better than real artists.
The emperor, as they say, turned out to have no clothes: the creativity of these two showbiz millionaires, it turns out, can easily be generated at home. No wonder a scandal erupted, with Drake and The Weeknd's publisher, the recording giant Universal Music Group, demanding that streaming platforms block the track.
But the flood of AI-made music is unstoppable. The bulk of it consists of so-called "mashups": in one such take on the Daft Punk song "Get Lucky," for instance, we hear the vocals of Michael Jackson, who died four years before the song was written.
The rapid advancement of neural networks will eventually enable the creation of such high-quality "fakes" inspired by the work of favorite artists and bands that distinguishing them from original tracks might be challenging for the artists themselves.
Moreover, the issue of copyright remains unresolved. As explained by The Verge, neural network tracks are not part of label catalogs and are effectively equated with original songs.
Because app developers waive rights to works created with AI, such tracks effectively have no owners, so prohibiting their distribution is virtually impossible. Removing "Heart On My Sleeve" from streaming services was only possible because the track included an unauthorized mention of producer Metro Boomin, who had no involvement in its creation.
This story has sparked intense debates. One staunch critic of creating songs using AI is the iconic musician Nick Cave, who called the process a "grotesque mockery of what it is to be human." The intellectual Cave, also a writer and screenwriter, emerges from a literary culture, and his song lyrics are genuine poetry. Naturally, he was offended by tracks generated by a neural network based on his lines.
"Songs arise out of suffering, by which I mean they are predicated upon the complex, internal human struggle of creation and, well, as far as I know, algorithms don’t feel. Data doesn’t suffer. ChatGPT has no inner being... and hence it doesn’t have the capacity for a shared transcendent experience," Cave wrote on his website.
There is also the matter of technical mastery of an instrument, interaction with other musicians, and the ability to improvise, all of which remain inaccessible to AI. This point was made by American guitarist Alex Skolnick.
However, this musician, who works across genres from heavy metal to jazz, believes that using neural networks in contemporary pop music is entirely appropriate.
The popular French DJ David Guetta, who works in techno and house, shares this sentiment: in one of his tracks, he used an AI-generated imitation of Eminem's voice.
Additionally, let's not forget the positive experience of The Beatles, who used AI as a purely technical tool for processing sound recordings rather than creating music. Ultimately, this is very convenient and efficient, a viewpoint supported by industry professionals.
"A neural network cannot yet replace a real artist and creator. But using AI as a tool that takes over routine recording chores is a different matter. Cleaning excessive reverb and noise from vocal tracks, for example: just five years ago this would have been impossible at today's level, a couple of years ago it took several hours, and today a neural network does it quickly and easily," Yurii Medved, a Ukrainian music producer with extensive industry experience, told The Gaze.
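To give a sense of what "cleaning noise from a recording" means in code, here is a deliberately tiny sketch of classical spectral gating, the crude ancestor of the neural denoisers described above (this is an illustrative toy, not the algorithm of any specific product): each frame of audio is transformed to the frequency domain, and bins quieter than a threshold relative to the frame's loudest bin are zeroed out.

```python
import numpy as np

def spectral_gate(signal, frame=256, gate_db=-30.0):
    """Toy spectral-gating noise reduction: for each frame, zero out
    frequency bins whose magnitude is more than `gate_db` below the
    frame's peak bin, then transform back to the time domain."""
    out = np.zeros(len(signal), dtype=float)
    for start in range(0, len(signal) - frame + 1, frame):
        chunk = np.asarray(signal[start:start + frame], dtype=float)
        spectrum = np.fft.rfft(chunk)
        mag = np.abs(spectrum)
        threshold = mag.max() * 10 ** (gate_db / 20.0)
        spectrum[mag < threshold] = 0.0          # gate the quiet bins
        out[start:start + frame] = np.fft.irfft(spectrum, n=frame)
    return out

# Demo: a pure tone (stand-in for a vocal) buried in weak hiss.
# 437.5 Hz is chosen to align exactly with an FFT bin at this
# sample rate and frame size, so the tone itself is untouched.
rng = np.random.default_rng(0)
t = np.arange(2048) / 8000.0
tone = np.sin(2 * np.pi * 437.5 * t)
noisy = tone + 0.05 * rng.standard_normal(t.size)
clean = spectral_gate(noisy)

err_noisy = np.mean((noisy - tone) ** 2)  # error before gating
err_clean = np.mean((clean - tone) ** 2)  # error after gating
```

Real denoisers are far more sophisticated (overlapping windows, learned noise profiles, neural source separation), but the demo shows the basic trade-off: the gate removes broadband hiss while leaving the strong, tonal parts of the signal intact.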
So once again, the future seems to belong to AI.