A teal robot carries a trumpet, guitar, drum set, and horn.
Illustration by Josh Villanueva, University of Southern California, San Bernardino

AI in the Future of Music

As more artificially generated music circulates online, artistic property and originality are coming into question.
October 3, 2023
9 mins read

Whether or not you’re well versed in new technology, you’ve likely heard a fair amount about “AI,” short for “artificial intelligence,” in recent months. What was once a mere figment of the imagination has become an emerging force in the technological world, one that has already made an impact on the arts, specifically music.

Whether for better or worse, AI’s influence on the music industry is already making waves. 

For starters, exactly what is AI? 

Artificial intelligence emerged alongside the advancement of computers in the late 20th century. Though the term “artificial intelligence” was in fact coined in 1956, its capabilities remained limited until the boom in computing in the 1990s. An early example of AI’s power can be found in IBM’s chess-playing program, “Deep Blue,” which defeated world chess champion Garry Kasparov in 1997 and demonstrated a computer’s potential to compete with human thought. Throughout the 21st century, AI has continued to develop through technology like Apple’s virtual assistant Siri — which allows the average person to interact with artificial intelligence on a regular basis.

Today, AI is practically inescapable. From image and facial recognition systems on our smartphones to autonomous vehicle features, it can be found in just about every aspect of modern daily life. But this year alone, we’ve seen a new wave of AI’s capabilities unleashed on academia through ChatGPT, and on the arts through artificially generated music.

A simple search for “AI music” on TikTok now turns up a never-ending array of popular songs “covered” by the artificially generated voices of other artists. If you’ve ever wondered what “Bodak Yellow” by Cardi B might sound like if rapped by Nicki Minaj, or what “Shape of You” by Ed Sheeran would sound like if Kanye West sang it, the answers to your questions are now out there.

The majority of these covers aren’t created to make an earnest impression on audiences beyond a quick chuckle. However, every once in a while, a cover pops up — like Lana Del Rey singing Hozier’s “Take Me to Church” or Beyoncé rapping Big Jade’s “RPM” — that sounds impressively human and almost professionally produced. 

The AI project that seemingly launched this craft into the stratosphere is an “original” song “performed” by Drake and The Weeknd titled “Heart On My Sleeve.” First shared in April by anonymous TikTok user “Ghostwriter977,” the song features original lyrics written by the creator and went viral for its likeness to the two artists’ voices. After months of online circulation — and failed attempts to add the song to streaming services — rumors of Grammy eligibility began to swirl.

But could a completely fabricated song earn a Grammy award? 

The answer: Sort of?

The buzz reached Recording Academy CEO Harvey Mason Jr. in early September. Mason claimed that “Heart On My Sleeve” would be “absolutely eligible” for Grammy consideration, given that a human (Ghostwriter977) wrote the lyrics.

But just days later, he reversed course in an Instagram post, stating that “even though it was written by a human creator, the vocals were not legally obtained, the vocals were not cleared by the label or the artists, and the song is not commercially available,” and that it is therefore not eligible for Grammy consideration. He went on to clarify that “The [Recording] Academy is here to support and advocate and protect and represent human artists, and human creators, period.”

Universal Music Group (which controls several of the industry’s biggest record labels) quickly pulled “Heart On My Sleeve” from online streaming platforms after it was uploaded.

At this time, there are no laws barring the creation of AI music that imitates the likeness of a real person. However, legal action has been attempted against the unauthorized replication of a performer’s voice.

In 1988, Bette Midler sued Ford Motor Co. and its advertising agency Young & Rubicam for $10 million over the imitation of her voice in a commercial she had previously declined to participate in. The court acknowledged that the First Amendment protects much of what the media does in reproducing a person’s likeness and sound, but only when the use is “informative or cultural” rather than purely exploitative.

Midler’s case was initially dismissed, but an appeals court revived it after finding that Ford’s use of her identity to sell a product was not informative or cultural in nature. In the end, she was awarded $400,000 after a jury concluded that Young & Rubicam’s impersonation of Midler amounted to “taking her identity.”

Though Midler’s case did not involve artificial intelligence, its concern for artistic property and originality remains relevant in the midst of the AI boom. All of which raises the question of whether the average person should be allowed to use someone else’s voice to create a work of art.

No matter how great some of the songs may sound, the majority of these AI covers are not authorized by the original artists. (One exception is Grimes, who has encouraged fans to use her voice and has even set up a system that gives them access to her vocal stems for their creations.) Cases similar to Midler’s are bound to pop up in courtrooms unless a legal precedent is set for using someone else’s voice — particularly a public figure’s — to produce popular media.

A spokesperson for Universal Music Group released a statement in April encouraging streaming platforms and consumers of music to choose which side of history they’d like to be on: “the side of artists, fans and human creative expression, or on the side of deep fakes, fraud and denying artists their due compensation.” 

What this debate boils down to is the person behind the AI — whether it’s the artist themself using the technology to recreate recordings of their own voice, or an unauthorized teen in their bedroom exploring the technology’s power. Originality is undoubtedly stifled when this technology comes into play, but at what point does the machinery take things too far? Consumers of music and media now have tools at their fingertips that can “replace” human creativity, which is the source of concern for many.

The key to understanding all of this controversy and debate is the acknowledgement that humans still possess artistic abilities that surpass AI’s. No matter how good a digitally manufactured song may sound, it’s still generally pretty easy to tell it apart from a real human’s voice. Though it may have come close, the technology has not yet reached a level of sophistication that allows completely flawless reproductions of human performance.

The creative and artistic capabilities of humans may very well be surpassed by AI one day, but until then, humans prevail as the beating heart behind music that technology simply can’t replicate. 

Avery Heeringa, Columbia College Chicago

Contributing Writer


"Avery Heeringa is a senior at Columbia College Chicago studying Communication and Journalism. He’s passionate about all things music and pop culture related, and enjoys frequenting local record stores when not writing."
