As new technological advancements continuously disrupt the music industry’s business models, the industry finds itself perpetually playing catch-up. Both the rise of sampling incubated by hip-hop and the illegal downloading spree powered by Napster were met with lawsuits, the latter of which accomplished the aim of getting the file-sharing platform shut down.
Eventually came specific guidance for how samples should be cleared, agreements with platforms designed to monetize digital music (like the iTunes store and Spotify), and updates to the regulation of payments in the streaming era — but it all took time and plenty of compromise.
After hashing all of that out, Glazier says, music executives wanted to ensure “that we didn’t have a repeat of the past.” The tone of the advocacy for restricting AI abuses feels a little more circumspect than the complaints against Napster.
For one thing, people in the industry aren’t talking like machine learning can, or should, be shut down.
“You do want to be able to use generative AI and AI technology for good purposes,” Glazier says. “You don’t want to limit the potential of what this technology can do for any industry. You want to encourage responsibility.”
There’s also a need for a nuanced understanding of ownership. Popular music has been the scene of endless cases of cultural theft, from the great whitewashing of rock and roll up to the virtual star FN Meka, a Black-presenting “AI-powered robot rapper” conceived by white creators, signed by a label and met with extreme backlash in 2022.
Just a few weeks ago, a nearly real-sounding Delta blues track caused its own controversy: A simple prompt had gotten an AI music generator to spit out a song that simulated the grief and grit of a Black bluesman in the Jim Crow South.
On the heels of “Heart on My Sleeve,” the most notorious musical deepfake to date — which paired the vocal likenesses of two Black men, Drake and The Weeknd — it was a reminder that the ethical questions circling the use of AI are many, some of them all too familiar.
The online world is a place where music-makers carve out vanishingly small profit margins from the streaming of their own music. As an example of the lack of agency many artists at her level feel, Bragg pointed out a particularly vexing kind of streaming fraud that’s cropped up recently, in which scammers reupload another artist’s work under new names and titles and collect the royalties for themselves.
Other types of fraud have been prosecuted or met with crackdowns that, in certain cases, inadvertently penalize artists who aren’t even aware that their streaming numbers have been artificially inflated by bots.
Just as it’s hard to imagine musicians pulling their music from streaming platforms in order to protect it from these schemes, the immediate options can feel few and bleak for artists newly locked in a surreal competition with themselves, against software that can clone their sounds and styles without permission.
All of this is playing out in a reality without precedent.
“There is a problem that has never existed in the world before,” says ViNIL’s Brook, “which is that we can no longer be sure that the face we’re seeing and the voice we’re hearing is actually authorized by the person it belongs to.”
For Bragg, the most startling use of AI she’s witnessed wasn’t about stealing someone’s voice, but giving it back. A friend sent her the audio of a speech on climate change that scientist Bill Weihl was preparing to deliver at a conference. Weihl had lost the ability to speak due to ALS — and yet, he was able to address an audience sounding like his old self with the aid of ElevenLabs, one of many companies testing AI as a means of helping people with similar disabilities communicate.
Weihl and a collaborator fed three hours of old recordings of him into the AI model, then refined the clone by choosing what inflections and phrasing sounded just right.
“When I heard that speech, I was both inspired and also pretty freaked out,” Bragg recalled. “That’s, like, my biggest fear in life, either losing my hearing or losing the ability to sing.”
That is, in a nutshell, the profoundly destabilizing experience of encountering machine learning’s rapidly expanding potential, its promise of options the music business — and the rest of the world — have never had. It’s there to do things we can’t or don’t want to have to do for ourselves. The effects could be empowering, catastrophic or, more likely, both. And attempting to ignore the presence of generative AI won’t insulate us from its powers.
More than ever before, those who make, market and engage with music will need to continuously and conscientiously adapt.