I recently encountered a fascinating twist in the evolving relationship between artificial intelligence and intellectual property. After releasing several thousand AI-generated songs via an international distributor to platforms like Spotify, Apple Music, Amazon, and others, I was notified that one of my tracks was being held up. The reason? The system had flagged the release as containing content generated by an AI service.
Apparently, because the AI platform I used requires a paid subscription in order for users to retain copyright ownership, the distributor needed proof of my subscription. Once I provided the necessary documentation, the release was approved and is now being distributed worldwide. Problem solved—for now.
But this incident raises a number of important questions:
- Why did it take so long for them to notice? Thousands of songs had already been accepted and released without issue. What changed?
- How exactly did they identify the AI service? Was it metadata, audio fingerprinting, or something else entirely? (See the sketch after this list for what a simple metadata check might look like.)
- Why is a distribution platform enforcing the licensing terms of an AI company? That’s typically the domain of contract law and not something you’d expect a music distributor to adjudicate.
- And most importantly: If I compose the music, write the lyrics, and even perform live instruments—regardless of whether I use AI as a tool—why don’t I automatically own the copyright?
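To make the metadata theory concrete, here is a minimal sketch of what an automated check on the distributor's side might look like, assuming it simply scans the tags embedded in an uploaded file. The mutagen library is real, but the marker strings, the function name, and the very idea that an AI service labels its output in the tags are assumptions for illustration, not documented behavior of any distributor or AI platform.

```python
# Hypothetical metadata scan, for illustration only. It assumes an AI music
# service writes its name into an embedded tag (encoder, comment, etc.);
# that assumption, and the marker strings below, are guesses rather than
# documented behavior of any real distributor or AI platform.
from mutagen import File  # real tagging library: pip install mutagen

AI_TOOL_MARKERS = ("suno", "udio", "aiva")  # hypothetical marker strings

def find_ai_markers(path: str) -> list[str]:
    """Return any embedded tag values that mention a known AI tool name."""
    audio = File(path)  # auto-detects MP3, FLAC, M4A, Ogg, and more
    if audio is None or audio.tags is None:
        return []
    hits = []
    for key in audio.tags.keys():
        value = str(audio.tags[key]).lower()
        if any(marker in value for marker in AI_TOOL_MARKERS):
            hits.append(f"{key}: {value}")
    return hits

if __name__ == "__main__":
    print(find_ai_markers("my_track.mp3") or "no AI markers found")
```

Audio fingerprinting would be the heavier alternative: matching acoustic features against a reference catalog of known AI output, much the way content-ID systems match copyrighted recordings. Tag-stripping defeats a metadata check; fingerprinting does not.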
This scenario highlights a curious inconsistency: using AI to generate music isn’t fundamentally different from using digital audio workstations like Logic Pro or Pro Tools, along with virtual instruments and plug-ins, for engineering, mastering, and production. In both cases, you’re leveraging software to aid the creative process. So why is AI being treated differently?
It’s clear we’re entering uncharted legal territory. As AI continues to become more embedded in creative workflows, questions around ownership, authorship, and licensing are going to get much more complex—and much more urgent.
The industry, creators, and lawmakers all need to catch up quickly. Until then, those of us creating at the frontier will need to navigate a maze of evolving rules, obscure policies, and surprising roadblocks.
One thing is certain: The future of intellectual property in the age of AI is going to be anything but boring.
My main point is that I can afford to pay for the software. Many artists can’t. The real issue is that AI is just the latest in a long line of technological innovations in music. And with every major leap—whether it was multi-track recording, synthesizers, or digital production—there have always been gatekeepers in the industry trying to control access and drive up costs. It’s a way to maintain monopolies and keep independent or less wealthy artists from competing.
I’ve spent my career challenging that model. I helped pioneer the digital home recording studio, and later bypassed traditional distribution entirely by launching the first record label on the web. That made it possible to create and share music globally for almost nothing. It disrupted the industry and helped thousands of independent artists find an audience without the backing of a major label. (And it’s the reason I can now afford to pay for an AI subscription.)
Now, with AI, we’re seeing the same old tactics again—an effort by the industry to reassert control, this time through licensing restrictions and copyright ambiguity. But just like before, innovation will keep moving forward, whether they like it or not.
Addendum
When it comes to copyrights, I composed the music and wrote the lyrics—those creative contributions are mine, and I own the associated copyrights. The involvement of AI doesn’t change that fundamental fact. If there’s any legal argument to be made against my ownership, it would most likely revolve around the concept of “work for hire.”
The “work for hire” doctrine has been at the center of numerous high-profile copyright disputes. This legal principle determines whether the creator of a work owns the copyright—or whether the rights belong to the entity that commissioned or employed the creator. For instance:
- In Community for Creative Non-Violence v. Reid (1989), the U.S. Supreme Court ruled that a sculptor commissioned to create a statue was an independent contractor rather than an employee, and therefore retained the copyright, because the relationship did not meet the criteria for a work-for-hire arrangement.
- In the music industry, disputes have emerged around session musicians, producers, and sampling. In Newton v. Diamond (2003), for example, jazz flutist James Newton sued members of the Beastie Boys over a sample, and the court had to assess the nature of the sampled composition and the scope of its original copyright.
- Even major media companies have faced this issue. A notable case was Marvel Characters, Inc. v. Kirby, in which the heirs of comic book artist Jack Kirby sought to reclaim copyrights, arguing he was not a full-time employee but an independent contractor.
In each of these cases, the outcome depended on whether the creative work was truly “for hire” or independently authored.
With AI tools, the situation gets even murkier. If I’m paying for access to a tool—and using it as an instrument, much like a synthesizer or mixing board—then the creative output is mine. The tool doesn’t own the copyright, and neither should the company behind it. Unless there’s a contract that explicitly defines the output as work for hire, default copyright law should favor the creator: the person who makes the creative decisions.
As AI becomes more integrated into music production, these legal gray areas will only grow. But until the laws catch up, it’s important to challenge assumptions that undermine the rights of artists simply because they use new tools.