The U.S. Supreme Court refuses to hear the AI copyright appeal, establishing the principle that “content generated solely by AI is not protected by copyright.” Meanwhile, the UK is shifting toward a “license-first” mechanism.
(Background: Bill Gates predicts AI will replace humans within 10 years, with a two-day workweek not a dream; three types of jobs may survive)
(Additional context: AI truly begins to threaten human jobs—global corporations accelerate layoffs, American college graduates face immediate unemployment)
Computer scientist Stephen Thaler spent eight years litigating over an AI-generated image, only to learn that the U.S. Supreme Court would not review his case on appeal.
This week, the Supreme Court officially declined to hear the appeal in Thaler v. Perlmutter, upholding the original ruling that “a human author is a necessary condition for copyright protection.”
The image, titled “A Recent Entrance to Paradise,” was autonomously generated by Thaler’s AI system, the Creativity Machine. In his 2018 copyright application, he candidly listed the AI as the author instead of himself, and the U.S. Copyright Office rejected the application.
[Image: A Recent Entrance to Paradise]
The logic of the ruling is quite simple: U.S. copyright law protects “human creative contributions.” Without a human author, there is no copyright.
But where the red line is drawn, and how broad it is, are two different issues.
The Supreme Court rejected an extreme case: one in which the creator deliberately listed the AI as sole author and humans withdrew entirely from the creative process. The real legal battleground lies elsewhere: when a human uses AI as a tool, choosing prompts, adjusting parameters, filtering outputs, and editing the results, where should the line be drawn?
Analysis by Holland & Knight law firm indicates that this ruling “won’t kill AI-assisted creation,” but requires creators to demonstrate “genuine creative control” during the process. In other words, you can use AI as a brush, but you must prove that you are the one holding the brush.
On the other side of the Atlantic, the story is equally noteworthy. The UK government initially planned to introduce an “opt-out” mechanism, allowing AI companies to train models on copyrighted content without consent, an environment Silicon Valley could only dream of.
But reality struck.
The two-month public consultation drew more than 10,000 responses, 95% of which called for stronger protections for creators. Paul McCartney said, “AI has its uses, but it shouldn’t exploit creative people.” House of Lords member Beeban Kidron was more direct:
“We refuse to give away our work for free just to help others build AI.”
The UK government ultimately scrapped the opt-out proposal and adopted a “license-first” approach: AI companies that want to train models on copyrighted content must first negotiate licenses and pay. The AI legislation originally slated for the King’s Speech has been postponed indefinitely.
Between the two major common-law jurisdictions, the U.S. has established the “no human, no copyright” principle through judicial rulings, while the UK is building a “pay-to-use” framework through legislation.