Anthropic Faces Potentially “Business-Ending” Copyright Lawsuit

By Garrison @ 2025-07-25T17:01 (+29)

This is a linkpost to https://www.obsolete.pub/p/anthropic-faces-potentially-business

A class action over pirated books exposes the 'responsible' AI company to penalties that could bankrupt it — and reshape the entire industry

This is the full text of a post first published on Obsolete, a Substack that I write about the intersection of capitalism, geopolitics, and artificial intelligence. I’m a freelance journalist and the author of a forthcoming book called Obsolete: Power, Profit, and the Race to Build Machine Superintelligence. Consider subscribing to stay up to date with my work.

This piece has been updated to add additional context and clarify some details. 

Anthropic, the AI startup that’s long presented itself as the industry’s safe and ethical choice, is now facing legal penalties that could bankrupt the company. Damages resulting from its mass use of pirated books would likely exceed a billion dollars, with the statutory maximum stretching into the hundreds of billions.

Last week, William Alsup, a federal judge in San Francisco, certified a class action lawsuit against Anthropic on behalf of nearly every US book author whose works were copied to build the company’s AI models. This is the first time a US court has allowed a class action of this kind to proceed in the context of generative AI training, putting Anthropic on a path toward paying damages that could ruin the company.

The judge ruled last month, in essence, that Anthropic's use of pirated books had violated copyright law, leaving it to a jury to decide how much the company owes for these violations. That number increases dramatically if the case proceeds as a class action, putting Anthropic on the hook for a vast number of books beyond those produced by the plaintiffs.

The class action decision came just one day after Bloomberg reported that Anthropic is fundraising at a valuation potentially north of $100 billion — nearly double the $61.5 billion investors pegged it at in March. According to Crunchbase, the company has raised $17.2 billion in total. However, much of that funding has come in the form of Amazon and Google cloud computing credits — not real money.

Santa Clara Law professor Ed Lee warned in a blog post that the ruling means “Anthropic faces at least the potential for business-ending liability.” 

He separately wrote that if Anthropic ultimately loses at trial and a final judgment is entered, the company would be required to post a surety bond for the full amount of damages in order to delay payment during any appeal, unless the judge grants an exception. 

In practice, this usually means arranging a bond backed by 100 percent collateral — not necessarily cash, but assets like cloud credits, investments, or other holdings — plus a 1-2 percent annual premium. The impact on Anthropic’s day-to-day operations would likely be limited at first, aside from potentially higher insurance costs, since the bond requirement would only kick in after a final judgment and the start of any appeals process.
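For a rough sense of what that could cost, here is a minimal sketch in Python. The $1.5 billion judgment is a hypothetical chosen for illustration; the collateral and premium figures are the ranges described above.

```python
# Hypothetical cost of an appeal bond, using the collateral and premium
# figures described above. The $1.5B judgment is an assumption for
# illustration, not a case fact.

judgment = 1_500_000_000      # hypothetical final judgment
collateral_ratio = 1.0        # ~100% collateral typically required
premium_rate = 0.015          # midpoint of the 1-2% annual premium range

collateral = judgment * collateral_ratio   # assets tied up during the appeal
annual_premium = judgment * premium_rate   # recurring cash cost per year

print(f"collateral posted: ${collateral / 1e9:.1f}B")      # $1.5B
print(f"annual premium:    ${annual_premium / 1e6:.1f}M")  # $22.5M
```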

Lee wrote in another post that Judge Alsup “has all but ruled that Anthropic’s downloading of pirated books is [copyright] infringement,” leaving “the real issue at trial… the jury’s calculation of statutory damages based on the number of copyrighted books/works in the class.” 

While the risk of a billion-dollar-plus jury verdict is real, it’s important to note that judges routinely slash massive statutory damages awards — sometimes by orders of magnitude. Federal judges, in particular, tend to be skeptical of letting jury awards reach levels that would bankrupt a major company. As a matter of practice (and sometimes doctrine), judges rarely issue rulings that would outright force a company out of business, and are generally sympathetic to arguments about practical business consequences. So while the jury’s damages calculation will be the headline risk, it almost certainly won’t be the last word.

On Thursday, the company filed a motion to stay — a request to essentially pause the case — in which it acknowledged the books covered likely number “in the millions.” Anthropic’s lawyers also warned of “the specter of unprecedented and potentially business-threatening statutory damages against the smallest one of the many companies developing [large language models] with the same books data” (though it’s worth noting they have an incentive to amplify the stakes in the case to the judge).

The company could settle, but doing so could still cost billions given the scope of potential penalties.

Anthropic, for its part, told Obsolete it “respectfully disagrees” with the decision, arguing the court “failed to properly account for the significant challenges and inefficiencies of having to establish valid ownership millions of times over in a single lawsuit,” and said it is “exploring all avenues for review.”

The plaintiffs’ lawyers did not reply to a request for comment.

From “fair use” win to catastrophic liability

Just a month ago, Anthropic and the rest of the industry were celebrating what looked like a landmark victory. Alsup had ruled that using copyrighted books to train an AI model — so long as the books were lawfully acquired — was protected as “fair use.” This was the legal shield the AI industry has been banking on, and it would have let Anthropic, OpenAI, and others off the hook for the core act of model training.

But Alsup split a very fine hair. In the same ruling, he found that Anthropic’s wholesale downloading and storage of millions of pirated books — via infamous “pirate libraries” like LibGen and PiLiMi — was not covered by fair use at all. In other words: training on lawfully acquired books is one thing, but stockpiling a central library of stolen copies is classic copyright infringement.

Thanks to Alsup’s ruling and subsequent class certification, Anthropic is now on the hook for a class action encompassing five to seven million books — although only works with registered US copyrights are eligible for statutory damages, and the precise number remains uncertain. A significant portion of these datasets consists of non-English titles, many of which were likely never published in the US and may fall outside the reach of US copyright law. For example, an analysis of LibGen’s holdings suggests that only about two-thirds are in English.

Assuming that only two-fifths of the five million books are covered and the jury awards the statutory minimum of $750 per work, you still end up with $1.5 billion in damages. And as we saw, the company’s own lawyers just said the number is probably in the millions. 

And at the statutory maximum, with all five million books covered? $150,000 per work, or $750 billion total — a figure Anthropic’s lawyers have called “ruinous.” No jury will award that, but it gives you a sense of the range.
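For the arithmetic behind both figures, here is a minimal sketch in Python. Statutory damages are awarded per infringed work, so the total is just the number of covered works times the per-work award; the book counts and the two-fifths coverage rate are the assumptions laid out above.

```python
# Statutory damages are awarded per infringed work, so:
#   total = covered_works * per_work_award

def statutory_total_billions(covered_works: int, per_work_award: int) -> float:
    """Total statutory damages, in billions of dollars."""
    return covered_works * per_work_award / 1e9

# Low-end scenario: two-fifths of 5M books covered, $750 statutory minimum.
low = statutory_total_billions(int(5_000_000 * 2 / 5), 750)

# High-end scenario: all 5M books covered, $150,000 willful-infringement maximum.
high = statutory_total_billions(5_000_000, 150_000)

print(f"low end:  ${low:,.1f}B")   # low end:  $1.5B
print(f"high end: ${high:,.1f}B")  # high end: $750.0B
```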

The previous record for a case like this was set in 2019, when a federal jury found Cox Communications liable for $1 billion after the nation’s biggest music labels accused the company of turning a blind eye to rampant piracy by its internet customers. That verdict was overturned on appeal years later and is now under review by the Supreme Court.

But even that historic sum could soon be eclipsed if Anthropic loses at trial.

The decision to treat AI training as fair use was widely covered as a win for the industry — and, to be fair, it was. But Anthropic is now facing an existential threat that has received barely a mention. Outside of the legal and publishing press, only Reuters and The Verge have covered the class certification ruling, and neither discussed the fact that this case could spell the end for Anthropic.

Update: early Friday morning, the LA Times ran a column discussing the potential for a trillion-dollar judgment.

Respecting copyright is “not doable”

The legal uncertainty now facing the company comes as the industry continues an aggressive push in Washington to reshape the rules in its favor. In comments submitted earlier this year to the White House’s “AI Action Plan,” Meta, Google, and OpenAI all urged the administration to protect AI companies’ access to vast training datasets — including copyrighted materials — by clarifying that model training is unequivocally “fair use.” Ironically, Anthropic was the only leading AI company not to mention copyright in its White House submission.

At the Wednesday launch of the AI Action Plan, President Trump dismissed the idea that AI firms should pay to use every book or article in their training data, calling strict copyright enforcement “not doable” and insisting that “China’s not doing it.” Still, the administration’s plan is conspicuously silent on copyright — perhaps a reflection of the fact that any meaningful change would require Congress to amend the Copyright Act. The federal Copyright Office can issue guidance but ultimately has no power to settle the matter. Administration officials told the press the issue should be left to the courts.

Anthropic made some mistakes

Anthropic isn’t just unlucky to be up first. The judge described this case as the “classic” candidate for a class action: a single company downloading millions of books in bulk, all at once, using file hashes and ISBNs to identify the works. The lawyers suing Anthropic are top-tier, and the judge has signaled he won’t let technicalities slow things down. A single trial will determine how much Anthropic owes; a jury could choose any number between the statutory minimum and maximum.

The order reiterates a basic tenet of copyright law: every time a pirated book is downloaded, it constitutes a separate violation — regardless of whether Anthropic later purchased a print copy or only used a portion of the book for training. While this may seem harsh given the scale, it’s a straightforward application of existing precedent, not a new legal interpretation.

And the company’s handling of the data after the piracy isn’t winning it any sympathy.

As detailed in the court order, Anthropic didn’t just download millions of pirated books; it kept them accessible to its engineers, sometimes in multiple copies, and apparently used the trove for various internal tasks long after training. Even when pirate sites started getting taken down, Anthropic scrambled to torrent fresh copies. After a company co-founder discovered a mirror of “Z-Library,” a database shuttered by the FBI, he messaged his colleagues: “[J]ust in time.” One replied, “zlibrary my beloved.”

That made it much easier for the judge to say: this is “Napster” for the AI age, and copyright law is clear.

Anthropic is separately facing a major copyright lawsuit from the world’s biggest music publishers, who allege that the company’s chatbot Claude reproduced copyrighted lyrics without permission — a case that could expose the firm to similar per-work penalties for anywhere from thousands to potentially millions of songs.

Ironically, Anthropic appears to have tried harder than some better-resourced competitors to avoid using copyrighted materials without any compensation. Starting in 2024, the company spent millions buying books, often in used condition — cutting them apart, scanning them in-house, and pulping the originals — to feed its chatbot Claude, a step no rival has publicly matched.

Meta, despite its far deeper pockets, skipped the buy-and-scan stage altogether — damning internal messages show engineers calling LibGen “obviously pirated” data and revealing that the approach was approved by Mark Zuckerberg.

Why the other companies should be nervous

If Anthropic settles, it could end up as the only AI company forced to pay for mass copyright infringement — especially if judges in other cases follow Meta’s preferred approach and treat downloading and training as a single act that qualifies as fair use.

For now, Anthropic’s best shot is to win on appeal and convince a higher court to reject Judge Alsup’s reasoning in favor of the more company-friendly approach taken in the Meta case, which treats the act of training as fair use and effectively rolls the infringing downloads into that single use.

But appeals usually have to wait until after a jury trial — so the company faces a brutal choice: settle for potentially billions, or risk a catastrophic damages award and years of uncertainty. If Anthropic goes to trial and loses on appeal, the resulting precedent could drag Meta, OpenAI, and possibly even Google into similar liability.

OpenAI and Microsoft now face 12 consolidated copyright suits — a mix of proposed class actions by book authors and cases brought by news organizations (including The New York Times) — in the Southern District of New York before Judge Sidney Stein.

If Stein were to certify an authors’ class and adopt an approach similar to Alsup’s ruling against Anthropic, OpenAI’s potential liability could be far greater, given the number of potentially covered works.

What’s next

A trial is tentatively set for December 1st. If Anthropic fails to pull off an appellate victory before then, the industry is about to get a lesson in just how expensive “move fast and break things” can be when the thing you’ve broken is copyright law — a few million times over.

A multibillion-dollar settlement or jury award would be a death knell for almost any four-year-old company, but the AI industry is different. The cost to compete is enormous, and the leading firms are already raising multibillion-dollar rounds multiple times a year.

That said, Anthropic has access to less capital than its rivals at the frontier — OpenAI, Google DeepMind, and, now, xAI. Overall, company-killing penalties may be unlikely, but they’re still possible, and Anthropic faces the greatest risk at the moment. And given how fiercely competitive the AI industry is, a multibillion dollar setback could seriously affect the company’s ability to stay in the race. 

And some competitors seem to have functionally unlimited capital. To build out its new superintelligence team, Meta has been poaching rival AI researchers with nine-figure pay packages, and Zuckerberg recently said his company would invest “hundreds of billions of dollars” into its efforts.

To keep up with its peers, Anthropic recently decided to accept money from autocratic regimes, despite earlier misgivings. On Sunday, CEO Dario Amodei issued a memo to staff saying the firm will seek investment from Gulf states, including the UAE and Qatar. The memo, which was obtained and reported on by Kylie Robison at WIRED, admitted the decision would probably enrich “dictators” — something Amodei called a “real downside.” But, he wrote, the company can’t afford to ignore “a truly giant amount of capital in the Middle East, easily $100B or more.”

Amodei apparently acknowledged the perceived hypocrisy of the decision; his October essay/manifesto “Machines of Loving Grace” had extolled the importance of democracies winning the AI race.

In the memo, Amodei wrote, “Unfortunately, I think ‘No bad person should ever benefit from our success’ is a pretty difficult principle to run a business on.”

The timing is striking: the note to staff went out only days after the class action certification suddenly presented Anthropic with potentially existential legal risk.


The question of whether generative AI training can lawfully proceed without permission from rights-holders has become a defining test for the entire industry.

OpenAI and Meta may still wriggle out of similar exposure, depending on how their judges rule and whether they can argue that the core act of AI training is protected by fair use. But for now, it’s Anthropic — not OpenAI or Meta — that’s been forced onto the front lines, while the rest of the industry holds its breath.

Edited by Sid Mahanta and Ian MacDougall, with inspiration and review from my friend Vivian.

If you enjoyed this post, please subscribe to Obsolete


Ian Turner @ 2025-07-25T19:29 (+23)

How is this case different from the many other cases alleging copyright infringement against LLM companies, many also including allegations of piracy?

This list mentions copyright infringement allegations in Advance Local Media vs Cohere, Andersen vs Stability AI, Getty Images vs Stability AI, Kadrey vs Meta, and In re: OpenAI, Inc. Copyright Infringement Litigation (a consolidation of 12 other cases).

Matrice Jacobine @ 2025-07-27T11:03 (+1)

Just a month ago, Anthropic and the rest of the industry were celebrating what looked like a landmark victory. Alsup had ruled that using copyrighted books to train an AI model — so long as the books were lawfully acquired — was protected as “fair use.” This was the legal shield the AI industry has been banking on, and it would have let Anthropic, OpenAI, and others off the hook for the core act of model training.

But Alsup split a very fine hair. In the same ruling, he found that Anthropic’s wholesale downloading and storage of millions of pirated books — via infamous “pirate libraries” like LibGen and PiLiMi — was not covered by fair use at all. In other words: training on lawfully acquired books is one thing, but stockpiling a central library of stolen copies is classic copyright infringement.

Ozzie Gooen @ 2025-07-26T01:28 (+10)

The article broadly seems informative, but I really don't like the clickbait headline. 

"Potentially Business-Ending"?

I did a quick look at the Manifold predictions. In this (small) market, there's a 22% chance given to "Will Anthropic be ordered to pay $1B+ in damages in Bartz v. Anthropic?" (note that even $1B would be far from "business-ending"). 

And larger forecasts of the overall success of Anthropic have barely changed. 

Jason @ 2025-07-26T02:49 (+18)

The article relies on an analysis by an IP law professor, which in turn rests on an analysis of statutory damages under copyright law and Judge Alsup's findings. "Business-ending" is a direct quote from said professor. That strikes me as a reasonable basis on which to characterize the liability as potentially business-ending, even if people on Manifold do not seem to agree.

Juries are hard to predict, especially where the allowable range for statutory damages is so wide. That the infringement was willful and by a sophisticated actor doesn't help Anthropic here.

I'd love to hear what their lawyers had to say about all this before the piracy happened (or maybe they weren't even consulted?) They had to expect copyright suits from the get go, did they not understand that upping the ante with mass piracy was likely to make things much worse?

Ozzie Gooen @ 2025-07-26T03:31 (+4)

Good point about it coming from a source. But looking at that, I think that blog post had a similarly clickbait headline, though a more detailed one (“Anthropic faces potential business-ending liability in statutory damages after Judge Alsup certifies class action by Bartz”).

The analysis in question also looks very rough to me. Like a quick sketch / blog post. 

I'd guess that if you'd have most readers here estimate what the chances seem that this will actually force the company to close down or similar, after some investigation, it would be fairly minimal. 

Ozzie Gooen @ 2025-07-26T03:36 (+6)

This got me to investigate Ed Lee a bit. Seems like a sort of weird situation.

titotal @ 2025-07-26T12:06 (+4)

I found his page on the actual Santa Clara Law website, and it specifically mentioned that he founded the ChatGPT blog in question. So it looks like he is legitimately a qualified law professor, and from his profile it looks like he does specialise in IP law stuff.

On the other hand, the blog has posts with questionable methodology, like asking ChatGPT for probabilities of lawsuit outcomes.

I would like to hear from other IP law specialists. 

NunoSempere @ 2025-07-26T19:29 (+4)

We're looking into this as part of Sentinel. I agree it looks unlikely. Manifold has it at 3%, though I think it's even lower than that because of the low return over ~2 years: https://manifold.markets/embed/NuñoSempere/will-anthropic-go-bankrupt-or-be-di

Hauke Hillebrandt @ 2025-07-27T10:58 (+2)

Also, in some secondary markets Anthropic is trading at record highs (e.g. https://notice.co/c/anthropic )

SummaryBot @ 2025-07-28T20:38 (+1)

Executive summary: In a detailed investigative analysis, the author argues that Anthropic, long considered a “responsible” AI company, now faces potentially existential legal and financial threats from a newly certified class action lawsuit over its use of pirated books to train AI models—setting a precedent that could reshape copyright liability across the generative AI industry.

Key points:

  1. Class action certified over pirated book use: A U.S. federal judge has allowed a class action lawsuit to proceed against Anthropic for downloading and using millions of pirated books to train AI models—an unprecedented development in generative AI litigation.
  2. Scale of potential liability is staggering: If the jury awards even the minimum statutory damages for a fraction of covered works, Anthropic could owe over $1.5 billion; at the statutory maximum, damages could theoretically reach $750 billion, though such an amount is unlikely to be awarded or upheld.
  3. Court ruled fair use doesn't cover pirated sources: Judge Alsup drew a sharp legal distinction between training on lawfully acquired books (potentially fair use) and wholesale downloading from pirate libraries like LibGen, which he deemed clear copyright infringement.
  4. Settlement or appeal are Anthropic’s best options: A loss at trial followed by a failed appeal could bankrupt the company or force a massive settlement; conversely, a successful appeal could roll the infringement into a fair use defense and reduce or nullify damages.
  5. Implications for the AI industry are profound: If Alsup’s reasoning holds, companies like OpenAI and Meta could face even greater liability; but if they avoid such rulings, Anthropic could end up uniquely punished despite efforts to behave more ethically than peers.
  6. Funding pressures are rising: With limited access to capital compared to rivals, Anthropic is now seeking investment from Gulf states—a reversal of its earlier ethical stance—underlining the financial strain posed by the lawsuit and competitive dynamics.

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.