
Summary
Anthropic’s complaint against three Chinese labs, DeepSeek included, illustrates why US-based AI developers can’t stop others from using their work. Legal claims are hard to enforce across borders, and rivals can argue they need access to what others have developed. Think of pharma formulas.
Anthropic’s latest complaint against three Chinese labs is a warning sign for Silicon Valley: Don’t expect to earn too much from the competitive edge your model gives you. Companies from the developing world will line up to swim across your moat if they can—and neither the US government nor your lawyers will be able to help you. As the music industry and Big Pharma could tell them, nobody will eliminate your rivals; you have to accommodate them.
The company says it has pretty solid evidence that DeepSeek, MiniMax and Moonshot ran “industrial-scale campaigns” to “illicitly extract” the capabilities of its own model, Claude. That involved over 24,000 accounts that Anthropic described as fraudulent and 16 million exchanges that were “in violation of our terms of service and regional access restrictions.”
There’s just one problem: Laws that are unenforceable might get ignored. And if they are both unenforceable and profitable to break, then they will almost certainly be brushed aside.
It might sound like this is straightforward theft or even industrial espionage. Anthropic owns Claude and these three labs in China are stealing it. Unfortunately, that doesn’t hold up to closer examination. First, ‘distillation’—what DeepSeek et al are accused of—is, as the American company itself admits, a “widely used and legitimate training method.”
Where it gets murky is when your competitor decides to use that legitimate tactic to try to narrow the gap between the two of you. Anthropic says this is ‘illicit,’ which is one of the most wonderfully misleading words in the English language. It can mean ‘illegal.’ But it can also mean ‘frowned upon by custom and morality.’
In this case, the wronged party wants us to think the former, when it can really only claim the latter. Many legal scholars argue that generative AI companies probably do not own rights over either their models or the output of those models. That’s why the three Chinese upstarts can’t be accused of theft.
Instead, they are ‘in violation of the terms of service.’ As everyone who has grown up on the internet can guess, only three people have probably ever read terms of service online: the first three lawyers to log on to the World Wide Web in 1993. After that, even they must have started clicking through automatically. People can be and are successfully prosecuted for unauthorized access to systems or for copyright infringement—but not for breaching a website’s terms of service.
Such claims don’t seem to have real legal force at the moment. But even if they did, Silicon Valley would have to deal with an additional problem: How would they be enforced across the world?
The more important that AI becomes and the more revenue that the frontline labs try to make from it, the more likely it is that both governments and companies in other parts of the globe will refuse to comply with Silicon Valley’s expectations and Washington’s demands.
Look at pharmaceuticals, for example. India never accepted the notion that life-saving drugs could be fully protected. If a generics manufacturer could replicate a drug through a different process, it was allowed to. The same will happen with AI: If it is possible to copy a model, distil it, break the licence or force it open politically, then countries will allow—perhaps even encourage—their companies to do it.
Distillation by hostile or irresponsible actors is a genuine concern. A frontline model that’s stripped of safety and privacy guardrails could lead to nightmare scenarios. But then, if the US is forcing Anthropic to ditch its own safety pledges—as the Pentagon is trying to do—can it really complain about other governments?
Companies can’t rely on US law to protect them internationally. They will have to pour more resources into preventing distillation at this scale while still exposing their systems to millions of queries—and while preventing any combination of those queries from revealing too much to any one hidden actor.
Perhaps some brilliant technical counter-measures will succeed at this task. But, right now, it seems like a tough ask.
The music industry could remind Silicon Valley that it ended piracy not by complaining about the law but by making products more accessible and raising revenues elsewhere. Big Pharma had to accept compulsory licensing by governments, as well as make its peace with the generics sector. They did so because otherwise they faced the possibility that the very idea of intellectual property would be contested.
The lesson seems to be: Give the world a stake in the system you want to legitimize, or it will find a way to break it. Illicitly perhaps, but it won’t care. ©Bloomberg
The author is a Bloomberg Opinion columnist.