When an AI Company Told the Pentagon 'No'
Anthropic is in federal court seeking an injunction against the Pentagon's supply chain risk designation and Trump's ban on federal use of Claude AI. Billions in contracts—and a bigger question about AI ethics—hang in the balance.
Most government contractors learn quickly: when the Pentagon asks for unfettered access, you say yes. Anthropic said no. Now it's paying for it.
On Tuesday, the AI startup founded by former OpenAI researchers walked into a San Francisco federal courtroom to ask a judge to pause what may be the most aggressive government action ever taken against an American AI company. The stakes: billions of dollars in federal contracts, the company's reputation, and a question that cuts to the heart of the AI industry's uncomfortable relationship with national security.
What Happened, and How It Got Here
The conflict traces back to last September, when Anthropic was negotiating to deploy its Claude AI models on the Pentagon's GenAI.mil platform. The company had already signed a $200 million contract with the Department of Defense in July 2024 and had become the first AI lab to deploy technology across classified networks. By all accounts, it was a model partnership.
Then talks hit a wall. Anthropic wanted contractual limits: Claude could not be used for fully autonomous weapons systems or mass surveillance of American citizens. The DOD's position was simpler—it demanded unrestricted access to the technology for all lawful purposes. Neither side budged. The deal collapsed.
What followed moved fast. In February, President Trump posted on Truth Social ordering federal agencies to "immediately cease" all use of Anthropic's technology. "WE will decide the fate of our Country — NOT some out-of-control, Radical Left AI company," he wrote. In March, the Pentagon formalized the retaliation, designating Anthropic a supply chain risk: a label previously reserved for foreign adversaries, most notably Chinese firms, and one no American company had ever received.
The practical consequences are severe. Defense contractors including Amazon, Microsoft, and Palantir would be required to certify that they do not use Claude in any work for the military. Without a court injunction, Anthropic says its losses could reach into the billions.
The Question the Judge Is Actually Asking
U.S. District Judge Rita Lin, who is presiding over Tuesday's hearing, sent lawyers a pointed list of questions in advance. The most revealing: "What evidence in the record shows that Anthropic had ongoing access to or control over Claude after delivering it to the government, such that Anthropic could engage in acts of sabotage or subversion?"
That question matters because it exposes the government's core argument—that Anthropic, by retaining some form of access or influence over its deployed models, poses an inherent security risk. It's a novel legal theory, and one that could have sweeping implications for how any software company does business with the federal government.
Anthropic counters that the designation is legally baseless and retaliatory. The company argues it is being punished for exercising a legitimate ethical position. The government, it says, is trying to compel a private company to make its technology available for any use the military sees fit, with no recourse.
Meanwhile, Palantir CEO Alex Karp confirmed on March 12 that his company is continuing to use Claude in its Pentagon work even as the legal battle plays out. Reports have also indicated the model is being used in operations related to the conflict with Iran.
The Stakeholders Watching This Closely
For investors, the calculus is complicated. Anthropic was last valued at roughly $60 billion, backed heavily by Amazon and Google. Federal contracts represent not just revenue but credibility—the signal that the technology is trusted at the highest levels. Losing that signal, even temporarily, has downstream effects on enterprise sales well beyond Washington.
Amazon is in a particularly awkward position. It has invested tens of billions into Anthropic and simultaneously operates AWS infrastructure for numerous government agencies. A ruling that goes against Anthropic doesn't just hurt its portfolio company—it raises questions about the terms under which any AI vendor can operate within the federal ecosystem.
For other AI companies—OpenAI, Google DeepMind, Meta—this case is a precedent-setting moment. If Anthropic loses, the message is clear: attach ethical conditions to a government contract, and you risk being labeled a national security threat. If Anthropic wins, it establishes that AI companies have legal standing to contest how their technology is used, even by the most powerful client in the world.
For civil liberties advocates, the case raises a different concern entirely. The supply chain risk designation was designed as a national security tool to keep adversarial foreign technology out of sensitive systems. Applying it to a domestic company that simply disagreed on terms of use is, in their view, a dangerous expansion of executive power.
The Deeper Tension This Case Reveals
Strip away the legal arguments and what remains is a genuinely hard problem: who decides how AI gets used?
Weapons manufacturers routinely attach end-use conditions to their products—export controls, resale restrictions, prohibitions on certain modifications. Software companies have long included terms of service. But AI sits in an uncomfortable middle ground. It's not a bomb, but it's not a word processor either. It makes decisions, adapts, and operates in contexts its creators may never have anticipated.
Anthropic's position—that it has a right and responsibility to prevent its technology from being used in fully autonomous lethal systems—is not fringe. It reflects a growing consensus among AI safety researchers that the humans who build these systems bear some responsibility for their downstream effects. The DOD's counter-position—that a contractor cannot attach ethical strings to national security tools—is equally coherent from a military readiness standpoint.
Judge Lin could rule from the bench Tuesday or issue a written decision later. Either way, the ruling will be read carefully far beyond San Francisco.