The Pentagon Called an AI Company a National Security Threat. Here's the Paper Trail.
Anthropic filed two sworn declarations challenging the Pentagon's claim that the company poses a national security risk. The timeline they reveal raises uncomfortable questions about the government's real motives.
The Same Day You Were Declared a Threat, They Said You Were Close to a Deal
On March 4, 2026, the Pentagon finalized its designation of Anthropic as a supply-chain risk — the first time in U.S. history the designation had been applied to an American company. That same day, the Pentagon's own Under Secretary Emil Michael emailed Anthropic CEO Dario Amodei to say the two sides were "very close" on the exact two issues the government now cites as proof the company is a national security threat.
That email sits at the center of Anthropic's legal counteroffensive. Late Friday, the company filed two sworn declarations in a California federal court, directly challenging the Pentagon's framing ahead of a hearing scheduled for Tuesday, March 24, before Judge Rita Lin in San Francisco. The declarations accompany Anthropic's reply brief in its lawsuit against the Department of Defense — a case that has quietly become one of the most consequential legal battles over AI governance in the United States.
What Actually Happened in That Room
The dispute became public on February 28, when President Trump and Defense Secretary Pete Hegseth announced they were cutting ties with Anthropic after the company refused to allow unrestricted military use of its AI technology. The Pentagon subsequently designated Anthropic under supply-chain risk management rules — a designation previously reserved for foreign adversaries.
The first declaration comes from Sarah Heck, Anthropic's Head of Policy and a former National Security Council official from the Obama administration. She was physically present at the February 24 meeting where Amodei sat down with Hegseth and Under Secretary Michael. Her declaration targets what she calls the central falsehood in the government's court filings: that Anthropic demanded some kind of approval role over military operations.
"At no time during Anthropic's negotiations with the Department did I or any other Anthropic employee state that the company wanted that kind of role," she wrote under oath.
She also flags a procedural problem. The Pentagon's concern about Anthropic potentially disabling or altering its technology mid-operation — one of the government's key arguments — never came up during months of negotiations. It appeared for the first time in the government's court filings, giving Anthropic no opportunity to respond before the designation was made.
Then there's that timeline. March 4: designation finalized. Also March 4: Michael emails Amodei saying they're "very close" on autonomous weapons and mass surveillance of Americans — the two issues the government says make Anthropic dangerous. March 5: Amodei publishes a statement describing "productive conversations" with the Pentagon. March 6: Michael posts on X that "there is no active Department of War negotiation with Anthropic." One week later: he tells CNBC there's "no chance" of renewed talks.
Heck's implicit argument is pointed: if Anthropic's positions on those two issues constitute a national security threat, why was the Pentagon's own official saying they were nearly resolved the very day the designation was locked in?
The Technical Argument: There Is No Kill Switch
The second declaration takes a different approach. Thiyagu Ramasamy, Anthropic's Head of Public Sector, spent six years at Amazon Web Services managing AI deployments for government clients, including classified environments. At Anthropic, he built the team that secured the $200 million Pentagon contract announced last summer — the very contract now at the center of this dispute.
His declaration dismantles the government's claim that Anthropic could theoretically interfere with military operations by disabling or altering Claude mid-deployment. Per Ramasamy, once the model is running inside a government-secured, air-gapped system operated by a third-party contractor, Anthropic has no access to it. No remote kill switch. No backdoor. No mechanism to push unauthorized updates. Anthropic can't even see what government users are typing into the system.
Any change to the model, he explains, would require the Pentagon's explicit approval and a deliberate installation action by Pentagon-controlled personnel. The "operational veto" the government fears isn't technically possible.
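What a "deliberate installation action" means in practice is worth making concrete. Below is a minimal sketch of the general pattern Ramasamy describes: an air-gapped host where an update can only arrive on hand-carried media, must match a checksum the operating agency approved out-of-band, and still requires an on-site operator to confirm before anything changes. Every path, filename, and check here is hypothetical, invented for illustration; none of it comes from the court filings or reflects Anthropic's or the Pentagon's actual tooling.

```python
# Hypothetical illustration only: the general shape of an update gate on an
# air-gapped system. There is no network ingress, so a vendor has no channel
# to push changes; every install is a deliberate, operator-initiated act.
import hashlib
import json
from pathlib import Path

# Invented paths for the sketch: an approval manifest signed off out-of-band,
# and an update bundle that arrives on physical media.
APPROVED_MANIFEST = Path("/secure/approvals/manifest.json")
UPDATE_BUNDLE = Path("/media/transfer/model_update.bin")

def sha256(path: Path) -> str:
    """Hash the bundle in chunks so large files don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def install_update() -> None:
    # 1. The bundle must be physically present; nothing can arrive remotely.
    if not UPDATE_BUNDLE.exists():
        raise SystemExit("No update bundle on local media; nothing to install.")

    # 2. It must match a checksum the operating agency approved in writing.
    manifest = json.loads(APPROVED_MANIFEST.read_text())
    if sha256(UPDATE_BUNDLE) != manifest["approved_sha256"]:
        raise SystemExit("Bundle does not match the approved checksum; refusing.")

    # 3. A cleared operator must still confirm interactively at the console.
    if input("Type INSTALL to apply the approved update: ").strip() != "INSTALL":
        raise SystemExit("Operator declined; no change applied.")

    print("Update verified and approved; staging for installation.")

if __name__ == "__main__":
    install_update()
```

The point of the pattern is that every gate sits on the government's side of the air gap: the vendor never holds a credential, a network path, or a code path that could trigger the first step, let alone the second or third.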
Ramasamy also pushes back on the government's suggestion that Anthropic's foreign national employees represent a security risk. He notes that Anthropic personnel have undergone U.S. government security clearance vetting and that, to his knowledge, Anthropic is the only AI company whose cleared personnel actually built the models designed to run in classified environments.
Why This Case Is Bigger Than One Contract
The legal framing matters here. Anthropic's lawsuit argues that the supply-chain risk designation amounts to government retaliation for the company's publicly stated views on AI safety — a First Amendment violation. The government's 40-page response rejected that entirely, arguing Anthropic's refusal to allow all lawful military uses is a business decision, not protected speech.
That distinction is the crux of the case, and its implications extend well beyond Anthropic.
If the government prevails, the precedent is stark: an AI company that publicly advocates for safety constraints on military use of its technology can be designated a national security risk and effectively locked out of government contracts. Every major AI lab — OpenAI, Google DeepMind, Meta — has published some version of a responsible-use policy. Those policies would suddenly carry legal risk.
If Anthropic prevails, it establishes something equally significant: that AI companies have a legally protected right to negotiate the terms of how their technology is used, even by the military. That's a right no defense contractor has historically enjoyed.
Three Ways to Read This
The national security hawk view: The military needs operational certainty. An AI system with any usage restrictions — however well-intentioned — creates unpredictability in high-stakes environments. The Pentagon's job is to eliminate that unpredictability, not negotiate around it.
The civil liberties view: The government is using a national security designation as a cudgel against a company that simply declined to enable autonomous weapons and mass surveillance of Americans. Those are the two specific issues. If that's what makes you a threat, the designation process needs scrutiny.
The industry view: Anthropic built a $200 million government relationship precisely by demonstrating it could operate responsibly in sensitive environments. The abrupt reversal, combined with the email timeline Heck reveals, looks less like a security assessment and more like a negotiating breakdown that escalated into official action.