Pentagon Picks OpenAI, Brands Anthropic a 'Supply Chain Risk'
Defense Department labels Anthropic a national security risk while striking deal with OpenAI. The AI safety vs military utility debate just got real.
The Pentagon just gave Anthropic a label usually reserved for Chinese spy companies: "Supply-Chain Risk to National Security." Hours later, OpenAI announced it had struck a deal to deploy its AI models on the Defense Department's classified networks.
Same day. Same government. Completely opposite outcomes.
The 24-Hour Reversal
Late Friday night, OpenAI CEO Sam Altman posted a brief message on X: "Tonight, we reached an agreement with the Department of War to deploy our models in their classified network." The timing wasn't coincidental. Earlier that day, Defense Secretary Pete Hegseth had essentially blacklisted Anthropic from all government work.
Anthropic had been the first AI company to deploy models across the DoD's classified network. But weeks of tense negotiations collapsed over red lines. The company wanted guarantees its AI wouldn't be used for fully autonomous weapons or mass surveillance of Americans. The Pentagon wanted Anthropic to agree to "all lawful use cases."
Neither side blinked. Until one got kicked out.
Same Terms, Different Reception
Here's where it gets interesting: Altman said OpenAI demanded the exact same restrictions as Anthropic. No domestic mass surveillance. Human responsibility for use of force. No autonomous weapon systems.
Yet the Pentagon agreed to OpenAI's terms while branding Anthropic a national security risk.
Why the different treatment? Government officials have spent months criticizing Anthropic for being "overly concerned with AI safety." Meanwhile, OpenAI has positioned itself as more pragmatic about real-world deployment, even as it maintains similar ethical guardrails.
Winners and Losers
The implications are massive. OpenAI now has exclusive access to the government market worth potentially billions. President Trump directed every federal agency to "immediately cease" using Anthropic's technology. That's not just the Pentagon—that's the entire U.S. government.
Anthropic's $40 billion valuation suddenly looks shaky. The company says it will challenge the designation in court, but "supply chain risk" labels are notoriously difficult to overturn. DoD contractors are already required to certify that they don't use Anthropic models.
Altman tried to play peacemaker, asking the Pentagon to "offer these same terms to all AI companies." But the damage is done. The AI industry just learned that how you frame safety concerns matters as much as the concerns themselves.