Who Controls AI When It Matters Most?
Anthropic's clash with the Pentagon reveals a timeless pattern: the people who build powerful technologies rarely get the final say in how they're used. Nuclear history already told us this story.
The scientists who built the atomic bomb had no say in the decision to drop it on Hiroshima. Nobody called.
The weapons were loaded onto trucks and driven away from the New Mexico desert. The physicists who had spent years wrestling with fission reactions, who had debated the ethics of mass destruction in hushed voices, who had signed petitions urging restraint—none of them were in the room when the decision was made. Their leverage had been front-loaded: build it or don't. Once they'd built it, that was that.
Nearly eighty years later, Dario Amodei is learning the same lesson.
The Standoff That Revealed Everything
The facts are stark. Anthropic's AI model, Claude, had been running on America's classified networks for months. According to reports, it had already been used in U.S. military operations targeting Venezuela and Iran. Then, on February 23, 2025, the Pentagon issued an ultimatum: remove any restrictions on military use of the model beyond what existing law already requires.
Amodei had held two red lines throughout the negotiations. He didn't want Claude used for mass surveillance of American citizens. And he didn't want it directing autonomous weapons, systems that could kill without a human making the final call. These weren't abstract ethical positions; they were the kind of limits he had spent years arguing were essential to responsible AI development.
The Pentagon said no. When talks collapsed, the Defense Department reached for a tool it had never deployed against an American company before: a supply-chain-risk designation that could threaten Anthropic's ability to operate. Anthropic has since filed suit to have it removed.
While all of this was unfolding, Sam Altman moved quickly. OpenAI finalized its own Pentagon deal—the kind of rapid accommodation that prompted Amodei to write a memo to his staff calling the arrangement "safety theater" and saying the episode had revealed "who they really are."
What Amodei didn't fully reckon with, at least not publicly, is what the episode revealed about his own position.
The Utopians Who Came Before
Amodei is, by any measure, one of the more thoughtful figures in the AI industry. His 15,000-word essay "Machines of Loving Grace" imagines AI compressing decades of scientific progress into years: curing cancers, defeating infectious disease, doubling human lifespans, all by 2035. He's compared today's AI researchers to Manhattan Project scientists and has recommended Richard Rhodes's The Making of the Atomic Bomb, the definitive history of the subject, to his staff.
The comparison is apt. But perhaps not in the way Amodei intends.
The men who split the atom were also utopians. Leo Szilard, who first conceived of the nuclear chain reaction, had been profoundly shaped by H.G. Wells's 1914 novel The World Set Free, which imagined atomic energy liberating humanity from toil. Edward Teller dreamed of using nuclear explosions to redirect rivers, blast highways through mountains, and carve a new Panama Canal. He once offered to excavate an Alaskan harbor in the shape of a polar bear. Lewis Strauss, chair of the Atomic Energy Commission, declared that nuclear power would soon be "too cheap to meter"—a phrase Altman has recently recycled, swapping atoms for intelligence.
Niels Bohr went furthest of all. He believed the sheer terror of nuclear weapons would force world leaders toward radical transparency and cooperation. The bomb, in his telling, would be so awful that it would compel humanity to grow up.
None of it happened the way they imagined. The Ford Nucleon—a family car designed around a miniature rear-mounted reactor—never made it past the concept stage. Project Plowshare, the Atomic Energy Commission's program for "peaceful" nuclear detonations, conducted 27 tests and produced nothing except radioactive contamination and a galvanized environmental movement. And Bohr's dream of nuclear-induced peace? Today, nine nations hold nuclear arsenals totaling more than 12,000 warheads. The last remaining treaty constraining the two largest—America's and Russia's—expired last month without renewal.
The scientists were right that they were delivering humanity into a new world. They were wrong about what kind.
The Structural Problem Nobody Wants to Name
Amodei's frustration with OpenAI is understandable. But it deflects attention from the harder question.
He believes democracies need AI to maintain a military edge over China and other authoritarian states. That's a coherent position—and one shared by many serious people who think about geopolitics and technology. But that logic contains a trap. If AI advantage over adversaries is genuinely a matter of national survival, then governments will treat AI the way they've always treated technologies that matter that much: as a strategic asset to be controlled, not a product whose terms of service can be negotiated.
The Pentagon's behavior here wasn't a policy mistake or an overreach by a rogue official. It was the predictable expression of competitive logic. Claude is being used against Venezuela and Iran—countries that aren't even serious AI competitors. If the U.S. were facing a peer adversary with frontier AI models as capable as its own, the pressure on companies like Anthropic wouldn't be a supply-chain designation. It would be something much harder to resist.
J. Robert Oppenheimer understood this dynamic, eventually. After the war, he helped channel Bohr's ideas into the Acheson-Lilienthal Report, a proposal for international control of nuclear technology that went to the United Nations in 1946 as the Baruch Plan. The Soviet Union rejected it. The cold logic of competitive advantage, the same logic that now governs AI development, defeated every serious attempt at restraint.
Amodei has written that superhuman AI could be more dangerous than nuclear weapons. He may be right. But the nuclear analogy suggests that being right about the danger doesn't translate into control over what happens next.
What This Means Beyond the Headlines
For readers who care about AI ethics and technology governance, the Anthropic-Pentagon episode is less about one company's contract dispute and more about a structural reality that the industry has been slow to confront.
The people building the most powerful AI systems in the world are operating under assumptions that history gives us reason to question: that they can negotiate meaningful limits, that their ethical commitments will be respected, and that "responsible development" is a durable concept in a competitive geopolitical environment.
This isn't an argument that AI development should stop, or that military use of AI is inherently wrong. It's a question about who actually holds the steering wheel when the stakes are highest. The AI governance debate has largely been framed around questions of bias, privacy, and economic disruption. The Anthropic case suggests the harder question may be simpler and older: when a technology becomes indispensable to national power, can its creators retain any meaningful say in its use?
Bohr tried to answer that question with nuclear weapons. Oppenheimer tried. The treaties they inspired were abandoned, one by one, under the cold arithmetic of mutual suspicion. Now it is Amodei's turn to try.