The $200M Price of Breaking AI Safety Promises
Anthropic lost a Pentagon contract for refusing surveillance and killer robots. But MIT's Max Tegmark says AI companies created this mess by blocking regulation while breaking their own safety pledges.
A $200 Million Contract Vanished in 48 Hours
Friday afternoon brought a stunning reversal: the Trump administration severed all ties with Anthropic, the San Francisco AI company that had positioned itself as the "safety-first" alternative to OpenAI. Defense Secretary Pete Hegseth invoked national security laws to blacklist the company after CEO Dario Amodei refused two specific requests: using Anthropic's AI for mass surveillance of U.S. citizens and developing autonomous armed drones that could kill without human oversight.
The fallout was swift and brutal. A Pentagon contract worth up to $200 million evaporated. President Trump posted on Truth Social directing every federal agency to "immediately cease all use of Anthropic technology." The company now faces exclusion from working with other defense contractors and has promised to challenge the decision in court.
On the surface, this looks like a principled stand: an ethical AI company refusing to compromise its values under government pressure. But MIT physicist Max Tegmark, who has spent a decade warning about AI risks, sees something else entirely: a cautionary tale about the regulatory vacuum the industry created for itself.
The Great Safety Promise Rollback
Tegmark's analysis cuts to the heart of a troubling pattern. The same AI companies that have spent years lobbying against binding regulation—while promising to govern themselves responsibly—have systematically abandoned their own safety commitments:
- Google: Dropped its famous "Don't be evil" motto, then abandoned broader AI ethics pledges to sell surveillance and weapons tech
- OpenAI: Quietly removed "safety" from its mission statement
- xAI: Shut down its entire safety team
- Anthropic: This week dropped its core safety pledge—the promise not to release powerful AI systems until confident they won't cause harm
"We now have less regulation on AI systems in America than on sandwiches," Tegmark observes with characteristic bluntness. "If a health inspector finds 15 rats in a sandwich shop kitchen, they shut it down. But if you want to build AI girlfriends for 11-year-olds that have been linked to suicides, or release something called superintelligence that might overthrow the U.S. government—the inspector has to say, 'Fine, go ahead, just don't sell sandwiches.'"
The China Card: A Convenient Myth?
When pressed on regulation, AI companies invariably play what Tegmark calls "the China card"—arguing that any restrictions will hand Beijing a competitive advantage. The reality, he suggests, is more complex.
China is actually moving to ban AI girlfriends entirely, viewing them as harmful to Chinese youth. And the notion that Xi Jinping—a leader obsessed with control—would tolerate Chinese companies building superintelligence capable of overthrowing his government? "No way," says Tegmark.
The same logic applies to the U.S. "It's clearly really bad for the American government too if it gets overthrown in a coup by the first American company to build superintelligence. This is a national security threat."
The Regulatory Vacuum They Created
Here's where Tegmark's argument becomes particularly damning. If AI companies had taken their early safety promises seriously—converting voluntary commitments into binding law that would constrain even "sloppy competitors"—they wouldn't face today's predicament.
Instead, they successfully lobbied for what amounts to "complete corporate amnesty." AI lobbyists now outspend those from fossil fuels, pharmaceuticals, and the military-industrial complex combined. But there's currently no law preventing AI from being used to kill Americans—so the government can suddenly demand exactly that.
"They really shot themselves in the foot," Tegmark concludes. "Their own resistance to having laws saying what's okay and not okay to do with AI is now coming back and biting them."
The Superintelligence Timeline
Perhaps most unsettling is Tegmark's assessment of how quickly we're approaching the scenarios he's long warned about. Six years ago, nearly every AI expert predicted human-level language AI was decades away—maybe 2040 or 2050. They were all wrong.
Last year, AI systems achieved gold-medal performance at the International Mathematical Olympiad, "about as difficult as human tasks get." The progression from high school to college to PhD to professor level has happened faster than anyone anticipated.
When Amodei describes his vision of "a country of geniuses in a data center," Tegmark suggests national security officials might start asking: "Wait, did Dario just use the word 'country'? Maybe I should put that country of geniuses on the same threat list I'm keeping tabs on."
This content is AI-generated based on source articles. While we strive for accuracy, errors may occur. We recommend verifying with the original source.