Musk's Sworn Words, Page's Silence, and the Argument That Built OpenAI
At his OpenAI trial, Elon Musk testified under oath about a falling-out with Larry Page over AI safety. The story reveals how personal philosophy shapes billion-dollar industries.
"Fine" — that was Larry Page's response when Elon Musk raised the possibility of AI wiping out humanity.
The year was 2015. The setting: Page's Palo Alto home, where Musk was a frequent overnight guest. Musk pressed the case that AI posed an existential risk to humanity. Page's counter: as long as AI itself survived, the extinction of humans would be acceptable. Page then called Musk a "speciesist" for being, in Page's framing, irrationally pro-human. Musk called the attitude "insane."
On Tuesday, April 28, 2026, Musk repeated that story in a San Francisco courtroom — this time, under oath.
The Friendship That Built (and Broke) an Industry
Fortune named Musk and Page to its 2016 list of business leaders who were secretly best friends. The intimacy was real: Musk regularly crashed at Page's home, and Page once told Charlie Rose he'd rather give his fortune to Musk than to charity. These weren't rivals performing cordiality. They were, by most accounts, genuinely close.
The break came when Musk recruited Google AI star Ilya Sutskever to help co-found OpenAI in 2015. Page felt personally betrayed. He cut off contact. A friendship worth more in social capital than most people will ever see in dollars — ended over an AI researcher.
Musk has told this story before, to biographer Walter Isaacson and to podcaster Lex Fridman. As recently as 2023, he told Fridman: "We were friends for a very long time" — and that he wanted to patch things up. But Tuesday's testimony is the first time the account entered the legal record. Page has not publicly responded.
What the Courtroom Drama Is Really About
The lawsuit itself centers on a straightforward allegation: that OpenAI, the nonprofit Musk co-founded, has betrayed its original mission by converting to a for-profit structure — enriching insiders rather than advancing AI for humanity's benefit. Musk is seeking to block or reverse that conversion.
The Page anecdote serves a purpose in that argument. It's Musk's attempt to establish that his motivations for founding OpenAI were genuinely philosophical, not merely financial. If the jury (or the public) believes he founded the company out of sincere concern for AI safety, his grievance about the mission drift carries more weight.
But the context cuts both ways. Musk now runs xAI and its Grok model — a direct OpenAI competitor. He has publicly floated acquiring OpenAI outright. His testimony is simultaneously a personal account and a litigation strategy. Everything said in a courtroom is said in service of winning.
Three Ways to Read This
For AI developers and researchers, the Page-Musk split is a founding myth of the modern AI safety movement. The argument that AI should be built for humans — not simply by them — underpins much of the alignment research that OpenAI, Anthropic, and others have pursued. Whether or not Musk's current motives are pure, the philosophical fault line he described is real and consequential.
For investors, the trial raises a structural question that goes beyond Musk's personal grievance. OpenAI's $157 billion valuation rests partly on a nonprofit-to-for-profit conversion that is now under legal challenge. If courts impose constraints on that transition, the implications ripple into how AI companies raise capital and how investors price governance risk.
For the broader public, the story is a reminder that the technologies reshaping daily life were often shaped first by arguments between a small number of people in living rooms — arguments that never got a public airing until they landed in court.