OpenAI Said AI Was Too Dangerous for Investors. Then It Took Their Money.
OpenAI's restructuring into a for-profit model raises urgent questions about AI governance, nonprofit law, and whether billion-dollar philanthropy can paper over a structural conflict of interest.
Sam Altman once built a wall between AI and investors — on purpose. Now he's asking us to trust that the wall still stands, even as he tears it down brick by brick.
In October 2025, OpenAI completed a sweeping organizational restructure, shifting from a nonprofit-controlled model to one centered on a for-profit entity. The nonprofit arm didn't disappear; it now sits atop a $180 billion foundation, theoretically one of the largest charitable organizations on earth. Altman insists the mission hasn't changed: AI must still benefit all of humanity. Critics agree the words haven't changed. The power behind them has.
The Original Bargain
When OpenAI was founded, the nonprofit structure wasn't incidental — it was the whole point. Altman himself was explicit about it: they chose the nonprofit model precisely because they believed AI was dangerous enough that putting it in investors' hands would make that danger real. The governance structure was designed to keep the technology at arm's length from profit motives.
Catherine Bracy, founder of Tech Equity, a nonprofit focused on technology and economic opportunity, knew Altman from those early days. She was researching venture capital when she spoke with him about OpenAI's founding logic. "He was very forthcoming," she recalls. "That was the explicit reason why OpenAI was founded as a nonprofit."
A few months after that conversation, she watched the structure begin to unravel.
Why the Restructure May Already Be Illegal
The legal argument Bracy and others are making is more specific than it might appear. Under U.S. tax law, nonprofits receive tax exemptions in exchange for operating in service of a public mission. OpenAI's stated mission is to ensure AI develops for the benefit of all humanity. Legally, Altman is required to prioritize that mission above everything else.
Here's where it gets complicated. When OpenAI tried to fully separate the nonprofit from the for-profit entity, it discovered that doing so would require divesting the intellectual property the nonprofit owned — including the foundational technology underlying ChatGPT — as well as its equity stake in the commercial arm. The price tag, by any reasonable estimate, would be staggering.
"They looked at that price tag and said, that's not a price we're willing to pay," Bracy argues. "So instead of splitting, they continued down this path of nonprofit ownership — which in my mind is completely untenable, unsustainable, and irreconcilable."
Her conclusion: "Basically, every day that OpenAI exists, they are violating the law."
The California Attorney General has jurisdiction to investigate and enforce nonprofit law against OpenAI. Bracy believes OpenAI is betting he won't — or can't. "They've loaded up on lawyers and they are making a bet that the AG will not pursue this in any way that's actually meaningful." Silicon Valley's oldest operating principle: ask forgiveness, not permission.
The $180 Billion Question
This week, OpenAI announced philanthropic priorities for its foundation, including Alzheimer's research, education access, and community investment. The announcement was widely covered as a sign of good faith.
Bracy isn't buying it, and she makes the case in personal terms. Her mother is currently dying of Alzheimer's, and Bracy herself carries a copy of the gene that puts her at elevated risk. "I pray every day that AI helps us find a solution to Alzheimer's fast enough that I can benefit from it."
But then she asks the harder question: what happens if research funded by the OpenAI Foundation finds that Anthropic's models are better at drug discovery than ChatGPT? "We do not accept the science around nicotine that tobacco companies were funding. We do not accept the science around alcohol addiction that the alcohol companies fund. And we should not accept that this scientific research is funded by an entity that has a vested financial interest in the outcome."
The parallel she draws is to Google.org — Google's corporate foundation, which by most accounts functions as an extension of the marketing department. It funds innocuous causes. It never funds anything that cuts against Google's core interests. Bracy expects the OpenAI Foundation to operate the same way. "That's essentially what they have built here."
Not Everyone Agrees the Structure Is Broken
It's worth noting that not all observers share Bracy's alarm. Some legal scholars argue that the hybrid nonprofit-for-profit structure, while unusual, isn't inherently illegal — and that the California AG's office has historically been cautious about intervening in complex corporate arrangements. Others point out that OpenAI's commercial success is itself a prerequisite for funding the kind of safety and alignment research that the nonprofit mission demands. No revenue, no research.
There's also a pragmatic counterargument: even a compromised foundation with $180 billion in theoretical assets, funding Alzheimer's research and AI safety work, might produce more public benefit than a clean nonprofit with no resources. The question of whether imperfect philanthropy beats no philanthropy at all doesn't have an obvious answer.
But Bracy's deeper concern isn't really about the foundation's grant-making. It's about what happens when the entity funding the science has an irreconcilable conflict of interest in the outcome. Scientific independence, she argues, isn't a nice-to-have. It's the whole thing.
What This Means Beyond OpenAI
The OpenAI restructuring isn't happening in isolation. Across the AI industry, the gap between stated mission and commercial behavior is widening. Anthropic was founded by former OpenAI researchers who left partly over safety concerns — and has since raised billions from Amazon and Google. DeepMind operates inside Alphabet. Meta AI is, by design, a product division of an advertising company.
The governance question Bracy is raising applies to all of them: who actually holds these companies accountable to their stated values, and what enforcement mechanisms exist when they don't? Right now, the honest answer is: mostly nothing, unless a regulator decides to act.
"This is the fight of our time," Bracy says. "AI is not inevitable. The way it develops is not inevitable. And we do not have to take these companies at their word that they know best how to govern this technology. We should have bigger imaginations about what's possible."
Altman no longer returns her calls.