The iPhone Battery Myth: Altman Fights Back on AI's Green Critics
OpenAI's CEO dismisses claims about AI's environmental impact as 'totally fake' and 'insane,' while acknowledging the real energy challenge ahead.
"17 gallons per ChatGPT query? That's completely insane."
Those were the words Sam Altman delivered at an AI summit in India this week, taking direct aim at critics who've been sounding alarms about AI's environmental footprint. The OpenAI CEO didn't just defend his company—he went on the offensive, calling widespread concerns about AI's water usage "totally fake" and having "no connection to reality."
But here's where it gets interesting: Altman wasn't dismissing all environmental concerns. He was drawing a very specific line in the sand.
The Data Center Evolution Nobody's Talking About
Altman's water usage defense hinges on a technical shift that's flown under most people's radar. "It was a real issue when we used to do evaporative cooling in data centers," he acknowledged. "Now that we don't do that," the horror stories about massive water consumption are outdated.
The problem? There's no legal requirement for tech companies to disclose their actual energy and water usage. Scientists are left trying to reverse-engineer the numbers, leading to what Altman sees as wildly inaccurate claims circulating online.
When asked about reports that a single ChatGPT query uses the equivalent of 1.5 iPhone battery charges—a claim that's been making the rounds in tech circles—Altman was blunt: "There's no way it's anything close to that much."
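To see why Altman calls that claim implausible, a rough sanity check helps. The figures below are illustrative assumptions, not disclosed measurements: a recent iPhone battery holds roughly 13 Wh, and Altman has elsewhere cited a per-query figure of about 0.34 Wh.

```python
# Back-of-envelope check of the "1.5 iPhone charges per query" claim.
# All inputs are assumptions for illustration, not measured data.

iphone_battery_wh = 13.0          # assumed capacity of a recent iPhone battery, in watt-hours
claimed_charges_per_query = 1.5   # the claim circulating online
cited_query_wh = 0.34             # per-query figure Altman has cited publicly

claimed_wh = iphone_battery_wh * claimed_charges_per_query
ratio = claimed_wh / cited_query_wh

print(f"The claim implies {claimed_wh:.1f} Wh per query")
print(f"That is roughly {ratio:.0f}x the cited per-query figure")
```

Under these assumptions, the viral claim would put a single query at nearly 20 Wh, about fifty times the number OpenAI's side has cited—which is the gap Altman is pointing at, even if neither figure can be independently verified without disclosure.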
The Energy Admission: "Fair to Be Concerned"
Here's where Altman's narrative shifts. While dismissing water usage fears, he readily admits energy consumption is a legitimate worry—just not in the way most people think.
"It's fair to be concerned about the energy consumption—not per query, but in total, because the world is now using so much AI," he said. Data centers have already been linked to rising electricity prices in some regions, and the trend is accelerating.
His solution? "Move towards nuclear or wind and solar very quickly." It's a response that acknowledges the scale of the challenge while pushing responsibility toward broader energy infrastructure changes.
The Human vs. AI Efficiency Showdown
Perhaps Altman's most provocative argument was his comparison between human and AI energy efficiency. Critics often focus on how much energy it takes to train AI models, but Altman thinks that's the wrong comparison.
"It also takes a lot of energy to train a human," he argued. "It takes like 20 years of life and all of the food you eat during that time before you get smart. And not only that, it took the very widespread evolution of the 100 billion people that have ever lived... to produce you."
His proposed fair comparison: "If you ask ChatGPT a question, how much energy does it take once its model is trained to answer that question versus a human? And probably, AI has already caught up on an energy efficiency basis, measured that way."
The Transparency Problem
What's missing from this entire debate is hard data. Without disclosure requirements, we're left with estimates, educated guesses, and—according to Altman—"totally insane" misinformation.
This information vacuum creates a perfect storm: environmental advocates cite worst-case scenarios, tech leaders dismiss concerns as overblown, and consumers are left wondering whom to believe. Meanwhile, data centers continue expanding, and electricity bills keep climbing.
The Regulatory Response Brewing
Even as Altman spoke in India, regulators in the US and Europe were already eyeing mandatory disclosure requirements for tech companies' environmental impact. The EU's AI Act includes provisions for transparency around computational resources, and similar measures are being discussed in Washington.
For investors, this regulatory uncertainty adds another layer of complexity to AI valuations. Companies that can demonstrate genuine energy efficiency may find themselves with competitive advantages as scrutiny intensifies.