When ChatGPT Shows Ads, Your Deepest Thoughts Become Products
Former OpenAI researcher Zoë Hitzig quits as company introduces ChatGPT ads, warning of unprecedented privacy risks from AI's intimate user conversations
The $700 Million Question That Made a Researcher Quit
What would make a Harvard fellow walk away from one of the world's most powerful AI companies on the same day it introduces a major new feature? For Zoë Hitzig, the answer was simple: ChatGPT started showing ads, and she couldn't stomach what that meant for humanity.
On Wednesday, Hitzig published a scathing essay in The New York Times, revealing she'd resigned from OpenAI on Monday—precisely when the company began testing advertisements inside ChatGPT. After two years helping shape how AI models are built and priced, the economist-turned-poet had seen enough.
"I once believed I could help the people building A.I. get ahead of the problems it would create," she wrote. "This week confirmed my slow realization that OpenAI seems to have stopped asking the questions I'd joined to help answer."
Facebook's Playbook, AI's Scale
Hitzig didn't condemn advertising outright. Instead, she warned that ChatGPT ads risk repeating Facebook's mistakes from a decade ago—but with far more intimate data at stake.
Think about what you've shared with ChatGPT versus what you've posted on Facebook. On social media, you curate. With ChatGPT, you confess. Users have poured out medical fears, relationship troubles, religious doubts, and career anxieties to the chatbot because, as Hitzig put it, "people believed they were talking to something that had no ulterior agenda."
The result? What Hitzig calls "an archive of human candor that has no precedent." While Facebook knew you liked certain pages, ChatGPT knows your deepest insecurities.
The Economics of Intimacy
OpenAI's move toward advertising isn't surprising from a business perspective. Running ChatGPT costs an estimated $700 million monthly, and subscription revenue alone can't sustain those expenses. But the ethical implications are staggering.
Consider the targeting possibilities: someone who has discussed depression symptoms could see antidepressant ads. A user exploring relationship problems might get promotions for divorce lawyers. The line between helpful and exploitative becomes razor-thin when an AI knows your vulnerabilities better than your closest friends do.
Google faces similar dilemmas with Gemini, while Microsoft's Bing Chat already mixes search results with advertisements. But conversational AI advertising represents uncharted ethical territory.
The Trust Paradox
Hitzig's resignation highlights a fundamental paradox in AI development. The more useful these systems become—the more they understand context, emotion, and nuance—the more dangerous they become as advertising platforms.
Users trust ChatGPT precisely because it seems agenda-free. It doesn't judge, doesn't gossip, doesn't have ulterior motives. But advertising changes that dynamic completely. Suddenly, every conversation becomes a data point in someone's marketing strategy.
The timing of Hitzig's departure sends a clear message: even those building these systems are uncomfortable with where they're heading. If the people creating AI can't reconcile its potential with its perils, what hope do the rest of us have?
This content is AI-generated based on source articles. While we strive for accuracy, errors may occur. We recommend verifying with the original source.