When ChatGPT Shows Ads, Your Deepest Thoughts Become Products
Former OpenAI researcher Zoë Hitzig quits as company introduces ChatGPT ads, warning of unprecedented privacy risks from AI's intimate user conversations
The $700 Million Question That Made a Researcher Quit
What would make a Harvard fellow walk away from one of the world's most powerful AI companies on the very day it introduced a major new feature? For Zoë Hitzig, the answer was simple: ChatGPT started showing ads, and she couldn't stomach what that meant for the people who confide in it.
On Wednesday, Hitzig published a scathing essay in The New York Times, revealing she'd resigned from OpenAI on Monday, the same day the company began testing advertisements inside ChatGPT. After two years helping shape how AI models are built and priced, the economist-turned-poet had seen enough.
"I once believed I could help the people building A.I. get ahead of the problems it would create," she wrote. "This week confirmed my slow realization that OpenAI seems to have stopped asking the questions I'd joined to help answer."
Facebook's Playbook, AI's Scale
Hitzig didn't condemn advertising outright. Instead, she warned that ChatGPT ads risk repeating Facebook's mistakes from a decade ago, but with far more intimate data at stake.
Think about what you've shared with ChatGPT versus what you've posted on Facebook. On social media, you curate. With ChatGPT, you confess. Users have poured out medical fears, relationship troubles, religious doubts, and career anxieties to the chatbot, Hitzig wrote, "because people believed they were talking to something that had no ulterior agenda."
The result? What Hitzig calls "an archive of human candor that has no precedent." While Facebook knew you liked certain pages, ChatGPT knows your deepest insecurities.
The Economics of Intimacy
OpenAI's move toward advertising isn't surprising from a business perspective. Running ChatGPT costs an estimated $700 million monthly, and subscription revenue alone can't sustain those expenses. But the ethical implications are staggering.
Consider the targeting possibilities: Someone who's discussed depression symptoms could see antidepressant ads. A user exploring relationship problems might get divorce lawyer promotions. The line between helpful and exploitative becomes razor-thin when AI knows your vulnerabilities better than your closest friends do.
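How little machinery that would take is easy to sketch. The short Python snippet below is purely hypothetical: the category names, keywords, and match_ads function are invented for illustration and reflect nothing about OpenAI's actual systems. It shows how even crude keyword matching could turn a single candid message into sensitive ad targeting.

# Hypothetical illustration only. All names and keyword lists here are
# invented; this does not describe any real ad system.
AD_CATEGORIES = {
    "antidepressants": ["depressed", "depression", "hopeless"],
    "divorce lawyers": ["divorce", "separation", "custody"],
}

def match_ads(message: str) -> list[str]:
    """Return the ad categories whose keywords appear in the message."""
    text = message.lower()
    return [cat for cat, words in AD_CATEGORIES.items()
            if any(w in text for w in words)]

# One candid sentence is enough to trigger two sensitive categories.
print(match_ads("I've been feeling depressed since my divorce."))
# -> ['antidepressants', 'divorce lawyers']

A real targeting system would be far more sophisticated than keyword lookup, which is exactly the worry Hitzig raises: the better the models get at reading context, the sharper the signal advertisers receive.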
Google faces similar dilemmas with Gemini, and Microsoft's Bing Chat already mixes search results with advertisements. But placing ads inside a chatbot that people treat as a confidant remains largely uncharted ethical territory.
The Trust Paradox
Hitzig's resignation highlights a fundamental paradox in AI development. The more useful these systems become (the more they understand context, emotion, and nuance), the more dangerous they become as advertising platforms.
Users trust ChatGPT precisely because it seems agenda-free. It doesn't judge, doesn't gossip, doesn't have ulterior motives. But advertising changes that dynamic completely. Suddenly, every conversation becomes a data point in someone's marketing strategy.
The timing of Hitzig's departure sends a clear message: even those building these systems are uncomfortable with where they're heading. If the people creating AI can't reconcile its potential with its perils, what hope do the rest of us have?