Who's Really Responsible When AI Guesses Your Age?
OpenAI's new automatic age prediction system highlights the growing battle over who should verify users' ages online. Privacy advocates and child safety experts are divided on the solution.
When OpenAI announced last week that it would start automatically predicting users' ages, it wasn't just rolling out another feature. It was making a calculated move in tech's hottest game of hot potato: who should be responsible for keeping kids safe online?
The answer matters more than ever. AI chatbots are generating child abuse material, contributing to teen suicides, and creating troubling emotional attachments with young users. Yet every proposed solution seems to create new problems.
The AI Crystal Ball Problem
ChatGPT now analyzes factors like the time of day you're chatting and your conversation patterns to guess whether you're under 18. If the system thinks you're a kid, it applies filters to reduce exposure to graphic violence or sexual content. YouTube launched something similar last year.
Sounds reasonable, right? Here's the catch: AI predictions aren't perfect. Adults might get flagged as minors, or, worse, actual children might slip through as adults. Adults who are wrongly flagged can prove their age by submitting a selfie or government ID to a third-party verifier called Persona.
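To make that concrete, here is a minimal sketch of what a prediction-plus-fallback flow could look like. The signals, the scoring heuristic, and the 0.6 threshold are illustrative assumptions for this article; OpenAI has not published how its classifier actually works.

```python
from dataclasses import dataclass

@dataclass
class SessionSignals:
    # Illustrative signals only; the real feature set is not public.
    local_hour: int            # hour of day the user is chatting (0-23)
    avg_message_length: float  # rough proxy for writing style
    school_topic_ratio: float  # share of messages about homework-like topics

def estimate_minor_probability(s: SessionSignals) -> float:
    """Toy heuristic score standing in for a trained classifier."""
    score = 0.0
    if 8 <= s.local_hour <= 15:        # chatting during school hours
        score += 0.2
    if s.avg_message_length < 40:      # short, casual messages
        score += 0.3
    score += 0.5 * s.school_topic_ratio
    return min(score, 1.0)

def route_session(s: SessionSignals, threshold: float = 0.6) -> str:
    """Apply content filters when the score crosses the threshold;
    flagged users can appeal via an ID or selfie check."""
    if estimate_minor_probability(s) >= threshold:
        return "restricted_mode"       # filters graphic violence and sexual content
    return "standard_mode"
```

In a real system the score would come from a trained model rather than hand-tuned rules, but the failure modes are the same: adults above the threshold land in restricted mode until they verify their age, and under-18 users below it slip through.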
But selfie verification has its own issues. It fails more often for people of color and those with certain disabilities. Sameer Hinduja from the Cyberbullying Research Center warns that having millions of government IDs and biometric data in one place creates a massive target for hackers. "When those get breached, we've exposed massive populations all at once," he says.
The Great Responsibility Shuffle
Meanwhile, Apple CEO Tim Cook has been lobbying lawmakers for a different approach: device-level verification. Parents would specify their child's age when setting up the phone, and that information would stay on the device while being shared securely with apps and websites.
Cook's proposal isn't altruistic—he's fighting laws that would require app stores to verify ages directly, which would saddle Apple with enormous liability. It's a classic tech move: appear helpful while shifting responsibility elsewhere.
Hinduja supports device-level verification because it keeps personal data from being stored on central servers. But it still requires active parental involvement, which isn't always realistic.
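For comparison, here is a minimal sketch of the device-level idea, assuming a parent-declared birth year that never leaves the phone. The function names, the coarse age brackets, and the HMAC signing are hypothetical stand-ins; Apple has not published a specific design.

```python
import hashlib
import hmac
import json
from datetime import date

DEVICE_SECRET = b"key-provisioned-at-device-setup"  # hypothetical signing key

def declared_age_bracket(birth_year: int) -> str:
    """Collapse the parent-declared birth year into a coarse bracket
    so apps never see an exact age or birthdate."""
    age = date.today().year - birth_year
    if age < 13:
        return "under_13"
    if age < 18:
        return "13_17"
    return "18_plus"

def age_assertion_for_app(app_id: str, birth_year: int) -> dict:
    """Build a signed, minimal assertion an app could verify
    without the underlying data leaving the device."""
    payload = json.dumps({"app": app_id, "age_bracket": declared_age_bracket(birth_year)})
    signature = hmac.new(DEVICE_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}
```

An app would pass the assertion to its own servers, check the signature, and apply content rules for the bracket, which is how personal data could stay off central servers while sites still get an age signal.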
Politics Enters the Chat
The debate has become deeply political. Republican-led states are passing laws requiring adult content sites to verify ages, which critics say could be used to block everything from sex education to LGBTQ+ resources. Democratic-leaning states like California are targeting AI companies specifically, requiring them to protect kids who chat with bots.
President Trump wants to keep AI regulation at the federal level rather than letting states create a patchwork of rules. But the Federal Trade Commission, which would enforce these laws, has become increasingly politicized. In December, the FTC overturned a Biden-era ruling against an AI company that flooded the internet with fake reviews, saying the ruling conflicted with Trump's AI Action Plan.
Wednesday's Moment of Truth
This Wednesday, the FTC is holding an all-day workshop on age verification that could signal where things are headed. Apple's head of government affairs will be there, along with child safety executives from Google and Meta. Also speaking: Bethany Soye, a Republican state representative pushing for age verification laws in South Dakota.
The lineup suggests the FTC might favor the Republican approach of requiring ID checks for websites. The ACLU opposes such laws, arguing they threaten free expression and privacy, and instead supports expanding existing parental controls.
The Impossible Balance
Every solution creates new problems. Automatic age prediction is imperfect and potentially discriminatory. ID verification threatens privacy and free expression. Device-level controls require engaged parents. Parental controls can be easily bypassed by tech-savvy teens.
Meanwhile, the problems these systems aim to solve are getting worse. AI is making it easier to create child abuse material. Teens are forming unhealthy attachments to chatbots. Some have reportedly died by suicide after troubling conversations with AI companions.
The Global Ripple Effect
Whatever approach wins in the US will likely influence global standards. European regulators are watching closely, as are companies worldwide that serve American users. The stakes extend far beyond any single country's borders.
Tech companies know they need to do something, but each is desperately trying to avoid being the one holding the bag when things go wrong. The result is a complex dance of responsibility-shifting that leaves everyone pointing fingers while the underlying problems persist.