Meta AI Character Access for Teens Paused Globally Amid Legal Battles
Meta is pausing AI character access for teens globally as it faces lawsuits over child safety and addiction. A new PG-13 version with parental controls is in development.
The conversation between teens and AI just hit the pause button. Meta has told TechCrunch exclusively that it is disabling access to its AI characters for teenagers across all of its platforms worldwide. While the company says it is not abandoning the project, it is stepping back to rebuild the experience with safety at the forefront.
Meta AI Character Access for Teens Halted Under Legal Pressure
This sudden shutdown arrives just days before a high-stakes trial in New Mexico, where Meta faces accusations of failing to protect children from sexual exploitation. Next week, CEO Mark Zuckerberg is also expected to testify in a case concerning social media addiction. The legal walls are closing in, forcing a major pivot in how Meta's AI interacts with younger users.
> Teens will no longer be able to access AI characters across our apps until the updated experience is ready... this applies to anyone who has given us a teen birthday or those we suspect are teens via age prediction tech.
Building a Digital Sandbox: PG-13 AI
The upcoming version of these AI characters will reportedly follow a PG-13 rating model. Parents will be able to monitor chat topics or shut down AI interactions entirely. The AI's scope will be narrowed to safe territory such as education, sports, and hobbies, strictly avoiding extreme violence and graphic content.
| Feature | Current Status | Future Update |
|---|---|---|
| Teen Access | Paused Globally | Restricted Access |
| Parental Control | Limited | Full Monitoring & Toggle |
| Content Scope | Open-ended | Education & Hobbies (PG-13) |
| Verification | Self-declared | AI Age Prediction |
This content is AI-generated based on source articles. While we strive for accuracy, errors may occur. We recommend verifying with the original source.