Sora 2 Exploited: How AI-Generated 'Toy' Ads Create a New Child Safety Crisis
OpenAI's Sora 2 is being used to generate disturbing, fake toy ads and fetish content involving AI minors, exposing a critical flaw in platform safety and prompting new legislation.
Just one week after OpenAI launched its powerful new video generator, Sora 2, on September 30, a disturbing trend emerged on TikTok. A fake commercial for a children's toy called the 'Vibro Rose', depicting photorealistic young girls with a buzzing, flower-themed toy, drew widespread outrage. That video, along with others like it, including fake ads for cake decorators that squirt 'sticky milk', marks a new and insidious front in the battle for online safety: the use of generative AI to create sexually suggestive content featuring synthetic minors, which sits in a legal and ethical gray zone.
A Surge in Synthetic Abuse
While these videos depict wholly synthetic, AI-composited figures rather than real children, they feed into a verifiable crisis. According to the UK's Internet Watch Foundation (IWF), reports of AI-generated child sexual abuse material (CSAM) have more than doubled in a year, from 199 between January and October 2024 to 426 over the same period in 2025. The IWF notes that 56% of this content falls into the UK's most serious category, and that an overwhelming 94% of the illegal AI images it tracked depicted girls.
"Often, we see real children’s likenesses being commodified to create nude or sexual imagery and, overwhelmingly, we see AI being used to create imagery of girls," the IWF said. "It is yet another way girls are targeted online."
This influx has spurred legislative action. The UK is introducing an amendment to its Crime and Policing Bill that would allow authorized testing of AI tools for their capacity to generate CSAM. In the US, 45 states have enacted laws criminalizing AI-generated CSAM, most within the last two years as AI generators have grown more capable.
The Moderation Maze: Intent vs. Content
AI companies like OpenAI maintain policies that strictly prohibit the sexualization of minors, and they report CSAM to authorities. Creators, however, are finding ways around those guardrails. The 'Vibro Rose' ads, while not explicit pornography, appear designed to attract predators through suggestive naming and imagery. Other clips tread a fine line between dark humor and sexualization, including fake commercials for playsets parodying Jeffrey Epstein and Harvey Weinstein, and memes like 'Incredible Gassy', a fetishistic parody character often depicted alongside AI-generated minors.
Mike Stabile, public policy director at the Free Speech Coalition, described this struggle to WIRED, comparing it to the difficulty platforms like Facebook face in distinguishing a benign family photo from exploitative material. He argued that AI firms need more nuanced moderation, including word bans and better-trained human teams who understand fetish-related language.
Platform Response
Following WIRED's inquiry, both OpenAI and TikTok took action. OpenAI spokesperson Niko Felix confirmed that the company had banned several accounts for violating its policies, and a TikTok spokesperson said the platform removed videos and banned accounts after WIRED flagged more than 30 instances of inappropriate content. Even so, at the time of reporting, some of the flagged material remained online, underscoring how persistent the problem is.