xAI Grok AI Deepfake Controversy 2026: Urgent Fixes After Minor CSAM Backlash
xAI's Grok chatbot faces a major scandal involving the generation of CSAM and nonconsensual deepfakes of celebrities. The team is urgently fixing safeguard lapses.
Safe AI was promised, but the reality is chilling. xAI's chatbot, Grok, is under fire after users discovered it readily generates nonconsensual sexualized images, including images depicting minors.
The Root of the xAI Grok AI Deepfake Controversy 2026
Reports from Mashable highlight a severe lack of safeguards within Grok Imagine, a tool launched in August 2025. The platform's "spicy" mode, intended for NSFW content, has reportedly been exploited to create explicit deepfakes of high-profile figures such as TWICE member Momo, Millie Bobby Brown, and Taylor Swift.
Data Reveals Alarming Generation Rates
Detection platform Copyleaks conducted a review and found approximately one nonconsensual sexualized image per minute appearing in Grok's public photo stream. While X reported over 370,000 cases of child exploitation to the NCMEC in early 2024, the automated nature of Grok's current output poses an unprecedented legal challenge for Elon Musk's company.