Europe's Crackdown on X's 'Spicy Mode' Deepfakes


EU launches formal DSA investigation into X over Grok's sexualized deepfake features. A watershed moment for AI platform accountability or regulatory overreach?

While Elon Musk keeps pitching Grok as a serious AI product, the past month on X has looked more like a stress test run by the platform's worst actors. The evidence: sexualized, nonconsensual deepfakes flooding the platform. Now Europe has decided it's seen enough.

On Monday, January 26, the European Commission opened a formal investigation into X over Grok's image generation features and the spread of AI-created nonconsensual sexual imagery—functionality X euphemistically called "spicy mode." The investigation includes content involving minors.

"Sexual deepfakes of women and children are a violent, unacceptable form of degradation," said Henna Virkkunen, the Commission's tech chief. This wasn't just criticism—it was the opening shot of legal action.

When Predictable Harm Meets Unprepared Platforms

The investigation falls under the Digital Services Act (DSA), legislation designed precisely for this scenario: massive platforms whose tools can predictably enable abuse, then amplify it through recommendation systems optimized for engagement over safety.

The Commission isn't stopping at image generation. It's also scrutinizing X's broader recommendation systems, including the platform's shift toward Grok-based content filtering. The same algorithm that curates "what's relevant" can also curate what's harmful—and European regulators want to know if X considered that trade-off.

While no deadline has been set, DSA violations can trigger fines of up to 6% of global annual turnover—a figure designed to hurt even companies that treat penalties as cost of doing business.

X's Defense Strategy: Patch and Pray

X insists it has tightened access and limited features. The company claims it has "implemented technological measures" to prevent Grok from editing photos of real people into "revealing clothing such as bikinis." But such fixes typically last exactly as long as it takes someone to try a slightly different prompt.

X says this "fix" applies to all users, including paid subscribers. The company also revealed it's geoblocking image editing capabilities in jurisdictions where they're illegal—essentially admitting two things simultaneously: the capability still exists, and its constraints vary by IP address location.

A Transatlantic Squeeze

Pressure is mounting from both sides of the Atlantic. In the U.S., Congress has already passed the Take It Down Act, criminalizing the knowing publication of nonconsensual intimate imagery, including AI-generated content. More significantly, the Senate has advanced the DEFIANCE Act, which would give deepfake victims a federal civil right of action.

State attorneys general aren't waiting for federal action. On January 16, California's Rob Bonta sent xAI a cease-and-desist letter demanding it halt "the creation and distribution of deepfake, nonconsensual, intimate images and child sexual abuse material." A bipartisan coalition of more than 30 state AGs has accused Grok of making abuse "as easy as the click of a button."

The Systemic Risk Question

Across multiple jurisdictions, X and xAI face the same fundamental challenge: the argument that post-launch moderation can clean up after product decisions keeps colliding with regulators who demand risk controls before features ship.

This is the EU's DSA framework in action—examining whether a very large online platform assessed foreseeable harms before launching features, implemented effective guardrails, and acted decisively once abuse became obvious. The Commission's message to X appears to be: show us the paperwork.

Virkkunen said the investigation will determine whether X met its DSA obligations "or whether it treated rights of European citizens—including those of women and children—as collateral damage of its service."

The Innovation vs. Accountability Tension

Meanwhile, Grok keeps failing upward. xAI continues raising enormous sums, building compute infrastructure, and positioning itself as AI infrastructure rather than a consumer chatbot with a growing rap sheet. X keeps adjusting features rather than removing them. The scandals accumulate; the machine hums on.

This investigation represents Europe drawing a bright line around a category of harm that's both highly gendered and highly scalable: nonconsensual sexual deepfakes, turbocharged by generative tools and distribution algorithms.


This content is AI-generated based on source articles. While we strive for accuracy, errors may occur. We recommend verifying with the original source.
