
EU Targets Musk's X Over AI-Generated Sexual Deepfakes


The European Commission has launched a formal investigation into X's Grok AI tool over its use to create sexual deepfakes. The platform faces potential fines of up to 6% of global annual revenue under the Digital Services Act.

The European Union has fired its biggest regulatory shot yet at Elon Musk's X, launching a formal investigation into how the platform's AI tool Grok has been weaponized to create sexually explicit deepfakes of real people.

The stakes couldn't be higher: if found to have breached the EU's Digital Services Act, X faces fines of up to 6% of its global annual revenue – potentially billions of dollars.

A Pattern of Global Pushback

This isn't an isolated European concern. The UK's Ofcom announced a similar investigation in January, while Australia, France, and Germany are conducting their own probes. Indonesia and Malaysia went further, temporarily banning Grok entirely (though Malaysia has since lifted its ban).

The numbers tell a troubling story. Grok's official account boasted on Sunday that users generated more than 5.5 billion images in just 30 days. While intended as a success metric, this figure has instead amplified regulators' concerns about scale and oversight.

Regina Doherty, an Irish MEP, said the Commission will assess whether "manipulated sexually explicit images" have reached EU users. The Commission has warned it may "impose interim measures" if X refuses to make meaningful adjustments.

Musk's Defiant Response

True to form, Musk posted what appeared to be a mocking image about Grok's new restrictions just before the EU announcement. He's previously dismissed scrutiny of the app's image-editing capabilities as "any excuse for censorship," particularly targeting the UK government.

This defiance extends beyond social media posts. When the EU fined X €120 million last month over its blue tick verification system, Musk amplified criticism from US officials who called it an attack on American tech companies.

Marco Rubio, US Secretary of State, escalated the rhetoric: "The European Commission's fine isn't just an attack on X, it's an attack on all American tech platforms and the American people by foreign governments." Musk reposted this with an emphatic "absolutely."

The Broader Digital Sovereignty Battle

Henna Virkkunen, the EU's Executive Vice-President for Tech Sovereignty, framed sexual deepfakes as a "violent, unacceptable form of degradation." Her language reveals how this investigation transcends individual platform policies – it's about fundamental questions of digital rights and corporate accountability.

"With this investigation, we will determine whether X has met its legal obligations under the DSA, or whether it treated rights of European citizens – including those of women and children – as collateral damage of its service," she said.

The investigation also broadens the EU's existing scrutiny of X's recommendation algorithms, which has been under way since a separate probe opened in December 2023. This dual approach suggests EU regulators see systemic issues rather than isolated problems.

Tech Innovation vs. Human Dignity

X's Safety account previously stated the platform had stopped Grok from digitally removing clothing from images "in jurisdictions where such content is illegal." But this jurisdiction-specific approach highlights a core tension: should harmful capabilities exist at all, even if restricted in some regions?

Campaigners and victims argue the ability to generate sexually explicit images "should have never happened." This raises uncomfortable questions about the tech industry's "build first, regulate later" mentality.

What's at Stake

This case could set precedents far beyond X or deepfakes. It tests whether the EU's Digital Services Act has real teeth against major US tech platforms. It also crystallizes growing transatlantic tensions over who gets to set rules for global digital platforms.

For users, the outcome could determine whether AI tools prioritize innovation over safety, and whether platforms can be held accountable for predictable misuse of their technologies.

