X Cuts Creator Cash for War Deepfakes
X targets war-related AI videos with revenue penalties. As platforms police AI content, who decides what's real in the creator economy?
$0 for 90 Days
X just hit creators where it hurts: their wallets. The platform announced Tuesday that anyone posting AI-generated videos of armed conflicts without proper disclosure will be suspended from its Creator Revenue Sharing Program for 90 days. Get caught twice? Permanent ban from monetization.
"During times of war, it is critical that people have access to authentic information on the ground," wrote Nikita Bier, X's head of product. "With today's AI technologies, it is trivial to create content that can mislead people."
The move targets a growing problem: realistic AI war footage spreading across social media, often designed to go viral and generate revenue.
The Incentive Problem
X's Creator Revenue Sharing Program splits advertising revenue with popular creators. More engagement equals more money. Critics have long argued this structure incentivizes "sensationalized content, like clickbait or other posts designed to spark outrage."
War content fits this formula perfectly. Dramatic footage from conflict zones naturally attracts attention, making AI-generated war videos a tempting revenue stream. Recent conflicts in Ukraine and Gaza have seen waves of deepfake content spreading across platforms.
X plans to identify violators through AI detection tools and its crowdsourced fact-checking system, Community Notes. But the company's track record on content moderation remains mixed.
Selective Enforcement
Here's the catch: X's new policy covers only armed conflict. Political misinformation, fake product endorsements, and other forms of AI deception remain fair game for monetization.
This selective approach raises questions about platform responsibility. Why is war-related AI content treated differently than election deepfakes or fraudulent advertisements? The answer likely lies in public pressure and regulatory scrutiny around conflict misinformation.
YouTube, TikTok, and other platforms face similar challenges. As AI generation tools become more accessible, the line between authentic and artificial content continues to blur. Each platform is developing its own rules, creating a patchwork of inconsistent policies.
The Bigger Battle
X's move reflects a broader shift in how platforms handle AI content. Rather than blanket bans, companies are experimenting with targeted restrictions tied to specific harms or contexts.
But this approach creates new dilemmas. Who decides which topics deserve special protection? How do platforms balance free expression with information integrity? And can technical solutions keep pace with rapidly evolving AI capabilities?
The stakes extend beyond individual platforms. As AI-generated content becomes indistinguishable from reality, society's relationship with truth itself is being tested.