X Cuts Creator Cash for War Deepfakes
X targets war-related AI videos with revenue penalties. As platforms police AI content, who decides what's real in the creator economy?
$0 for 90 Days
X just hit creators where it hurts: their wallets. The platform announced Tuesday that anyone posting AI-generated videos of armed conflicts without proper disclosure will be suspended from its Creator Revenue Sharing Program for 90 days. Get caught twice? Permanent ban from monetization.
"During times of war, it is critical that people have access to authentic information on the ground," wrote Nikita Bier, X's head of product. "With today's AI technologies, it is trivial to create content that can mislead people."
The move targets a growing problem: realistic AI war footage spreading across social media, often designed to go viral and generate revenue.
The Incentive Problem
X's Creator Revenue Sharing Program splits advertising revenue with popular creators. More engagement equals more money. Critics have long argued this structure incentivizes "sensationalized content, like clickbait or other posts designed to spark outrage."
War content fits this formula perfectly. Dramatic footage from conflict zones naturally attracts attention, making AI-generated war videos a tempting revenue stream. Recent conflicts in Ukraine and Gaza have seen waves of deepfake content spreading across platforms.
X plans to identify violators through AI detection tools and its crowdsourced fact-checking system, Community Notes. But the company's track record on content moderation remains mixed.
Selective Enforcement
Here's the catch: X's new policy only covers armed conflict. Political misinformation, fake product endorsements, and other AI deception remain fair game for monetization.
This selective approach raises questions about platform responsibility. Why is war-related AI content treated differently from election deepfakes or fraudulent advertisements? The answer likely lies in public pressure and regulatory scrutiny around conflict misinformation.
YouTube, TikTok, and other platforms face similar challenges. As AI generation tools become more accessible, the line between authentic and artificial content continues to blur. Each platform is developing its own approach, creating an inconsistent patchwork of policies.
The Bigger Battle
X's move reflects a broader shift in how platforms handle AI content. Rather than blanket bans, companies are experimenting with targeted restrictions tied to specific harms or contexts.
But this approach creates new dilemmas. Who decides which topics deserve special protection? How do platforms balance free expression with information integrity? And can technical solutions keep pace with rapidly evolving AI capabilities?
The stakes extend beyond individual platforms. As AI-generated content becomes indistinguishable from reality, society's relationship with truth itself is being tested.
This content is AI-generated based on source articles. While we strive for accuracy, errors may occur. We recommend verifying with the original source.