When AI Rewrites Reality, What Can We Still Trust?
The US government now uses AI to edit public content, and new research shows people remain influenced by deepfakes even when told they're fake. As truth-verification tools fail, how do we navigate this new reality?
The era of truth decay isn't coming—it's here. Last week brought the first confirmation that the US Department of Homeland Security uses AI video generators from Google and Adobe to create content shared with the public. But perhaps more troubling is new research showing that even when people know content is fake, they're still emotionally swayed by it.
Government and Media Both Embrace AI Editing
On January 22nd, the White House posted a digitally altered photo of a woman arrested at an ICE protest, making her appear more hysterical and tearful than in the original. When asked about the manipulation, White House deputy communications director Kaelan Dorr didn't deny it, simply stating: "The memes will continue."
News outlets aren't immune either. MS Now (formerly MSNBC) aired an AI-edited image of Alex Pretti that made him appear more handsome; clips of the segment went viral across platforms, including on Joe Rogan's podcast. The network said it didn't know the image had been altered, but the damage was done.
These cases reveal different problems. One involves a government intentionally sharing manipulated content and refusing to explain; the other shows a news outlet making a mistake and acknowledging it. Yet public reaction often lumps them together as evidence that "truth no longer matters."
The Failure of Truth-Verification Tools
Remember the much-hyped Content Authenticity Initiative? Co-founded by Adobe and adopted by major tech companies, it promised to attach labels, known as Content Credentials, showing who made a piece of content, how it was made, and whether AI was involved.
Yet even Adobe applies these labels only to fully AI-generated content, not to partially edited material. Worse, platforms like X can strip such labels from content anyway; the altered arrest photo got a user-added note flagging the manipulation only after it had spread widely.
The Pentagon's official image-sharing website, DVIDS, was supposed to display these authenticity labels when Adobe launched the initiative. Today, no such labels are visible on the site.
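To see why stripped labels matter, here is a minimal sketch of the general mechanism behind content credentials: a manifest of provenance claims is bound to a cryptographic hash of the file and signed by the issuer. Everything here is illustrative; real Content Credentials follow the C2PA specification, use certificate-based signatures rather than the shared-key HMAC used below for brevity, and embed the manifest inside the file's metadata.

```python
# Illustrative sketch of a content credential: a signed manifest bound to a file's hash.
# Simplification: an HMAC with a shared demo key stands in for the issuer's real
# certificate-based signature.
import hashlib
import hmac
import json

ISSUER_KEY = b"demo-issuer-key"  # stand-in for the issuer's private signing key

def issue_credential(content: bytes, tool: str, ai_generated: bool) -> dict:
    """Create a manifest that binds provenance claims to the content's hash."""
    manifest = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "tool": tool,
        "ai_generated": ai_generated,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_credential(content: bytes, manifest: dict) -> bool:
    """Re-hash the content and check both the hash binding and the signature."""
    claims = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claims, sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and claims["content_sha256"] == hashlib.sha256(content).hexdigest())

photo = b"...image bytes..."
cred = issue_credential(photo, tool="Example Editor", ai_generated=True)
print(verify_credential(photo, cred))            # True: credential intact
print(verify_credential(b"edited bytes", cred))  # False: content was changed
```

The sketch makes the failure mode concrete: verification can prove a credential is intact, but once a platform strips the manifest there is nothing left to check, and an unlabeled file looks no different from one that never had a credential at all.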
When Knowing It's Fake Isn't Enough
A new study in Communications Psychology reveals the deeper problem. Researchers showed participants a deepfake "confession" to a crime, explicitly telling them the evidence was fake. Despite this clear warning, participants still relied on the fabricated content when judging the individual's guilt.
"Even when people learn that the content they're looking at is entirely fake, they remain emotionally swayed by it," the study found.
Disinformation expert Christopher Nehring puts it bluntly: "Transparency helps, but it isn't enough on its own. We have to develop a new masterplan of what to do about deepfakes."
The New Reality of Influence
AI content-generation tools are becoming more capable, easier to use, and cheaper to run, which helps explain why government agencies increasingly pay for them. Immigration agencies have flooded social media with content supporting mass-deportation efforts, some of it apparently AI-generated.
But we prepared for the wrong crisis. We focused on confusion—people not knowing what's real. Instead, we're entering a world where influence survives exposure, where doubt becomes a weapon, and where establishing truth doesn't serve as a reset button.
Beyond Traditional Fact-Checking
The implications extend far beyond politics. Consider how this affects:
Consumer Trust: When companies use AI to enhance product images or testimonials, how do consumers make informed decisions?
Legal Systems: If fabricated evidence influences juries even when labeled as fake, how do courts maintain justice?
Democratic Discourse: When altered content shapes opinions regardless of fact-checks, how do societies make collective decisions?
The traditional model assumed that revealing manipulation would neutralize its impact. This research suggests otherwise. People process emotional content faster than analytical warnings, and first impressions stick even when later corrected.