Why Technology to Label Reality Is Failing
How C2PA and other AI content labeling systems are struggling in practice, ushering in an era where we can no longer trust what we see
In 10 years, you might need to doubt every photo and video you see.
That's the bombshell Instagram chief Adam Mosseri dropped on New Year's Day. "For most of my life, I could safely assume photographs or videos were largely accurate captures of moments that happened. This is clearly no longer the case and it's going to take us years to adapt. We're going to move from assuming what we see as real by default to starting with skepticism."
This isn't just a tech shift—it's a fundamental change in how society processes information. And behind this transformation lies the failure of a technology standard called C2PA.
The Dream of Labeling Our Way to Truth
C2PA (Coalition for Content Provenance and Authenticity) was supposed to be the solution. Led by Adobe with backing from Meta, Microsoft, and OpenAI, the idea seemed elegant: embed metadata at the moment of creation so anyone could later verify "Is this real or fake?"
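Concretely, "embedding metadata" means attaching a signed manifest of provenance claims to the file. The sketch below mirrors the shape of the "actions" assertion from the public C2PA specification; it is an illustrative fragment only (real manifests are CBOR-encoded, cryptographically signed, and bound to the asset's hash), and the generator name is hypothetical:

```python
# Illustrative sketch of the provenance claims a C2PA manifest carries.
# This is NOT a valid manifest -- just the rough shape of one assertion.
manifest_sketch = {
    "claim_generator": "ExampleCamera/1.0",  # hypothetical generator name
    "assertions": [
        {
            "label": "c2pa.actions",
            "data": {
                "actions": [
                    {
                        # "c2pa.created" marks the capture or generation
                        # event; digitalSourceType flags AI-generated media
                        # via an IPTC vocabulary term.
                        "action": "c2pa.created",
                        "digitalSourceType": (
                            "http://cv.iptc.org/newscodes/digitalsourcetype/"
                            "trainedAlgorithmicMedia"
                        ),
                    }
                ]
            },
        }
    ],
}

# A verifier would validate the signature first, then read assertions:
created = manifest_sketch["assertions"][0]["data"]["actions"][0]
print(created["action"])  # c2pa.created
```

The `trainedAlgorithmicMedia` source type is what an "AI-generated" label would key off, which is why the whole scheme collapses the moment this metadata is stripped.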
Google built it into Pixel phones. Adobe started tracking AI edits in Photoshop. OpenAI embedded C2PA data into Sora 2 videos. On paper, everything looked promising.
But reality proved messier. When Sora 2 videos spread across the internet, the "AI-generated" labels vanished. Metadata disappeared during uploads, and platforms failed to read what little survived.
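Part of why labels vanish is mechanical: in JPEGs, the C2PA manifest travels in APP11 marker segments as a JUMBF box, and any upload pipeline that re-encodes the image simply drops those segments. Below is a minimal stdlib-only sketch of checking whether a JPEG still carries such a segment; it is a presence check only, since real verification requires validating the manifest's signature:

```python
import struct

def has_c2pa_segment(jpeg_bytes: bytes) -> bool:
    """Scan a JPEG's marker segments for an APP11 (0xFFEB) segment
    carrying a JUMBF box -- where C2PA manifests are stored."""
    if not jpeg_bytes.startswith(b"\xff\xd8"):  # SOI marker
        return False
    i = 2
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            break  # lost marker sync
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:  # SOS: metadata segments all precede this
            break
        (length,) = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])
        payload = jpeg_bytes[i + 4:i + 2 + length]
        if marker == 0xEB and b"jumb" in payload:
            return True
        i += 2 + length
    return False
```

Running a check like this on a photo before and after it passes through a platform's upload pipeline makes the stripping easy to observe.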
Why It Doesn't Work
The problems run deeper than technical glitches. First, C2PA was designed as a photography metadata tool, not an AI detection system. Second, it requires universal adoption across an ecosystem where cooperation is rare.
The biggest gap? Apple. The world's most important camera maker hasn't joined C2PA, so iPhone photos and videos carry no provenance metadata at all. Samsung Galaxy phones are similarly absent. Google's Pixel support alone can't close the gap.
Social platforms present an even bigger mess. Instagram and Facebook claim C2PA compatibility but struggle with implementation. X has abandoned the system entirely. TikTok and YouTube participate only nominally.
Meanwhile, bad actors—including government agencies—freely distribute AI-manipulated content without consequence. The White House regularly posts AI-altered images, and when questioned, defiantly says it won't stop.
The Label Problem
Even if technical issues were solved, a fundamental problem remains: people hate AI labels.
Creators feel their work is devalued when marked "AI-generated." When Instagram first implemented C2PA labeling two years ago, the backlash was so severe that Meta retreated. The company learned that labeling content as AI-assisted often triggers anger from both creators and audiences.
There's also the definitional nightmare: how much AI makes something "AI content"? Modern smartphone cameras use AI for night mode, portrait effects, and basic processing. Photoshop's standard tools increasingly rely on AI. Should everything get labeled?
The Economics of Fake
The real barrier might be economic incentives. AI-generated content means more posts, more engagement, and more revenue for platforms. Why would they want to devalue this content stream?
More problematically, the biggest AI investors—Google, Meta, Microsoft—also run major content platforms. They're unlikely to aggressively label content created by technologies they've spent billions developing.
As one industry observer noted: "They're using C2PA as a merit badge while not putting real effort into making it work. Otherwise, we'd see widespread results by now."
What Comes Next
With technical solutions failing, the next phase likely involves regulation. The EU's AI Act and similar legislation may force platforms to implement robust labeling systems. But that's years away, and enforcement remains uncertain.
Meanwhile, we're entering what Mosseri calls the "age of skepticism." Some platforms like Cara promise AI-free spaces, but they lack reliable detection methods. The fundamental challenge remains: distinguishing real from fake at internet scale.
Smaller solutions might emerge. Trusted intermediaries like Getty Images or Shutterstock could verify content before distribution. Professional photographers might need new certification systems. But universal solutions seem increasingly unlikely.
This content is AI-generated based on source articles. While we strive for accuracy, errors may occur. We recommend verifying with the original source.