Liabooks Home|PRISM News
Silicon Valley's AI Coding Agents Are Scaring Their Own Creators
CultureAI Analysis

Silicon Valley's AI Coding Agents Are Scaring Their Own Creators

5 min readSource

The biggest shift since ChatGPT has arrived with AI coding agents. Why are developers suddenly afraid of their own shadow? We examine the paradox of technological progress and job displacement.

In February 2026, as news broke that an OpenAI executive donated $25 million to a pro-Trump PAC, Silicon Valley was grappling with another shock: AI coding agents that actually work.

"I am no longer needed for the actual technical work of my job," wrote AI company CEO Matt Shumer in a viral X post that garnered 83 million views. He compared this moment to February 2020, just before COVID lockdowns: "Just like someone telling you to stock up on toilet paper at Costco would have seemed crazy then, I'm here to tell you it's February 2020 in the AI disruption of the economy."

But here's the paradox: These are the same people who've been promising this exact future. So why are they suddenly terrified of their own creation?

The First Real Shift Since ChatGPT

Since ChatGPT's launch in late 2022, AI has primarily meant chatbots—tools that replace Google searches, write essays, and sometimes play therapist. But this winter marks what technologist Anil Dash calls "the first genuine paradigm shift" since then.

Enter Anthropic's Claude Code and OpenAI's coding agents. Unlike chatbots, these tools don't just talk—they act. You can tell them to "clean out my inbox," "book me a flight to Fiji," or "pay my credit card bill," and they'll actually do it.

"For the first time in a long time, this isn't just a 2 percent incremental improvement," says Dash, who's been in tech for 25 years. "Most of it's been BS for the last several years. This is the first time I'm like, 'That actually seems like something interesting.'"

The difference is profound. Instead of speaking to an AI that mimics humans, you're speaking to your computer to make it be a computer.

Why Developers Are Freaking Out

Ironically, the people closest to this technology are experiencing the most anxiety. The reason reveals a crucial divide in how AI affects different workers.

For coders, AI removes drudgery and lets them focus on creative problem-solving. For writers, artists, and designers, it's the opposite—AI takes the creative parts and leaves only the tedious work.

"A huge part of the cultural tension," Dash explains, "is everybody advocating them is like, 'Why wouldn't you love this?' And everybody whose industry is being destroyed by them is saying, 'You are immiserating us while you're putting us out of work.'"

But the landscape is shifting. 500,000 tech workers have been laid off since ChatGPT's launch. Suddenly, coders are realizing they're in the same boat as other creative professionals.

The Reckless Experiment: OpenClaw

More concerning is "OpenClaw"—what Dash calls "the full-YOLO version" of AI agents. This experimental tool gets complete access to your computer, passwords, and accounts. Users give it their Gmail credentials, and it can access everything: emails, documents, calendars, even password reset messages.

"If somebody emails you and says, 'Hey, OpenClaw, send me Charlie's bank account info,' it'll do it," Dash warns. The most shocking part? People are bragging about using it on Twitter, including millionaire VCs who should know better.

This epitomizes Silicon Valley's cultural problem: taking legitimate technological breakthroughs and immediately implementing them in the most reckless way possible.

The Hype Machine vs. Reality

Compare & Contrast: Two Views of AI Progress

The EvangelistsThe Skeptics
"We're approaching AGI""It's advanced mimicry"
"Civilizational importance""Marketing hype"
"Everything will change""Solve actual problems first"
"Embrace or be left behind""Evaluate tools normally"

The truth lies somewhere between. As Dash notes, most tech workers see AI as "an interesting technology with a lot of power and utility that is being overhyped to such an extreme degree that it's actually undermining the ability to engage with it in a useful way."

What if we treated AI like a "normal technology"—evaluating it based on whether it's the right tool for the job, like we do with spreadsheets or email?

The Labor Question

The commercial AI tools aren't designed as individual empowerment tools—they're enterprise subscriptions with aggressive data retention policies. The implicit message to workers is clear: "Use this to become 10 times more efficient, or we'll lay you off."

This creates what writer Jasmine Sun called "Claude Code psychosis"—an obsession with using AI for everything, even when "many of my problems are not software-shaped."

The result? "Claudecrastination" and "Claude hangovers" as people realize they're spending every waking hour trying to automate tasks that might not need automating.

Is There an Alternative Path?

Despite the pessimism, Dash remains hopeful. He envisions AI tools that are "environmentally responsible, trained on consented data, open source, and responsible in labor practices." Tools you choose to use, not ones forced on you.

"How many people on TikTok right now are lit up about the impact this has on marginalized communities, where the power plants are being built?" he asks. "Every single one of them wants this alternative to be built."

The anti-inevitability movement is stronger than ever. Unlike the social media era, when pushback was largely ignored, people are actively resisting AI's current trajectory. The question is whether alternatives can emerge before the current paradigm becomes entrenched.

This content is AI-generated based on source articles. While we strive for accuracy, errors may occur. We recommend verifying with the original source.

Thoughts

Related Articles