When AI Agents Started Eating the World
CultureAI Analysis

Two journalists with zero coding experience replicated a $5 billion platform in one hour using Claude Code, triggering a 20% stock crash. Here's why Silicon Valley thinks we're in February 2020 again.

One hour. That's how long it took two CNBC journalists—neither with any coding experience—to build a functional competitor to Monday.com, a project management platform valued at $5 billion. Their weapon of choice? An AI agent called Claude Code. Since the story broke, Monday.com's stock has plummeted 20%.

Silicon Valley is having déjà vu. It feels like February 2020 again: that eerie moment when an exponential force was gathering steam while most people went about their daily lives, oblivious to what was bearing down on them. Except this time, the invisible disruptor isn't a virus that will surge and ebb. It's artificial intelligence, which many believe will irreversibly transform white-collar work.

From Passive to Proactive: The Agent Revolution

Until recently, public-facing AI systems were fundamentally passive. You'd type a question to ChatGPT, get an answer, and the bot would wait for your next command. It was like texting with an infinitely knowledgeable but sycophantic encyclopedia.

These chatbots had real utility, but strict limitations. Gemini could draft your email but couldn't send it. Claude could generate code but couldn't run it, debug it, and iterate until it worked.

Then came 2025 and commercially viable AI agents. These systems receive broad objectives—"detect and fix the bug crashing our app" or "monitor regulatory filings for anything relevant to our business"—and figure out how to achieve them independently. They function less like search engines and more like junior staffers who can decide their next steps, use tools, test their work, and keep iterating until the job is done.
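To make that concrete, here is a minimal sketch of the loop such an agent runs: decide on a step, call a tool, observe the result, and repeat until the objective is met. The names (call_model, run_tool, agent) and the toy decision policy are illustrative stand-ins, not any vendor's actual API.

```python
# Illustrative agent loop: decide -> act -> observe -> iterate until done.
# call_model and run_tool are stand-ins, not a real vendor API.
import subprocess
import sys

def call_model(history: list[dict]) -> dict:
    """Stand-in for an LLM call. A real agent would send the full history
    to a model and get back either a tool request or a final answer."""
    # Toy policy: run the test suite once, then wrap up.
    if not any(msg["role"] == "tool" for msg in history):
        return {"action": "run_tests"}
    return {"action": "finish", "summary": "Ran the tests; see output in history."}

def run_tool(action: str) -> str:
    """Execute the requested tool and return its output as text."""
    if action == "run_tests":
        result = subprocess.run([sys.executable, "-m", "pytest", "-q"],
                                capture_output=True, text=True)
        return result.stdout + result.stderr
    return f"unknown tool: {action}"

def agent(objective: str, max_steps: int = 10) -> str:
    """Pursue a broad objective by looping: decide, use a tool, observe, repeat."""
    history = [{"role": "user", "content": objective}]
    for _ in range(max_steps):
        decision = call_model(history)
        if decision["action"] == "finish":
            return decision["summary"]
        history.append({"role": "tool", "content": run_tool(decision["action"])})
    return "Stopped after max_steps without finishing."

if __name__ == "__main__":
    print(agent("Detect and fix the bug crashing our app"))
```

Real systems wrap far more tooling around this loop, including file editing, browsing, and sandboxing, but the decide-act-observe cycle is the core difference from a chatbot that only answers and waits.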

SemiAnalysis called it an "inflection point." OpenAI CEO Sam Altman declared it "the first time I felt another ChatGPT moment—a clear glimpse into the future of knowledge work."

Why Wall Street Is Panicking

The CNBC experiment crystallized a terrifying realization for many incumbents: if two non-programmers can replicate a $5 billion platform in an hour, what does that mean for the entire software industry?

The math is brutal. As SemiAnalysis puts it: "One developer with Claude Code can now do what took a team a month." Claude Pro costs $20 a month and ChatGPT Pro $200, while the median US knowledge worker costs $350-500 a day. Prorated, the $200 plan comes to roughly $6-7 a day, so an agent that handles even a fraction of a worker's daily output delivers a 10-30x return.
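A quick back-of-the-envelope check, using the article's figures and an assumed share of the workflow the agent actually handles:

```python
# Back-of-the-envelope ROI check using the figures quoted above.
# The workflow share handled by the agent is an assumption for illustration.
agent_cost_per_day = 200 / 30            # $200/month plan prorated: ~$6.7/day
worker_cost_per_day = (350 + 500) / 2    # midpoint of $350-500/day

for share in (0.2, 0.3, 0.5):            # assumed fraction of daily output handled
    value_per_day = share * worker_cost_per_day
    print(f"handles {share:.0%} of the workflow -> ~{value_per_day / agent_cost_per_day:.0f}x return")
```

Handling 20-50% of a worker's daily output lands squarely in the 10-30x range the analysts cite.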

Initially, investors assumed AI agents would boost productivity at existing firms—more apps and audits with fewer workers. But recent weeks brought a darker realization: why pay Gartner for research reports or Asana for project management when Claude Code delivers both at a fraction of the cost? This reasoning triggered selloffs across software and consulting stocks, with Gartner and Asana each shedding over one-third of their value in a month.

When Automation Automates Itself

What's driving Silicon Valley's millenarian rhetoric isn't just current capabilities; it's the prospect of recursive self-improvement. The top AI labs are now using their own agents most aggressively. Engineers at Anthropic and OpenAI report that nearly 100% of their code is now AI-generated.

This suggests AI progress won't unfold as a steady march but as a chain reaction. As AI agents build their successors, each advance accelerates the next, triggering a self-reinforcing feedback loop where innovation compounds exponentially.

The data supports this trajectory. METR measures AI progress by tracking the length of coding tasks, in terms of how long they would take a skilled human, that models can complete with 50% reliability. That task length has been doubling roughly every seven months.
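Extrapolating that trend is simple arithmetic; whether it holds is exactly the open question raised in the final section below:

```python
# If the task length AI can handle doubles every 7 months, the growth
# factor after t months is 2 ** (t / 7). Pure trend extrapolation.
doubling_period_months = 7
for months in (12, 24, 36):
    factor = 2 ** (months / doubling_period_months)
    print(f"after {months} months: ~{factor:.0f}x today's task length")
```

On that curve, agents handle roughly 3x today's task length in a year and more than 30x in three.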

Exponential change confounds human intuition. On March 1, 2020, the US had only 40 confirmed COVID cases. By April 1, over 200,000 Americans were infected. AI bulls believe we're again sleeping on the speed and scale of what's coming.

The Reality Check

While AI agents will undoubtedly reshape white-collar work, Silicon Valley's apocalyptic timeline faces several challenges.

First, AI still makes mistakes. An autonomous agent might execute the right trade, send the perfect email, and fix errant code nine times out of ten. But if that tenth time it stakes your firm's capital on Dogecoin, insults your top client, or introduces security vulnerabilities, you'll probably want human oversight on high-stakes projects.

Second, institutional inertia slows technology adoption. Though generators became common in the late 19th century, it took decades for factories to reorganize around electric power. Legacy corporations may take longer to adjust than tech firms, and sectors like healthcare and law face additional regulatory constraints.

Most critically, it's unclear whether AI capabilities will continue growing exponentially. Many past technologies enjoyed compounding returns before plateauing.


