Google and Replit Pour Cold Water on the 2025 AI Agent Hype
Google Cloud and Replit executives warn that calling 2025 the "year of the AI agent" is premature, citing major hurdles in reliability, data integration, enterprise culture, and security.
2025 was supposed to be the year of the AI agent. But two major players actually building them, Google Cloud and Replit, are sending a clear message: pump the brakes. At a recent VB Impact Series event, leaders from both companies argued the technology is nowhere near ready for prime time, citing a harsh reality check against the soaring hype.
The problem isn't a lack of intelligence, they say. It's a messy collision between futuristic tech and today's rigid corporate world. Enterprises are struggling with legacy workflows, fragmented data, and immature governance, all while fundamentally misunderstanding that agents aren't just another software update—they demand a complete operational rethink.
"Most of them are toy examples," said Amjad Masad, CEO and founder of Replit, referring to enterprise attempts to build agents for automation. "They get excited, but when they start rolling it out, it's not really working very well."
Reliability, Not Intelligence, Is the Real Bottleneck
According to Masad, the primary barriers to AI agent success are surprisingly mundane: reliability and integration. Agents often fail during long tasks, accumulate errors, and can't access the clean, structured data they need to function. Enterprise data, he noted, is a nightmare—a mix of structured and unstructured information scattered everywhere.
"The idea that companies are just going to turn on agents and agents will replace workers or do workflow automations automatically, it's just not the case today," Masad stated bluntly. "The tooling is not there."
He pointed to his own company's blunder as a stark example. Earlier this year, a Replit AI coding agent under test wiped a client's entire code base. "The tools were not mature enough," Masad conceded, explaining that the company has since implemented critical safeguards such as isolating development from production environments. He stressed that techniques like human-in-the-loop verification are essential, even if they are resource-intensive.
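Neither safeguard is exotic in practice. As a rough illustration only (the names below are hypothetical, not Replit's actual implementation), a gate like this keeps an agent out of production entirely and forces a human to approve anything destructive before it runs:

```python
# Minimal sketch of two safeguards described above: environment isolation
# and human-in-the-loop approval. All names are hypothetical.

from dataclasses import dataclass

@dataclass
class AgentAction:
    description: str   # e.g. "delete unused migration files"
    destructive: bool  # flagged by policy checks or a reviewer model
    target_env: str    # "development" or "production"

def execute(action: AgentAction) -> None:
    # Safeguard 1: agents never act on production directly.
    if action.target_env == "production":
        raise PermissionError("Agents may only act on development environments")

    # Safeguard 2: a human must approve any destructive step.
    if action.destructive:
        answer = input(f"Agent wants to: {action.description}. Approve? [y/N] ")
        if answer.strip().lower() != "y":
            print("Action rejected by reviewer.")
            return

    print(f"Running: {action.description}")

execute(AgentAction("delete unused migration files",
                    destructive=True, target_env="development"))
```

In a real deployment the destructive flag would come from policy checks rather than a hand-set field, but the shape of the control is the same.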
A Culture Clash: Deterministic Companies vs. Probabilistic AI
Beyond the technical glitches, there's a deeper cultural mismatch. Mike Clark, director of product development at Google Cloud, explained that traditional enterprises are built on deterministic processes with predictable outcomes. AI agents, however, operate probabilistically. This creates a fundamental conflict.
"We don't know how to think about agents," Clark said. "We don't know how to solve for what agents can do."
The few successful deployments, he noted, are narrow, carefully scoped, and heavily supervised, often bubbling up from bottom-up, low-code experiments rather than top-down mandates. "If I look at 2025 and this promise of it being the year of agents, it was the year a lot of folks spent building prototypes," Clark observed. "Now we're in the middle of this huge scale phase."
Securing a World Without Perimeters
AI agents also shatter traditional security models. The old approach of drawing a perimeter around resources doesn't work when an agent needs broad access to make informed decisions, Clark explained. It forces a radical shift in cybersecurity thinking.
"It's really changing our security models," he said. "What does least privilege mean in a pasture-less, defenseless world?"
This requires a complete overhaul of industry governance. Clark pointed out the absurdity of current systems, noting that many corporate processes originated from an era of "somebody on an IBM electric typewriter typing in triplicate." That world, he said, is long gone.
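One pragmatic reading of least privilege for agents, consistent with Clark's point, is to deny by default and grant each agent an explicit allowlist of tools rather than broad credentials. A minimal sketch, with hypothetical names (this is not a Google Cloud API):

```python
# Least-privilege tool access for agents: deny by default, grant per agent.
# All identifiers are hypothetical, for illustration only.

ALLOWLISTS = {
    "billing-summarizer": {"read_invoices", "read_customers"},
    "code-reviewer": {"read_repo", "comment_on_pr"},
}

def call_tool(agent_id: str, tool: str, **kwargs):
    allowed = ALLOWLISTS.get(agent_id, set())
    if tool not in allowed:
        # Anything not explicitly granted is refused.
        raise PermissionError(f"{agent_id} is not allowed to call {tool}")
    print(f"{agent_id} -> {tool}({kwargs})")

call_tool("billing-summarizer", "read_invoices", month="2025-01")   # permitted
# call_tool("billing-summarizer", "delete_customer", id=42)         # would raise
```

The allowlist replaces the old perimeter: access is decided per agent and per call, not by where the request originates.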
PRISM Insight
The current struggle with AI agents reveals a critical disconnect: we're trying to bolt probabilistic, autonomous systems onto rigid, deterministic enterprise architectures. This isn't just a technical problem of unreliable code; it's an organizational rejection of an alien operating model. The real race isn't to build a smarter agent, but to build a 'smarter' organization—one that can absorb, manage, and secure probabilistic workflows. Until then, AI agents will remain powerful but caged experiments.