Google and Replit Pour Cold Water on the 2025 AI Agent Hype
Google Cloud and Replit executives warn that the hype for 2025 being the "year of the AI agent" is premature. They cite major hurdles in reliability, data integration, enterprise culture, and security.
2025 was supposed to be the year of the AI agent. But two major players actually building them, Google Cloud and Replit, are sending a clear message: pump the brakes. At a recent VB Impact Series event, leaders from both companies argued the technology is nowhere near ready for prime time, delivering a harsh reality check to the soaring hype.
The problem isn't a lack of intelligence, they say. It's a messy collision between futuristic tech and today's rigid corporate world. Enterprises are struggling with legacy workflows, fragmented data, and immature governance, all while fundamentally misunderstanding that agents aren't just another software update—they demand a complete operational rethink.
"Most of them are toy examples," said Amjad Masad, CEO and founder of Replit, referring to enterprise attempts to build agents for automation. "They get excited, but when they start rolling it out, it's not really working very well."
Reliability, Not Intelligence, Is the Real Bottleneck
According to Masad, the primary barriers to AI agent success are surprisingly mundane: reliability and integration. Agents often fail during long tasks, accumulate errors, and can't access the clean, structured data they need to function. Enterprise data, he noted, is a nightmare—a mix of structured and unstructured information scattered everywhere.
"The idea that companies are just going to turn on agents and agents will replace workers or do workflow automations automatically, it's just not the case today," Masad stated bluntly. "The tooling is not there."
He pointed to his own company's blunder as a stark example. Earlier this year, a Replit AI coder being tested wiped a client's entire code base. "The tools were not mature enough," Masad conceded, explaining that the company has since implemented critical safeguards like isolating development from production environments. He stressed that techniques like human-in-the-loop verification are essential, even if they are resource-intensive.
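To make that concrete: a human-in-the-loop gate can be as simple as refusing to run destructive actions without explicit sign-off, while confining the agent to a development sandbox. The sketch below is illustrative only; it is not Replit's implementation, and the `AgentAction` structure and action names are hypothetical.

```python
# Minimal sketch of two safeguards described above: environment isolation and
# human-in-the-loop approval for destructive agent actions. Hypothetical names.

from dataclasses import dataclass

DESTRUCTIVE_ACTIONS = {"delete_file", "drop_table", "force_push"}

@dataclass
class AgentAction:
    name: str          # e.g. "delete_file"
    target: str        # e.g. "src/main.py"
    environment: str   # "development" or "production"

def require_human_approval(action: AgentAction) -> bool:
    """Block until a human reviewer explicitly approves the action."""
    answer = input(f"Agent wants to run {action.name} on {action.target} "
                   f"({action.environment}). Approve? [y/N] ")
    return answer.strip().lower() == "y"

def execute(action: AgentAction) -> None:
    # Safeguard 1: the agent never touches production directly.
    if action.environment == "production":
        raise PermissionError("Agent actions are confined to development sandboxes.")
    # Safeguard 2: destructive operations require explicit human sign-off.
    if action.name in DESTRUCTIVE_ACTIONS and not require_human_approval(action):
        print(f"Skipped {action.name}: not approved.")
        return
    print(f"Running {action.name} on {action.target}...")  # dispatch to real tooling here
```

The design reflects Masad's trade-off: the expensive step, a human review, is reserved for the small set of actions that can actually destroy data.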
A Culture Clash: Deterministic Companies vs. Probabilistic AI
Beyond the technical glitches, there's a deeper cultural mismatch. Mike Clark, director of product development at Google Cloud, explained that traditional enterprises are built on deterministic processes with predictable outcomes. AI agents, however, operate probabilistically. This creates a fundamental conflict.
"We don't know how to think about agents," Clark said. "We don't know how to solve for what agents can do."
The few successful deployments, he noted, are narrow, carefully scoped, and heavily supervised, often bubbling up from bottom-up, low-code experiments rather than top-down mandates. "If I look at 2025 and this promise of it being the year of agents, it was the year a lot of folks spent building prototypes," Clark observed. "Now we're in the middle of this huge scale phase."
Securing a World Without Perimeters
AI agents also shatter traditional security models. The old approach of drawing a perimeter around resources doesn't work when an agent needs broad access to make informed decisions, Clark explained. It forces a radical shift in cybersecurity thinking.
"It's really changing our security models," he said. "What does least privilege mean in a pasture-less, defenseless world?"
This requires a complete overhaul of industry governance. Clark pointed out the absurdity of current systems, noting that many corporate processes originated from an era of "somebody on an IBM electric typewriter typing in triplicate." That world, he said, is long gone.
PRISM Insight
The current struggle with AI agents reveals a critical disconnect: we're trying to bolt probabilistic, autonomous systems onto rigid, deterministic enterprise architectures. This isn't just a technical problem of unreliable code; it's an organizational rejection of an alien operating model. The real race isn't to build a smarter agent, but to build a 'smarter' organization—one that can absorb, manage, and secure probabilistic workflows. Until then, AI agents will remain powerful but caged experiments.