AGI Is Already Here — You're Just Measuring It Wrong
TechAI Analysis

5 min read

Databricks CTO Matei Zaharia just won computing's top prize. His take on AGI, the security nightmare hiding inside AI agents, and why the real AI revolution is about research, not chatbots.

What if the reason we can't agree on whether AGI has arrived is that we're asking the wrong question entirely?

Matei Zaharia, co-founder and CTO of Databricks, thinks that's exactly the problem. And given that he almost missed the email telling him he'd won computing's most prestigious prize for early-career researchers, he seems like someone who doesn't spend much time chasing headlines.

From a PhD Side Project to a $134 Billion Empire

The backstory matters here. In 2009, Zaharia was a PhD student at UC Berkeley, working under professor Ion Stoica, wrestling with a problem that plagued every data team of the era: big data processing was painfully slow. His solution was an open-source framework called Apache Spark — a way to dramatically accelerate distributed data computation. He was 28 years old.

Spark didn't just solve a technical problem. It reshaped an industry. Big data in 2009 occupied the same cultural space that AI does today — every company wanted it, few knew how to make it work. Spark made it work. Zaharia became a minor celebrity in Silicon Valley, and the technology became the foundation for Databricks, the company he co-founded.

Fast forward to now: Databricks has raised over $20 billion, carries a valuation of $134 billion, and reported $5.4 billion in annual revenue. It has quietly become one of the most important data infrastructure companies in the world — the plumbing beneath a significant chunk of enterprise AI.

On Wednesday, the Association for Computing Machinery (ACM) recognized Zaharia's collective contributions with its 2026 Prize in Computing, which comes with a $250,000 cash award. He's donating it all to charity, destination TBD.

"Stop Applying Human Standards to AI Models"

Here's where Zaharia gets interesting — and a little uncomfortable.

"AGI is here already," he told TechCrunch. "It's just not in a form that we appreciate."

Before you reach for the skepticism, hear the follow-up: "I think the bigger point is we should stop trying to apply human standards to these AI models."

This is a more nuanced claim than it first appears. Take the bar exam. A human passes it by integrating years of legal knowledge into coherent judgment. An AI can ingest the same corpus in minutes and answer knowledge questions correctly. But does that constitute understanding law? Zaharia says conflating the two — AI capability with human-style cognition — isn't just philosophically sloppy. It creates real danger.

His example: the AI agent OpenClaw. "On the one hand, it's awesome. You can do so many things with it. It just does them automatically." But it's also, in his words, "a security nightmare." OpenClaw is designed to behave like a trusted human assistant — which means it's built to access the things a trusted assistant would access. Passwords. Browser sessions. Bank accounts. The risk isn't science fiction: an agent that mimics human trust patterns, given access to your logged-in financial accounts, can spend your money without authorization. "Yeah, it's not a little human there," Zaharia said flatly.

The problem isn't the technology. It's the design philosophy that treats AI as a digital person rather than a fundamentally different kind of tool.

The AI Use Case Nobody's Talking About Enough

Zaharia is an associate professor at UC Berkeley in addition to his CTO role, and his academic lens shapes where he thinks AI's real value lies — and it's not in the chatbot wars.

"Not that many people need to build applications, but lots of people need to understand information."

He draws a parallel to vibe coding — the recent trend of non-programmers building functional software through natural language prompts. That democratized software prototyping. Zaharia thinks the next wave is AI that democratizes research: accurate, hallucination-free tools that let anyone — a nurse, a small business owner, a policy analyst — do the kind of deep information synthesis that previously required specialized expertise.

He's already seeing it at Berkeley. Students are using AI to simulate molecular-level changes and predict their biological effectiveness. He envisions AI that can tell you what every rattle in your car means, or that extends analysis beyond text and images to radio signals and microwaves. "The thing that I'm most excited about is what I'd call AI for search, but specifically for research or engineering," he said.

This is a quieter vision than AGI domination narratives, but arguably more consequential for the average person.

Three Ways to Read This

If you're a researcher or data scientist: Zaharia's framing suggests the most durable AI applications won't be general-purpose assistants but domain-specific research accelerators. The companies building accurate, verifiable AI for scientific and engineering workflows are playing a longer game than the chatbot builders.

If you're a tech executive: The OpenClaw warning is a signal worth taking seriously. As AI agents gain access to enterprise systems, the security architecture assumptions built around human users may be dangerously inadequate. "Trusted assistant" is a design metaphor that carries real liability.

If you're an investor: Databricks at $134 billion is betting that the data layer — not the model layer — is where durable enterprise value accumulates. Zaharia's prize and profile reinforce that narrative, but the question of whether data infrastructure or model capability commands the premium is far from settled.

This content is AI-generated based on source articles. While we strive for accuracy, errors may occur. We recommend verifying with the original source.
