Why Nvidia Just Started Selling CPUs (It's Not What You Think)

Nvidia's multi-billion dollar Meta deal reveals a strategic shift from GPU dominance to full-system computing. The move signals broader changes in AI infrastructure and competitive dynamics.

For 20 years, if you said "Nvidia," people thought "graphics cards." Yesterday, that changed. The chipmaker's multi-billion dollar deal with Meta includes something unexpected: massive purchases of Nvidia's Grace CPUs as standalone chips.

This isn't just hardware expansion. It's a strategic pivot that reveals how AI computing is evolving—and why Nvidia's customers are getting nervous.

The CPU Comeback Nobody Saw Coming

Meta just became the first tech giant to buy Nvidia CPUs at scale, alongside "millions" of Blackwell and Rubin GPUs. The timing isn't coincidental. As AI software becomes more sophisticated, particularly agentic AI, it needs CPUs to handle the orchestration and general-purpose logic that GPUs aren't built for.

Consider Microsoft's data centers supporting OpenAI: tens of thousands of CPUs now process the petabytes of data that GPUs generate. Without AI workloads, this CPU demand wouldn't exist.

"The reason the industry is bullish on CPUs within data centers right now is agentic AI, which puts new demands on general-purpose CPU architectures," says Ben Bajarin, CEO of Creative Strategies.

The Great AI Diversification

Nvidia's CPU push comes as its biggest customers quietly reduce their dependence on its hardware. The numbers tell the story:

  • OpenAI: Signed deals with Broadcom for custom chips and AMD for 6 gigawatts of computing power
  • Google: Relies primarily on homegrown TPUs, reportedly considering selling them to Meta
  • Anthropic: Uses chips from Nvidia, Google, and Amazon simultaneously
  • OpenAI, again: a recent $10 billion commitment to Cerebras for ultra-low-latency computing

"The AI labs are looking to diversify because the needs are changing, but it's still mostly that they just can't access enough GPUs," Bajarin explains. "They're going to look wherever they can get the chips."

From Monopoly to Platform Play

Nvidia's response? Go big or go home. The company spent $20 billion, its largest deal ever, to license technology from Groq, a startup focused on low-latency AI computing. The deal signals Nvidia's shift from selling individual components to providing complete computing ecosystems.

Meta's commitment is staggering: the company plans to spend $115-135 billion on AI infrastructure this year, up from $72.2 billion in 2024. Even at the low end of that range, that's nearly a 60% increase in a single year.
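A quick back-of-envelope check on that figure, using only the numbers above:

\[
\frac{115 - 72.2}{72.2} \approx 0.59, \qquad \frac{135 - 72.2}{72.2} \approx 0.87
\]

So the low end of Meta's range is a roughly 59% jump over 2024 spending, and the high end would be closer to 87%.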

Two years ago, Nvidia CEO Jensen Huang estimated the company's business was "40% inference, 60% training." Today's deals suggest that balance is shifting as companies move from building AI models to actually running them at scale. Inference rewards low latency and low cost per query, which helps explain why specialists like Groq and Cerebras figure so prominently in the recent deals.

The Real Competition Starts Now

Nvidia's CPU strategy isn't just about capturing more revenue—it's about preventing customer defection. When OpenAI CEO Sam Altman appeared on stage with AMD's Lisa Su last year, it sent a clear message: the AI industry won't accept single-vendor dependence forever.

But CPUs alone won't solve Nvidia's competitive challenges. As Bajarin notes, "If you're one of the hyperscalers, you're not going to be running all of your inference computing on CPUs. You just need whatever software you're running to be fast enough on the CPU to interact with the GPU architecture."

