TechAI Analysis

Five Cracks in the AI Supply Chain

8 min read

At Milken 2026, five AI insiders—from the CEO of ASML to a quantum physicist challenging LLMs—laid out the physical, energy, and geopolitical limits the AI boom is running into.

Google Cloud's revenue backlog nearly doubled in a single quarter—from $250 billion to $460 billion. The COO announcing that number didn't sound triumphant. He sounded measured. "The demand is real," he said. That calm is worth sitting with. It means the orders are there, the money is there, and the product is not.

At the Milken Global Conference in Beverly Hills this week, five people who collectively touch every layer of the AI supply chain sat down for a conversation that cut against the industry's default optimism. ASML CEO Christophe Fouquet, whose company holds a monopoly on the extreme ultraviolet lithography machines without which modern chips cannot be made. Francis deSouza, COO of Google Cloud, overseeing one of the largest infrastructure bets in corporate history. Qasar Younis, CEO of Applied Intuition, a $15 billion physical AI company now deep in defense. Dmitry Shevelenko, chief business officer of Perplexity, which has evolved from search engine to what it calls a "digital worker." And Eve Bodnia, a quantum physicist who left academia to challenge the foundational architecture most of the AI industry takes for granted.

What they described, collectively, was an industry running hard into limits that no amount of venture capital can simply spend its way past.

The Bottlenecks Are Physical, Not Financial

Fouquet was direct. Despite a "huge acceleration" in chip manufacturing, he holds a "strong belief" that for the next two to five years, the market will be supply-limited. The hyperscalers—Google, Microsoft, Amazon, Meta—are not going to receive all the chips they're paying for. Full stop.

For deSouza, the numbers illustrate just how acute the mismatch is. Google Cloud crossed $20 billion in quarterly revenue, growing at 63%. The backlog nearly doubled in one quarter. The infrastructure to fulfill that demand simply does not yet exist.

For Younis, the constraint is different but equally stubborn. Applied Intuition builds autonomy systems for vehicles, drones, mining equipment, and defense platforms. His bottleneck isn't silicon—it's real-world data. Synthetic simulation, however sophisticated, cannot fully replace the information you can only gather by sending machines into the world and watching what happens. "There will be a long time before you can fully train models that run on the physical world synthetically," he said. No shortcut has been found. None appears imminent.

The Energy Ceiling—and One Radical Response

Behind the chip shortage sits the energy problem. deSouza confirmed that Google is treating orbital data centers as a genuine option. The logic: space offers more abundant energy. The complication: space is a vacuum, which eliminates convection cooling. Heat can only be shed via radiation—far slower and harder to engineer than the air and liquid cooling systems that terrestrial data centers depend on. Google is pressing forward anyway.
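The scale of the radiation problem is easy to underestimate. A back-of-envelope sketch using the Stefan-Boltzmann law makes it concrete; the panel temperature, emissivity, and power figures below are illustrative assumptions, not anything Google has disclosed.

```python
# Back-of-envelope: in vacuum there is no convection, so heat can leave only
# by radiation, governed by the Stefan-Boltzmann law P = eps * sigma * A * (T^4 - T_env^4).
# All numbers here are illustrative assumptions.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 * K^4)

def radiator_area_m2(heat_watts, panel_temp_k=350.0,
                     background_temp_k=3.0, emissivity=0.9):
    """Radiator area needed to shed `heat_watts` purely by radiation."""
    flux = emissivity * SIGMA * (panel_temp_k**4 - background_temp_k**4)
    return heat_watts / flux  # W divided by W/m^2 gives m^2

# A modest 1 MW orbital compute module (large terrestrial AI data centers
# draw tens to hundreds of MW) at these assumed temperatures:
area = radiator_area_m2(1e6)
print(f"{area:,.0f} m^2 of radiator per MW")
```

Even under these generous assumptions, a single megawatt demands on the order of a thousand square meters of radiator surface, which is why radiation-only cooling is the engineering crux of the orbital idea.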

The more near-term answer deSouza offered is vertical integration. By co-engineering its full stack—custom TPU chips designed in tandem with the models that will run on them—Google claims it achieves efficiency in flops per watt that commodity hardware configurations cannot match. "Running Gemini on TPUs is much more energy efficient than any other configuration," he said, because the chip designers know what the model needs before it ships.

Fouquet offered the industry's uncomfortable corollary: "Nothing can be priceless." More compute means more energy, and energy always has a cost. The current moment—extraordinary capital investment driven by strategic necessity—is not a permanent exception to that rule.

A Physicist Who Thinks the Whole Architecture Might Be Wrong


The most structurally disruptive argument came from the smallest company on stage. Bodnia's startup, Logical Intelligence, is built on energy-based models (EBMs)—a class of AI that, rather than predicting the next token in a sequence, learns a scalar energy function that scores how compatible a candidate answer is with the data, in effect learning the underlying rules that govern it. She argues this is closer to how human cognition actually works.

"Language is a user interface between my brain and yours," she said. "The reasoning itself is not attached to any language."

Her largest model runs at 200 million parameters—orders of magnitude smaller than the hundreds of billions in leading LLMs. She claims it runs thousands of times faster, and crucially, it can update its knowledge as data changes without full retraining. For domains like chip design and robotics, where a system needs to understand physical rules rather than linguistic patterns, she argues EBMs are the more natural fit.
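The core idea can be sketched in a toy form. This is an illustration of energy-based inference in general, not Logical Intelligence's actual method, and the "physical rules" problem below is hypothetical: the model scores candidate answers and picks the lowest-energy one, rather than generating a sequence token by token.

```python
# Toy energy-based inference (illustrative only): assign each candidate
# configuration a scalar "energy" and select the most compatible one,
# instead of predicting tokens in order.
import itertools

def energy(assignment, constraints):
    """Count violated rules; fully compatible configurations score zero."""
    return sum(1 for rule in constraints if not rule(assignment))

# Hypothetical mini world: place two robots on a 3-cell track so they
# neither collide nor occupy the charging cell (cell 0).
constraints = [
    lambda a: a["r1"] != a["r2"],  # no collision
    lambda a: a["r1"] != 0,        # cell 0 reserved for charging
    lambda a: a["r2"] != 0,
]

candidates = [{"r1": x, "r2": y}
              for x, y in itertools.product(range(3), repeat=2)]
best = min(candidates, key=lambda a: energy(a, constraints))
print(best, "energy =", energy(best, constraints))
```

Updating the system when the world changes means editing or reweighting rules, not retraining a sequence model—which is the property Bodnia claims matters for chip design and robotics.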

"When you drive a car, you're not searching for patterns in any language. You look around you, understand the rules about the world around you, and make a decision."

It's a claim that would be easy to dismiss—except that Yann LeCun, Meta's former chief AI scientist and one of the field's most credentialed skeptics of the LLM paradigm, joined Logical Intelligence as founding chair of its technical research board earlier this year. The AI field is beginning to ask whether scale alone is sufficient. Bodnia is betting it isn't.

Agents, Trust, and the CISO Problem

Shevelenko spent much of the conversation explaining how Perplexity has moved from a search product to what it now calls a "digital worker." Its newest offering, Perplexity Computer, is designed not as a tool a knowledge worker uses, but as staff a knowledge worker directs.

"Every day you wake up and you have a hundred staff on your team," he said. "What are you going to do to make the most of it?"

The obvious follow-up: what happens when those staff make mistakes inside corporate systems? Shevelenko's answer was granularity. Enterprise administrators can specify not just which tools an agent can access, but whether those permissions are read-only or read-write—a distinction that matters enormously when agents are acting autonomously inside live systems. Comet, Perplexity's computer-use agent, presents a plan and asks for approval before acting. Some users find the friction annoying. Shevelenko considers it non-negotiable.
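The read-only versus read-write distinction is simple to state but easy to get wrong in practice. A minimal sketch of what such a policy check might look like—the tool names and policy structure are invented for illustration, not Perplexity's actual API:

```python
# Hypothetical sketch of per-tool agent permissions with a read-only vs
# read-write distinction. Not a real Perplexity interface.
from dataclasses import dataclass
from enum import Enum

class Access(Enum):
    NONE = 0
    READ = 1
    READ_WRITE = 2

@dataclass
class AgentPolicy:
    grants: dict  # tool name -> Access level, set by an enterprise admin

    def can(self, tool: str, writes: bool) -> bool:
        level = self.grants.get(tool, Access.NONE)  # default deny
        if writes:
            return level is Access.READ_WRITE
        return level in (Access.READ, Access.READ_WRITE)

# An admin lets the agent read the CRM but fully operate only the calendar.
policy = AgentPolicy(grants={"crm": Access.READ,
                             "calendar": Access.READ_WRITE})
assert policy.can("crm", writes=False)       # reading CRM records: allowed
assert not policy.can("crm", writes=True)    # editing CRM records: blocked
assert policy.can("calendar", writes=True)   # scheduling meetings: allowed
```

The default-deny lookup is the point: an agent granted nothing can touch nothing, and write access must be conferred explicitly per tool.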

His reasoning has a personal dimension. After joining the board of Lazard—a 180-year-old firm whose entire value proposition rests on client trust—he found himself unexpectedly sympathetic to the conservative instincts of a CISO. "Granularity is the bedrock of good security hygiene," he said. It's a notable admission from someone selling the future of autonomous agents.

Physical AI Is a Sovereignty Question, Not Just a Safety One

Younis made perhaps the panel's most geopolitically significant point. The internet spread as American technology and faced real pushback only when its offline consequences became visible—the Ubers and DoorDashes disrupting local incumbents. Physical AI is different from the start.

Autonomous vehicles, defense drones, agricultural machines—these operate in the real world in ways governments cannot ignore. They raise questions about safety, data collection, and who ultimately controls systems operating inside a nation's borders. "Almost consistently, every country is saying: we don't want this intelligence in a physical form in our borders, controlled by another country."

His data point was striking: fewer nations can currently field a robotaxi than possess nuclear weapons.

Fouquet framed the US-China dynamic through the lens he knows best. DeepSeek's release earlier this year rattled parts of the industry. But China's AI progress is constrained below the model layer. Without access to EUV lithography, Chinese chipmakers cannot manufacture the most advanced semiconductors. Models built on older hardware operate at a compounding disadvantage regardless of software quality. "Today, in the United States, you have the data, you have the computing access, you have the chips, you have the talent. China does a very good job on the top of the stack, but is lacking some elements below."

The export controls on ASML's equipment—a policy decision made in Washington and enforced through The Hague—are, in Fouquet's implicit framing, one of the most consequential technology policy levers currently in play.

This content is AI-generated based on source articles. While we strive for accuracy, errors may occur. We recommend verifying with the original source.
