Three AI CEOs Sat Down. They Said Almost Nothing.
TechAI Analysis


5 min read

Sam Altman, Dario Amodei, and Demis Hassabis all appear in a new documentary about AI's future. The access is impressive. The answers are not. A critical look at what the film reveals—and avoids.

"You shouldn't trust me." Sam Altman said this on camera. Then the line of questioning stopped.

That exchange—roughly four seconds of a feature-length documentary—may be the most revealing moment in The AI Doc: Or How I Became an Apocaloptimist, which opens in theaters today. The film secured what few journalists have managed: sit-down interviews with the CEOs of OpenAI, Anthropic, and Google DeepMind. And yet, when the cameras rolled, the men who are reshaping civilization largely said what they always say.

The Access Was Real. The Answers Were Not.

Director Daniel Roher earned his credibility the hard way. His 2022 documentary Navalny won an Academy Award by getting close to a man the Kremlin wanted silenced. For this project, he turned his lens on AI—framed through the anxiety of an expectant father wondering what world his son will inherit. It's a human hook, and it works.

The film delivers a genuinely useful crash course in AI fundamentals, insisting on plain language over startup jargon. Visually, it's warmer than you'd expect: Roher's own drawings and paintings run throughout, and producer Daniel Kwan—the Oscar-winning co-director of Everything Everywhere All at Once—brings a whimsical stop-motion sensibility that softens the apocalyptic undertow.

The early interviews land hard. Tristan Harris, co-founder of the Center for Humane Technology, puts it starkly: "I know people who work on AI risk who don't expect their children to make it to high school." He's describing a scenario in which AI dismantles the infrastructure of traditional education entirely. It's the kind of claim that demands follow-up.

Then the CEOs walk in, and the film changes register.

What Happens When Billionaires Enter the Frame


Altman, Amodei, and Hassabis all appear. Mark Zuckerberg and Elon Musk were reportedly invited; neither showed. The three who did participate offer a familiar blend: sober-sounding caution wrapped around barely examined optimism. Venture capitalist Reid Hoffman acknowledges that AI's benefits will come with "unspecified harms." The film doesn't press on what, exactly, those harms might be, or who will bear them.

The question of how today's large language models—systems that hallucinate, that their own creators admit they don't fully understand—might give rise to artificial general intelligence (AGI) capable of outstripping human cognition receives almost no scrutiny. When executives compare AI's near-term implications to the advent of nuclear weapons, the film treats this as a data point rather than an invitation to interrogate the people making that comparison.

The most telling moment comes when Roher asks Altman why anyone should trust him to guide AI development given its stakes. "You shouldn't," Altman says. The camera moves on. It's either the most honest thing a tech CEO has said in years, or the most convenient deflection—and the film doesn't seem to know which.

A Strange Conclusion

The documentary ends by calling viewers to action: ordinary citizens, it suggests, can pressure governments and corporations to steer AI toward the "safest, narrowest path toward prosperity for all." The sequence plays over footage of the Golden Gate Bridge being built, as though a suspension bridge were shaped by popular referendum.

This is where the film's internal tension becomes most visible. In press interviews, Roher has described the AI economy as a "Ponzi scheme." The gap between that off-camera Roher and the one on screen is perhaps the documentary's most honest unintentional disclosure. The film needs a hopeful ending (there's a baby coming), and the presence of powerful men who admit they don't understand their own systems seems to require a gentle touch.

But consider what's being let slide. When a CEO says "I don't fully understand what goes on inside the models I've already deployed at scale," the film frames this as humility. It could just as reasonably be framed as a confession of negligence. The gap between those two interpretations is where AI governance actually lives, and the documentary doesn't stay there long enough.

Who's Actually Driving This

The film is accurate in diagnosing the structural problem: an unregulated AI race driven by market incentives and geopolitical competition, concentrating wealth and decision-making power in a vanishingly small circle. It's less accurate in its implied solution. The suggestion that public pressure can meaningfully redirect trillion-dollar technology trajectories assumes the existence of accountability mechanisms that, in most jurisdictions, don't yet exist.

Regulators in the EU have moved furthest with the AI Act, which came into force in 2024. The US remains fragmented, with no federal AI legislation passed as of this writing. China has sector-specific rules but no comprehensive framework. The gap between the speed of deployment and the speed of governance is not a minor detail—it is the central fact of this moment.

This content is AI-generated based on source articles. While we strive for accuracy, errors may occur. We recommend verifying with the original source.


PRISM

Advertise with Us

[email protected]