AI Has No Consciousness. That's Exactly Why It's Dangerous.

AI Analysis · 8 min read

The AI consciousness debate is settled. But the question that actually matters — whether human-AI arrangements grow or erode human judgment — remains almost entirely unasked.

A consultant used AI to write a strategy report. The output was fast, polished, and completely missed the client's actual problem. The AI didn't fail. The human stopped judging.

That quiet moment — unremarkable, unlogged, repeated in thousands of offices every day — may matter more than anything currently being debated in AI ethics circles.

The loudest question in AI right now is: Can machines be conscious? Neuroscientist Anil Seth answers it convincingly in his Berggruen Prize-winning essay, "The Mythology of Conscious AI." No. Brains are not meat-based Turing machines. Silicon cannot replicate biological consciousness. The liberal humanist order holds. Humans remain on top. Breathe easy.

But the reassurance conceals a more consequential question — one that will determine whether AI becomes an engine of institutional intelligence or a slow, invisible accelerant of institutional decay.

What the Consciousness Debate Keeps Hidden

Seth's framework asks what AI is. The question that actually shapes outcomes is different: What does the human-AI arrangement produce? And under what conditions does human judgment grow — or quietly disappear?

The data on this is striking, and largely ignored.

In a randomized clinical trial conducted across Stanford, Beth Israel Deaconess Medical Center, and the University of Virginia, physicians given access to GPT-4 alongside conventional resources were no more accurate in their diagnoses than physicians without it. GPT-4 alone, meanwhile, outperformed both groups by more than 15 percentage points. Same AI. Same clinical task. No measurable benefit — because the technology was simply bolted onto existing workflows with no designed interaction, no structured dialogue, no preservation of the clinician's independent reasoning.

When the collaboration was redesigned — requiring clinician and AI to generate independent assessments, then structuring a dialogue that surfaced their disagreements — diagnostic accuracy climbed from 75% without AI to 82–85%. The data hadn't changed. The arrangement had.
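
The study describes an arrangement, not an implementation, but the logic of that arrangement is simple enough to sketch. Below is a minimal, purely illustrative Python sketch of the independent-then-reconcile pattern; `ask_model` and every other name here are hypothetical placeholders, not anything from the trial.

```python
# Illustrative sketch of an "independent assessments, then structured
# disagreement" workflow. `ask_model` is a hypothetical placeholder;
# wire it to whatever LLM API you actually use.

def ask_model(prompt: str) -> str:
    # Placeholder: returns a canned string so the sketch runs end to end.
    return f"[model response to: {prompt[:48]}...]"

def structured_assessment(case_notes: str, human_assessment: str) -> dict:
    # Step 1: the model forms its view WITHOUT seeing the human's
    # reasoning, so the two judgments stay independent.
    ai_assessment = ask_model(
        f"Give your own differential diagnosis for this case:\n{case_notes}"
    )

    # Step 2: make disagreement explicit instead of letting either side
    # silently defer to the other.
    disagreements = ask_model(
        "List every point where these two independent assessments diverge.\n"
        f"Human: {human_assessment}\nAI: {ai_assessment}"
    )

    # Step 3: the human resolves the conflicts; the structure preserves
    # their independent reasoning rather than replacing it.
    return {
        "human": human_assessment,
        "ai": ai_assessment,
        "disagreements": disagreements,
    }

result = structured_assessment(
    case_notes="58-year-old, chest pain, normal troponin...",
    human_assessment="Likely musculoskeletal; rule out GERD.",
)
print(result["disagreements"])
```

The design choice doing the work is the sequencing: because neither judgment is formed in view of the other, deference cannot happen silently; it has to survive an explicit comparison first.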

Researchers at the Stockholm School of Economics and the University of Geneva deployed the same AI system to pharmaceutical sales professionals. When the system was tailored to the expert's cognitive style — preserving expert judgment, structuring authority and incentives accordingly — client meetings rose more than 40% and sales rose 16%. When the same system was imposed without regard for how the human thinks, sales fell roughly 20% below the no-AI baseline. Worse than no AI at all.

The sharpest evidence comes from a study of 758 Boston Consulting Group consultants by researchers at Harvard, MIT, Wharton, and Warwick. For tasks AI handled well, those working with AI showed a 12% higher completion rate, finished 25% faster, and produced 40% higher quality results. But for tasks requiring genuine judgment, AI-assisted consultants performed significantly worse than those working alone. The technology didn't fail. The arrangement did. When humans deferred to computational fluency precisely where human judgment was most needed, the collaboration became a liability.

The Second Mythology Nobody Talks About

Seth names one mythology: the overattribution of consciousness to machines. It's philosophically interesting and largely harmless in practice. The mythology that does real damage operates every Monday morning, in every boardroom where AI deployment decisions are made.

Call it the automation mythology: the belief that removing the human from the loop is always an efficiency gain. That judgment is a cost to be eliminated. That capability is a fixed input rather than a compounding asset.

This mythology doesn't announce itself. It arrives dressed as ROI calculations, headcount reduction targets, and productivity dashboards. Increasingly, it arrives dressed as augmentation — the word every major AI company now uses to describe tools that are, in practice, engineered to move in the opposite direction. Every significant platform release of the past two years — agent frameworks, coding agents, research agents, computer use — follows the same trajectory: make the AI do more, make the human do less. None of them measure whether human capability is growing or atrophying.

In February 2025, the inaugural Anthropic Economic Index analyzed more than 1 million conversations between people and Claude, categorizing each as automation or augmentation. It found that 57% of the time, people were thinking with AI — not delegating to it. Individual users, left to their own devices, gravitate toward collaboration. Organizations, by contrast, consistently default toward automation — not because it delivers superior returns, but because they have institutional muscles for cost reduction and almost none for capability cultivation.
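
The Index's taxonomy is more detailed than a binary, but the core distinction is concrete enough to operationalize. A hedged sketch follows; the heuristic, the labels, and `classify_conversation` are illustrative inventions, not Anthropic's actual classifier.

```python
# Hedged sketch of automation-vs-augmentation labeling. This is NOT the
# Economic Index's real method; it only illustrates the distinction.

def classify_conversation(turns: list[str]) -> str:
    # Crude heuristic: in augmentation, the human keeps coming back to
    # critique and redirect. In pure delegation, there is one request
    # and the output is accepted as-is.
    human_turns = turns[::2]  # assume turns alternate human / AI
    return "augmentation" if len(human_turns) > 1 else "automation"

delegated = ["Write my quarterly strategy report."]
collaborative = [
    "Draft an outline for the strategy report.",
    "[AI outline]",
    "Section 2 misses the client's churn problem. Rework it around that.",
]
print(classify_conversation(delegated))      # -> automation
print(classify_conversation(collaborative))  # -> augmentation
```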

Economic historian Carl Benedikt Frey's analysis in "The Technology Trap" shows this pattern is not new. The first Industrial Revolution produced what economists call "Engels' Pause" — output per worker rose 46% while wages rose just 12%. The second, built on enabling technologies, produced broadly shared prosperity. The third, dominated by replacing technologies, has coincided with stagnant wages and rising inequality. The difference was never technological. It was institutional: societies that built frameworks for augmentation distributed wealth; societies that defaulted to automation concentrated it.
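
A quick back-of-the-envelope calculation shows why those two figures imply a shrinking labor share (treating both growth rates as applying to the same workers over the same period, which is a simplification):

```python
# Engels' Pause as arithmetic. The 46% / 12% figures are from Frey's
# account; the "same workers, same period" framing is a simplification.
output_growth = 0.46  # output per worker, +46%
wage_growth = 0.12    # real wages, +12%

# Wages as a fraction of output shrink by the ratio of the growth factors.
labor_share_change = (1 + wage_growth) / (1 + output_growth) - 1
print(f"Implied change in labor's share of output: {labor_share_change:.1%}")
# -> about -23%: workers produced far more but kept a markedly
#    smaller slice of what they produced.
```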

The Arrangement Is Where the Value Lives

Here is what Seth's essay, for all its rigor, cannot reach: If AI is not conscious — if it offers no inherent meaning, possesses no intrinsic orientation, maintains no autonomous understanding — then every act of human-AI collaboration places a specific demand on the human participant. The human must continuously project, test, and stabilize meaning within the collaborative field. Context drifts. Coherence flattens. Outputs subtly diverge from the human's intention. The human must notice, re-anchor, and redirect.

This is not a bug that better engineering will fix. It is a permanent structural feature of working with entities that process without understanding. And it is, in the precise sense of the word, work — continuous cognitive labor that is economically valuable, organizationally invisible, and almost entirely unmeasured.

AI researcher Ethan Mollick, an author on the BCG study, has noted that the worst outcomes came from those who "fell asleep at the wheel" — ceding judgment to the system precisely when judgment was most needed. The best outcomes came from those who maintained active, directional engagement: holding meaning steady across interactions that do not hold it for you.

The EU AI Act's Article 14 gestures at this. It requires that high-risk AI systems be designed so they "can be effectively overseen by natural persons." The enforcement deadline has already been pushed back — from August 2026 to December 2027 — before a single organization has been asked to demonstrate compliance. The phrase "human-centric AI" saturates European policy documents as though naming the aspiration were equivalent to building it. The architecture needed to deliver on that aspiration has barely been imagined.

It will likely not come from the companies whose revenue grows with every token consumed.

What This Means for Anyone Who Works with AI

For individual workers, the implication is practical and immediate. The question is not whether to use AI. The question is whether you are using it in ways that build your judgment or quietly replace it. The consultant who outsources the hard interpretive work to an LLM is not becoming more productive. She is, over time, becoming less capable of doing the work that made her valuable in the first place.

For organizations, the implication is structural. The metrics that currently govern AI deployment — cost savings, processing speed, headcount reduction — measure what is eliminated, not what is enabled. Building the institutional capacity to measure whether human judgment is compounding or atrophying is not a philosophical nicety. It is, the BCG data suggests, the difference between AI that creates value and AI that destroys it while appearing to generate it.
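
What would that measurement look like in practice? One hedged sketch, with every name below a hypothetical invention rather than an existing framework: audit performance on periodic unassisted benchmark tasks, not just assisted output, because assisted output is precisely the number AI inflates.

```python
# Hypothetical sketch of a "judgment audit": the signal is the trend in
# UNASSISTED performance, since assisted numbers can look healthy while
# the underlying capability erodes.

from dataclasses import dataclass

@dataclass
class QuarterlyAudit:
    quarter: str
    unassisted_accuracy: float  # periodic benchmark tasks done without AI
    assisted_accuracy: float    # the same task class, with AI

def judgment_trend(audits: list[QuarterlyAudit]) -> str:
    first, last = audits[0], audits[-1]
    delta = last.unassisted_accuracy - first.unassisted_accuracy
    return "compounding" if delta >= 0 else "atrophying"

audits = [
    QuarterlyAudit("2025-Q1", unassisted_accuracy=0.75, assisted_accuracy=0.84),
    QuarterlyAudit("2025-Q4", unassisted_accuracy=0.68, assisted_accuracy=0.85),
]
# Assisted accuracy held steady; the dashboard looks fine. The unassisted
# trend tells the real story.
print(judgment_trend(audits))  # -> atrophying
```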

For investors and policymakers, the implication is longer-term but no less concrete. Frey's historical analysis suggests that the institutional choices made in the early phases of a general-purpose technology tend to compound across decades. The window for building augmentation frameworks — rather than defaulting to automation infrastructure — is not indefinitely open.

The consciousness question is settled, or settling. Seth is right about what AI lacks. But the question that will actually shape the next decade is not what AI is. It is what the human-AI arrangement produces — and whether we are building the institutions, the metrics, and the design practices capable of telling the difference between arrangements that compound human intelligence and arrangements that consume it.
