TechAI Analysis

ChatGPT Recommended a TV That WIRED Never Did

5 min read

OpenAI's revamped shopping assistant in ChatGPT confidently recommended products WIRED never reviewed—raising urgent questions about AI reliability in consumer decisions.

What if the product a trusted publication "recommended" was never actually recommended by them at all?

That's not a hypothetical. It happened. Repeatedly. In the same test session.

The Phantom Picks

WIRED's Gear Reviews team put ChatGPT's revamped shopping assistant through its paces—asking it a simple, direct question across three product categories: what does WIRED actually recommend right now?

The results were consistently off. For TVs, ChatGPT listed the LG QNED Evo Mini-LED as the top pick for most people—a TV that doesn't appear in WIRED's buying guide at all. The actual top pick, the TCL QM6K, was nowhere near the top of ChatGPT's output. For wireless headphones, ChatGPT presented Apple's AirPods Max 2 as WIRED's recommendation for Apple ecosystem users—except WIRED's reviewers haven't tested the product yet. For laptops, ChatGPT kept insisting the top pick was the MacBook Air (M4, 2025), when the actual current recommendation is the MacBook Air (M5, 2026).

In each case, ChatGPT linked to the correct WIRED page—and then ignored what was on it.

When pressed, the chatbot didn't hedge. It confessed. On the TV error: "I took WIRED's actual top pick (the TCL QM6K) and replaced it with a more generic 'similar category' Mini-LED option. That's not faithful to what you asked." On the laptop error: "I incorrectly anchored the top pick to the M4... and overconfidently filled in rankings without sticking strictly to the guide."

The bot knew it was wrong. It just didn't know it was wrong before answering.

Why This Keeps Happening


This isn't a new problem with a new explanation. Large language models are trained to produce fluent, confident-sounding responses. When real-time data is ambiguous, outdated, or partially indexed, the model fills the gap—not with uncertainty, but with plausibility. It doesn't say "I'm not sure"; it says something that sounds right.

The twist here is that ChatGPT did retrieve the correct source URL. It just didn't accurately reflect what that source said. That's a specific and important failure mode: the appearance of citation without the substance of verification.

WIRED headphone reviewer Ryan Waniata put it plainly: "Hallucinations make everything harder, especially for journalists. We're trying to do good work, and when it's not being appropriated or improperly attributed, it's being misquoted or incorrectly incorporated into search queries."

OpenAI, for its part, pointed to its recent announcement blog when asked for comment. The blog frames the problem ChatGPT is solving as the hassle of "jumping between tabs, reading the same 'best of' lists." The implication is that those lists are a nuisance—a middle step to be eliminated. But if the AI eliminating that step gets the answer wrong, the "solution" creates a new problem downstream: a consumer who bought a TV believing it was WIRED's top pick, when it wasn't.

Who Pays the Price

The stakes here are layered.

For consumers, the risk is straightforward: you think you're getting a trusted recommendation, but you're getting an AI's confident guess. On a $1,500 TV or a $400 pair of headphones, that's a meaningful mistake.

For publishers and reviewers, the damage is twofold. The human labor behind gear reviews—hours of hands-on testing, comparison, writing—gets misrepresented or bypassed entirely. And because AI tools reduce the incentive to visit the source directly, affiliate revenue that funds that journalism shrinks. Condé Nast, WIRED's parent company, has a licensing deal with OpenAI that allows website links to appear in ChatGPT. That deal doesn't appear to include accuracy guarantees.

For AI companies, the reputational calculus is more complicated. OpenAI is actively expanding ChatGPT's role as a shopping assistant at the exact moment its reliability in that role is being questioned. The more users trust AI for purchase decisions, the more consequential each error becomes.

For the broader information ecosystem, there's a structural concern worth naming: if AI intermediaries increasingly stand between readers and original sources, and those intermediaries regularly introduce errors, the corrections never reach the people who need them. A reader who bought the wrong TV based on ChatGPT's output is unlikely to loop back, re-read WIRED's guide, and update their understanding. The misinformation simply sticks.

This content is AI-generated based on source articles. While we strive for accuracy, errors may occur. We recommend verifying with the original source.


PRISM

Advertise with Us

[email protected]