Why Wikipedia Is Having a Moment in the AI Era
As AI misinformation spreads, the 25-year-old Wikipedia is being rediscovered by young users. But partnerships with AI companies create new dilemmas about its human-first identity.
When did you last visit Wikipedia? And why does that question suddenly matter?
In an era where ChatGPT and Google's AI provide instant answers to everything, the 25-year-old online encyclopedia Wikipedia is experiencing an unexpected renaissance. On TikTok, a video about "unironically buying a Wikipedia hat" hit 1 million views, while young creators evangelize the platform's reliability in a world drowning in AI-generated misinformation.
This isn't just nostalgic internet culture—it's a response to AI's growing credibility crisis.
When AI Gets It Wrong, Humans Get It Right
The timing isn't coincidental. BBC research from December 2024 found that major AI models like OpenAI's ChatGPT and Microsoft's Copilot frequently provide inaccurate news summaries. A January 2026 Guardian investigation revealed that Google's AI Overviews were serving users false medical information that could endanger their health.
Meanwhile, Wikipedia operates on a fundamentally different model. "Wikipedians", the site's volunteer editors, follow rigorous editing processes. Every article includes citations, public "talk pages" let editors discuss changes transparently, and a sophisticated monitoring system of approved editors and bots watches entries in real time.
"The fact that we were all told not to use it in school is really frustrating because we just weren't taught how to actually use it," says Dean, a 22-year-old content creator who posted a viral TikTok defending Wikipedia's credibility.
The 'Old Internet' Strikes Back
The numbers tell a complex story. The Wikimedia Foundation, which operates Wikipedia, raised a staggering $184 million in 2025, a $4 million increase over 2024. Yet monthly human page views fell roughly 8% from 2024, a decline attributed to people increasingly turning to generative AI and social media for information.
Still, Wikipedia remains the 9th most-visited website globally in 2025, with 1.9 trillion total article views over the past decade.
The platform's cultural cachet among young users is undeniable. The Instagram account @depthsofwikipedia, which features screenshots of bizarre Wikipedia pages, boasts 1.6 million followers. On TikTok, users share their "Wikipedia rabbit holes" and celebrate the site's user-friendliness compared to Google's AI summaries.
"I definitely use it more," says Chisom, 22, whose Wikipedia hat video went viral. "I used to use Google for celebrity info, but since they started doing the whole AI summary thing, that's so unhelpful to me."
The AI Partnership Paradox
Here's where things get complicated. In January, the Wikimedia Foundation announced new partnerships allowing tech companies to train their AI models using Wikipedia Enterprise, a paid service that provides scaled access to its content. This isn't unprecedented, but it raises fundamental questions about Wikipedia's human-first identity.
The irony is stark: users are flocking to Wikipedia as an alternative to AI misinformation, while Wikipedia simultaneously enables AI training that could perpetuate the problem.
The Washington Post reported in August 2025 that "suspicious edits, and even entirely new articles, with errors, made-up citations and other hallmarks of AI-generated writing keep popping up" on Wikipedia, forcing human editors to clean up the mess.
Tech journalist Stephen Harrison, who covered Wikipedia for Slate, sees the LLM partnerships as tech companies' "recognition" that "their long-term future depends on nurturing projects like Wikipedia." But he worries about users "forgetting" about Wikipedia if they're mainly consuming its content through AI summaries.
The Human Element Under Pressure
Hannah Clover, a Wikipedian since 2018, identifies a subtler threat: "I worry that a lot of the sources we cite might become unreliable in the future. Sometimes you have sources that were previously reliable that become unreliable because they start publishing AI slop out of nowhere."
This creates a vicious cycle. As AI-generated content floods the internet, Wikipedia's human editors face the increasingly difficult task of distinguishing reliable sources from AI-generated "slop", even as the platform itself feeds the AI systems creating that content.
The solution, if there is one, might lie with the very humans driving Wikipedia's current moment. Independent creators like Depths of Wikipedia keep the brand alive, while TikTok's "old internet" nostalgia suggests genuine appetite for human-curated knowledge.