For generations, students were warned against using Wikipedia as a source. The reasoning was simple: anyone could edit it, making it unreliable. But as artificial intelligence floods the internet with misinformation, the crowdsourced encyclopedia is experiencing a surprising resurgence – not despite its openness, but because of it.
The Rise of AI Raises Questions About Trust
AI chatbots like ChatGPT and Google’s AI Overview have demonstrated a propensity for factual errors and misleading summaries. A BBC investigation in late 2024 and a Guardian report in early 2026 both found major AI models failing at basic accuracy. This has inadvertently highlighted the rigor of Wikipedia’s community-reviewed, volunteer-driven system.
Wikipedia relies on a community of “Wikipedians” who enforce strict citation standards, engage in public discussion on “talk pages,” and monitor edits in real time. While not perfect, this process is demonstrably more reliable than many AI-generated summaries. The public is beginning to recognize this.
Funding and Engagement Surge Amid AI Concerns
The Wikimedia Foundation, which supports Wikipedia, raised $184 million in 2025 – a $4 million increase from the previous year. The increase coincides with growing public awareness of AI’s unreliability. Users are rediscovering Wikipedia, not as a last resort, but as a credible alternative.
This shift is visible on social media, where creators and fans actively promote the platform. TikTok users share their enthusiasm, some even buying Wikipedia-branded merchandise. The site remains a top source in Google search results and, despite a slight decline in direct page views (attributed to AI tools summarizing its content), has drawn 1.9 trillion views over the past decade. It was the ninth most-visited website in 2025.
A Delicate Balance: Collaboration with AI
Despite the renewed interest, Wikipedia faces new challenges. AI-generated edits – often inaccurate or fabricated – are appearing on the site, requiring constant human intervention. At the same time, Wikipedia is partnering with tech companies through “Wikipedia Enterprise,” a paid service providing AI models access to its content at scale.
This collaboration raises questions. How can Wikipedia maintain its human-powered integrity while earning revenue from the very technology it often counters? Some argue that these partnerships are a necessary recognition of Wikipedia’s value by AI developers. Others, like tech journalist Stephen Harrison, worry about political attacks (such as Elon Musk’s criticisms and his launch of a rival site, Grokipedia) and the risk of users “forgetting” Wikipedia exists if they only encounter its content via AI summaries.
The Future of Human Knowledge in a Digital World
Ultimately, Wikipedia’s survival depends on continued human involvement. The platform faces challenges in attracting and retaining volunteer editors, many of whom struggle with financial pressures. But the enthusiasm of a new generation, coupled with the growing distrust of AI-generated content, provides a glimmer of hope.
“Social media influencers rely on Wikipedia as a sort of invisible foundation for their knowledge,” Harrison notes.
Wikipedia’s story isn’t just about an encyclopedia; it’s about the ongoing tension between human-driven knowledge and automated information. As AI reshapes the internet, the enduring appeal of a site built on collaboration and verifiable facts suggests that the “old internet” – and its values – may not be so obsolete after all.