Bots, be gone.
The internet’s favorite encyclopedia has officially banned its 260,000 human editors from using artificial intelligence to write articles — a major crackdown as so-called “AI slop” floods the web.
The new policy, approved by volunteers at the Wikimedia Foundation’s flagship site Wikipedia, bars the use of large language models (LLMs) like ChatGPT from generating encyclopedic content, citing concerns over accuracy, sourcing and reliability.
Wikipedia leaders say AI-generated text often breaks the site’s core tenets, including strict standards around verifiability and neutrality, because chatbots are prone to so-called “hallucinations” — made-up facts, broken links and references that lead nowhere.
Editors can still use AI in limited ways, such as translating articles from other languages or suggesting minor copy edits, as long as humans review every change and no new information is introduced.
Last year, Wikipedia came up with its own bot-detection guidelines for editors that highlight common “tells” of AI writing. Editors are trained to spot red flags like inaccurate or fake citations, overused phrases and cliches, wordy explanations and sudden style transitions.
Suspected cases are typically reviewed by other editors who can challenge, revise or remove questionable content.
Ilyas Lebleu, a volunteer Wikipedia editor in France and founding member of the WikiProject AI Cleanup squad, told NPR in September, “We started to notice a lot of articles which were written in a style that didn’t match the style we usually saw on Wikipedia.”
Last October, Wikipedia co-founder Jimmy Wales also blasted current AI models as unreliable, calling the situation a “mess,” per the BBC, and warning that the tech is not ready to replace human editors.
The policy change comes after months of debate among Wikipedia’s moderators, who accepted the new rules in a 40 to 2 vote.
Lebleu, who uses the handle Chaotic Enby on the site, helped write the new guideline, telling 404Media last week that the change was a long time coming, as the growing number of AI-generated articles had become unmanageable for editors.
“The mood was shifting, with holdouts of cautious optimism turning to genuine worry.”
Still, there’s concern among Wikipedia leaders and supporters that the AI takeover has already gone too far. According to recent data, ChatGPT has already overtaken Wikipedia in monthly visits, with human page views down 8% in late 2025 compared with 2024.
Between late 2023 and early 2024, ChatGPT saw a 36% increase in users, according to a recent Futurism report, while other platforms have seen only slight shifts in user activity one way or the other. “It’s reaching more of the internet, more quickly, than almost any other platform in history,” GWI senior data journalist Chris Beer told the outlet.
That shift is painfully ironic for the 25-year-old web resource, which has long been one of the internet’s most trusted information hubs and, most likely, helped train and inform the LLMs that support ChatGPT.
Speaking with 404Media, Lebleu warned the implications stretch far beyond Wikipedia, arguing the platform may just be the start of a broader reckoning.
“As anxiety over the AI bubble grows, I foresee a domino effect, empowering communities on other platforms to decide whether AI should be welcome on their own terms,” Lebleu said.