Wikipedia bans one type of content after "heated debate"

Source: Newsweek

Wikipedia has banned content generated by large language models (LLMs) from its platform, with two exceptions.

The platform noted that artificial intelligence (AI)-generated content produced by ChatGPT, Claude, DeepSeek and Google Gemini often "violates several of Wikipedia's core content policies."

Wikipedia provided two exceptions to its AI policy.

First, editors are allowed to use LLMs for copyedits, as long as the LLM doesn't create its own content as part of the text provided.

According to the announcement, "Caution is required, because LLMs can go beyond what you ask of them and change the meaning of the text such that it is not supported by the sources cited."

Wikipedia acknowledged that some writers and editors may have writing styles similar to an LLM's.

"More evidence than just stylistic or linguistic signs is needed to justify sanctions, and it is best to consider the text's compliance with core content policies and recent edits by the editor in question," the announcement said.

The second exception for editors is for translation.

According to Wikipedia, editors "are permitted to use LLMs to translate articles from another language's Wikipedia into the English Wikipedia, but must follow the guidance laid out at Wikipedia:LLM-assisted translation."

"A lot of LLM-generated text inserted from 2022 to 2026 remains on Wikipedia," the site reads.

"The purpose of this project is to identify and address the misuse of AI in articles."

In a statement to Newsweek, a spokesperson for the Wikimedia Foundation said, "Wikipedia's strength has been and always will be its human-centered, volunteer-driven model."

"Volunteers discuss and debate until a shared consensus can be reached on what information to include and how that information is presented," the spokesperson said.

"This process is done entirely out in the open. Every edit can be seen on 'history' pages, and every discussion point can be seen on article talk pages.

"Volunteers regularly discuss, review and evolve policies and guidelines over time to ensure Wikipedia continues to be a reliable, neutral resource for all."

"LLMs are incapable of synthesizing evidence into comprehensive arguments and instead rely on the synthesis already being available to them within their knowledge base. This is even ignoring the tendency of LLMs to invent and falsely generate sources."

Elsewhere, "LLM companies are also using their data to train their models," an individual posted.

"From Wikimedia's perspective, it's just prudent to avoid it, preventing 'model collapse' -- else they risk losing their newest revenue stream."

However, some critics questioned how Wikipedia would know whether a given edit was AI-generated.