On March 26, Wikipedia's editing community voted to adopt a new editing policy targeting large language models (LLMs), explicitly prohibiting users from directly publishing AI-generated or AI-rewritten content. The move marks a crucial step for the world's largest open encyclopedia in safeguarding content accuracy and human editorial sovereignty.

The change tightens previously vague language: guidance that editors "should not generate new articles from scratch" has been upgraded to a rule that "strictly prohibits the use of LLMs to generate or rewrite content."


According to 404Media, the volunteer editor community approved the policy overwhelmingly, 40 votes to 2, reflecting deep concern within the community that AI could spread misinformation and erode the encyclopedia's knowledge base. Even so, the new rule does not ban AI technology outright but positions it as an auxiliary tool: editors may still use LLMs to suggest basic edits, but any unverified "new content" introduced by such tools must be rejected during manual review, to keep model hallucinations from pulling articles away from their sources.

As generative AI permeates content creation, Wikipedia's decision reflects the cautious balance traditional knowledge communities are striking between efficiency and authenticity. While major media platforms race to establish their own AI usage guidelines, Wikipedia is drawing the boundary between "assistance" and "creation," aiming to protect the human editorial ecosystem and head off the trust crisis that a flood of automated content could cause. The decision will not only reshape the collaborative logic of the encyclopedia community but also serve as an important reference for the ethical governance of public knowledge bases in the AI era.