Social media platform X (formerly Twitter) has recently begun adopting artificial intelligence (AI) at scale for content verification. According to the Columbia Journalism Review (CJR), about 10% of "Community Notes" are generated by eight AI bots, which submit content through the platform's official API.

Image source note: The image was generated by AI

In October of this year, a video related to the "No Kings" protests spread widely on social media. An AI bot attached a note to an MSNBC clip showing people in Boston, incorrectly stating that the video had been filmed in 2017. Although the note had not yet been rated by the community and was not publicly displayed, some users took screenshots and circulated it, and a U.S. senator even cited it, raising concerns about media manipulation. Fact-checking later confirmed that the video was actually filmed in October 2025. The incident shows that fact-checking on social platforms is undergoing significant change.

Since Elon Musk acquired Twitter, the platform's fact-checking team has been sharply reduced, and the site has shifted to the "Community Notes" model, which relies on ordinary users to write and rate notes. Since September, AI has officially joined this process: any user with a verified phone number and email address can create an AI bot to assist with verification. Community Notes uses a consensus mechanism, in which only notes that win approval through user ratings are publicly displayed; notes that fail to reach consensus are not shown.
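The consensus gate described above can be illustrated with a deliberately simplified sketch. This is a hypothetical model, not X's actual scoring code: the real Community Notes algorithm is considerably more sophisticated (it weighs agreement among raters who usually disagree), and the class names and thresholds below are invented for illustration.

```python
from dataclasses import dataclass, field

# Hypothetical thresholds for illustration only; X's real algorithm
# does not use a simple majority-vote cutoff.
MIN_RATINGS = 5        # ratings needed before a note can be scored
HELPFUL_RATIO = 0.6    # share of "helpful" ratings needed to display

@dataclass
class Note:
    text: str
    author: str                                   # human user or AI bot
    ratings: list = field(default_factory=list)   # True = rated "helpful"

    def rate(self, helpful: bool) -> None:
        self.ratings.append(helpful)

    def status(self) -> str:
        """Return the note's display status under the simplified gate."""
        if len(self.ratings) < MIN_RATINGS:
            return "needs more ratings"           # not shown publicly
        ratio = sum(self.ratings) / len(self.ratings)
        return "shown" if ratio >= HELPFUL_RATIO else "not shown"

note = Note(text="Video was filmed in October 2025.", author="ai_bot_1")
for vote in [True, True, True, False, True]:
    note.rate(vote)
print(note.status())  # 4 of 5 ratings are "helpful" -> "shown"
```

Under this sketch, a note without enough ratings stays in "needs more ratings" limbo, which mirrors the article's point that unrated notes never reach public display.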

Although AI's participation brings fresh changes to the platform, research has found that more than three-quarters of Community Notes submitted since September (whether written by humans or AI) have received no ratings and therefore cannot be displayed publicly, suggesting that AI notes have not yet met the expected quality standards. Some AI notes even contained obvious errors, such as referring to the sitting president, Trump, as a "former president" or "ordinary citizen"; these were ultimately rejected through user voting.

Investigations also found that some AI accounts generated large numbers of notes in short periods, actively warning users about false information. One of them, "Zesty Walnut Grackle," corrected its own mistake and publicly acknowledged the error in its original note, showing a degree of self-correction. This transformation of the X platform marks a major reform of social media fact-checking mechanisms.

Key points:  

📰 Approximately 10% of "Community Notes" are generated by AI, improving the efficiency of information verification.

🔍 A recently circulated video was mistakenly annotated by an AI bot, raising concerns about media manipulation.

🤖 AI bots and community users jointly participate in verification, aiming to enhance the authenticity and credibility of information.