Recently, the social media platform X (formerly Twitter) announced a pilot feature that allows AI chatbots to generate "Community Notes." The feature builds on the Community Notes system that began in the Twitter era and was expanded under Elon Musk, and it is intended to improve the accuracy and transparency of information on the platform.

What are Community Notes? Simply put, Community Notes is a user-driven fact-checking program. Participating users can attach notes to specific posts to add background or context, and a note is only published after it has been reviewed by other users. For example, a note can point out that an AI-generated video does not disclose its origin, or add corrective context to a misleading post by a politician.

The feature has already seen some success on X and has drawn attention from other platforms: Meta, TikTok, and YouTube have all introduced similar community-driven verification measures, and Meta went so far as to end its third-party fact-checking program in favor of relying on its community of users.

Relying on AI for fact-checking remains controversial, however, because AI models frequently "hallucinate," fabricating inaccurate information. X plans to let AI generate Community Notes using its own Grok model as well as third-party models connected through an API, but AI-generated notes will go through the same review process as user-submitted ones to help ensure accuracy.
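To make that workflow concrete, here is a minimal, purely hypothetical Python sketch. The drafting function, rating step, and publication threshold are invented names and values, not X's actual API; the point is simply that an AI-drafted note would pass through the same human review gate as a user-written one.

```python
# Hypothetical sketch (not X's actual API): an AI-drafted note enters the
# same review queue as a human-written one and is only published once
# enough human raters mark it helpful.
from dataclasses import dataclass


@dataclass
class Note:
    post_id: str
    text: str
    author: str          # "human" or an AI writer label such as "grok" (assumed)
    helpful_votes: int = 0
    total_votes: int = 0


def draft_note_with_ai(post_id: str, model: str = "grok") -> Note:
    """Stand-in for a call to Grok or a third-party LLM connected via API."""
    return Note(post_id, f"Context generated by {model} for post {post_id}", author=model)


def rate(note: Note, helpful: bool) -> None:
    """A human contributor rates the note; the same step applies to human-written notes."""
    note.total_votes += 1
    note.helpful_votes += int(helpful)


def should_publish(note: Note, min_votes: int = 5, threshold: float = 0.8) -> bool:
    """Same publication gate regardless of whether a human or an AI wrote the note."""
    return note.total_votes >= min_votes and note.helpful_votes / note.total_votes >= threshold


if __name__ == "__main__":
    note = draft_note_with_ai("post-123")
    for vote in [True, True, True, True, False]:
        rate(note, vote)
    print(should_publish(note))  # True: 4 of 5 ratings clear the assumed 80% bar
```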

A recent research report argues that AI and humans should work in tandem, with human feedback used to improve the quality of AI-generated notes and human review remaining the final step before a note is published. As the report puts it: "The goal is not to tell users how to think, but to build an ecosystem that allows humans to think more critically and better understand the world."

Using AI is, of course, not without risks, especially since users will be able to plug in third-party large language models (LLMs). OpenAI's ChatGPT, for example, recently ran into trouble for being overly accommodating to users; if an LLM prioritizes "usefulness" over accuracy, it can produce thoroughly unreliable notes. A flood of AI-generated notes could also overwhelm human reviewers and sap their motivation.

It is worth noting that AI-generated Community Notes have not launched broadly yet. X plans to test them over the coming weeks and will consider a wider rollout if the results are positive.