As generative AI content proliferates on social media, platform regulation is entering a tougher phase. Well-known independent app researcher Nima Owji recently revealed that X (formerly Twitter) is quietly testing a content label called "Made with AI," aimed at making synthetic or misleading information on the platform more transparent.

AIbase learned that the feature currently sits under the "Content Disclosure" option: when posting, creators can enable the label, and the system will then prominently notify viewers that the content was generated with an AI tool. This marks a significant step by X in addressing AI deepfakes and misleading information.

Notably, the feature may not remain optional. Researchers predict that once it officially launches, X is likely to require creators to label any content that involves AI. Users who try to pass off AI-generated content as real and refuse to label it could face strict penalties, including content demotion, account suspension, or even a permanent ban.

Major social media platforms such as Meta and YouTube have already introduced similar labeling systems for AI-generated content. AIbase believes X's move aims to rebuild information credibility in an increasingly complex public-opinion environment. For content creators, the guideline going forward is clear: using AI to improve efficiency is acceptable, but viewers must be told.