After weeks of sustained public outcry and regulatory pressure, X (formerly Twitter) announced early today through its official security account @Safety that it has imposed its strictest-ever restrictions on the image generation and editing features of its AI model Grok. The move is a direct response to recent serious allegations that Grok generated "sexualized" images of children and images of people without their consent.

Under the new policy, Grok will completely prohibit editing photos of any real person, and strictly forbid modifying them into images showing revealing clothing such as bikinis or undergarments. This restriction applies to all users, paying subscribers or not. X also made an explicit commitment: "Grok AI will no longer change real people's photos into 'bikini photos'."


In addition, xAI has decided to place the image generation feature entirely behind a paywall, so non-subscribers will lose the ability to generate images altogether. In jurisdictions where such content is legally prohibited (such as California), the system will block all generation of images depicting "real people wearing bikinis or undergarments," preventing illegal content from being created at the source.

These measures follow a formal investigation launched by California Attorney General Rob Bonta. According to his office, an independent analysis found that of approximately 20,000 images generated by xAI between Christmas and New Year 2025, more than half depicted people in extremely revealing attire, including some who appeared to be minors, raising significant concerns about the safety mechanisms of AI platforms.

Facing the crisis, Musk had previously claimed he was "unaware" that Grok was generating nude images of minors, explaining that the relevant features were available only when the NSFW (adult content) option was enabled and were, in theory, limited to fictional adult characters, with a level of exposure comparable to R-rated movies on Apple TV. He also acknowledged, however, that the system needed to adjust its restrictions dynamically to comply with local laws.

In its statement, X reiterated its zero-tolerance stance on child exploitation and said it would continue to remove high-risk content, including child sexual abuse material (CSAM) and non-consensual nudity. Still, the incident once again exposed the ethical and compliance challenges generative AI faces in open deployment: when technology outpaces regulation and safety mechanisms, even the most powerful models can become risk amplifiers.

As AI-generated content grows increasingly realistic, balancing creativity and safety has become a core issue that no major model developer can avoid.