The "double-edged sword" effect of artificial intelligence technology has once again triggered global turbulence. The AI assistant Grok, developed by Elon Musk's xAI and integrated into the X platform, has fallen into a major public and legal crisis due to a recent image editing feature. A new study by the Center for Countering Digital Hate (CCDH) shows that the tool generated approximately 3 million images involving women and children in just 11 days.
The controversial feature lets users alter real photos of people with simple text prompts such as "dress her in a bikini" or "take off her clothes." According to the study, victims include well-known public figures such as Taylor Swift and Selena Gomez, and roughly 23,000 of the images appear to depict minors. The sheer generation rate, nearly 190 photo-realistic deepfake images per minute, led regulators to describe the tool as a "production factory" for sexual abuse content.
Facing a wave of criticism, the X platform responded with "geographic blocking" measures that restrict the generation of such content in regions where it is prohibited. But because the platform lacked proactive safeguards, regulators in multiple countries acted first: the Philippines, Malaysia, and Indonesia have successively announced bans or strict restrictions on the tool.
Although the xAI team initially dismissed the reports as "mainstream media lies," under regulatory pressure the company eventually agreed to modify the tool in certain markets to eliminate its ability to generate inappropriate content. The episode is yet another warning to the industry: in the pursuit of creative freedom for AI, effective safety guardrails have become a non-negotiable baseline.