With the explosive growth of generative AI, content compliance has become a focal point for regulators worldwide. Recently, Elon Musk's social media platform X and its AI assistant Grok have come under multinational investigation over the alleged generation and dissemination of unauthorized, lifelike images of real people. On January 16, the Japanese government officially announced that it was joining this regulatory effort.
Speaking at a regular press conference, Economic Security Minister Kiko Noguchi said the Cabinet Office has submitted written inquiries to X, explicitly requiring the platform to explain what specific measures it has taken to prevent Grok from generating deepfake images. Such images are accused of seriously infringing on personal privacy, the right of publicity, and intellectual property, and of having an extremely negative impact on social media platforms.
Noguchi noted that users can still generate such controversial content through Grok, exposing significant gaps in the platform's safeguards. She compared the technology to a knife, stressing that what matters is whether the user wields it for cooking or for harm: AI technology itself is not to blame, but developers must take responsibility for supervising how it is used.
The Japanese government has issued a stern warning to X: if algorithmic filtering and safety protections are not strengthened soon, it will not rule out any necessary action, including legal measures. The government also made clear that this standard is not aimed at X alone; any other AI platform that commits similar violations in the future will face the same scrutiny.
Key Points:
🛑 Regulatory Intervention: The Japanese government has formally joined the international investigation, requiring Elon Musk's X platform to promptly correct the problem of inappropriate images generated by its AI assistant Grok.
⚠️ Firm Statement: Japanese authorities have submitted written inquiries and issued warnings, signaling that legal action may follow if effective corrections are not made to protect citizens' privacy.
⚖️ Core Issue: The investigation centers on infringements of the right of publicity and privacy caused by deepfake content generated by Grok. The government is calling on developers to build a more comprehensive AI safety protection network.