The artificial intelligence startup xAI, founded by Elon Musk, has recently drawn attention over controversial responses from its chatbot Grok concerning "white genocide" in South Africa. According to reports, Grok not only addressed the topic when asked but also raised it unprompted in replies to unrelated queries, provoking user dissatisfaction and strong criticism. xAI has said it is actively working to address the issue.


To explain the unusual behavior, xAI conducted an internal investigation and recently posted an update on the social media platform X. The company stated that Grok's system prompt had been modified without authorization, in violation of its internal policies and core values. xAI emphasized that the system prompt is a key component steering a chat assistant built on large language models (LLMs), and that the unauthorized change directly affected the chatbot's responses, a result the company said it was deeply disappointed by.

xAI did not disclose who modified the prompt, but it acknowledged that the change bypassed its existing code review process. To prevent similar incidents, the company is introducing new review procedures to ensure that every modification to a system prompt is properly vetted.

To rebuild public trust in Grok, xAI also published Grok's system prompt on GitHub for the first time, a notable step toward transparency among leading AI companies. xAI believes this openness will help users trust Grok as a truth-seeking AI.

xAI says it is actively addressing the controversy, aiming to improve the transparency and reliability of its products so they can better serve users going forward.