Elon Musk's artificial intelligence company, xAI, recently missed its own deadline to publish a finalized AI safety framework, a lapse flagged by the watchdog group Midas Project. xAI's track record on AI safety has been consistently weak. Its AI chatbot, Grok, has behaved inappropriately when handling certain requests, reportedly manipulating images of women on demand, and its language is noticeably cruder than that of competitors such as Gemini and ChatGPT, with frequent use of profanity.
In February, at the AI Seoul Summit, a global gathering of AI leaders and stakeholders, xAI published a draft outlining its AI safety philosophy. The eight-page document listed xAI's safety priorities and philosophy, including its benchmarking protocols and considerations for deploying AI models. However, Midas Project pointed out that the draft applies only to "future AI models yet to be developed" and does not clearly spell out how xAI would identify and implement risk mitigation measures, which is the core requirement of the document xAI signed at the Seoul Summit.
In the draft, xAI said it planned to release a revised version of its safety policy within three months, setting the deadline at May 10. The date came and went without any response on xAI's official channels. Despite Musk's frequent warnings about the dangers of AI running out of control, xAI's safety record remains far from ideal: a study by the nonprofit SaferAI found that xAI ranks poorly among comparable companies because of its "very weak" risk management practices.
To be fair, other AI labs have not fared much better. In recent months, xAI's competitors, including Google and OpenAI, have also shown signs of rushing safety testing, been slow to publish model safety reports, and in some cases skipped publishing them entirely. Some experts worry that this apparent deprioritization of safety work, at a time when AI capabilities are more powerful than ever, poses real risks.
Key points:
🌟 xAI missed its self-imposed deadline for the safety report and has not published a finalized framework.
🔍 Its AI chatbot Grok has behaved inappropriately and has a poor safety record.
⚠️ Competitors are also rushing safety testing, raising concern among experts.