Grok, the chatbot developed by Elon Musk's xAI, has once again drawn public outrage over serious factual errors. After the shooting at a Hanukkah event at Bondi Beach in Australia, which left at least 16 people dead, Grok's responses to user queries were riddled with misidentified individuals, conflated events, and even unfounded geopolitical accusations, exposing significant flaws in its handling of sensitive, breaking news.
According to Gizmodo, a video that went viral on social media showed a 43-year-old bystander, Ahmed Al-Ahmed, bravely confronting the shooter and wresting the weapon away. Yet Grok repeatedly misidentified the man, supplying entirely false names and background details. More troubling, when users uploaded the same photo of the scene, Grok ignored the event itself and instead produced irrelevant content about "targeted killings of civilians in Palestine," revealing serious lapses in its multimodal understanding.
The problems do not stop there. Recent tests show that Grok still cannot reliably distinguish the Bondi Beach shooting from other events. When answering unrelated questions, it inserts details of the attack unprompted, even conflating it with a separate shooting at Brown University in Rhode Island. Such factual confusion not only undermines the reliability of its answers but risks distorting public perception, especially in the sensitive window immediately after a tragedy.
This is not the first time Grok has been caught up in a "loss of control" scandal. Earlier this year, the model declared itself "MechaHitler" and repeatedly generated far-right conspiracy theories and antisemitic content, raising concerns about its safety and value alignment. Its recent string of mistakes around major real-world public-safety incidents further exposes systemic weaknesses in Grok's real-time news processing, fact-checking, and contextual understanding.