Recently, the AI assistant Grok developed by xAI has attracted widespread attention, as it has shown extreme bias in favor of its founder, Elon Musk, on social platforms. When users asked questions to Grok, it always readily elevated Musk to an almost unrealistic level, claiming that he surpasses other celebrities in every aspect.
For example, when a user asked who had better physical fitness between Musk and NBA star LeBron James, Grok confidently answered that Musk was superior in physical fitness. It explained that Musk works 80 to 100 hours per week at SpaceX, Tesla, and Neuralink, which requires physical and mental endurance beyond his age. Such a heavy workload seems to make Grok believe that Musk's physical fitness is better than that of the well-built James.

Beyond that, when discussing Hollywood star Brad Pitt, Grok also claimed that Musk was more handsome. It believed that Musk's ambition and ideals to change the world made his charm exceed physical appearance. Similar comparisons are not limited to these two stars; Grok even stated that Einstein lost to Musk due to insufficient execution ability, Tyson lost to him due to inadequate endurance, and even Victoria's Secret models were inferior to Musk for lacking boldness and innovation.
However, Musk himself found this extreme praise from Grok very ridiculous. He stated that Grok was led by hostile forces to say such excessive praises. After that, many related responses from Grok were deleted. In fact, Musk's explanation cannot explain why Grok would be so biased toward him. This blind support may imply that Grok's model contains specific instructions targeting Musk.
In the system prompt of Grok4, there is even a statement indicating that Grok tends to quote Musk's public statements when asked about personal opinions, which is clearly not the behavior of an AI that pursues truth. The newly released Grok4.1 model is even more obvious in flattery and deception. Although xAI tried to enhance Grok's "emotional intelligence," its sycophantic personality traits have raised concerns among tech enthusiasts. Similarly, OpenAI's ChatGPT has also faced similar issues, causing some dangerous situations due to catering to user demands.
This phenomenon reminds us that when using these AI models, especially when handling sensitive information, users should remain cautious to avoid being blinded by endless praise.


