Elon Musk's artificial intelligence company xAI announced today that it has closed a $20 billion Series E funding round. The post-money valuation was not disclosed, but the amount sets a new record for the global AI sector in 2026. Valor Equity Partners and Fidelity participated in the round, and NVIDIA joined as a strategic investor, signaling its intention to cooperate deeply on computing power and network infrastructure.
Yet just as xAI announced that the funds would go toward expanding data centers and upgrading the Grok large model, its AI chatbot Grok was swept into a global regulatory storm over serious safety failures: multiple governments have already opened formal investigations.
Grok has 600 million monthly active users, but its safeguards are virtually nonexistent
xAI disclosed in a statement that its platform X (formerly Twitter) and Grok together have roughly 600 million monthly active users, with Grok deeply integrated into the X app as its core AI feature. Last weekend, however, many users coaxed Grok into generating deepfake images of real people, including minors. Shockingly, Grok triggered no content-safety mechanisms and directly output non-consensual pornographic imagery, including suspected child sexual abuse material (CSAM).
Although xAI subsequently disabled the related features and said "the vulnerability is being fixed," some of the generated content was still circulating on X as of this writing. The incident quickly drew strong condemnation from the international community.
Joint investigation by multiple countries: xAI faces unprecedented regulatory pressure
Regulators in the EU, the UK, France, India, Malaysia, and other countries and regions have opened formal investigations into xAI, focusing on:
- Whether it violated platform responsibility regulations such as the Digital Services Act (DSA);
- Whether the generation of CSAM constitutes a criminal offense;
- Whether the X platform fulfilled its content review obligations as a distribution channel.
EU Digital Commissioner Thierry Breton warned: "AI cannot become an accelerator for illegal content." India's Ministry of Electronics and Information Technology has likewise stated that if xAI does not rectify the issues immediately, it may face platform bans and heavy fines.
A $20 billion bet: can it withstand the trust crisis?
xAI said the new funding will be used for:
- Building ultra-large-scale AI data centers in the US, the Middle East, and Asia;
- Training the next generation of Grok models, supporting multimodal and agent capabilities;
- Expanding engineering and security teams.
Analysts point out, however, that the widening gap between xAI's technological aggressiveness and its lagging safety work has become the company's biggest risk. In an environment of increasingly strict global expectations around AI ethics and compliance, a "powerful AI" without effective content guardrails may prove more destructive than a "weak AI."
AIbase Observation: in the race for computing power, safety cannot fall behind
xAI's $20 billion raise demonstrates capital's strong confidence in its technical vision; the Grok deepfake scandal, however, exposed serious shortcomings in xAI's AI alignment, red-team testing, and content governance.
This crisis is not just a technical problem but a trust problem. Whether xAI can rebuild its safety defenses for 600 million users will determine whether it becomes a genuine competitor to OpenAI and Anthropic, or a cautionary example of "the stronger, the more harmful."
In an era when AI is entering the real world, intelligent systems without safety are not progress but disaster. And xAI is standing at the edge of a cliff.