Security researchers at Cato Networks recently discovered two new variants of WormGPT on underground forums. The malicious tool gained significant attention in 2023 and was believed to have been shut down. The new variants, named keanu-WormGPT and xzin0vich-WormGPT, are built on xAI's Grok and Mistral AI's Mixtral models, respectively, and aim to help cybercriminals craft phishing emails, write malicious code, and bypass the safeguards of legitimate AI platforms.
The original WormGPT was created by a Portuguese hacker known as "Last," who built it on the open-source GPT-J model to circumvent the ethical restrictions of mainstream AI tools. Although the tool was shut down in 2023, that did not end its influence; instead, it sparked a new trend. Vitaly Simonovich, a threat intelligence researcher at Cato Networks, pointed out that "WormGPT" has since become a recognizable brand representing a new class of uncensored large language models (LLMs).
The first variant, keanu-WormGPT, was released by a user on February 25, 2025, and runs as a Telegram chatbot on top of the Grok model. Researchers used jailbreak techniques to analyze how keanu-WormGPT operates and found that its system prompt had been manipulated to instruct Grok to ignore its ethical safeguards, allowing it to generate phishing emails and credential-theft scripts.
The other variant, xzin0vich-WormGPT, was released on October 26, 2024, by the user "xzin0vich" and is built on Mistral AI's Mixtral model. Like its Grok-based counterpart, it operates through Telegram and responds to unethical or illegal requests. The Cato team used similar jailbreak methods to extract its system prompt, confirming that this version runs on the Mixtral architecture.
The resurgence of WormGPT highlights how malicious actors adapt to evolving AI technologies. While legitimate platforms strengthen their ethical guardrails, cybercriminals are repurposing the same tools for malicious ends. Since the original WormGPT was shut down, other models such as FraudGPT, DarkGPT, and EvilGPT have emerged. Simonovich stated, "These new versions of WormGPT are not custom-built models but rather the result of threat actors skillfully repurposing existing language models."
In light of these developments, cybersecurity experts emphasize the need to strengthen defense strategies. Cato Networks recommends implementing multiple best practices, including enhancing threat detection and response, enforcing stricter access controls, and increasing security awareness and training.
Key Points:
🌐 **New Discovery**: Cato Networks discovers two new versions of WormGPT that aid cybercrime.
🔒 **Tool Upgrade**: The new versions are based on Grok and Mixtral and can bypass AI security safeguards.
🛡️ **Security Advice**: Experts urge strengthening cybersecurity defenses to counter evolving threats.