Recently, the AI company Anthropic released a new cybersecurity threat intelligence report, highlighting that hackers, scammers, and state-sponsored groups are increasingly using its Claude chatbot to carry out sophisticated cyberattacks. The report details how these actors use AI for data theft, extortion, employment fraud, and ransomware development, posing new challenges for cybersecurity defenses.
Image credit: AI-generated via Midjourney
The report details a serious case, a cybercrime operation Anthropic tracks as GTG-2002. According to Anthropic, this hacking group used Claude Code to conduct large-scale data theft and extortion against at least 17 organizations, including hospitals, emergency services, government agencies, and religious institutions. Unlike traditional ransomware attacks, the attackers did not encrypt files; instead, they threatened to leak the stolen information and demanded ransoms, some exceeding $500,000. Anthropic noted that the attackers used AI for task automation to an unprecedented degree, including scanning for vulnerable systems, harvesting credentials, analyzing which stolen files were most valuable, and even generating ransom notes.
The report also revealed that North Korean IT operators used Claude to obtain remote jobs at U.S. Fortune 500 companies. These operators generated convincing resumes, passed coding tests, and even completed technical tasks, funneling their salaries back to Pyongyang in violation of international sanctions. Anthropic said AI has removed long-standing barriers for these fraudsters, enabling operators who previously could not write basic code or communicate professionally in English to pass technical interviews.
In another case, a cybercriminal with limited coding skills used Claude to create multiple ransomware variants, selling them on underground forums for $400 to $1,200 each, with each variant featuring encryption and anti-recovery capabilities. Anthropic noted that this criminal "relied on AI to develop functional malware," a sign that advanced cyber weapons are now within reach of low-skill criminals.
Anthropic said it has banned the accounts linked to these operations, implemented new "preventive security measures," and shared its findings with the relevant authorities. The company also acknowledged that AI-assisted cybercrime is evolving faster than many had anticipated, warning that "agentic AI tools are being used to provide technical advice and active operational support for attacks that would otherwise require a team of operators."