Regarding the previous source code leak, AI giant Anthropic has officially launched a legal counterattack. According to recent news, the company has submitted multiple DMCA (Digital Millennium Copyright Act) takedown notices to GitHub, requesting the removal of all illegally hosted Claude Code source code repositories on the platform.

In response, GitHub took a sweeping approach: it not only deleted the reported main repository but simultaneously removed more than 8,100 forks of it, making this one of the largest code-copyright cleanup actions the AI industry has seen in recent years.


Cause of the Leak Revised: Not Human Error, but a Tool Bug

The public had generally assumed the leak was caused by an employee's operational mistake, but the latest investigation report indicates that the real culprit may be a bug in a packaging tool used internally at Anthropic.

The bug caused the build system, when producing production packages, to incorrectly bundle sensitive files and the full TypeScript source code that should have remained private. The disclosure of this technical detail somewhat eased external doubts about the professional competence of Anthropic's employees, but it also exposed serious security gaps in the company's automated build workflow.
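As an illustration of how this class of bug can occur (a hypothetical sketch, not Anthropic's actual configuration): npm packages built from TypeScript projects typically rely on the `files` allowlist in `package.json` to keep source out of the published tarball, shipping only the compiled output:

```json
{
  "name": "example-cli",
  "version": "1.0.0",
  "main": "dist/index.js",
  "files": [
    "dist"
  ]
}
```

If a packaging step drops or rewrites the `files` field, npm falls back to including nearly everything not covered by its default ignore rules, so `src/**/*.ts` and internal configuration files can silently end up in the published package. Running `npm pack --dry-run` before publishing lists exactly which files would ship, which is one way such a regression can be caught.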

Although GitHub cooperated in removing the roughly 8,100 repositories, the source code had already been downloaded, cloned, and redistributed to Telegram, cloud storage services, and private Git platforms by tens of thousands of developers worldwide within 48 hours. Completely eliminating the leaked data is now all but impossible.