On April 1, 2026, leading AI laboratory Anthropic accidentally triggered the takedown of roughly 8,100 code repositories on GitHub through a botched copyright cleanup. The incident began when a software engineer discovered on Tuesday that Anthropic had inadvertently leaked the core source code of its newly released command-line tool, Claude Code. The code quickly spread among AI enthusiasts, who created numerous forks to analyze the underlying large language model (LLM) call logic.

Faced with the leak of a core asset, Anthropic filed a takedown notice with GitHub under U.S. digital copyright law. Owing to an execution error at the operational level, however, the cleanup swept far beyond its intended scope, covering not only forks containing the infringing code but also a large network of legitimate public repositories. Boris Cherny, who leads Claude Code at Anthropic, publicly apologized, acknowledged that the mass takedown was an accident, and quickly withdrew most of the takedown requests. Apart from one main repository and 96 forks confirmed to contain the leaked source code, the affected repositories have gradually regained access.
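GitHub links a repository and its forks into a single network, which is why a takedown aimed at one repository can sweep up thousands of descendants. As a minimal sketch of how such a network might be enumerated, the snippet below assumes GitHub's paginated REST endpoint `GET /repos/{owner}/{repo}/forks` and injects a hypothetical `fetch_page` function so the paging logic stands on its own; it is an illustration, not Anthropic's or GitHub's actual tooling.

```python
# Sketch: walk a repository's fork list page by page, collecting each
# fork's "full_name" (e.g. "someuser/claude-code"). The `fetch_page`
# callable is a hypothetical stand-in for an HTTP GET against
# https://api.github.com/repos/{owner}/{repo}/forks?page=N&per_page=M,
# which returns a JSON list of repository objects per page.

def list_fork_network(fetch_page, per_page=100):
    """Collect the full_name of every fork by paging until a page is empty."""
    forks = []
    page = 1
    while True:
        batch = fetch_page(page=page, per_page=per_page)
        if not batch:  # an empty page means we have walked the whole list
            break
        forks.extend(repo["full_name"] for repo in batch)
        page += 1
    return forks
```

In a real script, `fetch_page` would wrap an authenticated HTTP request and handle rate limits; injecting it keeps the traversal logic testable without network access.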

Although Anthropic moved quickly to limit the damage, the incident exposed weaknesses in its code-compliance management and copyright-enforcement chain at a sensitive moment, as the company prepares for an IPO. For a leading AI company heading toward an initial public offering, a source-code leak and the chain reaction that follows threaten not only the technical moat but also pose a severe test of corporate execution and compliance. The episode underscores how much the security of developer tools and the precision of copyright-protection actions matter amid the rapid iteration of generative AI.
