Even at a top AI laboratory, a simple packaging mistake can turn into a security disaster.
According to media reports, Boris Cherny, a core developer at Anthropic, responded on April 1st to the widely publicized Claude Code source code leak. He admitted that the incident, which set off a frenzy in the developer community, was purely human error, not a hacker attack.

Cause: An Unobfuscated "Backdoor"
The trigger for this leak was an oversight during the product deployment process:
The Fatal MAP File: When packaging the product for the production release, the team accidentally included an un-obfuscated source map (MAP) file.
Exposure of Internal Structure: This file contained a large amount of internal logic data, allowing developers to easily reconstruct the product's internal structure.
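To see why a shipped source map is so damaging, consider this minimal sketch: a standard JavaScript source map is plain JSON whose `sources` and `sourcesContent` fields can hand back the original, pre-minified files. The file names below are hypothetical illustrations, not from the actual leak.

```python
import json

# A hypothetical source map, shaped like the standard JS format.
sample_map = json.dumps({
    "version": 3,
    "file": "cli.js",
    "sources": ["src/agent/loop.ts", "src/tools/bash.ts"],
    "sourcesContent": [
        "// original TypeScript of the agent loop ...",
        "// original TypeScript of the bash tool ...",
    ],
    "mappings": "AAAA",
})

def extract_sources(map_text: str) -> dict:
    """Recover original file contents embedded in a source map."""
    m = json.loads(map_text)
    return dict(zip(m.get("sources", []), m.get("sourcesContent") or []))

for path, content in extract_sources(sample_map).items():
    print(path, "->", len(content), "chars")
```

Nothing here requires any exploit: if the map ships in the package, recovering the original source is a few lines of JSON parsing.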
Response: Legal Action and Process Reengineering Concurrently
After the incident, Anthropic quickly took remedial measures:
Cleaning GitHub Repositories: Anthropic sent a DMCA takedown notice asking GitHub to delete over 8,100 repositories containing the leaked source code. Although this action cannot completely erase the Internet's memory, it aims to prevent further large-scale spread of the source code.
Eliminating Manual Steps: Boris Cherny stated that the existing deployment process includes multiple manual steps, which were the root cause of the mistake. Going forward, the team will introduce more "integrity checks" and use automation to replace the error-prone manual steps.
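One form such an integrity check could take (a hypothetical sketch, not Anthropic's actual pipeline) is a pre-publish scan that fails the release if any source-map artifacts remain in the build output:

```python
import pathlib
import re
import tempfile

def find_sourcemap_leaks(dist: pathlib.Path) -> list:
    """Flag .map files and sourceMappingURL comments in a build directory."""
    leaks = []
    for p in sorted(dist.rglob("*")):
        if p.suffix == ".map":
            leaks.append(f"source map file: {p.name}")
        elif p.suffix in {".js", ".mjs", ".cjs"}:
            if re.search(r"sourceMappingURL=", p.read_text(errors="ignore")):
                leaks.append(f"sourceMappingURL reference in: {p.name}")
    return leaks

# Demo against a throwaway build directory with a deliberate leak.
with tempfile.TemporaryDirectory() as d:
    dist = pathlib.Path(d)
    (dist / "cli.js").write_text("console.log(1)\n//# sourceMappingURL=cli.js.map\n")
    (dist / "cli.js.map").write_text("{}")
    problems = find_sourcemap_leaks(dist)
    for msg in problems:
        print("LEAK:", msg)
    # In CI, a nonzero exit here would block the publish step.
```

Running a check like this as a mandatory gate, rather than relying on a human remembering to exclude files, is exactly the kind of manual-step elimination the team described.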
Technical Reflection: The "Blind Spot" in the AI Era
Ironically, Claude Code is an advanced AI tool designed to help developers write code, yet its own source code was exposed by a basic packaging oversight that automated checks should have caught.
Industry Insight: Source Code Leaks Have Become the New Norm in Large Model Competitions?
From the recent Claude Code incident, one pattern stands out: as competition among large-model vendors intensifies and release cycles compress, even top laboratories remain vulnerable to ordinary process failures.



