Even in top AI laboratories, a simple packaging mistake can turn into a technological disaster.

According to media reports, Boris Cherny, a core developer at Anthropic, responded on April 1st to the widely publicized Claude Code source code leak. He confirmed that the incident, which set off a "celebration" in the developer community, was the result of human error, not a hacker attack.


Cause: An Accidentally Shipped Source Map

The trigger for this leak was an oversight during the product deployment process:

The Fatal MAP File: When packaging the product for the production environment, the team accidentally included an unobfuscated source map (.map) file in the release.

Exposure of Internal Structure: The file preserved a large amount of internal logic, allowing developers to readily reconstruct Claude Code's core architecture and code implementation from it.
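To see why a stray .map file is so revealing, consider the Source Map v3 format: its `sources` field lists the original file paths, and the optional `sourcesContent` field can embed the original, unminified source text verbatim. The sketch below uses an entirely hypothetical map file (the paths and contents are invented for illustration, not taken from the actual leak):

```python
import json

# A hypothetical bundle.min.js.map in Source Map v3 format. The "sources"
# and "sourcesContent" fields are real parts of the spec; the paths and
# code shown here are made up for illustration.
source_map = json.loads("""
{
  "version": 3,
  "file": "bundle.min.js",
  "sources": ["src/agent/core.ts", "src/agent/tools.ts"],
  "sourcesContent": ["// original unobfuscated module A ...",
                     "// original unobfuscated module B ..."],
  "mappings": "AAAA"
}
""")

# Anyone holding the .map file can enumerate the internal module layout...
print(source_map["sources"])

# ...and, when sourcesContent is present, recover original code verbatim.
for path, code in zip(source_map["sources"], source_map["sourcesContent"]):
    print(path, "->", code)
```

This is why minification alone is not protection: the map exists precisely to undo it.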

Response: Legal Action and Process Reengineering in Parallel

After the incident, Anthropic quickly took remedial measures:

Cleaning Up GitHub Repositories: Anthropic sent a DMCA takedown notice asking GitHub to remove more than 8,100 repositories containing the leaked source code. While this cannot fully erase the Internet's memory, it aims to prevent further large-scale spread of the code.

Eliminating Manual Steps: Boris Cherny said the existing deployment process involves multiple manual steps, which he identified as the root cause of the mistake. Going forward, the team will introduce more "integrity checks" and use Claude Code itself to verify the results, pushing deployment toward a higher level of automation.

Technical Reflection: The "Blind Spot" in the AI Era

Ironically, Claude Code, an advanced AI tool built to help developers write code, had its own source leaked through a basic deployment mistake. Boris Cherny's emphasis was notable: the solution lies not in adding cumbersome processes or blaming employees, but in using automation to eliminate human uncertainty.

Industry Insight: Source Code Leaks Have Become the New Norm in Large Model Competitions?

From OpenAI recently to Anthropic now, top AI companies seem unable to escape the tension between security and speed as they iterate rapidly. For developers worldwide, this leak may be an unexpected "technological pilgrimage," but it is also a warning to every technology company: on the road to AGI, the most basic engineering automation remains a security line that must not be skipped.