Artificial intelligence unicorn Anthropic was recently caught in an unprecedented public relations disaster. Developer Chaofan Shou revealed on a social platform that, through negligence when publishing to npm, the company failed to strip critical .map (source map) files from the package, completely exposing the source code of its latest command-line tool.
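The mechanism behind such an exposure is mundane: when a bundler emits source maps with the standard `sourcesContent` field (source map format v3), the original pre-compilation source is embedded verbatim inside the .map file, so anyone holding the published package can read it back out. A minimal sketch of that recovery, with a hypothetical map file standing in for the real package contents:

```typescript
// Sketch: recovering original sources from a v3 source map's
// "sourcesContent" field. The example map below is invented for
// illustration; it is not taken from the leaked package.

interface SourceMapV3 {
  version: number;
  sources: string[];
  sourcesContent?: (string | null)[];
  mappings: string;
}

// Pair each entry in "sources" with its embedded original text, if present.
function recoverSources(mapJson: string): Map<string, string> {
  const map: SourceMapV3 = JSON.parse(mapJson);
  const recovered = new Map<string, string>();
  (map.sourcesContent ?? []).forEach((content, i) => {
    if (content !== null) recovered.set(map.sources[i], content);
  });
  return recovered;
}

// A compiler-emitted .map typically looks like this (mappings elided):
const example = JSON.stringify({
  version: 3,
  sources: ["src/cli.ts"],
  sourcesContent: ["export const main = () => console.log('hello');"],
  mappings: "",
});

const files = recoverSources(example);
console.log(files.get("src/cli.ts")); // the original TypeScript, verbatim
```

This is why the leak yielded readable TypeScript rather than minified JavaScript: no reverse engineering was needed, only unpacking files the publish step should have excluded.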

The scale of the leak is astonishing: nearly two thousand files and over 500,000 lines of TypeScript code. Although the company took emergency measures, the source code had already been mirrored to GitHub and now circulates permanently within the community. This basic mistake left Anthropic's technical secrets almost fully exposed to competitors.
Romantic and Hardcore: Source Code Reveals "Cyber Pet" and "Nighttime Dreaming" Mechanisms
Embarrassing as the leak was, the unreleased features hidden in the code have amazed the industry. The project codenamed BUDDY has drawn particular attention: it generates a unique pixel-style cyber pet from the user's ID to keep developers company while they code, complete with anthropomorphic personality settings such as a "sarcasm level."
Even more technically ambitious is the feature codenamed KAIROS, described as an "always-on Claude." The source code shows a bionic "nightly dreaming" mechanism: in the background it organizes the day's interaction fragments, pruning redundancies and consolidating the core content, gradually training the AI into a long-term partner that understands its user better.
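The article does not detail how KAIROS implements this consolidation, but the described pipeline, collect the day's fragments, drop redundant ones, keep the core, can be sketched roughly as follows. Every name here (`InteractionFragment`, `consolidate`, the importance scores) is hypothetical, invented purely to illustrate the idea:

```typescript
// Hypothetical sketch of a "nightly dreaming" consolidation pass.
// All types, names, and thresholds are invented for illustration;
// the real KAIROS internals are not described in the article.

interface InteractionFragment {
  topic: string;
  content: string;
  importance: number; // assumed 0..1 relevance score
}

// Deduplicate by topic, keeping the most important fragment per topic,
// then drop low-value noise below a threshold.
function consolidate(
  fragments: InteractionFragment[],
  threshold = 0.5
): InteractionFragment[] {
  const byTopic = new Map<string, InteractionFragment>();
  for (const f of fragments) {
    const existing = byTopic.get(f.topic);
    if (!existing || f.importance > existing.importance) byTopic.set(f.topic, f);
  }
  return [...byTopic.values()].filter((f) => f.importance >= threshold);
}

// One day's fragments: two overlapping notes on the same topic plus chatter.
const day: InteractionFragment[] = [
  { topic: "build", content: "fixed webpack config", importance: 0.9 },
  { topic: "build", content: "asked about webpack", importance: 0.4 },
  { topic: "chitchat", content: "weather talk", importance: 0.1 },
];

console.log(consolidate(day)); // only the high-value "build" fragment survives
```

The design intuition matches the article's description: rather than retaining raw logs, the overnight pass distills them into a compact memory that is cheaper to carry forward and more useful for personalization.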
This disastrous "wild launch" has sparked in-depth discussion of AI safety governance. For a company that brands itself as "safe and cautious," such an "intern-level" mistake reflects a potential fragility in the underlying engineering management of top tech companies when they chase R&D speed.
The leak of these 500,000 lines of code is not only a loss of technical assets but also a collective warning to the AI industry: as AI agents are granted ever higher permissions over code and systems, even the smallest engineering oversight can snowball into a globally disruptive event.

