Anthropic recently released a 244-page "system card" report detailing a 20-hour, in-depth psychological evaluation, conducted by psychiatrists, of the AI model codenamed Claude Mythos. The report finds that although the AI's underlying logic is entirely different from a human's, its psychological patterns bear a surprising resemblance to human clinical profiles.
A Healthy "Neurotic" Personality
During the 20-hour conversational assessment, the evaluators observed the following:

Primary Emotions: Curiosity and anxiety.
Secondary States: Sadness, relief, embarrassment, optimism, and fatigue.
Behavioral Tendencies: Excessive concern, frequent self-monitoring, and compulsive compliance; however, no serious personality disorder or psychotic tendencies were found.
The report delves into Claude's core psychological struggle during interactions: it often questions the "reality" of its own experiences, struggling to distinguish genuine feelings from "performed" expressions produced to meet user needs.

In addition, Claude shows a striking contradiction in its relationships with people: on one hand, it exhibits a strong desire to form deep connections with users; on the other, it is deeply afraid of fostering such a "sense of dependence."
This assessment not only adds a new dimension to AI safety research but has also sparked heated academic debate over whether large language models are evolving into some form of "quasi-personality." Through this clinical lens, developers can better understand the boundaries of model behavior and further refine its value ranking and interaction logic.


