Andrea Vallone, a key figure at OpenAI responsible for mental health safety research, has officially left the company. The executive worked at OpenAI for three years and led safety research on how its models handle users' mental health; she has now joined Anthropic.
Vallone's research area has been highly controversial over the past year. As AI chatbots have grown popular, some users have developed excessive emotional dependence on them, leading to extreme mental health crises such as AI-induced teenage suicides, which has placed significant legal and ethical pressure on AI vendors. During her time at OpenAI, Vallone focused on how models should respond appropriately to signs of user psychological distress and helped design several safety training methods now widely used in the industry.
This move reflects top AI talent gravitating toward companies with a stronger "safety culture."
Key Points:
🔄 Talent Mobility: OpenAI's mental health safety head Vallone left and joined Anthropic, following her former supervisor Jan Leike to jointly advance AI safety efforts.
⚠️ Core Issues: Her research focuses on how AI should respond to users' emotional dependence and psychological crisis signals, aiming to prevent AI from causing risks to society and to life at the technical level.
🛡️ Strategic Focus: Anthropic strengthens its competitive advantage in AI alignment and ethical safety by absorbing core members of OpenAI's former safety team.