Faced with increasingly severe AI security challenges, OpenAI has officially posted a job opening for a "Head of Preparedness" with an annual salary of up to $555,000 (plus equity). The role will be directly responsible for implementing the company's Preparedness Framework and for monitoring and addressing new frontier AI capabilities that could cause serious harm.
Core Mission: From Cybersecurity to Mental Health
OpenAI CEO Sam Altman acknowledged on the social media platform X that AI models have begun to pose "real challenges." He noted that the models now perform exceptionally well at computer security, even beginning to discover critical vulnerabilities; these capabilities can strengthen defenses but may also be exploited by attackers. Altman also specifically emphasized the models' potential impact on mental health.

The responsibilities of this position are extremely broad, including:
Cybersecurity Defense: Ensure advanced capabilities serve defenders rather than attackers, strengthening overall system security.
Biological Capability Oversight: Monitor the risks of AI applications in the biological domain.
System Self-Improvement: Ensure self-improving systems remain safe and under control.
Mental Health Review: Respond to recent legal accusations against ChatGPT, including claims that the model may increase users' social isolation, reinforce delusions, or even induce extreme behaviors.
Organizational Turmoil and Framework Adjustments
OpenAI established a preparedness team as early as 2023 to study "catastrophic risks" ranging from phishing to nuclear threats. The department, however, has recently seen frequent personnel changes: its original head, Aleksander Madry, was reassigned to an AI reasoning role less than a year in, and several safety executives have also departed or been reassigned.
Notably, OpenAI recently updated its Preparedness Framework. The revised framework states that if a competitor releases a "high-risk" model without comparable safeguards, OpenAI may "adjust" its own safety requirements to remain competitive.
Dealing with Real-World Challenges
In response to the mental health controversies, OpenAI says it is working to improve ChatGPT's ability to recognize signs of emotional distress in users and to connect affected users with real-world support resources.