Recently, OpenAI officially posted a job opening for a new Head of Preparedness. According to AIbase, the starting salary for this position is as high as $555,000 (approximately RMB 4 million), along with generous stock incentives.
This hiring is not a routine personnel change but stems from OpenAI's growing concerns about "frontier risks." CEO Sam Altman openly admitted on the social media platform X that current AI models are already bringing real challenges. For example, some models have strong computer security capabilities and can even autonomously discover critical system vulnerabilities; at the same time, the potential impact of AI on mental health has also drawn the company's serious attention.
According to AIbase, the new head will take on significant responsibilities, implementing the company's "Preparedness Framework." The core mission of this framework is to track and address extreme risks that could cause serious harm, covering areas such as cybersecurity, biosecurity, and even uncontrolled system "self-improvement." In short, this manager needs to ensure that defenders stay ahead of attackers as AI unleashes powerful capabilities.
Notably, this position was previously held by Aleksander Madry, who was recently transferred to the AI reasoning department, and several other internal security managers have also left or changed positions. Under the dual pressure of internal and external factors, OpenAI urgently needs a strong leader to fill this key vacancy and provide practical solutions to recent socially controversial issues such as AI-induced mental health crises.
Key Points:
💰 High Salary Recruitment: OpenAI offers a $555,000 annual salary plus equity for this position, which is responsible for addressing "catastrophic risks" that AI may cause.
⚠️ Risk Upgrade: Sam Altman warns that AI has already shown real threats in terms of mental health impact and automated network attacks.
🛡️ Framework Implementation: The new manager will implement the Preparedness Framework, focusing on monitoring cybersecurity, biosecurity, and potential dangers arising from model self-improvement.