OpenAI has opened recruitment for a new “Head of Preparedness,” a senior role focused on identifying and mitigating emerging risks from advanced artificial intelligence systems, as concerns around AI safety, misuse, and societal impact intensify.
The position, which offers an annual base salary of about US$555,000 plus equity, will sit within OpenAI’s Safety Systems team. According to the company, the successful candidate will be responsible for building and coordinating large-scale safety evaluations, threat models, and mitigation strategies across its increasingly powerful AI models.
OpenAI chief executive Sam Altman described the role as “stressful” but critical, noting that the pace of AI capability growth now presents real-world challenges rather than purely theoretical risks. He said the successful candidate would be jumping “into the deep end” as models become more capable in areas such as cybersecurity, mental health influence, and system-level autonomy.

The hiring comes as AI tools, particularly chatbots like ChatGPT, are increasingly used not only for productivity tasks but also for emotional support, research, and decision-making. Mental health experts and regulators have raised concerns about AI systems reinforcing delusions, spreading misinformation, or being exploited by malicious actors. OpenAI has acknowledged these risks and said it is working with external experts to improve safeguards, especially for vulnerable users.
Internally, the company has faced scrutiny over whether safety has kept pace with commercial expansion. Several former OpenAI staff members resigned in 2024 citing concerns that product releases were being prioritized over long-term safety culture. The company has maintained that preparedness and risk mitigation remain core to its mission as it moves closer to developing more general-purpose AI systems.

The Head of Preparedness role will oversee capability evaluations and safety pipelines designed to scale alongside model improvements. OpenAI says the job reflects the growing need for structured governance as AI systems begin to influence critical areas such as infrastructure security, economic activity, and human behavior.
With governments worldwide exploring tighter AI regulation and companies racing to deploy more powerful systems, OpenAI’s recruitment signals that safety leadership is becoming a high-stakes executive function rather than a purely academic exercise.