OpenAI says it’s hiring a head safety executive to mitigate AI risks


OpenAI is seeking a new “head of preparedness” to guide the company’s safety strategy amid mounting concerns over how artificial intelligence tools could be misused.

According to the job posting, the new hire will be paid $555,000 to lead the company’s safety systems team, which OpenAI says is focused on ensuring AI models are “responsibly developed and deployed.” The head of preparedness will also be tasked with tracking risks and developing mitigation strategies for what OpenAI calls “frontier capabilities that create new risks of severe harm.”

“This will be a stressful job and you’ll jump into the deep end pretty much immediately,” CEO Sam Altman wrote in an X post describing the position over the weekend.

He added, “This is a critical role at an important time; models are improving quickly and are now capable of many great things, but they are also starting to present some real challenges.”

Reached for comment, an OpenAI spokesperson referred CBS News to Altman’s post on X.

The company’s investment in safety efforts comes as scrutiny intensifies over artificial intelligence’s influence on mental health, following multiple allegations that conversations with OpenAI’s chatbot, ChatGPT, preceded a number of suicides.

In one case earlier this year covered by CBS News, the parents of a 16-year-old sued the company, alleging that ChatGPT encouraged their son to plan his own suicide. That prompted OpenAI to announce new safety protocols for users under 18. 

ChatGPT also allegedly fueled what a lawsuit filed earlier this month described as the “paranoid delusions” of a 56-year-old man who murdered his mother and then killed himself. At the time, OpenAI said it was working on improving its technology to help ChatGPT recognize and respond to signs of mental or emotional distress, de-escalate conversations and guide people toward real-world support.

Beyond mental health concerns, worries have also increased over how artificial intelligence could be used to carry out cybersecurity attacks. Samantha Vinograd, a CBS News contributor and former top Homeland Security official in the Obama administration, addressed the issue on CBS News’ “Face the Nation with Margaret Brennan” on Sunday.

“AI doesn’t just level the playing field for certain actors,” she said. “It actually brings new players onto the pitch, because individuals, non-state actors, have access to relatively low-cost technology that makes different kinds of threats more credible and more effective.”

In his X post, Altman acknowledged the growing safety hazards AI poses, writing that while models and their capabilities have advanced quickly, challenges have also begun to arise.

“The potential impact of models on mental health was something we saw a preview of in 2025; we are just now seeing models get so good at computer security they are beginning to find critical vulnerabilities,” he wrote.

Now, he continued, “We are entering a world where we need more nuanced understanding and measurement of how those capabilities could be abused, and how we can limit those downsides … in a way that lets us all enjoy the tremendous benefits.”

According to the job posting, a qualified applicant would have “deep technical expertise in machine learning, AI safety, evaluations, security or adjacent risk domains” and experience with “designing or executing high-rigor evaluations for complex technical systems,” among other qualifications.

OpenAI first announced the creation of a preparedness team in 2023, according to TechCrunch.


