
Altman quits OpenAI’s safety group

OpenAI CEO Sam Altman has quit the internal Safety and Security Committee

Martin Crowley
September 17, 2024

OpenAI CEO—Sam Altman—has quit the internal Safety and Security Committee (SSC), which was originally formed in May to oversee key safety decisions related to the development and release of OpenAI’s projects.

OpenAI wants the committee to become a more independent oversight board that still has the authority to delay launches if there are safety concerns.

The oversight board will now be chaired by Carnegie Mellon professor Zico Kolter. Its other members include Quora CEO Adam D'Angelo, retired US Army General Paul Nakasone, and former Sony EVP Nicole Seligman, all of whom sit on OpenAI's board of directors, leaving many questioning just how 'independent' the new safety oversight board will be.

The newly structured group will continue to get “regular reports on technical assessments for current and future models” and has already reviewed OpenAI’s latest AI model—o1—and approved it for release, despite it being classified as ‘medium risk’ for developing bioweapons.

This move comes after five US senators wrote an open letter to Altman citing concerns over his approach to safety, after nearly half of OpenAI's employees dedicated to mitigating long-term AI risks quit the company this year, and after multiple ex-OpenAI staff published reports accusing Altman of trying to block AI regulation to further OpenAI's corporate objectives. Given that OpenAI has allocated a budget of $800,000 for federal lobbying this year (up from $260,000 the previous year), those accusations do raise doubts about Altman's intentions.