
Ilya Sutskever reveals his next move

The OpenAI co-founder and former chief scientist has revealed his next move after abruptly leaving the company last month

Martin Crowley
June 20, 2024

Ilya Sutskever (OpenAI co-founder and former chief scientist) sensationally quit the company last month, alongside teammate Jan Leike, over a dispute with management about continually prioritizing “shiny products” over safety.

While Leike quickly moved on to lead alignment research at OpenAI's rival, Anthropic, Sutskever’s next move remained a mystery.

Until now.

Sutskever, along with Daniel Gross (a former AI lead at Apple) and Daniel Levy (a former OpenAI engineer), is starting a new AI company called Safe Superintelligence Inc. (SSI), which is laser-focused on safety.

What is SSI?

Sutskever describes SSI as a start-up with “one goal and one product”, which is to “approach safety and capabilities in tandem” and build a powerful, safe superintelligent system while “prioritizing safety over commercial pressures”.

“We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead.”

This resolute, singular focus means they can avoid distraction from management overhead and product cycles, ensuring “safety, security, and progress are insulated from short-term commercial pressures” so they can “scale in peace.”

Sutskever has always been committed to prioritizing safety and research: even before his shock exit from OpenAI, he and Leike wrote a blog post that issued a stark warning:

“AI with intelligence superior to humans could arrive within the decade—and when it does, it won’t necessarily be benevolent, necessitating research into ways to control and restrict it.”