Investment

Ex-OpenAI founder raises $1B

OpenAI co-founder and former chief scientist Ilya Sutskever has raised $1B in funding for his AI safety start-up

Martin Crowley
September 5, 2024

Ilya Sutskever, co-founder and former chief scientist of OpenAI, has raised $1B in funding for his start-up, Safe Superintelligence (SSI), which aims to develop artificial general intelligence (AGI) models that are grounded in safety.

Investors include NFDG (an investment partnership run by ex-GitHub CEO Nat Friedman and SSI’s co-founder and CEO, Daniel Gross), DST Global, SV Angel, a16z (which opposed the controversial California AI safety bill, SB 1047), and Sequoia (which has previously funded OpenAI).

Sutskever and Gross plan to spend the $1B on computing power and on hiring top engineers and researchers in Palo Alto, California, and Tel Aviv, Israel, expanding the current team of 10.

The endgame for SSI is to build safe AI systems that match (or exceed) the capabilities of the human brain, and to become a safety checkpoint for new AI models, serving as an “Underwriting Lab” that tests AI models before they hit the market.

Although SSI doesn’t yet have a product, and has stated that one is likely a long way from launch, its unique focus on safety has piqued investor interest, even if profitability may be some way off. No doubt Sutskever’s credentials and background (he sensationally quit OpenAI a few months ago over concerns that safety wasn’t being prioritized) contributed to the successful funding round.