Ilya Sutskever, former chief scientist at OpenAI, has unveiled his next venture after stepping down in May. Together with former OpenAI colleague Daniel Levy and ex-Apple AI lead Daniel Gross, he has founded Safe Superintelligence Inc. (SSI). The startup's mission is to develop safe superintelligence, which the founders call “the most important technical problem of our time.”
On SSI’s minimalist website, the founders emphasize that they will tackle safety and capabilities together, advancing AI capabilities as quickly as possible while ensuring safety always stays ahead.
So, what is superintelligence? It refers to a hypothetical system whose intelligence vastly exceeds that of the smartest humans.
This initiative builds on Sutskever's previous work at OpenAI, particularly his role on the superalignment team, which focused on controlling powerful AI systems. Following Sutskever’s departure, that team was disbanded, a decision criticized by its former co-lead Jan Leike.
Sutskever was also a key figure in the controversial ousting of OpenAI CEO Sam Altman in November 2023, later expressing regret over his involvement.
Safe Superintelligence Inc. says it will pursue this mission with a singular focus and a single goal: building a safe superintelligent AI.