After Losing OpenAI's Boardroom Struggle, Co-Founder Launches New Venture Focused on Safe Superintelligence

On Wednesday, Ilya Sutskever, co-founder and former chief scientist of OpenAI, announced his new venture, Safe Superintelligence (SSI). Founded together with former OpenAI researcher Daniel Levy and AI investor Daniel Gross, SSI aims to create artificial intelligence systems that are both safe and highly capable.

SSI is dedicated to advancing safety and capability in tandem, pushing its AI systems forward as quickly as possible while keeping safety at the forefront. This singular focus, the company says, lets it avoid the distractions faced by AI teams at organizations like OpenAI, Google, and Microsoft. SSI's goal is to build safe superintelligence, meaning AI that exceeds human cognitive abilities, without letting short-term market pressures compromise safety and assurance.

Sutskever highlighted that SSI functions as a pure research entity, choosing not to launch any commercial products or services for now. He clarified that the company's concept of safety is comparable to nuclear safety, setting it apart from conventional "trust and safety" protocols. While details about SSI’s plans and funding remain under wraps, Sutskever and his team are committed to building a genuinely secure superintelligent system aimed at benefiting humanity.

Sutskever previously played a crucial role in OpenAI's research leadership, particularly in its work on generative AI. However, tensions over the company's future direction led him and other board members to push for CEO Sam Altman's removal last November. Although Altman was reinstated, Sutskever stepped down from the board and left OpenAI in May. When OpenAI was founded in 2015, its mission closely resembled SSI's: to create superintelligent AI for the benefit of humanity. Despite Altman's assurances that this principle remains intact, OpenAI has since transformed into a rapidly growing commercial enterprise.

After departing OpenAI, Sutskever expressed enthusiasm for the future and promised to share more details soon. He remains optimistic that OpenAI will achieve safe and beneficial artificial general intelligence (AGI) under Altman's leadership. Nonetheless, the founding of SSI underscores Sutskever's deep commitment to AI safety and his determination to advance the technology free from commercial constraints.
