Ilya Sutskever, Ex-Chief Scientist at OpenAI, Unveils Exciting New AI Venture

Ilya Sutskever, a co-founder of OpenAI, has announced the launch of his new company, Safe Superintelligence Inc. (SSI), just a month after departing from OpenAI. Sutskever, who previously served as OpenAI’s chief scientist, established SSI alongside former Y Combinator partner Daniel Gross and ex-OpenAI engineer Daniel Levy.

At OpenAI, Sutskever played a central role in the company’s efforts to make future “superintelligent” AI systems safe, co-leading the Superalignment team alongside Jan Leike. Both Sutskever and Leike left the organization in May after disagreements with OpenAI’s leadership over its approach to AI safety. Leike now leads a team at rival Anthropic.

Sutskever has long emphasized the thornier aspects of AI safety. In a 2023 blog post co-authored with Leike, he predicted that AI systems surpassing human intelligence could arrive within the decade, warned that such systems would not necessarily be benevolent, and argued for urgent research into ways to control and restrict them.

“Superintelligence is within reach. Building safe superintelligence (SSI) is the paramount technical challenge of our time. We have launched the world’s first dedicated SSI laboratory, with a singular focus: to create a safe superintelligence,” the SSI team declared in a launch post on X.

Sutskever expressed his commitment to this mission, stating, “SSI represents our mission, our name, and our complete product roadmap. Every aspect of our team, investors, and business model aligns with achieving SSI. We view safety and capability as intertwined technical challenges that require innovative engineering and scientific advancements.”

“Our plan is to push capability advancements rapidly while ensuring that safety consistently stays ahead. This approach allows us to innovate without the distractions of management layers or lengthy product cycles. Our business model protects our progress and safety from short-term commercial pressures.”

Sutskever elaborated on SSI in an interview with Bloomberg, though he declined to discuss its funding or valuation. Unlike OpenAI, which started as a nonprofit and later restructured around a for-profit arm as its computing costs mounted, SSI has been set up as a for-profit entity from the outset. Given the intense interest in AI and the credentials of its founding team, the company should have little trouble attracting capital. “Among the challenges ahead,” Gross told Bloomberg, “raising capital won’t be one of them.”

SSI currently operates offices in Palo Alto and Tel Aviv and is actively seeking talented technical staff to join its ranks.
