Ilya Sutskever, a co-founder and former chief scientist at OpenAI, has launched a new AI venture called Safe Superintelligence Inc. (SSI). The startup's primary objective is to develop an AI system that is both powerful and safe.
SSI's approach treats safety and capability as a single problem, pursuing rapid advances in AI while keeping safety ahead of them. The company aims to avoid the external pressures often faced by AI teams at larger corporations like OpenAI, Google, and Microsoft. By maintaining a singular focus, SSI believes it can streamline its efforts and avoid distraction from management overhead or product cycles.
"I am starting a new company: https://t.co/BG3K3SI3A1" — Ilya Sutskever (@ilyasut) June 19, 2024
The company's business model prioritizes safety, security, and progress, shielding it from short-term commercial demands. This approach, SSI says, lets it scale its operations without compromise. Alongside Sutskever, SSI's co-founders are Daniel Gross, formerly of Apple, and Daniel Levy, who previously worked at OpenAI.
Sutskever's departure from OpenAI last year followed his involvement in the board's attempt to remove CEO Sam Altman, and he hinted at a new project shortly after leaving. Other key figures, including AI researcher Jan Leike and policy researcher Gretchen Krueger, also left OpenAI, citing concerns that safety was being sidelined in favor of product development.
While OpenAI continues to form partnerships with companies like Apple and Microsoft, SSI’s focus remains firmly on achieving safe superintelligence. In a Bloomberg interview, Sutskever stated that this will be SSI’s sole focus until it is achieved.
