Ilya Sutskever Launches New AI Venture

Ilya Sutskever, co-founder and former chief scientist of OpenAI, has unveiled his latest venture, Safe Superintelligence Inc. (SSI). The new company, dedicated to building safe superintelligence, was founded alongside Daniel Gross, who previously led AI efforts at Apple, and Daniel Levy, a former OpenAI researcher.
Safe Superintelligence Inc: A New Chapter
SSI, based in Palo Alto and Tel Aviv, is recruiting top technical talent to pursue safe superintelligence, which Sutskever and his co-founders describe as the company's sole focus. “We are forming a team of the world’s best engineers and researchers dedicated solely to safe superintelligence,” stated Sutskever.
The company aims to push AI capabilities forward while ensuring safety remains the priority. Sutskever’s announcement ends months of speculation about his plans following his departure from OpenAI in May. That departure came shortly after OpenAI announced GPT-4o, a new AI model capable of realistic voice conversations and of working with text and images.
Addressing AI Risks
SSI intends to address growing concerns about AI misuse. Recent developments, including a UNESCO report warning that generative AI could fuel Holocaust denial and the continued spread of deepfakes, underscore the need for safe AI development. Sutskever’s new company aims to tackle these issues by promoting safer AI technologies.
Sutskever’s Departure from OpenAI
Sutskever’s departure from OpenAI, announced via a post on X, ended a nearly decade-long association with the company. His resignation followed internal disagreements over AI safety and the dramatic firing and rehiring of CEO Sam Altman in late 2023, a board decision Sutskever initially backed before reversing course. His role as chief scientist has been filled by Jakub Pachocki, who led the development of GPT-4 and OpenAI Five.