OpenAI Co-Founder Ilya Sutskever’s New AI Firm Raises $1 Billion
Ilya Sutskever, a pivotal figure in the world of AI, has raised a staggering $1 billion to fund his new AI company, Safe Superintelligence (SSI). The venture is just three months old and already commands a valuation of around $5 billion, underscoring investors' confidence in both Sutskever's expertise and the company's potential. The new endeavor is backed by a lineup of top-tier venture capital firms, including Andreessen Horowitz, Sequoia Capital, DST Global, and SV Angel.
The Vision Behind SSI
SSI’s primary focus is on developing what Sutskever calls “safe superintelligence”—AI systems that not only surpass human intelligence but also remain aligned with human values and interests. This concept of AI safety is at the core of the company’s mission, especially given growing global concerns about the potential dangers posed by advanced AI. Sutskever’s vision for SSI is to create AI that accelerates innovation responsibly, guarding against the risks that rogue AI systems could pose.
In contrast to more general AI models being developed by competitors like OpenAI, Anthropic, and Elon Musk’s xAI, SSI’s approach is singularly focused on AI safety. This has led Sutskever to position the company as one aiming for a “straight shot to safe superintelligence,” with plans to spend the next few years on research and development before launching any products.
Strategic Funding and Investors
The $1 billion raised will be allocated towards securing advanced computing power and recruiting top-tier talent in the AI research community. With offices in Palo Alto, California, and Tel Aviv, Israel, SSI is building a highly specialized and trusted team of engineers and scientists. As of now, the team consists of just 10 employees, but the company has ambitious plans to scale up as it continues to develop its technology.
The fact that SSI was able to secure such a large sum so early in its development speaks to the high level of trust that investors have placed in Sutskever and his co-founders. Notably, Daniel Gross, a former leader of Apple’s AI and search efforts, and Daniel Levy, an ex-OpenAI researcher, have joined Sutskever in co-founding the company. Both figures bring extensive experience in the AI field, adding further credibility to SSI’s mission.
Sutskever’s Departure from OpenAI
Sutskever’s departure from OpenAI earlier this year marked a significant moment in the AI industry. As OpenAI’s chief scientist, Sutskever played a leading role in the development of foundational AI models like GPT-3 and GPT-4. However, internal disagreements regarding the company’s direction, particularly its emphasis on product development over long-term AI safety, prompted him to leave.
This departure followed a turbulent period at OpenAI that saw Sutskever at the center of an attempted ousting of OpenAI’s CEO, Sam Altman. The attempt ultimately failed and Altman was reinstated, but the episode left Sutskever with a diminished role within the company. Soon after, he left OpenAI to launch SSI, stating that the new venture was motivated by the desire to pursue AI safety without the distractions of product cycles or short-term commercial pressures.
Challenges and Competitors
SSI is entering a highly competitive field, with established players like OpenAI, Anthropic, and xAI all vying for dominance in the development of next-generation AI models. While these companies focus on broad applications of AI, SSI’s unique selling point lies in its laser focus on “safe superintelligence.” This focus resonates with growing concerns in the tech world about the existential risks AI could pose if left unchecked.
Sutskever’s new company also faces a challenging regulatory environment, especially with the introduction of new legislation such as California’s AI safety bill (SB-1047). While companies like OpenAI and Google oppose such regulation, SSI and its rivals Anthropic and xAI have expressed support for measures that ensure AI systems remain safe and aligned with human values.
A New Mountain to Climb
Sutskever has described SSI as a new venture that will “climb a different mountain” from his previous work. He has expressed confidence that this approach will allow the company to develop groundbreaking technologies in a more deliberate and focused manner. Unlike the scaling-focused strategies of other AI companies, SSI intends to pursue innovation through a more thoughtful and safety-centric approach. As Sutskever explained, the goal is not just to scale AI faster, but to rethink the very foundations of how AI can be developed and aligned with human values.
In the meantime, SSI remains focused on building its core team and acquiring the resources necessary to execute its ambitious mission. The company plans to partner with major cloud providers and chip manufacturers to meet its massive computing needs, although no formal partnerships have been announced yet.