Safe Superintelligence Inc. (SSI), a startup co-founded by Ilya Sutskever (former OpenAI chief scientist), Daniel Levy, and Daniel Gross, is aiming to develop “superintelligence.” Despite having no products, customers, or revenue, SSI is in talks to raise funding that would value the company at $20 billion, according to Reuters.
While this is significantly less than OpenAI’s potential $300+ billion valuation, the question remains: How did these OpenAI alums capture the attention of investors?
The answer lies largely in Sutskever’s track record. Widely regarded as an architect of modern AI, he co-founded OpenAI and guided key research behind ChatGPT. He also played a vital role in developing AlexNet, the deep convolutional neural network that won the 2012 ImageNet challenge and sparked a wave of interest in AI.
Investors are betting that if anyone can transform AI theory into a new era of “superintelligence”—and do it safely—it’s the SSI team. This belief is rooted in a core tenet of deep learning that Sutskever articulated at NeurIPS 2024 (Conference on Neural Information Processing Systems). He explained the “deep learning dogma,” stating: “If you believe…that artificial neurons and biological neurons are similar, or at least not too different…then by ‘we’ I mean human beings, even just one human in the entire world.” He continued, clarifying the implications: “If there is one human in the entire world who can do some task in a fraction of a second, then a 10-layer neural network can do it too. It follows that you just take their connections and embed them inside your artificial neural network.”

In essence, Sutskever argues that the rapid, intuitive cognitive tasks humans perform provide a blueprint for what neural networks can achieve, given sufficient depth and the right connections. This foundational belief—that AI can mirror human capabilities—underpins SSI’s chief goal: ensuring that when such powerful AI is achieved, it remains aligned with human values.
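Where does the “10-layer” figure come from? It is usually justified by a back-of-the-envelope argument about how slowly biological neurons fire; the order-of-magnitude numbers below (roughly 10 milliseconds per neuron firing, roughly 100 milliseconds for a fast perceptual task) are standard illustrative estimates rather than figures from the talk:

\[
\frac{t_{\text{task}}}{t_{\text{neuron}}} \;\approx\; \frac{100\ \text{ms}}{10\ \text{ms}} \;=\; 10 \ \text{sequential firings,}
\]

so anything a brain does in a fraction of a second can involve only about ten serial steps, which is roughly the amount of sequential computation a 10-layer network can express.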
SSI is deliberately sidestepping today’s commercial generative AI frenzy, in which profits remain rare, to focus on the long game: building a system that surpasses human intelligence while ensuring its alignment.
Sutskever’s work at OpenAI, including founding the team behind next-generation reasoning models, marked a shift from “scaling” (larger models and more data) to more sophisticated approaches emphasizing reasoning. He recognized that even massive language models face diminishing returns without a robust problem-solving framework. Sutskever himself hinted at this future direction at NeurIPS 2024: “I’ll take a bit of liberty to speculate about what comes next…You may have heard the phrase ‘agents.’ It’s common, and I’m sure that eventually something will happen here because people feel that ‘agents’ is the future…These are all examples of people trying to figure out what to do after pre-training.”
Sutskever also notes that for all their capabilities, large language models are “strangely unreliable at times.” They get “confused even while displaying superhuman performance on certain evaluations.” He continued: “It’s unclear how to reconcile this.” Still, he expects that such systems are eventually “going to become agentic in real ways.” By contrast, he says, the current crop does not consist of “agents in any strong sense (or maybe just very slightly).”
Safety is paramount at SSI. Unlike “trust and safety” teams that filter harmful content, SSI focuses on existential safety—preventing scenarios in which a superintelligent system could jeopardize humanity. Alignment is not a feature but the central pillar of its research.
Sutskever also uses biology to illustrate that AI can make radical leaps: “hominids have a different slope on their brain-to-body scaling exponent…It’s possible to discover different scaling behaviors. What we’ve been doing so far in AI is essentially the first thing we figured out how to scale.”
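“Scaling exponent” here refers to a power law. As a reading aid (this formulation is not from the talk), brain-to-body scaling is conventionally written as

\[
M_{\text{brain}} \;\approx\; c \cdot M_{\text{body}}^{\,\alpha},
\qquad\text{i.e.}\qquad
\log M_{\text{brain}} \;\approx\; \log c + \alpha \log M_{\text{body}},
\]

so each lineage traces a straight line on a log-log plot, and the exponent \(\alpha\) is that line’s slope. Sutskever’s point is that hominids sit on a line with a different slope, and that AI may similarly hold scaling regimes beyond the first one (bigger models, more data) that the field happened to find.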
This contrasts sharply with other AI labs. OpenAI, with ChatGPT, continues to launch model variants and generate significant revenue. Anthropic, while safety-focused, has introduced Claude to enterprise clients. Google is steadily releasing new genAI products. SSI, however, remains research-focused: no chatbots, no enterprise deals, no near-term revenue. By operating like a pure research institute, SSI aims to avoid the pitfalls of an AI arms race where speed might compromise safety.
Sutskever encapsulates SSI’s mission and the inherent challenge: “I want to talk briefly about superintelligence, because that’s obviously where this field is headed…[future AI systems] will actually be able to reason.” And as that happens, new risks emerge, potentially without safeguards to match. “The more a system reasons, the more unpredictable it becomes,” he said. “The more it frees itself from rote patterns, the less predictable it is.”