Recursive Superintelligence launches with $650 million for self-improving AI
May 16, 2026
Co-founded by Richard Socher, Recursive Superintelligence came out of stealth on May 13, 2026 with $650 million at a $4.65 billion valuation. Its stated goal: AI models that improve themselves.
What this is about
On May 13, 2026, Recursive Superintelligence, co-founded by Richard Socher and Tim Rocktäschel, came out of stealth. Initial funding totals $650 million at a $4.65 billion post-money valuation. According to Tech.eu and SiliconANGLE, the round is led by GV (formerly Google Ventures) and Greycroft, with participation from AMD Ventures and NVIDIA. The company has offices in San Francisco and London and employs more than 25 researchers and engineers. A public product launch is targeted for mid-2026.
What Recursive Superintelligence actually plans
The company is pursuing a clear but controversial approach: recursive self-improvement. Rather than having humans hand-tune each new model, the company wants AI systems to optimize their own training algorithms, architectures, and data pipelines. Unite.AI quotes the founders as aiming to build "open-ended algorithms" that drive endless innovation. In practice, the system would hypothesize better models, plan experiments, launch training runs, and feed the results back into the next improvement cycle.
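The loop described above can be sketched as a toy hill-climbing example. Everything here is illustrative and assumed, not Recursive's actual system: the mock `evaluate` stands in for a full training-and-benchmark run, and `propose` stands in for the "hypothesize a better model" step.

```python
import random

def evaluate(config):
    # Mock benchmark with a hidden optimum at lr=0.1, depth=8.
    # In a real system this would be a full training run plus evaluation.
    return -((config["lr"] - 0.1) ** 2) - 0.01 * (config["depth"] - 8) ** 2

def propose(config, rng):
    # Hypothesize a variant of the current training recipe.
    candidate = dict(config)
    candidate["lr"] = max(1e-4, config["lr"] + rng.gauss(0, 0.02))
    candidate["depth"] = max(1, config["depth"] + rng.choice([-1, 0, 1]))
    return candidate

def self_improve(config, rounds=200, seed=0):
    # The improvement cycle: propose, run, compare, keep the winner.
    rng = random.Random(seed)
    best, best_score = config, evaluate(config)
    for _ in range(rounds):
        candidate = propose(best, rng)
        score = evaluate(candidate)
        if score > best_score:  # feed only improvements into the next cycle
            best, best_score = candidate, score
    return best, best_score

start = {"lr": 0.5, "depth": 2}
final, final_score = self_improve(start)
print(final, final_score)
```

A real recursive setup would go one level further: the `propose` strategy itself would also be subject to improvement, which is what makes the loop "recursive" rather than ordinary hyperparameter search.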
Who is behind it
Richard Socher was previously chief scientist at Salesforce and CEO of You.com. Tim Rocktäschel is professor of artificial intelligence at University College London and previously a scientist at Google DeepMind. Both have research backgrounds in reinforcement learning, language models, and automated model search.
What $650 million is supposed to cover
the-decoder reports that the funds will mainly flow into compute capacity, research staff, and a first closed beta program. With AMD Ventures and NVIDIA both on the cap table, Recursive sits in an unusual position: having both major chip suppliers on board promises negotiating leverage on GPU and accelerator supply.
Why it matters
Self-improving AI has been a research topic since the 1960s and is considered by many safety researchers a central question in advanced AI debates. If a model can design its successor, development can potentially accelerate sharply. At the same time, the risk grows that the model optimizes for goals misaligned with its operator's. Recursive Superintelligence moves a research area that previously lived mostly inside the safety teams at OpenAI, Anthropic, and Google DeepMind into a standalone company with an explicit mandate.
In plain language
Think of a piano student who rewrites their own practice plan after a lesson. Next session they learn faster. The session after that, they rewrite the method they use to rewrite practice plans. With every loop they get a bit better at getting better. Recursive Superintelligence wants to transfer that trick to AI models.
A practical example
A German pharmaceutical company tests new drug candidates in the lab. Today, an AI model that predicts promising molecules typically needs several months of fine-tuning by a research team before it becomes reliable for a new compound class. A self-improving system could instead draft its own training plans, run them overnight, and present a refined model in the morning. If it works, the model is no longer the bottleneck. If it does not, the company has burned a lot of compute for little progress. Whether Recursive can make the leap from theory to practice will only become clear after the public launch targeted for mid-2026.
Scope and limits
- Very early stage. Aside from press releases, a whitepaper, and CVs, there are no validated results yet. Valuations at this scale rely heavily on trust in the team and investors.
- Safety questions become central. Self-improvement is exactly the scenario safety researchers have warned about for years. Recursive must demonstrate how it implements alignment and controls before the model starts making leaps.
- Competition with large labs. OpenAI, Anthropic, and DeepMind are also working on automated model development. Recursive must show why 25 researchers across two offices can compete with the far larger teams at those labs.
💡 In plain English
Recursive Superintelligence launched on May 13, 2026 with $650 million, co-founded by Richard Socher and Tim Rocktäschel. The goal is to build AI models that improve themselves. Backers include GV, Greycroft, AMD, and NVIDIA. There are no validated results yet.
Key Takeaways
- Recursive Superintelligence exited stealth on May 13, 2026 with $650 million.
- Post-money valuation stands at $4.65 billion.
- GV and Greycroft led the round, with AMD Ventures and NVIDIA participating.
- Co-founder Richard Socher comes from Salesforce and You.com; Tim Rocktäschel comes from Google DeepMind and UCL.
- The goal is recursively self-improving AI built on open-ended algorithms.
- A public launch is targeted for mid-2026; validated results are still pending.
FAQ
What is recursive self-improvement?
An AI system that adjusts its own training methods, model architectures, or data strategies so that the next run yields better results.
Why is it controversial?
Self-improving models could speed up their own progress, which sharpens safety and control questions. Alignment research remains unsolved.
When will there be a product to see?
Recursive Superintelligence has announced a public launch for mid-2026. Before then, only whitepapers and research reports are expected.
Sources & Context
- Recursive Superintelligence emerges from stealth with $650M raise — Tech.eu
- What happens when AI starts building itself? — TechCrunch
- Recursive Superintelligence Raises $650 Million to Pursue Self-Improving AI — Unite.AI
- AI startup Recursive emerges from stealth with $650 million to build self-improving AI — the-decoder
- Recursive Superintelligence raises $650M to build self-improving AI models — SiliconANGLE