The AI startup community has just witnessed another remarkable wealth-creation story. Reflection AI, founded by two former Google DeepMind researchers, raised $2 billion only a year after its founding, at a valuation of $8 billion, nearly 15 times the $545 million it was valued at seven months ago. Initially focused on autonomous coding agents, the company now positions itself as an open-source alternative to closed frontier labs like OpenAI and Anthropic, and as a Western counterpart to the Chinese AI company DeepSeek.
The startup was co-founded in March 2024 by Misha Laskin and Ioannis Antonoglou. Laskin led reward-modeling work for the Gemini project at DeepMind, while Antonoglou co-created AlphaGo, the AI system that stunned the world by defeating the world Go champion in 2016. Their track record building these cutting-edge systems is the clearest evidence of the company's core strength, and they believe top AI talent can build advanced models outside the established tech giants.
With this new round of funding, Reflection AI announced that it has recruited top talent from DeepMind and OpenAI and built an advanced AI training stack, which it promises to make available to everyone. More importantly, the company says it has found a scalable business model aligned with its open-intelligence strategy.
The Reflection AI team currently numbers about 60 people, primarily AI researchers and engineers spanning infrastructure, data, training, and algorithm development, according to CEO Laskin. The company has already secured computing clusters and aims to release a frontier language model trained on tens of trillions of tokens next year.
In a post on X, Reflection AI wrote that it has built something once thought achievable only by the world's top laboratories: an LLM and reinforcement-learning platform capable of training massive mixture-of-experts (MoE) models at frontier scale. When the team applied this approach to the critical domain of autonomous coding, it saw the method's effectiveness firsthand. With that milestone unlocked, it is now applying the same methods to general agentic reasoning.
Mixture-of-experts models are the architecture driving today's cutting-edge large language models: instead of running one monolithic network, a router activates only a few specialized "expert" subnetworks per token, keeping compute costs manageable even at enormous parameter counts. For a long time, only large closed AI labs could train such systems at scale. DeepSeek broke through by training these models at scale in the open, followed by Qwen, Kimi, and other Chinese models.
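To make the routing idea concrete, here is a minimal, illustrative sketch of a mixture-of-experts layer with top-2 routing. All dimensions, expert counts, and names here are invented for illustration; real frontier MoE models (including whatever Reflection AI is training) are vastly larger and differ in many details.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative-only sizes (not from the article): 8 experts, top-2 routing.
d_model, n_experts, top_k = 16, 8, 2

# Each "expert" is a small feed-forward weight matrix; a learned router
# decides which experts process each token.
experts = [rng.standard_normal((d_model, d_model)) * 0.1 for _ in range(n_experts)]
router = rng.standard_normal((d_model, n_experts)) * 0.1

def moe_layer(x):
    """Route each token to its top-k experts and mix their outputs."""
    logits = x @ router                            # (tokens, n_experts)
    top = np.argsort(logits, axis=-1)[:, -top_k:]  # indices of the k best experts
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        chosen = logits[t, top[t]]
        weights = np.exp(chosen - chosen.max())
        weights /= weights.sum()                   # softmax over the k chosen experts
        for w, e in zip(weights, top[t]):
            out[t] += w * (x[t] @ experts[e])      # only k of n experts run per token
    return out

tokens = rng.standard_normal((4, d_model))
print(moe_layer(tokens).shape)  # (4, 16)
```

The key property is in the inner loop: each token touches only `top_k` of the `n_experts` weight matrices, so total parameters can grow with the number of experts while per-token compute stays roughly constant.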
Laskin has said openly that DeepSeek, Qwen, and models like them are a wake-up call: if Reflection does not act, the global standard for intelligence will be set by others, not by the United States.
Laskin added that this puts the US and its allies at a disadvantage, since businesses and sovereign nations are often unwilling to use Chinese models for fear of potential legal consequences. The choice, as he framed it, is to live with a competitive disadvantage or meet the challenge head-on.
The American tech community has broadly welcomed Reflection AI's new mission. White House AI and crypto czar David Sacks posted on X that he was glad to see more American open-source AI models, noting that a significant portion of the global market prefers the cost, customizability, and control that open source provides, and that the US wants to win this category too.
Clem Delangue, co-founder and CEO of Hugging Face, called the news genuinely good for American open-source AI. He added that the challenge now will be demonstrating the ability to share open AI models and datasets quickly, at the pace the leading open-source AI labs currently set.
Reflection AI's definition of openness appears to center on access rather than development, similar to the strategies behind Meta's Llama or Mistral's models. Laskin said Reflection AI will release its model weights, the core parameters that determine how an AI system behaves, for public use, while the datasets and full training pipeline will remain largely proprietary.
Laskin explained that in practice, the model weights are what matter most, since anyone can take them and start fine-tuning, whereas the infrastructure stack is used by only a handful of companies.
This balance also underpins Reflection AI's business model. Laskin said researchers can use the models for free, while revenue will come from large enterprises building products on top of Reflection AI's models and from governments developing sovereign AI systems, meaning AI models developed and controlled by individual countries.
Laskin said that once you enter the enterprise sector, customers naturally want an open model: something they own, can run on their own infrastructure, control the cost of, and customize for different workloads. Because they spend enormous sums on AI, they want to optimize it as much as possible, and that is exactly the market Reflection serves.
Reflection AI has yet to release its first model. According to Laskin, it will be primarily text-based, with multimodal capabilities to follow. The company will use the latest funding round to secure the compute needed to train the new model, with the first release expected early next year.
Investors in Reflection AI's latest round include NVIDIA, Disruptive, DST, 1789, B Capital, Lightspeed, GIC, Eric Yuan, Eric Schmidt, Citigroup, Sequoia, and CRV, among others. The strength of that lineup is itself a signal of the capital markets' confidence in this American open-source AI push. With Chinese AI companies currently leading the development of open-source frontier models, whether Reflection AI can deliver on its promises and truly become the standard-bearer of American open-source AI remains to be seen.