During the 2026 CES exhibition, NVIDIA CEO Jensen Huang offered an authoritative assessment of the open-source AI wave of 2025: open-source large models have reached the technological frontier, but they still trail the closed-source "top three" (Google Gemini, Anthropic Claude, and OpenAI GPT) by roughly six months. This judgment accurately captures the core landscape of the current AI competition: open-source and closed-source models are racing side by side, separated by a gap that is manageable yet difficult to close.

2025: A Year of Open-Source Highlights and Closed-Source Counterattack

In early 2025, China's open-source contenders amazed the world: models such as DeepSeek R1 and Tongyi Qianwen (Qwen) performed exceptionally well on coding, multilingual processing, and reasoning tasks, sparking optimism that open source would become the mainstream.

However, in the second half of the year, closed-source giants made a strong comeback:

- Google's Gemini 3 series continued to set new records on multimodal and reasoning benchmarks;

- Anthropic Claude became developers' first choice, thanks to its excellent code generation and engineering comprehension;

- OpenAI's GPT-5, despite ongoing controversies, remains the top model in terms of API usage and commercial applications.

Although the open-source community remained active, it could not match the systemic advantages of closed-source models in data scale, compute investment, and engineering optimization.

Jensen Huang's 6-Month Rule: The Gap Exists, But It's Not a Chasm

Jensen Huang pointed out that the true value of open-source large models lies in democratizing AI:

- Download volumes surged, allowing every country, company, and developer to participate in innovation;

- They can be deployed freely or at low cost, greatly lowering the barrier to AI application (a minimal deployment sketch follows this list);

- The technology is transparent, making it easy to audit and customize, which is especially valuable in high-compliance settings such as government and finance.
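
To make the low deployment barrier concrete, here is a minimal sketch of running an open-weight model locally with the Hugging Face transformers library. The checkpoint Qwen/Qwen2.5-7B-Instruct is a published Qwen model chosen purely for illustration; any open checkpoint that fits your hardware would do, and `device_map="auto"` assumes the accelerate package is installed.

```python
# Minimal sketch: local inference with an open-weight model.
# Assumes: transformers, torch, and accelerate are installed, and the
# machine has enough memory for the chosen checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-7B-Instruct"  # illustrative open checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",   # let transformers pick fp16/bf16 on GPU
    device_map="auto",    # spread weights across available devices
)

messages = [{"role": "user", "content": "Summarize open vs. closed AI models."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:],
                       skip_special_tokens=True))
```

Because the weights are local, the same script also supports the auditing and customization scenarios mentioned above: nothing leaves the machine unless you send it.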

But he also conceded, "Top closed-source models are still about six months ahead." That window is precisely the product of the giants' investments: thousands of H100/B100 GPUs, training runs over trillions of tokens, and costs running to hundreds of millions of dollars.

"Six Months per Generation": AI Competition Enters a Rapid Iteration Cycle

More importantly, the pace of AI evolution has been compressed into "one generation every six months":

- Closed-source companies release stronger models every six months, solidifying their leadership;

- The open-source community follows closely, catching up quickly through techniques such as distillation, fine-tuning, and MoE architectures (a distillation sketch follows this list);

- The result is a stable gap of about six months, neither widening nor shrinking.
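
As a concrete illustration of the distillation technique named above, here is a hedged sketch of the classic soft-target loss (after Hinton et al., 2015), in which a small student model is trained to match a larger teacher's output distribution. The tensors below are random stand-ins, not the outputs of any particular released model.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """Soft-target knowledge-distillation loss (Hinton et al., 2015).

    The student matches the teacher's temperature-softened distribution;
    scaling by T^2 keeps gradient magnitudes comparable across temperatures.
    """
    t = temperature
    soft_teacher = F.softmax(teacher_logits / t, dim=-1)
    log_student = F.log_softmax(student_logits / t, dim=-1)
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * (t * t)

# Toy usage: a batch of 4 positions over a 100-token vocabulary.
# Random tensors stand in for real teacher/student model outputs.
student_logits = torch.randn(4, 100, requires_grad=True)
teacher_logits = torch.randn(4, 100)
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()
print(f"distillation loss: {loss.item():.4f}")
```

In practice this loss is usually mixed with an ordinary cross-entropy term on ground-truth labels, which is one reason open models can close much of the gap at a fraction of the original training cost.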

For ordinary users and small and medium-sized enterprises, open-source models already cover most scenarios: writing code, handling customer service, analyzing data, and generating content. Closed-source models, meanwhile, concentrate on core commercial scenarios demanding high precision, high reliability, and high concurrency.

AIbase Observation: Open Source Is Not a Replacement, but Coexistence

Jensen Huang's assessment reveals a reality: open source and closed source are not a zero-sum game but the "dual engines" of the AI ecosystem.

- Closed source provides the technical ceiling and commercial benchmark;

- Open source ensures technology accessibility, innovation vitality, and supply chain security.

Especially against the backdrop of rising global geopolitical tensions, possessing high-performance open-source large models has become part of a nation's strategic capability. The rise of Chinese models such as Qwen, DeepSeek, and MiniMax exemplifies this logic.