As Chinese tech companies continue to dominate the global open-source large-model market, US tech giants are trying to regain influence through differentiated competition.

According to media reports, Demis Hassabis, CEO of Google DeepMind, recently hinted on a social platform with a "four diamonds" icon that the new open-source model Gemma 4 is about to be officially released. This comes exactly one year after the release of its predecessor, Gemma 3, in line with Google's iteration rhythm in the large-model field.

Major Spec Upgrade: New 120B Model Pushes the Limits of Local Deployment

Compared to its predecessor, Gemma 4 has made a leap in parameter scale:

  • Four Times the Parameters: This generation is rumored to add a 120B-parameter model, four times the size of the previous generation's largest.

  • MoE Architecture: To balance performance and efficiency, the model is expected to use an MoE (Mixture of Experts) architecture with only 15B activated parameters. This means that even a model of this size could still run locally on consumer-grade graphics cards.

  • Capability Gains: Gemma 4 is predicted to double the context window and to offer deeper logical reasoning and stronger complex-task execution.
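The specs above can be sanity-checked with simple arithmetic. The sketch below estimates the memory needed just to hold the weights at common precisions; the 120B/15B figures are the rumored numbers from this article, not confirmed specs. Note that in an MoE model the 15B activated parameters mainly cut per-token compute; all 120B weights still have to live somewhere, so "runs locally" usually implies aggressive quantization and/or CPU offload.

```python
# Back-of-envelope weight-memory estimate for a rumored 120B MoE model
# with 15B activated parameters (figures are unconfirmed rumors).

def weight_gib(params: float, bytes_per_param: float) -> float:
    """Memory required to hold the weights alone, in GiB."""
    return params * bytes_per_param / 2**30

TOTAL_PARAMS = 120e9   # rumored total parameter count
ACTIVE_PARAMS = 15e9   # rumored parameters activated per token

for label, bpp in [("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
    print(f"{label}: all weights ~= {weight_gib(TOTAL_PARAMS, bpp):.0f} GiB, "
          f"active per token ~= {weight_gib(ACTIVE_PARAMS, bpp):.0f} GiB")
```

Even at 4-bit precision, the full weight set is around 56 GiB, well above a single consumer GPU's VRAM, which is why local deployment of such a model would likely lean on offloading, while the small activated set keeps per-token compute in consumer-card territory.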

Strategic Play: Countering Chinese Rivals in the Open-Source Community

FastTech analysis notes that although US giants have shifted their focus to closed-source business models, Google is releasing open technology on a deliberate schedule to keep Chinese companies from completely dominating the open-source ecosystem:

  • Timing Strategy: Google chose to release the open-source version roughly half a year after launching its flagship closed-source Gemini 3.0 series, preserving commercial revenue from closed models while maintaining influence in the developer community through open-source releases.

  • Local-Deployment Moat: Gemma 4's core positioning remains local deployment. By optimizing the performance of lightweight models, Google aims to compete head-on with Chinese open-source models on on-device experience without touching its core commercial interests.

Industry Observation: Open-Source Competition Enters a Dual Race of Parameters and Efficiency

With the arrival of Gemma 4, the bar for competing in open-source large models has been raised further. Industry insiders generally believe that although open source is not Google's top priority, its deep algorithmic expertise makes it a formidable wildcard. Whether Gemma 4 can surpass today's leading Chinese open-source models at comparable parameter scales will be a focal point for the global AI community in the second half of the year.