Amid the sustained dominance of Chinese tech companies in the global open-source large model market, US tech giants are trying to reassert their influence through differentiated competition.
According to media reports, Demis Hassabis, CEO of Google DeepMind, recently hinted on a social platform, via a "four diamonds" icon, at the upcoming open-source large model Gemma 4.
Major Spec Upgrade: New 120B Model Pushes the Limits of Local Deployment
Compared to its predecessor, the rumored upgrades include:
Four Times the Parameters: This generation is rumored to add a model with 120B parameters, four times the size of the previous generation.
MoE Architecture: To balance performance and efficiency, the model is expected to use a Mixture of Experts (MoE) architecture, with only about 15B parameters activated per token. This means that even at this scale, the model is still expected to run locally on consumer-grade graphics cards.
Capability Evolution: Gemma 4 is predicted to double its context-processing capacity and to offer deeper logical reasoning and stronger complex-task execution.
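The MoE idea above, a large total parameter count with only a small fraction of experts active per token, can be sketched as follows. All shapes, expert counts, and routing details here are illustrative assumptions for exposition, not Gemma 4's actual (unreleased) configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model = 64    # hidden size (illustrative, far smaller than a real model)
n_experts = 8   # total experts in the layer
top_k = 1       # experts actually run per token

# Each expert is a small feed-forward block; only top_k of them run per token.
experts = [
    (rng.standard_normal((d_model, 4 * d_model)) * 0.02,
     rng.standard_normal((4 * d_model, d_model)) * 0.02)
    for _ in range(n_experts)
]
router = rng.standard_normal((d_model, n_experts)) * 0.02

def moe_forward(x):
    """Route a single token x (shape (d_model,)) to its top_k experts."""
    logits = x @ router
    chosen = np.argsort(logits)[-top_k:]                 # indices of selected experts
    w = np.exp(logits[chosen])
    w /= w.sum()                                         # softmax over the chosen experts
    out = np.zeros(d_model)
    for weight, i in zip(w, chosen):
        w1, w2 = experts[i]
        out += weight * (np.maximum(x @ w1, 0) @ w2)     # ReLU feed-forward expert
    return out

y = moe_forward(rng.standard_normal(d_model))
print(y.shape)

# The rumored ratio: 120B total parameters, ~15B active per token.
print(f"active fraction per token: {15 / 120:.1%}")
```

With top_k = 1 of 8 experts, each token touches roughly an eighth of the expert weights, which is why a 120B-parameter MoE with ~15B active parameters can have per-token compute closer to a 15B dense model.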
Strategic Play: Taking On the "Chinese Contingent" in the Open-Source Community
Timing Strategy: Google chose to release the open-source version about half a year after launching its flagship closed-source Gemini 3.0 series, preserving commercial revenue from the closed models while maintaining influence in the developer community through open source.
Localization Moat: Gemma 4's core positioning remains on-device services. By optimizing the performance of lightweight models, Google aims to compete head-on with Chinese open-source models on the edge-side experience without touching its core commercial interests.
Industry Observation: The Open-Source Track Enters a Dual Race on Parameters and Efficiency
With the addition of Gemma 4, the contest between US and Chinese players in the open-source model market looks set to intensify.