Google's Gemini 3 Deep Think received a major upgrade today. The model, which focuses on deep reasoning, not only demonstrates "champion-level" ability in competitive programming but also breaks multiple records in scientific research and hardcore engineering, marking a new dimension in AI reasoning capability.


Key achievements of Gemini 3 Deep Think:

Programming mastery: It achieved an impressive rating of 3455 Elo on the competitive-programming platform Codeforces, placing it among the top human competitors: only seven human players worldwide could defeat it. For comparison, the strongest model a year ago, o3, scored 2727.
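To put that rating gap in perspective, the standard Elo model converts a rating difference into an expected win probability. A minimal sketch (the function name is illustrative; the 400-point scale factor is the usual chess/Codeforces convention):

```python
def elo_win_probability(rating_a: float, rating_b: float) -> float:
    """Expected score of player A against player B under the standard Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

# Illustrative only: the reported ratings (3455 for the model, 2727 for o3)
# imply the model would be expected to win the overwhelming majority of
# head-to-head contests, since the gap is over 700 points.
p = elo_win_probability(3455, 2727)
print(f"{p:.3f}")
```

A gap of this size corresponds to an expected score above 98%, which is why only a handful of top competitors remain ahead of the model.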

Powerful brain for scientific research: The model shows strong logical rigor, having identified subtle flaws in a high-level physics and mathematics paper that even human peer reviewers missed. In addition, it successfully proved several difficult open problems posed by Erdős.

From sketch to physical product: In engineering, it can directly analyze a hand-drawn sketch (for example, of a notebook stand) and render it into a high-fidelity 3D model file, making physical-component modeling roughly ten times faster.

Comprehensive breakthroughs in benchmark tests: It scored 48.4% on Humanity's Last Exam (HLE) and led the ARC-AGI-2 benchmark with an impressive 84.6% accuracy.

Currently, Google has made the new version available to AI Ultra subscribers and, for the first time, is providing API access to selected researchers and companies. This evolution of Gemini is seen as a strong response to competitors' reasoning models.