Recently, two of the biggest names in artificial intelligence, OpenAI and Google DeepMind, announced that their AI models achieved gold-medal-level performance at the 2025 International Mathematical Olympiad (IMO). The achievement not only demonstrates how quickly AI systems are advancing, but also unexpectedly ignited a fierce dispute between the two companies over who gets to claim "leadership."

The IMO is one of the oldest and most challenging mathematics competitions for high school students worldwide, and its results have become an important benchmark for measuring AI reasoning ability. Last year, Google won a silver medal with a "formal" system that required human assistance to translate problems into a machine-readable format. This year, both OpenAI and Google fielded more advanced **"informal" systems**, which read the problems directly in natural language and generate well-reasoned answers without any manual conversion. Both companies report that their models correctly answered five of the six IMO problems, outperforming most of the human contestants as well as Google's system from last year.

Breakthroughs and Controversies in Reasoning Models

In interviews, researchers from OpenAI's and Google's IMO teams said these gold-medal results represent a breakthrough for AI reasoning models in hard-to-verify domains. This matters because reasoning models have traditionally excelled at problems with clear, checkable answers (such as simple math or programming) but struggled on tasks without a single verifiable solution (such as assisting with open-ended research).

However, a heated dispute broke out between the two companies over **who announced the gold-medal results first, and how**. OpenAI announced its model's gold medal early Saturday morning, immediately drawing criticism from Google DeepMind's CEO and researchers. Thang Luong, a senior researcher at Google DeepMind who led its IMO effort, told TechCrunch that Google had collaborated with the IMO's organizers to prepare for the exam and chose to wait for the official results out of respect for the participants, announcing only on Monday morning with scores backed by the IMO president and official grading. "The IMO has its own grading criteria," Luong emphasized. "Any evaluation not based on that standard cannot claim its results reached gold-medal level."

Differing Perspectives, Intensifying Competition

Noam Brown, who worked on OpenAI's IMO model, explained that the IMO had invited OpenAI to enter the formal competition several months earlier, but the company declined because it was focused at the time on developing a more research-oriented natural-language system. Brown said OpenAI was unaware that the IMO was running informal tests with Google. OpenAI says it hired three former IMO medalists familiar with the grading system as third-party evaluators to grade its model's solutions. After learning of the gold-medal result, OpenAI contacted the IMO, which advised the company to wait until after Friday night's awards ceremony before announcing. The IMO has not yet responded to TechCrunch's request for comment.

Google may have been more rigorous about procedure, but the larger backdrop to this dispute is the rapid, closely matched progress among leading AI labs. Top high school students from around the world competed at this year's IMO, yet only a handful posted scores matching those of OpenAI's and Google's models. It also shows that OpenAI, once far ahead of the pack, now faces stiffer competition than ever. With GPT-5 expected in the coming months, preserving its image as the AI field's leader is undoubtedly a key front in the ongoing battle over "vibes."