Swiss mathematician Johannes Schmitt recently announced a striking result on X: for the first time, GPT-5 had independently solved a long-standing open problem in mathematics, without any human intervention or guidance. Schmitt commented that GPT-5's solution showed remarkable creativity: rather than following the field's conventional line of reasoning, it borrowed techniques from other branches of algebraic geometry.

The breakthrough not only bears out Terence Tao's earlier predictions about AI's potential in mathematics, but also pushes the scientific community into a new phase in which it must confront the reality of independent AI contributions. The proof is currently undergoing rigorous peer review.
Beyond the discovery itself, Schmitt's paper reads as an avant-garde experiment in research transparency. In this thoroughly documented write-up, the human-AI collaboration is broken down to an unusual degree: the proof was completed jointly by GPT-5 and Gemini 3 Pro, the narrative text was written by Claude, and the rigorous Lean formalization was assisted by ChatGPT 5.2.
To achieve full traceability, every paragraph of the paper is labeled with its producer and linked to the original prompts and conversation records. While this approach safeguards research integrity, some scholars have criticized it as so time-consuming and procedurally heavy that it risks becoming an "academic bureaucracy" that hinders innovation.
The deeper significance of the experiment lies in how it questions the nature of scientific contribution itself. Schmitt's method is transparent, yet it exposes the dilemma of blurred human-AI boundaries: even if the AI generates the answer independently, the construction of the prompts and the selection among its outputs still carry human intent. The scientific community must now answer a fundamental question: can a contribution be credited solely to AI when no result exists without an initial human intention? Once AI becomes an everyday research tool, labeling at this level of detail may prove unsustainable, but Schmitt has undoubtedly provided a valuable reference for academic publishing standards in the AI era.
