Humans still outperformed AI at the 2025 International Mathematical Olympiad (IMO), even though models from Google and OpenAI achieved gold-level scores for the first time. While neither AI model earned a perfect score, five students did, highlighting the continued edge of top human talent.
Google announced that its Gemini model solved five of six problems, scoring 35 out of 42 points, enough for a gold medal. The solutions were completed within the 4.5-hour time limit, a major improvement over last year, when Google’s AI took days to solve four problems and earned only a silver.
OpenAI’s experimental reasoning model also scored 35 points. Researcher Alexander Wei described the IMO as a “grand challenge in AI” and confirmed that former IMO medalists independently graded the model’s proofs.
The IMO confirmed that AI models were tested under the same conditions as the 641 human contestants from 112 countries. While impressed by the progress, IMO President Gregor Dolinar emphasized that AI still hasn’t reached the level of top human performers. “It’s exciting to see how far AI has come,” he said, noting that the solutions were “clear, precise, and easy to follow.”