Google’s Gemini at the 2025 ICPC World Finals
The International Collegiate Programming Contest (ICPC) draws thousands of the world’s best student programmers each year. In just five intense hours, teams tackle 12 advanced algorithmic problems, earning credit only for fully correct solutions; teams are ranked by problems solved, with ties broken by total elapsed time, so faster solutions mean higher rankings.
How Google Entered the Contest
For the 2025 finals, Google deployed its Gemini 2.5 Deep Think model, connecting it to an ICPC-approved secure online environment. Human teams were even given a 10-minute head start before the AI began.

Record-Breaking Results
Unlike Google’s earlier AI built specifically for the Math Olympiad, Gemini 2.5 competed as a general-purpose model—but with extended “continuous reasoning” that let it think for hours without reset.
Its performance was remarkable:
- Solved 8 of 12 problems in the first 45 minutes
- Finished 10 problems overall, earning a gold medal—a feat achieved by only four of 139 human teams
- Ranked second overall, behind just one university team

The Puzzle Humans Couldn’t Crack
A highlight was Problem C, a complex, multi-dimensional optimization challenge involving the hypothetical storage and draining of “flubber.” Gemini solved it in 30 minutes, combining dynamic programming with a nested ternary search to assign an optimal priority to each virtual reservoir. No human team solved this task.
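Google has not published Gemini’s solution, but nested ternary search is a standard competitive-programming pattern: when a cost function is unimodal in each variable, repeated ternary searches can home in on the optimum. The Python sketch below illustrates that pattern only; the cost model, bounds, and reservoir interpretation are hypothetical stand-ins, not the actual contest problem or Gemini’s code.

```python
# A minimal sketch of nested ternary search, the optimization pattern
# named above. Everything here (cost model, bounds, names) is a
# hypothetical illustration, not the ICPC problem or Gemini's solution.

def ternary_search(f, lo, hi, iters=100):
    """Minimize a unimodal function f over the interval [lo, hi]."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) < f(m2):
            hi = m2  # the minimum lies to the left of m2
        else:
            lo = m1  # the minimum lies to the right of m1
    return (lo + hi) / 2

def nested_minimize(cost, x_lo, x_hi, y_lo, y_hi):
    """Minimize cost(x, y): an outer ternary search over x and, for
    each candidate x, an inner ternary search over y. Valid when the
    cost is unimodal in each coordinate, e.g. jointly convex."""
    def best_over_y(x):
        y = ternary_search(lambda y: cost(x, y), y_lo, y_hi)
        return cost(x, y)

    x = ternary_search(best_over_y, x_lo, x_hi)
    y = ternary_search(lambda y: cost(x, y), y_lo, y_hi)
    return x, y, cost(x, y)

if __name__ == "__main__":
    # Toy convex stand-in for a two-reservoir "drain cost".
    cost = lambda x, y: (x - 1.5) ** 2 + (y - 2.0) ** 2 + 0.5 * x * y
    x, y, c = nested_minimize(cost, 0.0, 10.0, 0.0, 10.0)
    print(f"x={x:.4f}, y={y:.4f}, cost={c:.4f}")
```

Each ternary-search pass discards a third of the interval, so 100 iterations shrink it by a factor of (2/3)^100, well below floating-point precision; in a contest, the iteration count is tuned against the time limit.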
Broader Implications
Gemini’s success wasn’t a one-off. When tested on archived problems from the 2023 and 2024 ICPC competitions, it again delivered gold-level performance.
ICPC Executive Director Bill Poucher remarked, “Gemini’s achievement raises the bar for AI in academic standards and problem-solving.”
Google believes the system’s multi-step reasoning could transform areas requiring intricate logic, including biotechnology and semiconductor design.