Google DeepMind has released a groundbreaking research achievement: AlphaEvolve, an AI coding agent that combines the Gemini large language models with evolutionary algorithms. The system not only discovers and optimizes complex algorithms automatically but has also delivered measurable gains in Google's data centers, chip design, and AI training, even helping the Gemini model optimize itself. It is widely regarded as a landmark breakthrough for the field. AIbase provides an in-depth analysis of this technological milestone, examining its core principles and wide-ranging impact.


Core Technology: The Perfect Fusion of Gemini and Evolutionary Algorithms

The core of AlphaEvolve lies in its unique self-evolution framework, combining the creativity of Google's Gemini series of large language models with the rigor of automated evaluators. Its workflow is as follows:

Code Generation: Gemini Flash (optimized for speed) and Gemini Pro (optimized for depth) propose diverse candidate implementations, ranging from single functions to entire programs.

Automatic Evaluation: Each candidate is compiled, run, and scored by automated evaluators that verify correctness and measure efficiency; high-scoring candidates are selected on the basis of these performance metrics.

Iterative Evolution: The best-performing programs are retained, mutated, or recombined to seed the next round of optimization, much like "survival of the fittest" in biological evolution. Repeating this cycle ultimately yields novel algorithms.
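The generate-evaluate-iterate cycle above can be sketched as a toy evolutionary loop. This is an illustrative simplification, not AlphaEvolve's actual implementation: in AlphaEvolve the candidates are programs proposed by Gemini, while here they are plain numbers so the loop is runnable end to end.

```python
import random

def evolve(seeds, mutate, score, generations=30, keep=4):
    """Toy evolutionary loop mirroring the cycle described above.
    `mutate` stands in for the LLM proposing variants; `score` is the
    automated evaluator (higher is better)."""
    pool = [(score(s), s) for s in seeds]
    for _ in range(generations):
        pool.sort(key=lambda p: p[0], reverse=True)
        pool = pool[:keep]                        # survival of the fittest
        children = [mutate(c) for _, c in pool]   # propose new variants
        pool += [(score(c), c) for c in children]
    return max(pool, key=lambda p: p[0])[1]

random.seed(0)
# The evaluator rewards candidates close to an optimum (42) that the
# loop itself never sees directly.
best = evolve(seeds=[0.0, 100.0],
              mutate=lambda x: x + random.gauss(0, 5),
              score=lambda x: -abs(x - 42))
print(best)  # converges near 42
```

The same skeleton scales up once candidates are programs and the evaluator runs benchmarks instead of computing a distance.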

AIbase believes that this combination of large language models and evolutionary algorithms not only mitigates the hallucination problems that plague LLM code generation but also gives AlphaEvolve strong adaptability on complex problems, making it a "super brain" for algorithm discovery.

Data Center Optimization: An Astonishing 0.7% Power Recovery

AlphaEvolve's application to Google's cluster scheduling system Borg is a highlight. It proposed an efficient scheduling heuristic that recovers, on average, 0.7% of Google's worldwide compute resources, equivalent to the capacity of tens of thousands of machines. The optimization has been in production for over a year, saving Google millions of dollars in operational costs while reducing energy consumption. AIbase points out that this result shows AlphaEvolve's great potential for large-scale system optimization.
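To give a feel for what a scheduling heuristic is, here is a minimal best-fit placement rule. This is a textbook illustration, not the heuristic AlphaEvolve actually discovered for Borg; the machine/task representation is invented for the example.

```python
def best_fit(task, machines):
    """Place a task on the machine that leaves the least stranded
    capacity across CPU and memory, reducing fragmented resources.
    (Illustrative only; not Google's actual Borg rule.)"""
    cpu, mem = task
    best, best_waste = None, float("inf")
    for m in machines:
        if m["cpu"] >= cpu and m["mem"] >= mem:
            waste = (m["cpu"] - cpu) + (m["mem"] - mem)  # leftover after placement
            if waste < best_waste:
                best, best_waste = m, waste
    if best is not None:
        best["cpu"] -= cpu
        best["mem"] -= mem
    return best

machines = [{"id": 0, "cpu": 8, "mem": 16}, {"id": 1, "cpu": 4, "mem": 8}]
placed = best_fit((4, 8), machines)
print(placed["id"])  # → 1 (a tighter fit than machine 0)
```

Heuristics of this shape are exactly the kind of small, measurable function an evolutionary search can refine, since every candidate can be scored against simulated workloads.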

Chip Design Innovation: TPU Efficiency Upgraded Again

In the hardware domain, AlphaEvolve proposed a Verilog rewrite for a critical arithmetic circuit in Google's next-generation Tensor Processing Unit (TPU), removing unnecessary bits to reduce chip area and improve energy efficiency. The change passed rigorous verification to confirm functional correctness. AIbase notes that this result not only shortens the TPU design cycle but also offers a new approach to developing AI-specific chips.

AI Training Acceleration: Helping Gemini Optimize Itself

AlphaEvolve's performance in AI training optimization is particularly noteworthy. It sped up a core matrix-multiplication kernel in Gemini's training stack by 23%, cutting overall training time by 1%. More strikingly, by tuning low-level GPU instructions, AlphaEvolve improved the runtime of the FlashAttention kernel by up to 32.5%. AIbase believes this "self-optimization" capability marks a new stage of recursive acceleration in AI research and development: with AlphaEvolve, Gemini becomes faster and stronger.
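The matrix-multiplication gains come from tuning how the computation is tiled for the hardware. As a rough intuition (the real work happens at a far lower level than Python), here is a cache-blocked matrix multiply; the block size is exactly the kind of knob such optimization searches over.

```python
import numpy as np

def blocked_matmul(A, B, block=32):
    """Tiled matrix multiply: compute C one block at a time so each
    tile of A and B stays hot in cache. Illustrative sketch only."""
    n, k = A.shape
    k2, m = B.shape
    assert k == k2, "inner dimensions must match"
    C = np.zeros((n, m))
    for i in range(0, n, block):
        for j in range(0, m, block):
            for p in range(0, k, block):
                # accumulate one tile of C from tiles of A and B
                C[i:i+block, j:j+block] += (
                    A[i:i+block, p:p+block] @ B[p:p+block, j:j+block]
                )
    return C

rng = np.random.default_rng(0)
A, B = rng.standard_normal((64, 48)), rng.standard_normal((48, 80))
assert np.allclose(blocked_matmul(A, B), A @ B)
```

Real kernels additionally juggle register allocation, memory coalescing, and instruction scheduling, which is where AlphaEvolve's low-level GPU tuning operates.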

Mathematical Breakthroughs: Surpassing a 56-Year-Old Record and a New "Kissing Number" Solution

AlphaEvolve not only shines in engineering applications but also achieves breakthroughs in theoretical mathematics. It discovered an algorithm that multiplies two 4x4 complex-valued matrices using 48 scalar multiplications, surpassing a record that had stood since Strassen's algorithm of 1969. Across the more than 50 open mathematical problems it was tested on, AlphaEvolve rediscovered the best known solutions in about 75% of cases and improved on them in about 20%. Notably, for the kissing number problem in 11 dimensions, it found a configuration of 593 spheres, breaking the previous record of 592. AIbase assesses that these results highlight AlphaEvolve's great potential in basic scientific research.
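The record being chased here is a counting game: how few scalar multiplications can a matrix product use? Strassen's classic 1969 scheme multiplies two 2x2 matrices with 7 multiplications instead of the naive 8, and it is small enough to verify directly. AlphaEvolve plays the same game at 4x4 over the complex numbers, reaching 48 multiplications.

```python
def strassen_2x2(A, B):
    """Strassen's 1969 construction: the product of two 2x2 matrices
    from only 7 scalar multiplications (m1..m7) plus additions."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]

print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
# → [[19, 22], [43, 50]]
```

Because each saved multiplication compounds when the scheme is applied recursively to large matrices, even a reduction from 49 to 48 at the 4x4 level is a meaningful asymptotic and practical gain.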

Future Outlook: From Materials Science to Drug Discovery

Google DeepMind stated that AlphaEvolve's generality makes it applicable to any problem with clear evaluation metrics, and it is expected to play a role in materials science, drug discovery, and sustainable development in the future. Currently, Google is developing user interfaces and plans to launch an early access program for academic researchers to further expand its influence. AIbase predicts that as AlphaEvolve becomes open source or more widely used, it may become a key engine driving global scientific innovation.

AI Self-Evolution Opens a New Era

As a professional AI media outlet, AIbase believes that AlphaEvolve's release is not only another major achievement from Google DeepMind but also an important sign that AI is entering a self-evolution era. Its breakthroughs across data centers, chip design, AI training, and mathematical research demonstrate AI's transition from auxiliary tool to core innovation engine. However, AIbase also notes a limitation: AlphaEvolve can currently handle only problems with machine-gradable evaluation metrics, and broadening its range of applicability remains work for the future.