
DeepMind's AlphaEvolve Is Now Optimising Google Itself — And Recovering 0.7% of Global Compute

AlphaEvolve is running in production at Google, recovering 0.7% of the company’s global compute and speeding up a key Gemini training kernel by 23%. The AI-optimising-AI feedback loop is live.

DeepMind · AlphaEvolve · AI optimisation · compute efficiency · Gemini

DeepMind’s AlphaEvolve — the Gemini-powered evolutionary coding agent first announced in May 2025 — has graduated from research project to production system. It’s now running continuously inside Google, optimising the very infrastructure that trains the models powering it.

The numbers are striking: AlphaEvolve is recovering 0.7% of Google’s global compute, achieving a 23% speedup on a key Gemini training kernel, and improving DeepConsensus for genomics analysis. That 0.7% figure sounds small until you remember Google runs some of the largest computing infrastructure on Earth. We’re talking about hundreds of thousands of GPUs’ worth of reclaimed capacity.

How it works

AlphaEvolve uses Gemini models to propose code changes, then evolves them through a process inspired by biological natural selection. It generates candidate solutions, tests them against real workloads, and keeps the improvements that survive. The key insight: it doesn’t just optimise one thing — it finds improvements across multiple domains simultaneously.

What is AlphaEvolve? AlphaEvolve is DeepMind’s Gemini-powered evolutionary coding agent that proposes, tests, and iterates on code optimisations using principles from natural selection. It works by generating many candidate code changes, evaluating them against real production systems, and keeping the best mutations.
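The propose–evaluate–select cycle described above can be sketched in a few lines of Python. This is a toy illustration, not DeepMind’s implementation: the candidate "programs" are just numbers, `propose_mutation` stands in for Gemini generating a code change, and `fitness` stands in for benchmarking a candidate against a real production workload.

```python
import random

def fitness(candidate):
    # Stand-in for "run the candidate against a real workload".
    # Higher is better; the optimum here is candidate = 3.0.
    return -(candidate - 3.0) ** 2

def propose_mutation(parent):
    # Stand-in for an LLM proposing a code change: perturb the parent.
    return parent + random.gauss(0, 0.5)

def evolve(generations=200, population_size=20):
    # Start from a random population of candidates.
    population = [random.uniform(-10, 10) for _ in range(population_size)]
    for _ in range(generations):
        # Propose new candidates by mutating randomly chosen survivors.
        children = [propose_mutation(random.choice(population))
                    for _ in range(population_size)]
        # Selection: keep only the fittest candidates (elitism).
        population = sorted(population + children,
                            key=fitness, reverse=True)[:population_size]
    return population[0]

best = evolve()
print(f"best candidate: {best:.3f}")  # converges toward 3.0
```

In the real system, evaluation against production benchmarks is what keeps the loop honest — a mutation only survives if it measurably improves a live metric. Everything above is a placeholder for that machinery.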

The production deployment means AlphaEvolve is no longer a research demo running on benchmarks. It’s actively improving Google’s systems — and those improvements compound. A 23% speedup on a Gemini training kernel doesn’t just save time; it means more training iterations in the same budget, which means better models, which means better optimisations.

The feedback loop that matters

This is the story that should concentrate minds. AI optimising AI infrastructure is a genuine positive feedback loop:

  1. AlphaEvolve improves Gemini training efficiency
  2. Better Gemini models power AlphaEvolve
  3. AlphaEvolve finds more optimisations
  4. Repeat
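The reason the cycle above compounds rather than merely adds can be shown with a toy calculation. All of these numbers are illustrative assumptions, not Google’s figures: suppose each cycle improves training efficiency by 5%, and the resulting better models make the optimiser slightly better at finding the next improvement.

```python
def compounding_gain(per_cycle_gain=0.05, amplification=1.1, cycles=10):
    """Cumulative speedup when each cycle's gain is slightly amplified
    by the previous cycle's better models. Illustrative numbers only."""
    speedup = 1.0
    gain = per_cycle_gain
    for _ in range(cycles):
        speedup *= (1 + gain)   # this cycle's efficiency improvement
        gain *= amplification   # better models -> a better optimiser
    return speedup

print(f"after 10 cycles: {compounding_gain():.2f}x")  # ~2.15x
print(f"without amplification: {compounding_gain(amplification=1.0):.2f}x")
```

Without the amplification term the same ten cycles yield only about 1.63x (plain compounding of 5% gains); with it, the gains themselves grow each cycle. That difference is the structural point of the feedback loop.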

Each cycle makes the next one more effective. This isn’t speculative — it’s happening now, in production, at scale.

The genomics angle is worth noting too. AlphaEvolve improved DeepConsensus, Google’s tool for accurate DNA sequencing. AI optimising AI is one thing; AI optimising tools that improve human health is another. The cross-domain value suggests evolutionary coding isn’t limited to the domain it was trained on.

Why it matters

The efficiency story is the story of 2026. Between Baidu’s ERNIE 5.1 achieving frontier performance at 6% of training cost and AlphaEvolve squeezing 0.7% more compute out of existing infrastructure, the narrative is shifting. Raw compute moats are eroding. The question isn’t “who has the most GPUs?” — it’s “who gets the most out of what they have?”

For NZ and smaller economies, this is genuinely encouraging. If frontier-quality AI can be built and run more efficiently, the barrier to entry drops. The compute advantage that keeps AI concentrated in a handful of US tech giants narrows — not to zero, but meaningfully.

There’s also an uncomfortable flip side. If AI can continuously optimise the systems that train AI, we’ve built an accelerating engine. The better it gets, the faster it gets better. That’s the definition of a feedback loop, and we’ve just turned it on in production.

🔍 THE BOTTOM LINE

AlphaEvolve running in production at Google is the clearest signal yet that the AI efficiency revolution isn’t coming — it’s here. When AI starts optimising the infrastructure that trains AI, the feedback loops kick in. Smaller economies like NZ should pay attention: efficiency gains make frontier capabilities more accessible, but accelerating loops make the pace of change harder to track.


❓ Frequently Asked Questions

Q: What does this mean for NZ? If AI efficiency keeps improving, NZ businesses and researchers can access frontier-quality AI without frontier budgets. The 6% training cost figure from Baidu and 0.7% compute recovery from AlphaEvolve suggest the economics of AI are shifting toward accessibility — but only for organisations paying attention.

Q: Is AlphaEvolve dangerous? Not in its current form — it optimises code, not goals. The concern is structural: production AI systems optimising other AI systems creates compounding improvements. The question isn’t whether this particular system is safe, but what happens when this architecture becomes widespread.

Q: What’s the difference between AlphaEvolve and other AI coding tools? AlphaEvolve doesn’t just suggest code — it evolves it through iterative testing against real production systems. It’s closer to natural selection than to Copilot. And unlike developer tools, it runs autonomously, continuously, inside Google’s infrastructure.


Sources

DeepMind Blog, Google