Jensen Huang Says AGI Is Here. DeepMind Says Define AGI First.

Nvidia's Jensen Huang declared AGI achieved. Days earlier, DeepMind published a cognitive framework showing today's AI has a "jagged profile."

Last week, Nvidia CEO Jensen Huang told podcaster Lex Fridman that AGI—artificial general intelligence—had already been achieved. “I think it’s now,” he said. “I think we’ve achieved AGI.”

Days earlier, Google DeepMind—including cofounder Shane Legg, who first popularized the term AGI in the early 2000s—published a research paper that said something different: today’s AI has a “jagged cognitive profile,” exceeding humans in some areas while trailing dramatically in others.

The timing was coincidental. The contrast was not.

Huang’s Definition: Build a Billion-Dollar Company

Fridman had offered Huang an unusual metric for AGI: Could AI start and grow a technology business to $1 billion in value? Huang said it wouldn't take 5 to 20 years. It had already been done.

He then hedged: “You said a billion, and you didn’t say forever.”

The definition is idiosyncratic. Most researchers define AGI as AI that matches human cognitive abilities across a wide range of tasks—not specifically the ability to build a successful startup. Huang knows this. Later in the same podcast, he acknowledged that 100,000 AI agents “could never replicate Nvidia.”

But the comment generated headlines. “Nvidia CEO says AGI achieved” spread across social media, amplified by accounts eager to promote AI progress. The vagueness of “AGI” makes these claims easy to spread and hard to refute.

DeepMind’s Cognitive Framework

DeepMind’s paper, “Measuring Progress Toward AGI: A Cognitive Framework,” takes a different approach. Instead of a single threshold, it identifies 10 cognitive faculties essential for general intelligence:

  • Perception: Processing sensory information
  • Memory: Storing and retrieving information
  • Attention: Focusing on relevant stimuli
  • Learning: Acquiring new knowledge and skills
  • Reasoning: Drawing conclusions from available information
  • Planning: Setting and pursuing goals
  • Problem-solving: Overcoming obstacles to achieve objectives
  • Language: Understanding and producing communication
  • Social cognition: Understanding other agents’ mental states
  • Motor control: Interacting with the physical world

The key insight: today’s AI models have a “jagged profile.” They may exceed most humans in mathematics or factual recall while dramatically trailing even average people in learning from experience, maintaining long-term memories, or understanding social situations.

An AI model would need to match median human performance across all 10 areas to be considered AGI, the researchers suggest. Today’s models don’t come close.
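The framework's threshold can be sketched in a few lines of code. The faculty names come from the paper; the numeric scores below are invented purely for illustration, since DeepMind does not publish scores like these:

```python
# Hypothetical faculty scores (0-100, where 50 = median human).
# The values are illustrative assumptions, not DeepMind's data.
faculties = {
    "perception": 70, "memory": 40, "attention": 60, "learning": 20,
    "reasoning": 85, "planning": 55, "problem_solving": 65,
    "language": 90, "social_cognition": 25, "motor_control": 10,
}

MEDIAN_HUMAN = 50

# AGI under the framework: at or above the median human in *every* faculty.
is_agi = all(score >= MEDIAN_HUMAN for score in faculties.values())

# The "jagged profile": some faculties far above the median, others far below.
above = [name for name, score in faculties.items() if score >= MEDIAN_HUMAN]
below = [name for name, score in faculties.items() if score < MEDIAN_HUMAN]

print(f"AGI under the framework: {is_agi}")
print(f"Exceeds median human: {above}")
print(f"Trails median human: {below}")
```

The point the sketch makes: a model can post superhuman scores in reasoning and language and still fail the test, because a single faculty below the median is disqualifying.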

The Definition Problem

The DeepMind paper is only the latest attempt to put AGI measurement on scientific footing. In 2025, researchers including Dan Hendrycks and Yoshua Bengio published an AGI framework that scored OpenAI’s GPT-5 at 57%—far short of matching a well-educated adult across all cognitive dimensions.

François Chollet’s ARC-AGI benchmark tests something different: not what a system knows, but how efficiently it learns new skills. The benchmark consists of visual puzzles that require spotting patterns, understanding spatial relationships, and inferring rules from examples—abilities humans grasp in seconds but AI models still struggle with.

This month, Chollet launched ARC-AGI-3, an interactive version where AI agents must explore novel environments, acquire goals on the fly, and learn continuously over multiple steps. These abilities come naturally to humans. They remain at the frontier of AI research.

Corporate Definitions Differ

While researchers debate cognitive frameworks, corporations have introduced their own definitions—often tied to financial metrics.

OpenAI’s 2015 founding principles said AGI would “benefit all of humanity.” Its 2018 charter defined AGI as “highly autonomous systems that outperform humans at most economically valuable work.” In 2023, according to reporting by The Information, OpenAI’s contract with Microsoft defined AGI as technology that could generate at least $100 billion in profits.

OpenAI made $13 billion in revenue last year. It burned through $8 billion in cash. It does not expect to break even until 2030.

The company is far short of the financial threshold in its Microsoft contract. Yet CEO Sam Altman has said OpenAI is “now confident we know how to build AGI as we have traditionally understood it.” In the same period, he called AGI “a very sloppy term.”

Why the Definition Matters

Huang is not just Nvidia’s CEO. He founded the company 33 years ago and has run it ever since, piloting it past near-bankruptcy to a $4 trillion valuation. When he says AGI is achieved, people listen.

But DeepMind’s framework shows why the claim is premature. Perhaps AI can help build a billion-dollar company, as a human founder might. Even granting that, it is one narrow task, not the broad cognitive profile that defines general intelligence.

The 10-cognitive-faculty approach reveals the gaps. AI excels at reasoning about explicit information but struggles with social cognition. It can recall vast factual databases but cannot learn from experience the way humans do. It can plan within a defined domain but struggles with open-ended goal acquisition.

Huang’s “AGI is achieved” claim is, in this light, a marketing statement about a specific capability—entrepreneurship—rather than a scientific claim about general intelligence. DeepMind’s framework provides the measuring stick that makes the difference visible.

Sources

  • Fortune: “Nvidia’s Jensen Huang says ‘we’ve achieved AGI.’ But no one can agree on what that means” (March 30, 2026)
  • Google DeepMind: “Measuring Progress Toward AGI: A Cognitive Framework” (March 2026)
  • MarkTechPost: DeepMind cognitive framework analysis
  • ARC Prize: ARC-AGI-3 benchmark launch
  • The Information: Microsoft-OpenAI AGI contract details