AI & Singularity
LeCun Proposes “SAI” to Replace AGI as AI’s North Star
Meta Chief AI Scientist Yann LeCun and his research team have published a new paper arguing that “Artificial General Intelligence” has become an overloaded term that lacks scientific utility. The March 7 paper proposes “Superhuman Adaptable Intelligence” (SAI) as a better framework for measuring AI progress.

LeCun argues the AI industry has been racing toward a goal that may not be clearly defined.
Why It Matters
LeCun’s team argues that human intelligence isn’t truly “general” at all: humans excel at tasks relevant to our survival but struggle outside that range. Instead of asking whether AI can do everything humans can, the paper suggests focusing on adaptation speed: how quickly a system can learn new skills.
The Framework
The paper critiques current benchmark-focused approaches, noting that evaluating intelligence as a “static inventory of competencies” misses what actually matters. Key insights include:
- Human intelligence is specialized, not general: optimized for perception, motor control, planning, and social reasoning that mattered for survival
- SAI would measure the ability both to adapt beyond human performance on human tasks and to learn useful tasks outside the human domain
- Self-supervised learning and world models (like JEPA and Dreamer 4) offer promising paths forward
- The paper warns against “architectural monoculture,” noting that autoregressive LLMs dominate not because they’re optimal but because shared tooling creates momentum
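To make the adaptation-speed idea concrete, here is a toy metric (entirely hypothetical, not from the paper): rank systems by how many training samples they need before performance on a new task first reaches a target threshold, rather than by their final score.

```python
def samples_to_threshold(learning_curve, threshold):
    """Return the number of training samples seen before performance
    first reaches `threshold`, or None if it never does.

    `learning_curve` is a list of (samples_seen, score) pairs,
    ordered by samples_seen. This is an illustrative metric only,
    not one defined in the paper.
    """
    for samples_seen, score in learning_curve:
        if score >= threshold:
            return samples_seen
    return None

# Two hypothetical systems learning the same unfamiliar task:
fast_adapter = [(10, 0.4), (50, 0.8), (100, 0.95)]
slow_adapter = [(10, 0.2), (50, 0.5), (100, 0.8), (500, 0.95)]

print(samples_to_threshold(fast_adapter, 0.9))  # 100
print(samples_to_threshold(slow_adapter, 0.9))  # 500
```

Under a benchmark-style evaluation both systems look identical (final score 0.95); an adaptation-centric metric like this one separates them.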
The Argument
The core argument: future AI systems will likely need internal specialization and diversity across models rather than a single monolithic architecture. Current “AGI” benchmarks may be measuring the wrong thing entirely.
Source: Meta AI Research, March 7, 2026