Stanford’s 2026 AI Index Report landed with a finding that should worry anyone trying to build AI policy on consensus: the people who understand AI best and the people affected by it most are living in completely different realities.
The numbers are stark. 73% of AI experts expect AI to have a positive impact on how people do their jobs. Among the general public? Just 23%. That is a 50-point chasm — not a disagreement about details, but a fundamental mismatch in how two groups perceive the same technology.
The Gap Goes Beyond Jobs
The perception divide doesn’t stop at employment. Similar gaps appear across multiple domains:
- Economy: Experts overwhelmingly expect AI to boost economic outcomes. The public is far more sceptical, with many seeing AI as a threat to economic stability rather than a catalyst.
- Healthcare: Experts point to AI’s diagnostic capabilities, drug discovery breakthroughs, and potential to expand access to care. The public worries about automated decisions in life-or-death situations.
- Daily life: While 59% of people surveyed globally see net benefits from AI overall, 52% feel nervous about AI products and services. You can think something is beneficial and still find it unsettling.
This isn’t ignorance on the public’s part. It’s a rational response to lived experience. When your industry starts laying people off and attributes half the cuts to AI, optimism about “positive impact” sounds like someone describing the view from a lifeboat.
Trust in Government: 31% and Falling
The report also reveals a trust vacuum in AI governance. Only 31% of Americans trust their own government to regulate AI effectively — the lowest figure among all countries surveyed. Globally, the EU is trusted more than either the United States or China to regulate AI responsibly.
This matters because the expert-public gap can only be bridged by institutions people trust. If the public doesn’t believe regulators will protect them, and the experts are telling them everything is fine, the result isn’t reassurance — it’s alienation.
The low U.S. trust figure also creates a perverse incentive structure. Companies that want to move fast see public scepticism as a friction to overcome, not a signal to heed. Regulators who are already distrusted may avoid acting boldly for fear of getting it wrong. The people with the most to lose end up with the least representation.
Meanwhile, Capabilities Keep Accelerating
The perception gap is widening at exactly the wrong time, because AI capability is not plateauing — it is accelerating. Several key findings from the report:
- Coding benchmarks: Performance on SWE-bench Verified rose from 60% to near 100% in a single year. AI that can resolve nearly all professional software engineering tasks is not a future scenario. It is a 2026 measurement.
- Mathematics: AI models can now win gold medals at the International Mathematical Olympiad. Yet the same models cannot reliably tell time on an analogue clock — what researchers call the “jagged frontier” of AI capability.
- Adoption velocity: Generative AI reached 53% population adoption within three years, faster than either the PC or the internet. But the pace varies enormously by country and correlates strongly with GDP per capita.
- Agent performance: AI agents jumped from 12% to roughly 66% task success on OSWorld, which tests real computer tasks across operating systems. That still means failing about 1 attempt in 3 (see the sketch after this list), but the trajectory is steep.
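To make those percentages concrete, here is a quick back-of-the-envelope conversion of the headline numbers above into failure rates. Everything comes from the bullets in this list; the one assumption is reading "near 100%" on SWE-bench Verified as roughly 99%.

```python
# Convert the report's success rates into failure rates. Figures come from
# the bullets above; "near 100%" is read as ~99%, which is an assumption
# on our part, not a number from the report.
benchmarks = {
    "SWE-bench Verified (coding)":   (0.60, 0.99),
    "OSWorld (computer-use agents)": (0.12, 0.66),
}

for name, (before, after) in benchmarks.items():
    fail_before, fail_after = 1 - before, 1 - after
    print(f"{name}: failure rate {fail_before:.0%} -> {fail_after:.0%}, "
          f"about 1 failure in {1 / fail_after:.0f} attempts")
```

In failure terms the same numbers read very differently: coding went from failing two tasks in five to failing about one in a hundred, while agents still fail roughly one attempt in three.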
These are not incremental improvements. They represent AI systems becoming competent at tasks that were firmly human territory just two years ago. The public’s nervousness is not a lagging indicator — it may be a more honest reading of what these capabilities mean for employment than the expert consensus.
The U.S.-China Race and the Talent Drain
The report also tracks a geopolitical shift that compounds the trust problem. The U.S.-China AI model performance gap has effectively closed. Chinese and U.S. models have traded the lead multiple times since early 2025, and as of March 2026, the top U.S. model leads by just 2.7%.
Meanwhile, the United States is losing its ability to attract global AI talent. The number of AI researchers and developers moving to the U.S. has dropped 89% since 2017, with an 80% decline in the last year alone. The country that hosts the most AI data centres and produces the most frontier models is simultaneously becoming less attractive to the people who build them.
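Those two figures are worth unpacking, because they compose in a revealing way. A minimal sketch, assuming the declines stack multiplicatively (the report does not spell out how the two numbers relate):

```python
# Illustrative arithmetic only: how the report's two talent figures relate,
# assuming the declines compose multiplicatively (our assumption, not a
# statement from the report). Index 2017 inflow to 1.0.
inflow_2017 = 1.0
inflow_now = inflow_2017 * (1 - 0.89)        # 89% below 2017 -> 0.11
inflow_prior_year = inflow_now / (1 - 0.80)  # undo the final-year 80% drop

print(f"Now:        {inflow_now:.2f} of the 2017 level")         # 0.11
print(f"A year ago: {inflow_prior_year:.2f} of the 2017 level")  # 0.55
```

On that reading, inflows had fallen about 45% between 2017 and last year, then dropped a further 80% in a single year. The decline is less a slow leak than a sudden collapse.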
This combination — closing capability gaps with China and declining talent inflows — adds urgency to the trust deficit. If the public doesn’t support AI development and the talent pool is shrinking, the structural advantages that have kept the U.S. ahead may erode faster than anyone expects.
Responsible AI: The Measurement Gap
Perhaps the finding most concerning for anyone hoping to bridge the perception gap: almost all leading frontier AI model developers report results on capability benchmarks, but reporting on responsible AI benchmarks remains spotty. The industry measures what it's good at and neglects what the public cares about most.
Documented AI incidents rose to 362 in 2025, up from 233 in 2024. And recent research found that improving one responsible AI dimension — such as safety — can degrade another, such as accuracy. There is no free lunch in AI governance.
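For scale, the incident counts translate into a steep growth rate. A one-line check using only the numbers above:

```python
# Year-over-year growth in documented AI incidents (counts from the report).
incidents = {2024: 233, 2025: 362}
growth = incidents[2025] / incidents[2024] - 1
print(f"Increase, 2024 to 2025: {growth:.0%}")  # -> 55%
```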
If experts wonder why the public doesn't trust them, the answer is partly in this measurement asymmetry. The industry publishes detailed benchmarks for how smart its models are getting and says far less about how safe, fair, or reliable they are. The public sees the headlines about capabilities. They also experience the failures. What they don't see is evidence that anyone is minding the store.
What This Means
The 50-point gap is not a communication problem. It is a structural reality that emerges when:
- Experts benefit professionally from AI advancement — their careers, funding, and status are tied to the technology succeeding
- The public bears the downside — job displacement, surveillance, automated decisions they can’t contest
- Institutions lack credibility — 31% trust in government regulation means there’s no trusted mediator
- Measurement is asymmetric — capability is tracked rigorously, safety and fairness are not
Bridging this gap will take more than better messaging. It will require AI governance that earns trust rather than demanding it, safety reporting that matches the rigour of capability benchmarks, and policy that acknowledges the real costs of displacement instead of hand-waving them away.
The Stanford AI Index has done its job: it has measured the gap. What happens next depends on whether anyone in power is willing to act on what the numbers are saying.
SOURCES
- Stanford HAI — 2026 AI Index Report
- Stanford HAI — AI Index Public Data