
China Draws First AI Agent Lines: Humans Must Keep the Keys While Agents Do the Work

China just became the first country to publish a dedicated agentic AI regulation framework — humans stay in charge, agents need permission, and Beijing wants to build the infrastructure for an 'Agent Internet.' The EU is retreating; the US is absent.

Tags: agentic AI, China AI regulation, AI agents, AI governance, CAC

China just became the first major power to publish a regulatory framework specifically for agentic AI — and the core message is unambiguous: humans keep the keys. AI agents can act, but only with permission, only within boundaries, and only while you’re watching.

On May 8, 2026, China’s Cyberspace Administration (CAC), National Development and Reform Commission, and Ministry of Industry and Information Technology jointly released the “Implementation Opinions on the Standardized Application and Innovative Development of Intelligent Agents.” It’s the first time any government has treated agentic AI as a distinct governance object rather than just another chatbot variant.

Here’s why that matters — and why the contrast with the EU and US couldn’t be starker.

🔍 THE BOTTOM LINE

China just wrote the first rulebook for AI agents: humans decide, agents execute. The EU is rolling back its AI Act. The US hasn’t started. The divergence between the three biggest AI players will shape agentic AI globally for a decade.


What China’s Framework Actually Says

The document is surprisingly specific for a first attempt. Here’s what it covers:

Decision authority is tiered. The framework distinguishes between three types of decisions: those that must always stay with humans, those that can be delegated to AI through explicit user authorization, and those agents can make autonomously. The key qualifier: users must always retain “the right to know and the final decision-making power” over autonomous agent decisions.
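As a rough illustration only (the document describes the tiers in prose; the action names and defaults below are hypothetical, not from the framework), the three-tier model maps naturally onto a permission check an agent would run before acting:

```python
from enum import Enum

class Tier(Enum):
    HUMAN_ONLY = "human_only"    # decisions that must always stay with a human
    DELEGABLE = "delegable"      # delegable only via explicit user authorization
    AUTONOMOUS = "autonomous"    # agent may act alone; user keeps right to know and override

# Hypothetical mapping of action types to tiers
ACTION_TIERS = {
    "approve_loan": Tier.HUMAN_ONLY,
    "place_order": Tier.DELEGABLE,
    "summarize_inbox": Tier.AUTONOMOUS,
}

def may_execute(action: str, user_authorized: bool) -> bool:
    """Return True if the agent may carry out the action itself."""
    # Unknown actions default to the most restrictive tier
    tier = ACTION_TIERS.get(action, Tier.HUMAN_ONLY)
    if tier is Tier.HUMAN_ONLY:
        return False
    if tier is Tier.DELEGABLE:
        return user_authorized
    return True  # AUTONOMOUS: allowed, but should still be logged and overridable

print(may_execute("approve_loan", True))      # False
print(may_execute("place_order", False))      # False
print(may_execute("summarize_inbox", False))  # True
```

The interesting design choice is the default: anything not explicitly classified falls back to human-only, which is the conservative reading of "final decision-making power."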

Mandatory standards for high-stakes domains. Healthcare, transportation, media, and public safety all get sector-specific standards. No wild-west agent deployments in hospitals or on roads.

Anti-manipulation rules. The framework explicitly warns against agents using “human-like interaction techniques to create addiction, emotional attachment, or manipulative consumer behavior” — particularly targeting vulnerable groups like minors and elderly users. This is China acknowledging that AI agents aren’t just tools; they’re social technologies that reshape relationships and behaviour.

The “Agent Internet.” Buried in the document is something genuinely forward-looking: research into intelligent internet architecture, agent registration platforms, digital identities for agents, capability declarations, and multi-agent interoperability protocols. China isn’t just regulating agents — it’s planning infrastructure for a future where agents talk to each other, authenticate identity, exchange permissions, and assign responsibility. This is the seed of an “Agent Internet.”
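The document doesn't specify how registration or capability declarations would work technically, but the shape is familiar from other identity infrastructure. A minimal sketch, with all names and fields invented for illustration:

```python
import uuid
from dataclasses import dataclass, field

@dataclass
class AgentRecord:
    """A registered agent: a digital identity plus declared capabilities."""
    name: str
    operator: str  # the accountable party behind the agent
    capabilities: set[str] = field(default_factory=set)
    agent_id: str = field(default_factory=lambda: str(uuid.uuid4()))

class AgentRegistry:
    """Hypothetical registration platform: lookup, then capability check."""
    def __init__(self) -> None:
        self._agents: dict[str, AgentRecord] = {}

    def register(self, record: AgentRecord) -> str:
        self._agents[record.agent_id] = record
        return record.agent_id

    def can_perform(self, agent_id: str, capability: str) -> bool:
        """Is this agent registered, and did it declare this capability?"""
        record = self._agents.get(agent_id)
        return record is not None and capability in record.capabilities

registry = AgentRegistry()
aid = registry.register(AgentRecord("shop-bot", "ExampleCo", {"place_order"}))
print(registry.can_perform(aid, "place_order"))   # True
print(registry.can_perform(aid, "delete_files"))  # False
```

The point of a registry like this is that a counterparty (another agent, a platform, a regulator) can verify identity and declared capability before granting access — which is what makes responsibility assignable after the fact.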

Indigenous controllability. Open-source frameworks, domestic operating systems, local chip compatibility. China doesn’t want its agent infrastructure dependent on foreign technology. Shocker.

International standards ambition. Despite the sovereignty push, the document explicitly states China intends to “actively participate in international standards-setting” for intelligent agents. Translation: China doesn’t want to just follow rules written in Brussels or Washington — it wants to write them.

Why This Is Different from Everything Else

Every previous AI regulation — the EU AI Act, the US executive orders, China’s own earlier generative AI rules — treated AI as a content-generation problem. Can it produce harmful text? Does it leak personal data? Is the output truthful?

Agentic AI breaks that frame entirely. Agents don’t just answer questions — they do things. They place orders, make payments, delete files, send emails, submit forms, approve loans, control equipment. The question isn’t “what did the AI say?” It’s “what did the AI do — and who authorized it?”

China’s regulators have clearly grasped this distinction. The Geopolitechs analysis of the document notes that Chinese policymakers now recognise agents possess “autonomous perception, long-term memory, tool use, cross-platform task execution, and even multi-agent coordination capabilities.” This isn’t chatbot regulation. This is governance for semi-autonomous digital actors.

The Three-Way Divergence

This is where it gets interesting. Look at how the three biggest AI players are approaching agentic AI:

| Approach | China | EU | US |
| --- | --- | --- | --- |
| Philosophy | Pro-innovation with guardrails | Regulate first, simplify later | Deregulate and hope |
| Agent-specific rules | First dedicated framework | None — AI Act covers generative AI | None at all |
| Human oversight | Mandatory, tiered by risk | Implied via high-risk classification | Voluntary commitments |
| Industrial strategy | Explicit — agents as infrastructure | Compliance burden on industry | Market-led |
| Direction of travel | Writing new rules | Rolling old ones back | Not showing up |

China is writing the rulebook. The EU just pushed parts of its AI Act back by 16 months and exempted industrial AI from its high-risk rules. The US has no federal AI legislation at all — just voluntary commitments from companies, which can be withdrawn at any time.

This is not a story about who’s “right.” It’s a story about who’s first. And being first to regulate a new technology has a habit of meaning you get to set the standards everyone else follows — ask anyone who’s ever had to comply with GDPR.

The “Meaningful Human Control” Convergence

Here’s the odd part: China’s framework shares DNA with Europe’s concept of “meaningful human control.” Both say humans should have final authority over consequential AI decisions. Both recognise that autonomous execution without oversight is the real risk.

The difference is execution. China is building the infrastructure — agent registration, digital identities, interoperability protocols — to make human oversight technically possible at scale. Europe wrote a risk classification system and then delayed it. The US wrote nothing.

If you’re a company building AI agents, which framework do you design for? The one that exists, or the ones that don’t?

What This Means for New Zealand

NZ’s AI Blueprint, refreshed out to 2030, identified the country as “high-use, low-trust” on AI — lots of adoption, not much confidence in governance. China’s framework highlights a problem NZ hasn’t started solving:

  • If NZ companies use AI agents (and they will — they already do), there are no domestic rules governing agent authority, liability, or oversight.
  • If NZ exports AI agent products, China now has a compliance framework. So does the EU (eventually). The US doesn’t. Pick your market, pick your compliance burden.
  • If NZ wants a seat at the standards table, China’s framework shows the kind of technical infrastructure — agent registries, identity protocols, interoperability standards — that will define how agents work globally. Getting involved early matters.

The Blueprint’s five strategic pillars include “Trust and Transparency” and “Governance.” Neither currently addresses agentic AI specifically. Given that South Korea’s AI Basic Act already regulates AI in student evaluation and China now has an agent framework, NZ is falling behind its Asia-Pacific peers on AI governance that matches the technology’s actual capability.

The Bigger Picture

China’s framework isn’t perfect. It’s a draft. The anti-manipulation rules are vague. The “indigenous controllability” requirement is protectionism dressed as sovereignty. The document’s enthusiasm for agents in “public opinion guidance” and “emotional intervention systems” should make anyone who values civil liberties deeply uncomfortable.

But it exists. It treats agents as a distinct category. It builds infrastructure for governance rather than just writing rules and hoping for compliance. It plans for multi-agent coordination, which is where the technology is actually going.

Meanwhile, the EU is simplifying and delaying, and the US is absent from the conversation entirely. Three superpowers, three regulatory philosophies, and the one that writes the rules first usually shapes the game.

For AI agents, that’s China. At least for now.

❓ Frequently Asked Questions

Q: Does China’s framework ban autonomous AI agents? No — it explicitly allows agents to make some decisions autonomously, provided users retain the right to know about those decisions and the final right to override them. It’s “controlled autonomy,” not prohibition.

Q: How does this compare to the EU AI Act? The EU AI Act was written for generative AI and classifies systems by risk level. China’s framework was written specifically for agents and focuses on decision authority boundaries. They’re solving different problems — but China’s framework is more technically specific about what agents can and can’t do.

Q: Should NZ companies care? Yes. If you sell AI products into China, you’ll need to comply with this framework. If you sell into the EU, you’ll eventually need to comply with the AI Act. If you operate only in NZ, there are currently no agent-specific rules at all — which means no legal protection when something goes wrong.

Q: What’s the “Agent Internet”? China’s framework discusses infrastructure for agents to authenticate, communicate, delegate, and coordinate — agent registration platforms, digital identities, capability declarations, and interoperability protocols. It’s planning for a future where agents interact with each other as routinely as websites do today.


🔍 THE BOTTOM LINE

China just wrote the world’s first rulebook for AI agents while the EU retreated and the US stayed home. The framework mandates human oversight, builds governance infrastructure, and positions China to set global standards. Love or hate the approach, being first matters — and right now, China is the only one at the table.

Sources

The Register, Geopolitechs, China Daily, CAC, Reed Smith