NZ’s AI Framework Has All the Right Words — And None of the Teeth
New Zealand’s Public Service AI Framework names transparency, fairness, and human oversight as its guiding principles. It is also explicitly non-binding. Researchers Deborah Te Kawa and Barbara Allen have a name for this: a “Pollyanna policy” — a governance approach built on optimism rather than accountability.
The critique, published via The Conversation and covered by RNZ on May 11, argues the framework underestimates institutional friction, offloads accountability to individual agencies with vastly different capability levels, and leaves Māori data sovereignty unguarded.
🔍 THE BOTTOM LINE
NZ has the rhetoric of responsible AI governance without the legislative muscle to enforce it — and that gap will widen as agentic AI enters the public service.
What the Framework Gets Right — And Wrong
The Public Service AI Framework isn’t wrong about principles. Transparency, fairness, and human oversight are exactly what any responsible AI framework should name. The problem is what happens next: nothing enforceable.
What is the Pollyanna principle? In psychology, it describes a general cognitive bias towards positivity and optimism about outcomes. In AI governance, it manifests as stating good intentions and issuing non-binding guidance while trusting that existing frameworks will absorb genuinely novel challenges. The result is aspirational language without accountability.
The framework assumes organisational readiness. The evidence says otherwise. The 2025 Public Service Census found that while a third of public servants had used AI for work, only 14% used it regularly. The gap between “we have a framework” and “agencies can actually implement it” is real and growing.
The Institutional Friction Problem
Governance in a typical public sector agency isn’t a clean, ordered structure. It’s an accumulation of layer upon layer — policy, operational procedure, ministerial expectation, legislative obligation, and professional conventions. New regulatory instruments rarely replace old ones; they’re added alongside them, interacting in unpredictable ways.
AI is what the researchers call a “flat” technology: it processes information as a statistical landscape, lacking the institutional memory to understand that a prompt today might quietly undermine political and constitutional compromises made over decades. In the New Zealand context the stakes are concrete: when these models hallucinate legal or historical facts, they risk overwriting Indigenous knowledge with plausible fictions.
The Robodebt Warning
Australia’s Royal Commission into the Robodebt Scheme demonstrated what happens when algorithmic systems are deployed without clarity about governance environments: catastrophic harm. NZ’s non-binding framework essentially asks agencies to self-regulate in a space where the technology itself disrupts traditional accountability chains.
The framework abdicates central responsibility: accountability falls to individual agencies whose capability and resources vary enormously, with no central mechanism to catch the ones that fall short.
Māori Data Sovereignty — A Constitutional Imperative, Not a Technical Add-On
In Aotearoa New Zealand, the governance vacuum has an added dimension. Māori data sovereignty is a constitutional imperative under the Treaty of Waitangi — not a technical consideration to be appended after deployment.
The current approach leaves the gate unguarded. Legal scholars Woodrow Hartzog and Jessica Silbey argued in their 2025 article “How AI Destroys Institutions” that AI systems are built to function in ways that degrade and destroy civic institutions — eroding expertise, short-circuiting decision-making, and isolating people from each other.
The sycophantic tendency of large language models to mirror user bias is particularly dangerous in a policy system grappling with the legacy of colonisation. The technology can simply reinforce an echo chamber.
Five Eyes Agencies Disagree With NZ’s Approach
NZ’s light-touch approach puts it at odds with its own intelligence partners. Five Eyes security agencies issued joint guidance in April 2026 calling explicitly for:
- Incremental deployment of agentic AI
- Continuous threat assessment
- Sustained human oversight
That guidance stops just short of binding language, and it is a clear signal that NZ’s closest intelligence partners consider the “trust and hope” approach insufficient for systems that can autonomously make decisions affecting citizens.
What Would Actually Fix This
The researchers aren’t calling for a ban on AI in government. They’re calling for what the framework promises but doesn’t deliver:
- Binding obligations — principles without a legislative mandate amount to aspiration without accountability
- Protected oversight roles — formal positions where officials interrogate AI output for bias and fabrication, rather than accepting speed as a proxy for quality
- Diagnostic work before deployment — agencies must understand their governance environment before AI can be usefully deployed
- Māori data sovereignty protections — constitutional imperatives under the Treaty, not post-deployment add-ons
The strategy, standards, and guidance documents point in the right direction. The question is whether NZ continues to rely on optimism, or builds the strong, ethical oversight capable of catching what the technology cannot.
❓ Frequently Asked Questions
Q: What does this mean for NZ? NZ’s public service is adopting AI faster than it’s building governance for it. The non-binding framework means each agency interprets principles differently, creating inconsistent protections across government. With Five Eyes partners calling for stronger oversight, NZ risks becoming the weak link.
Q: Is the framework completely useless? No. It names the right principles — transparency, fairness, human oversight. The problem is enforceability. Principles without a legislative mandate amount to aspiration without accountability, especially when agencies have vastly different capability levels.
Q: What should NZ do about it? The framework needs teeth: binding obligations, protected oversight roles, and Māori data sovereignty protections built in from the start, not appended after deployment. The Robodebt lesson from Australia is that algorithmic systems without clear accountability structures produce catastrophic harm.
🔍 THE BOTTOM LINE
NZ has built a governance framework that names all the right principles and enforces none of them. In a world of agentic AI, optimism is not a safeguard — it’s a liability.