NZ Organisations Are Running AI on Trust and Hope
Over 30% of New Zealand organisations are now trialling or deploying agentic AI across IT, cybersecurity, and business operations. Nearly all expect AI investment to increase this year. That’s the headline from Commvault’s latest State of Data Resilience – Australia & New Zealand report.
The fine print is where it gets uncomfortable: only 28% of NZ organisations conducted a thorough audit of AI security and governance implications before deployment. Just 37% are “very confident” they can identify when AI systems breach governance or compliance requirements. And only 39% feel highly confident they can spot compromised AI data access guardrails.
Translation: we’re plugging AI into critical systems faster than we can verify it won’t set fire to anything.
The Numbers That Should Worry You
- 30% of NZ organisations are trialling or deploying agentic AI
- 28% audited security and governance before deploying AI
- 37% very confident they can detect AI governance breaches
- 36% have extended identity management to non-human AI agents (vs 66% coverage for human identities)
- 30% year-on-year data growth driven by AI-generated content
- 47% now in multi-cloud environments (up from 39%)
That gap between the 66% who manage human identities and the 36% who manage AI agent identities is the one that keeps security people awake at night. AI agents are increasingly interacting autonomously across systems, applications, and datasets — and most organisations are treating them like any other software tool. They’re not. They make decisions. They access data. They learn. And most NZ organisations can’t reliably tell you what they’re doing.
The “Move Fast” Phase, Without the “Fix Things” Part
The report’s underlying finding is blunt: many businesses are prioritising AI deployment speed over operational readiness. This is the “move fast and break things” playbook, except the things being broken are governance frameworks, data integrity, and accountability structures.
What is agentic AI? Agentic AI refers to AI systems that can autonomously plan, execute, and adapt multi-step tasks without continuous human oversight. Unlike traditional automation, agents can make context-dependent decisions, interact with external systems, and pursue goals independently. For example, an AI agent might autonomously triage cybersecurity alerts, escalate genuine threats, and patch vulnerabilities — all without a human in the loop.
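The alert-triage example above can be sketched as a simple decide-and-act loop. This is a minimal illustration of the pattern, not any vendor's implementation; the severity thresholds, field names, and triage policy are all invented for the example.

```python
# Sketch of an agentic loop: the system decides and acts on each alert
# without a human approving each step. Policy and names are hypothetical.

def triage_alert(alert: dict) -> str:
    """Decide what to do with one alert (hypothetical policy)."""
    if alert["severity"] >= 8:
        return "escalate"          # likely genuine threat
    if alert.get("known_benign"):
        return "dismiss"           # matches a known-benign pattern
    return "investigate"           # ambiguous: dig deeper autonomously

def run_agent(alerts: list[dict]) -> dict[str, list[str]]:
    """Process a queue of alerts end to end, no human in the loop."""
    outcome: dict[str, list[str]] = {"escalate": [], "dismiss": [], "investigate": []}
    for alert in alerts:
        action = triage_alert(alert)         # the agent decides...
        outcome[action].append(alert["id"])  # ...and acts on its decision
    return outcome

alerts = [
    {"id": "A-1", "severity": 9},
    {"id": "A-2", "severity": 2, "known_benign": True},
    {"id": "A-3", "severity": 5},
]
print(run_agent(alerts))
```

The governance problem is visible even in this toy: every branch executes without review, so the quality of the policy, and the audit trail around it, is the whole ballgame.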
When an AI agent goes rogue in a system you haven’t audited, with guardrails you can’t monitor, and identity controls you haven’t extended to non-human actors — that’s not a hypothetical risk. That’s Tuesday waiting to happen.
Why NZ’s Gap Is Wider Than Most
New Zealand’s regulatory environment for AI remains non-binding. Our AI Blueprint for Aotearoa sets voluntary guidelines, but there’s no equivalent of the EU AI Act’s enforcement mechanisms or Australia’s incoming automated decision-making disclosure requirements. The Commvault data confirms what’s been obvious on the ground: voluntary frameworks aren’t keeping up with deployment speed.
As Commvault’s APAC VP Martin Creighan put it: “AI is now central to how organisations operate, but its value depends on the integrity of data behind it. That data must be understood, validated, and free of sensitive information.”
When only 28% of organisations have audited AI before deployment, that integrity is an article of faith, not a verified fact.
The Explainability Demand
One bright spot: the report finds explainability and transparency of AI systems are now among the highest priorities for NZ organisations evaluating AI-powered solutions. Compliance with regulatory and reporting requirements ranks alongside them.
That’s the right instinct. But prioritising explainability after deployment is like installing a smoke alarm while the house is on fire. The governance audit needs to come first — and for 72% of NZ organisations, it hasn’t.
🔍 THE BOTTOM LINE
NZ organisations are deploying AI faster than they can govern it. The 28% audit rate isn’t a statistic — it’s a warning. When (not if) an agentic AI system in NZ causes a data breach, compliance failure, or customer harm, the question won’t be “how did this happen?” It’ll be “why didn’t you check first?”
❓ Frequently Asked Questions
Q: What does this mean for NZ businesses? If your organisation uses AI in any decision-making capacity — credit checks, fraud detection, customer triage, HR screening — and you haven’t audited the governance implications, you’re in the 72%. The time to audit is before the incident, not after.
Q: What’s different about agentic AI vs regular AI tools? Agentic AI operates autonomously — it makes decisions, accesses data, and takes actions without a human approving each step. Traditional AI tools process data and present results for human action. The governance requirements for autonomous agents are fundamentally different because you can’t just “turn them off” without understanding what they’re connected to and what they’re authorised to access.
Q: What should NZ organisations do right now? Three things immediately: audit every AI system currently deployed for security and governance gaps; extend identity and access management to all non-human AI agents; and establish clear escalation paths for when AI behaviour deviates from expected parameters. If you can’t do all three, at least do the audit.
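The second recommendation, treating AI agents as first-class identities, can be sketched as a registry of agents with explicit data-access scopes, where any out-of-scope attempt is denied and escalated rather than silently allowed. Everything here (agent names, scopes, the registry shape) is invented for illustration; real deployments would hang this off their existing IAM system.

```python
# Hypothetical sketch: identity and access management extended to
# non-human AI agents. Each agent is registered with explicit scopes;
# out-of-scope access is refused and recorded for escalation.

AGENT_REGISTRY = {
    "billing-agent": {"scopes": {"invoices", "payments"}},
    "triage-agent": {"scopes": {"alerts"}},
}

def authorise(agent_id: str, dataset: str, escalations: list[str]) -> bool:
    """Allow access only if the agent is registered and the dataset is in scope."""
    record = AGENT_REGISTRY.get(agent_id)
    if record is None or dataset not in record["scopes"]:
        # Deviation from expected parameters: deny and escalate, don't ignore.
        escalations.append(f"{agent_id} denied access to {dataset}")
        return False
    return True

escalations: list[str] = []
print(authorise("billing-agent", "invoices", escalations))  # in scope: allowed
print(authorise("triage-agent", "payments", escalations))   # out of scope: denied
print(escalations)
```

The point of the pattern is the audit trail: an unregistered or over-reaching agent produces an escalation record instead of a quiet success, which is exactly the visibility the 36%/66% identity gap says most organisations lack.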
SOURCES
- Commvault State of Data Resilience – Australia & New Zealand report (May 2026)
- Scoop NZ: NZ organisations accelerate AI adoption amid governance gaps