Health NZ staff in Rotorua Lakes Mental Health & Addiction Services were caught using free AI tools — ChatGPT, Claude, and Gemini — to write clinical notes. Management’s response was a memo threatening formal disciplinary action. The union says staff are acting out of “enormous pressure,” not recklessness. Meanwhile, the approved AI scribe Heidi is cutting note-writing time by two-thirds in emergency departments that actually have it.
🔍 THE BOTTOM LINE
You can’t ban your way out of shadow AI. Health NZ has an AI tool that works — Heidi — but hasn’t rolled it out everywhere. Until they do, staff under pressure will keep using whatever gets the job done. The memo is the wrong response to the right problem.
What Happened
Staff at Rotorua Lakes Mental Health & Addiction Services were found using unapproved AI tools — ChatGPT, Claude, and Gemini — to write clinical notes. Health NZ issued a memo stating that even anonymised data sent to AI tools is prohibited, and warned of formal disciplinary action.
The PSA union, representing health workers, pushed back hard. Their position: staff are turning to AI because they’re drowning in workload. Threatening discipline for using tools that help them cope is the wrong approach — the right one is giving them approved tools that actually work.
Health NZ declined to confirm how many instances were detected or whether any staff have been disciplined.
Why Staff Are Using Shadow AI
Here’s the thing: clinicians aren’t being reckless. They’re being practical in a system that’s failing them.
What is shadow AI? Shadow AI is the use of AI tools that haven’t been formally approved by an organisation’s IT or governance teams. It’s the AI equivalent of shadow IT — staff using their own tools because the official ones are too slow, too limited, or don’t exist yet.
The pressure on NZ health workers is well-documented:
- Chronic understaffing across mental health services
- Mountains of documentation required per patient
- Growing waitlists and shrinking appointment windows
- Burnout rates that make the news on a regular cycle
When a clinician can write three patients’ notes in 11 minutes with AI assistance versus 45 minutes manually, of course they’re going to use it. The maths isn’t complicated.
Heidi: The Approved Alternative That Proves the Point
Meanwhile, the AI scribe Heidi is rolling out to all NZ emergency departments after a successful trial in Hawke’s Bay. The results are striking:
- Senior doctors report eased cognitive pressure — not just time savings, but mental load reduction
- One clinician wrote notes for 3 patients in 11 minutes (normally 15 minutes per patient)
- No patient resistance to AI-assisted note-taking
- Formal evaluation is still ongoing; Health NZ isn’t quantifying savings yet
Heidi is the argument against the disciplinary memo. When you give clinicians proper AI tools, they use them happily and safely. When you don’t, they improvise — and that improvisation creates real risks.
The Real Risk Isn’t Discipline — It’s Data
The concern about unapproved AI tools is legitimate. Clinical notes contain sensitive patient information, and a free consumer chatbot is not a clinical system. There are three real problems:
- Data sovereignty: Patient data flows to overseas servers with no NZ Privacy Act obligations
- Accuracy: Free AI tools can hallucinate medication dosages, diagnoses, or patient histories
- Accountability: If AI-generated notes contain errors, who’s responsible — the clinician who used the tool, or Health NZ that didn’t provide an alternative?
These are real risks. But the solution isn’t a memo — it’s making the approved tool available everywhere, fast.
The NZ Pattern: Frameworks Without Follow-Through
This story sits squarely in a pattern. NZ has:
- A national AI strategy (“Investing with Confidence,” launched July 2025)
- A Public Service AI Framework (2025)
- MBIE’s Responsible AI Guidance for Businesses
- An AI Forum Blueprint refreshed just last week
All the frameworks exist. But a clinician in Rotorua writing mental health notes at 9pm on a Friday doesn’t need a framework. They need a tool that works. And right now, the gap between “NZ has AI governance” and “NZ clinicians have approved AI tools” is exactly wide enough for shadow AI to thrive.
What Other Countries Are Doing
Australia’s John Curtin Research Centre just released a report warning that Australia risks repeating its social media regulatory failures with AI in the workplace — waiting until harm is done before acting. The UK is pushing “middle power” cooperation to prevent AI market concentration. The EU just rolled back its own AI Act high-risk rules by over a year.
The global pattern is the same everywhere: regulation struggles to keep up with deployment. NZ isn’t unique in this. But NZ is small enough to move faster — if the will exists.
What Should Happen Next
- Accelerate Heidi rollout — not just to EDs, but to mental health, GP clinics, and community services. The tool exists. Deploy it.
- Drop the disciplinary threat — it’s counterproductive. Staff using AI to cope with workload aren’t the problem; the system that leaves them no alternative is.
- Create a clear approved-tools policy — clinicians need to know what they can and can’t use, and what to do when the approved tool isn’t available yet.
- Invest in NZ-specific AI compliance — any AI tool processing NZ health data needs to meet Privacy Act requirements. This is a procurement problem, not a discipline problem.
❓ Frequently Asked Questions
Q: Is it actually dangerous to use ChatGPT for clinical notes? Yes, for two reasons: patient data flows to overseas servers without NZ privacy protections, and AI can hallucinate medical details. A wrong dosage in a clinical note isn’t a typo — it’s a patient safety risk.
Q: Why doesn’t Health NZ just ban AI entirely? Because clinicians need help, and AI tools genuinely reduce their workload. The PSA union’s position is correct: banning without providing alternatives just drives the problem underground.
Q: How does Heidi avoid the data risk? Heidi is registered with NAIAEAG (Health NZ’s National AI and Algorithm Expert Advisory Group), meaning it meets NZ health data compliance requirements. Free ChatGPT does not.
Q: What does this mean for patients? If your clinician is using an approved tool like Heidi, your data is protected and the notes are more likely to be accurate. If they’re using ChatGPT secretly, your data may have left NZ and the notes may contain errors. Neither scenario is ideal — which is why the rollout of approved tools matters.
🔍 THE BOTTOM LINE
Health NZ’s shadow-AI problem isn’t a discipline problem. It’s a deployment problem. The approved tool exists. The framework exists. The clinician demand exists. What’s missing is the urgency to connect them. Every day Heidi isn’t available in a ward is a day staff will keep reaching for whatever works — and who can blame them?