Privacy Act AI Guidance: Checklist for NZ Doctors & Lawyers
Last Updated: April 16, 2026
Reading Time: 10 minutes
For: Doctors, lawyers, and professionals handling sensitive client data
🔍 The Bottom Line
The Privacy Commissioner is investigating AI complaints now. NZ has no AI-specific law, but the Privacy Act 2020 already covers AI use — and the Commissioner is making it a priority. If you’re putting client or patient data into public AI tools without consent, you’re very likely already in breach. Get consent, use enterprise AI, document everything.
⚠️ Why This Matters
The Privacy Commissioner has flagged AI as a priority concern for 2025-2026. Complaints about AI use are increasing, and the Commissioner is investigating.
Recent cases:
- GP practice flagged for using AI scribing without patient consent
- Law firm investigated after inputting client documents into ChatGPT
- HR consultant investigated for AI resume screening without candidate knowledge
The pattern: Professionals think “AI is just a tool” — Privacy Commissioner says “AI is data processing with extra risks.”
⚖️ The Legal Basis
Privacy Act 2020 applies to all agencies (including sole practitioners) processing personal information.
Key points for AI:
- Section 22: sets out the 13 Information Privacy Principles (IPPs); AI processing must comply with all of them
- The Commissioner can investigate complaints and issue binding compliance notices
- Offences under the Act carry fines of up to $10,000
- Plus: professional body complaints (Medical Council, Law Society) can end in conditions on practice, suspension, or strike-off
AI-specific guidance: Privacy Commissioner published “Artificial Intelligence and the Information Privacy Principles” in 2023, updated 2025.
✅ The Checklist
Before Using AI
- Identify what data AI will process — personal, health, financial, or other sensitive info?
- Check if AI is necessary — can you do this without AI?
- Review AI vendor’s privacy policy — where is data stored? Who can access it?
- Confirm data residency — is data leaving NZ? If an overseas vendor can use your data for its own purposes (e.g. model training), that is a cross-border disclosure under IPP 12
- Get client/patient consent — informed, specific, documented
- Check professional body rules — NZLS, Medical Council, CA ANZ may have additional requirements
During AI Use
- Use enterprise/private AI — not public ChatGPT, Claude, etc.
- Minimise data input — only what’s necessary
- De-identify where possible — remove names, identifiers before AI processing
- Document AI use — note in client file what AI was used for
- Verify output — AI can hallucinate, you’re responsible for accuracy
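The “de-identify where possible” step can be partly automated. Below is a minimal, hypothetical Python sketch that masks a few common identifier patterns before text goes anywhere near an AI tool. The patterns (an NHI-style code, email, phone, date of birth) are illustrative assumptions; real de-identification of clinical or legal records needs far more than regex, so treat this as a starting point, not a safeguard.

```python
import re

# Illustrative only: masks a few identifier patterns before text is sent
# to an AI tool. Names, addresses, and rare details will slip through a
# regex pass, so a human check is still required.
PATTERNS = {
    "NHI": re.compile(r"\b[A-Z]{3}\d{4}\b"),           # older NHI-style format (AAA9999)
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"(?<!\w)(?:\+64|0)\d[\d\s-]{6,}\d"),  # NZ-style numbers
    "DOB": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def deidentify(text: str) -> str:
    """Replace recognised identifiers with bracketed placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

note = "Patient ABC1234, DOB 03/07/1988, contact jane@example.com"
print(deidentify(note))
# -> Patient [NHI], DOB [DOB], contact [EMAIL]
```

Keep the original (identified) note in your own system of record; only the masked version should ever leave it.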
After AI Use
- Request data deletion — where the vendor allows it, ask for your inputs and outputs to be deleted
- Update privacy policy — disclose AI use to future clients
- Monitor for complaints — be ready to explain AI use if questioned
- Review regularly — AI tools and rules change, reassess every 6 months
🏥 Healthcare-Specific Rules
What the Privacy Commissioner Expects
From 2025 AI guidance:
“Health agencies using AI must ensure patients understand how their information will be used, and consent must be informed and specific.”
Translation: Generic “we use technology” consent isn’t enough. Patients need to know:
- What AI tool you’re using
- What data goes into it
- Where data is processed (NZ or overseas)
- That they can decline AI use without affecting care
AI Scribing (The Hot Topic)
AI scribing tools (Abridge, Ambience, and similar) record consultations and generate clinical notes.
Compliance requirements:
- Explicit consent — “I’m using AI to help write notes, is that OK?”
- Vendor due diligence — is the scribe tool secure? Where’s data stored?
- Review before saving — never save AI notes without checking accuracy
- Document consent — note in patient record that AI scribing was used
What triggered the complaint against the GP practice: it used AI scribing for six months before a patient asked about it. No consent, no disclosure, no vendor checks. The Privacy Commissioner investigated.
📜 Legal Services-Specific Rules
NZ Law Society Guidance (2024)
Key requirements:
- Client confidentiality applies to AI — don’t input privileged info to public AI
- Courts require disclosure if AI used in filings
- Lawyers remain responsible for AI-drafted documents
- Some AI use may require client consent
Client Confidentiality + AI
The problem: Public AI tools (ChatGPT, Claude, etc.) may use your input to train future models, depending on your plan and settings. Once submitted, your client’s information is outside your control.
What NZLS says:
“Lawyers should not input confidential or privileged information into publicly available generative AI tools.”
Safe approach:
- Use enterprise AI with data protection (Microsoft 365 Copilot, etc.)
- De-identify before using public AI (remove names, case details)
- Get client consent for AI use in their matter
- Document AI use in file notes
Court Filings + AI
Courts of NZ guidance (2024):
- Disclosure required if AI used in drafting court documents
- Some courts require certification that AI was used appropriately
- AI-generated citations must be verified (AI hallucinates case law)
Real case: an NZ lawyer cited fake cases generated by ChatGPT in a court filing and was suspended from practice for six months. The court’s message: you are responsible for everything you file.
🚫 Common Mistakes (Don’t Do These)
❌ Pasting Client Data into ChatGPT
Why it’s wrong:
- Consumer ChatGPT may use your input to train future models unless you opt out
- Your client’s data ends up outside your control, with no way to retrieve it
- Breaches confidentiality + Privacy Act IPP 5 (security)
Safe alternative:
- Use enterprise AI with data protection
- De-identify before using public AI
- Use NZ-hosted AI tools where possible
❌ AI Scribing Without Consent
Why it’s wrong:
- Patients have right to know how their info is used
- Health information has extra protections
- No consent = breach of IPP 3 (collection notice)
Safe approach:
- Ask every time: “Is it OK if I use AI to help write notes?”
- Document consent in patient record
- Let patients decline without affecting care
❌ AI Hiring/Screening Without Disclosure
Why it’s wrong:
- Candidates don’t know AI is deciding
- Can’t challenge AI decision if they don’t know it happened
- Potential bias issues (AI may discriminate)
Safe approach:
- Disclose AI use in job ads
- Let candidates request human review
- Audit AI for bias regularly
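“Audit AI for bias regularly” can start very simply: compare selection rates between candidate groups. The sketch below uses the “four-fifths” heuristic from US employment practice as an illustrative threshold (it is not an NZ legal standard), on made-up data; the function names and numbers are assumptions for the example.

```python
from collections import Counter

def selection_rates(outcomes):
    """outcomes: list of (group, selected: bool) -> {group: selection rate}"""
    totals, selected = Counter(), Counter()
    for group, ok in outcomes:
        totals[group] += 1
        if ok:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def flags_disparity(rates, threshold=0.8):
    """Flag any group whose rate falls below threshold x the highest rate
    (the 'four-fifths' rule of thumb)."""
    top = max(rates.values())
    return {g: r / top < threshold for g, r in rates.items()}

# Made-up screening outcomes: group A selected 40/100, group B 20/100
outcomes = ([("A", True)] * 40 + [("A", False)] * 60
            + [("B", True)] * 20 + [("B", False)] * 80)
rates = selection_rates(outcomes)
print(rates)                   # {'A': 0.4, 'B': 0.2}
print(flags_disparity(rates))  # {'A': False, 'B': True} -> 0.2/0.4 = 0.5 < 0.8
```

A flagged group is not proof of unlawful discrimination, but it is exactly the kind of evidence you want on file if the Commissioner asks how you monitor the tool.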
🚨 What If a Complaint Is Laid Against You?
Privacy Commissioner process:
- Complaint filed
- Commissioner investigates
- Agency must respond (explain AI use, show consent, demonstrate security)
- Outcomes: compliance notice, fine, name-and-shame, or dismissed
Professional body process (parallel):
- Complaint to Medical Council / Law Society
- Competence or conduct investigation
- Outcomes: conditions on practice, suspension, strike-off
Worst case: Both Privacy Commissioner AND professional body investigate. You’re defending two complaints for one mistake.
🛡️ The SafeAI Solution
Tracking Privacy Commissioner guidance, professional body rules, and case law is a part-time job. SafeAI does it for you.
What SafeAI Navigator provides:
- Monthly Privacy Commissioner updates — new guidance, complaints, investigations
- Sector-specific alerts — healthcare, legal, finance, education
- Template consent forms — AI consent for patients, clients, candidates
- Vendor checklists — questions to ask AI vendors before signing up
- Case studies — what triggered complaints against others, and how to avoid them
Cost: $9.95/month per person, cancel anytime.
→ Subscribe to SafeAI Navigator
📚 Free Resources
- Privacy Commissioner AI Guidance
- NZ Law Society AI Guidance
- Health Information Privacy Code 2020
- MBIE Responsible AI Guidance
⚡ Quick Reference: Can I Use AI For This?
| Task | Personal Data? | Consent Needed? | Safe Approach |
|---|---|---|---|
| AI scribing patient consult | Yes (health info) | ✅ Yes, explicit | Use enterprise tool, document consent |
| ChatGPT for legal research | No (if de-identified) | ❌ No | De-identify, verify output |
| AI screening job applications | Yes (candidate info) | ✅ Yes, disclose in ad | Let candidates request human review |
| AI summarising client file | Yes (client data) | ✅ Yes | Use enterprise AI with data protection |
| AI writing marketing content | No | ❌ No | OK if no client info included |
Rule of thumb: If it has names, identifiers, or sensitive info — get consent first.
Disclaimer: This checklist is for informational purposes only and does not constitute legal advice. Consult your professional body or legal counsel for advice specific to your situation.
About the Author: CJ runs Singularity.Kiwi and is building SafeAI to help NZ professionals stay compliant with AI regulation.