Privacy Act AI Compliance Checklist NZ

Privacy Act AI Guidance: Checklist for NZ Doctors & Lawyers

The Privacy Commissioner has clear expectations for AI use. Most professionals are getting it wrong. Here's the checklist.

Tags: Privacy Act, AI Compliance, Doctors, Lawyers, SafeAI

Last Updated: April 16, 2026
Reading Time: 10 minutes
For: Doctors, lawyers, and professionals handling sensitive client data


🔍 The Bottom Line

The Privacy Commissioner is investigating AI complaints now. NZ has no AI-specific law, but the Privacy Act 2020 already covers AI use — and the Commissioner is making it a priority. If you’re putting client or patient data into public AI tools without consent, you’re already in breach. Get consent, use enterprise AI, document everything.


⚠️ Why This Matters

The Privacy Commissioner has flagged AI as a priority concern for 2025-2026. Complaints about AI use are increasing, and the Commissioner is investigating.

Recent cases:

  • GP practice flagged for using AI scribing without patient consent
  • Law firm the subject of a complaint after inputting client documents into ChatGPT
  • HR consultant investigated for AI resume screening without candidate knowledge

The pattern: Professionals think “AI is just a tool” — Privacy Commissioner says “AI is data processing with extra risks.”


📋 The Law

The Privacy Act 2020 applies to all agencies (including sole practitioners) that process personal information.

Key sections for AI:

  • Section 22: sets out the 13 Information Privacy Principles (IPPs) that govern collection, use, and disclosure
  • Part 5: the Privacy Commissioner can investigate complaints
  • Sections 123–133: the Commissioner can issue compliance notices
  • Section 212: offences carry fines of up to $10,000
  • Plus: Professional body complaints (medical council, law society) can strike you off

AI-specific guidance: Privacy Commissioner published “Artificial Intelligence and the Information Privacy Principles” in 2023, updated 2025.


✅ The Checklist

Before Using AI

  • Identify what data AI will process — personal, health, financial, or other sensitive info?
  • Check if AI is necessary — can you do this without AI?
  • Review AI vendor’s privacy policy — where is data stored? Who can access it?
  • Confirm data residency — is data leaving NZ? (US AI tools = overseas disclosure)
  • Get client/patient consent — informed, specific, documented
  • Check professional body rules — NZLS, Medical Council, CA ANZ may have additional requirements

During AI Use

  • Use enterprise/private AI — not public ChatGPT, Claude, etc.
  • Minimise data input — only what’s necessary
  • De-identify where possible — remove names, identifiers before AI processing
  • Document AI use — note in client file what AI was used for
  • Verify output — AI can hallucinate, you’re responsible for accuracy
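
The “de-identify where possible” step can be partially automated before anything is sent to an AI tool. Below is a minimal Python sketch that strips known names and some common identifier patterns. The regexes (a simplified NHI-style pattern, generic email and phone patterns) and the `redact` helper are illustrative assumptions only; regex redaction alone is not a complete de-identification solution, so treat this as a starting point rather than a guarantee.

```python
import re

# Illustrative patterns only: real de-identification needs a vetted tool.
# The NHI pattern is simplified, and the phone/email patterns are generic.
PATTERNS = {
    "NHI": re.compile(r"\b[A-Z]{3}[0-9]{2}[0-9A-Z]{2}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b0\d{1,2}[\s-]?\d{3}[\s-]?\d{3,4}\b"),
}

def redact(text: str, names: list[str]) -> str:
    """Replace known names and identifier-pattern matches with placeholders."""
    for name in names:
        text = re.sub(re.escape(name), "[NAME]", text, flags=re.IGNORECASE)
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

note = "Jane Smith (NHI ABC1234) called 021 555 1234 about her results."
print(redact(note, names=["Jane Smith"]))
# -> [NAME] (NHI [NHI]) called [PHONE] about her results.
```

If the names in a document are not known in advance, pattern matching will miss them; a named-entity recogniser or a purpose-built redaction tool would be needed for that.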

After AI Use

  • Delete AI processing records — if vendor allows, request data deletion
  • Update privacy policy — disclose AI use to future clients
  • Monitor for complaints — be ready to explain AI use if questioned
  • Review regularly — AI tools and rules change, reassess every 6 months

🏥 Healthcare-Specific Rules

What the Privacy Commissioner Expects

From 2025 AI guidance:

“Health agencies using AI must ensure patients understand how their information will be used, and consent must be informed and specific.”

Translation: Generic “we use technology” consent isn’t enough. Patients need to know:

  • What AI tool you’re using
  • What data goes into it
  • Where data is processed (NZ or overseas)
  • That they can decline AI use without affecting care

AI Scribing (The Hot Topic)

AI scribing tools (like Abridge, Ambience, etc.) record consultations and generate clinical notes.

Compliance requirements:

  1. Explicit consent — “I’m using AI to help write notes, is that OK?”
  2. Vendor due diligence — is the scribe tool secure? Where’s data stored?
  3. Review before saving — never save AI notes without checking accuracy
  4. Document consent — note in patient record that AI scribing was used
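
Step 4 (document consent) is easier to make routine if the file note has a fixed shape. The sketch below shows one possible structure; the `AIConsentRecord` class, its field names, and the “ExampleScribe” vendor name are illustrative assumptions, not a format prescribed by the Privacy Commissioner or any practice-management vendor.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class AIConsentRecord:
    """Illustrative file-note structure; field names are assumptions."""
    patient_id: str
    tool: str                 # which AI tool consent was given for (step 1)
    data_location: str        # where the vendor stores data (step 2)
    consent_given: bool
    notes_reviewed: bool      # clinician checked AI output before saving (step 3)
    recorded_at: str = ""

    def __post_init__(self):
        # Timestamp the note automatically if none was supplied.
        if not self.recorded_at:
            self.recorded_at = datetime.now(timezone.utc).isoformat()

record = AIConsentRecord(
    patient_id="P-1042",
    tool="ExampleScribe",      # hypothetical vendor name
    data_location="AU region",
    consent_given=True,
    notes_reviewed=True,
)
print(json.dumps(asdict(record), indent=2))  # step 4: save in the patient record
```
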

What led to the complaint against the GP practice: it used AI scribing for six months before a patient asked about it. No consent, no disclosure, no vendor checks. The Privacy Commissioner investigated.


⚖️ Legal-Specific Rules

NZ Law Society Guidance (2024)

Key requirements:

  • Client confidentiality applies to AI — don’t input privileged info to public AI
  • Courts require disclosure if AI used in filings
  • Lawyers remain responsible for AI-drafted documents
  • Some AI use may require client consent

Client Confidentiality + AI

The problem: public AI tools (consumer ChatGPT, Claude, and similar) may use your input to train future models unless you opt out. Your client’s information can end up in the vendor’s training data.

What NZLS says:

“Lawyers should not input confidential or privileged information into publicly available generative AI tools.”

Safe approach:

  • Use enterprise AI with data protection (Microsoft 365 Copilot, etc.)
  • De-identify before using public AI (remove names, case details)
  • Get client consent for AI use in their matter
  • Document AI use in file notes

Court Filings + AI

Courts of NZ guidance (2024):

  • Disclosure required if AI used in drafting court documents
  • Some courts require certification that AI was used appropriately
  • AI-generated citations must be verified (AI hallucinates case law)

Real case: an NZ lawyer cited fake cases generated by ChatGPT in a court filing and was suspended for six months. The court’s message: “You are responsible for everything you file.”


🚫 Common Mistakes (Don’t Do These)

❌ Pasting Client Data into ChatGPT

Why it’s wrong:

  • Consumer ChatGPT may use your input for model training unless you opt out
  • Your client’s data can end up in OpenAI’s training pipeline
  • Breaches confidentiality + Privacy Act IPP 5 (security)

Safe alternative:

  • Use enterprise AI with data protection
  • De-identify before using public AI
  • Use NZ-hosted AI tools where possible

❌ AI Scribing Without Patient Consent

Why it’s wrong:

  • Patients have right to know how their info is used
  • Health information has extra protections
  • No consent = breach of IPP 3 (collection notice)

Safe approach:

  • Ask every time: “Is it OK if I use AI to help write notes?”
  • Document consent in patient record
  • Let patients decline without affecting care

❌ AI Hiring/Screening Without Disclosure

Why it’s wrong:

  • Candidates don’t know AI is deciding
  • Can’t challenge AI decision if they don’t know it happened
  • Potential bias issues (AI may discriminate)

Safe approach:

  • Disclose AI use in job ads
  • Let candidates request human review
  • Audit AI for bias regularly
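
The “audit AI for bias” step can start with something as simple as comparing selection rates across candidate groups. The sketch below applies the four-fifths rule, a heuristic borrowed from US hiring guidance rather than any NZ legal standard; the group labels and numbers are hypothetical.

```python
def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (selected, total_applicants)."""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def adverse_impact(outcomes: dict[str, tuple[int, int]],
                   threshold: float = 0.8) -> list[str]:
    """Flag groups whose selection rate is below threshold x the best rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return [g for g, r in rates.items() if r < threshold * best]

# Hypothetical screening results from one review period.
screening = {"group_a": (30, 100), "group_b": (12, 100)}
print(adverse_impact(screening))
# -> ['group_b']  (0.12 is below 0.8 * 0.30, so group_b is flagged)
```

A flagged group is a prompt for human review of the screening tool, not proof of discrimination; small sample sizes in particular can trigger false flags.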

🚨 What If Someone Complains About You?

Privacy Commissioner process:

  1. Complaint filed
  2. Commissioner investigates
  3. Agency must respond (explain AI use, show consent, demonstrate security)
  4. Outcomes: compliance notice, fine, name-and-shame, or dismissed

Professional body process (parallel):

  1. Complaint to Medical Council / Law Society
  2. Competence or conduct investigation
  3. Outcomes: conditions on practice, suspension, strike-off

Worst case: Both Privacy Commissioner AND professional body investigate. You’re defending two complaints for one mistake.


🛡️ The SafeAI Solution

Tracking Privacy Commissioner guidance, professional body rules, and case law is a part-time job. SafeAI does it for you.

What SafeAI Navigator provides:

  • Monthly Privacy Commissioner updates — new guidance, complaints, investigations
  • Sector-specific alerts — healthcare, legal, finance, education
  • Template consent forms — AI consent for patients, clients, candidates
  • Vendor checklists — questions to ask AI vendors before signing up
  • Case studies — what triggered complaints against others, and how to avoid the same mistakes

Cost: $9.95/month per person, cancel anytime.

→ Subscribe to SafeAI Navigator


⚡ Quick Reference: Can I Use AI For This?

| Task | Personal Data? | Consent Needed? | Safe Approach |
| --- | --- | --- | --- |
| AI scribing patient consult | Yes (health info) | ✅ Yes, explicit | Use enterprise tool, document consent |
| ChatGPT for legal research | No (if de-identified) | ❌ No | De-identify, verify output |
| AI screening job applications | Yes (candidate info) | ✅ Yes, disclose in ad | Let candidates request human review |
| AI summarising client file | Yes (client data) | ✅ Yes | Use enterprise AI with data protection |
| AI writing marketing content | No | ❌ No | OK if no client info included |

Rule of thumb: If it has names, identifiers, or sensitive info — get consent first.


Disclaimer: This checklist is for informational purposes only and does not constitute legal advice. Consult your professional body or legal counsel for advice specific to your situation.

About the Author: CJ runs Singularity.Kiwi and is building SafeAI to help NZ professionals stay compliant with AI regulation.

Sources: https://www.privacy.org.nz/resources-and-learning/a-z-topics/ai/, https://www.lawsociety.org.nz/professional-practice/rules-and-maintaining-professional-standards/generative-ai-guidance-for-lawyers/, https://www.nzdoctor.co.nz/article/undoctored/ai-tools-and-privacy-act-commissioner-issues-new-guidance