The Social Media Warning, This Time for AI
A new report from the John Curtin Research Centre, backed by the SDA — Australia’s largest retail and fast-food union — doesn’t mince words. Australia is repeating its social media mistakes with AI in the workplace. And by the time regulators catch up, the damage will already be baked in.
The report warns that unchecked AI deployment in workplaces could intensify worker surveillance, unsafe workloads, and job insecurity. The comparison to social media is deliberate: Australia failed to regulate platforms early, and the consequences — misinformation, mental health harms, algorithmic manipulation — took years to address and remain unresolved.
“AI is so much more powerful than social media,” said co-author Dominic Meagher. “We do not have the luxury of getting it wrong this time.”
What the Report Calls For
The John Curtin Research Centre report makes three key recommendations:
- A national AI taskforce — a dedicated body to coordinate AI workplace regulation across government
- Fair Work Act review — updating Australia’s core workplace law to explicitly address AI-related risks including surveillance, algorithmic management, and automated decision-making
- Mandatory human oversight — requiring that AI systems in workplaces cannot operate without meaningful human supervision
The report also recommends establishing an AI expert advisory panel within the Fair Work Commission to assess AI-related disputes and ensure existing workplace protections continue to apply.
The Government’s Response: “We’re Working On It”
Workplace Relations Minister Amanda Rishworth told the AFR Workforce Summit the government has conducted a workforce “gap analysis” into AI’s effects on jobs. Preliminary results suggest AI has slowed growth in some occupations — filing clerks, keyboard operators — but the overall mix of jobs hasn’t changed faster than usual.
The government is developing the capability to monitor AI impacts by analysing labour market changes since ChatGPT’s November 2022 launch, with a focus on entry-level jobs and workforce composition. Employment outcomes for young tertiary graduates have been positive so far, countering early fears they’d be the first casualties.
Translation: the government is watching. But watching isn’t regulating.
Why NZ Should Pay Attention
Here’s the uncomfortable part: Australia’s regulatory gap looks wide from the report’s perspective, but from where NZ sits, Australia is ahead. Our non-binding AI Blueprint for Aotearoa doesn’t address workplace AI specifically. Our employment law has no provisions for algorithmic management or AI surveillance. And unlike Australia, we don’t even have a government-commissioned gap analysis underway.
The report’s findings about worker surveillance are particularly relevant for NZ. AI-powered monitoring tools — keystroke tracking, productivity scoring, emotion recognition, automated scheduling — are being deployed in NZ workplaces right now, particularly in retail, logistics, and contact centres. Workers have no specific legal protections against algorithmic surveillance beyond general privacy principles that predate AI by decades.
As Meagher noted: “Just because AI makes a decision, it doesn’t mean that it’s an excuse for the company to sidestep their obligations [to workers’ rights].”
The Collaborative vs Imposed Divide
The report makes one point that’s easy to miss amid the alarm: companies that work with their workforce on AI integration are the ones that actually turn it into profit. The ones that impose it from the top down face resistance, errors, and cultural damage.
This is the lesson most organisations still haven’t internalised. AI adoption isn’t just a technology decision — it’s a workplace relations decision. The gap between companies that consult and companies that dictate is showing up in productivity, retention, and incident rates.
What This Means for NZ Workers
If you’re an NZ worker in retail, logistics, admin, or any role where AI tools are being introduced to monitor, evaluate, or manage your work:
- You have fewer specific protections than Australian workers — and Australia is itself already behind
- Your employer isn’t required to tell you when AI is making decisions about your work
- There’s no legal requirement for human oversight of AI in NZ workplaces
- The Privacy Act 2020 provides general principles but nothing targeted at algorithmic management
The SDA-backed report is focused on Australia, but the dynamics it describes — surveillance intensification, unsafe algorithmic workloads, job insecurity from opaque AI systems — are already present in NZ workplaces. We just haven’t had the report yet.
🔍 THE BOTTOM LINE
Australia is being warned it’s repeating social media’s regulatory mistakes with AI. NZ hasn’t even acknowledged it’s in the same building. The longer we wait, the harder the catch-up — and the more damage gets baked into workplace practices that will take years to undo.
❓ Frequently Asked Questions
Q: Does NZ have any AI workplace regulation? No specific AI workplace regulation exists in NZ. The Privacy Act 2020 provides general data protection principles, and employment law covers some workplace rights, but neither addresses algorithmic management, AI surveillance, or automated workplace decision-making specifically.
Q: What’s the Fair Work Act review about? The John Curtin Research Centre recommends Australia review its Fair Work Act to explicitly address AI-related workplace risks. This would include provisions for algorithmic surveillance, AI-driven scheduling, and ensuring existing worker protections apply when AI makes decisions about employment matters. NZ has no equivalent legislation under review.
Q: What can NZ workers do? Ask your employer what AI tools are being used in decisions about your work. Request transparency about monitoring and scoring systems. If you’re in a union, raise AI governance as a bargaining issue. The gap in legal protection means collective action and workplace agreements are currently your strongest tools.
SOURCES
- ABC News: Australia lacks national strategy to regulate AI in workplace
- John Curtin Research Centre AI Workplace Report (2026)
- SDA Union (Shop, Distributive and Allied Employees Association)