The good news is the computer thinks you have a nice personality. The bad news is the computer is now your boss’s surveillance tool, and it’s probably wrong about you.
Emotion AI — software that claims to read your facial expressions, voice tone, and biometric data to assess your mood, engagement, and stress levels — is spreading through workplaces like a rumour at a staff meeting. And like most rumours, it’s mostly unreliable, occasionally harmful, and impossible to take back once it’s out there.
Who’s Watching and What They’re Selling
The landscape is wider than you’d think:
- MorphCast licenses its facial emotion-detection tech to a mental-health app, a program that monitors schoolchildren’s attention, and McDonald’s — which scanned app users’ faces in Portugal and offered personalised coupons based on their supposed mood
- MetLife uses AI to monitor call-centre agents’ pitch and tone of voice in real time
- Burger King is piloting an AI chatbot embedded in employee headsets that evaluates friendliness. Her name is Patty
- Aware, a Slack integration, continuously monitors messages for “sentiment and toxicity”
- HireVue uses AI to interview and analyse job candidates — clients include Ikea, Regeneron, and the Children’s Hospital of Philadelphia
- Framery, which makes soundproof office pods sold to Microsoft and L’Oréal, has tested outfitting chairs with biosensors that measure heart rate, breathing rate, and nervousness
- First Horizon Bank uses AI to monitor call-centre employees’ stress and to show them pictures of their families when levels get too high, which is either sweet or dystopian depending on your tolerance for this kind of thing
The global emotion-AI market is expected to triple to $9 billion by 2030. This is not a fringe technology. This is an industry.
The Science Says: It Doesn’t Work
Here’s the problem. Emotion AI is built on the work of psychologist Paul Ekman, whose theory of six basic emotions (anger, disgust, fear, happiness, sadness, surprise) has been widely challenged as oversimplified and methodologically flawed.
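To see what the critique is aimed at, here’s a hypothetical sketch of the core assumption: one expression, one label, no context. The six labels are Ekman’s; the function name, scores, and logic are invented for illustration and stand in for the trained classifiers real products use.

```python
# Illustrative only: the context-free mapping emotion-AI products inherit
# from Ekman's six-category taxonomy. Real products use trained classifiers,
# but the output space and the one-expression-one-emotion assumption are
# the same contested foundation.

EKMAN_LABELS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

def label_face(expression_scores: dict[str, float]) -> str:
    """Pick the single highest-scoring Ekman label for a face.

    The critique: a scowl scores high on 'anger' whether the person is
    furious, concentrating, or squinting at the sun. Context never enters.
    """
    return max(EKMAN_LABELS, key=lambda label: expression_scores.get(label, 0.0))

# A candidate concentrating hard in an interview still comes out "angry":
print(label_face({"anger": 0.62, "sadness": 0.21, "surprise": 0.17}))
```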
Neuroscientist Lisa Feldman Barrett, who has studied the psychology of emotion for decades, puts it plainly: “Your movements, whether it’s on your face or in your body or the tones that you emit, don’t have inherent emotional meaning. They have relational meaning.” They vary based on context, culture, the person’s face, the room temperature, vibes.
The data backs this up. In the US, people scowl when angry only about 35 percent of the time, which means a system looking for scowls misses roughly 65 percent of angry people. And when people do scowl, half the time they aren’t angry at all.
“So imagine a situation where you’re in a job interview,” Barrett says. “You’re listening really carefully to the person, you’re scowling as you’re listening because you’re paying really, really close attention, and an AI labels you as angry. You will not get that job.”
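In classifier terms, those percentages are a report card, and it’s a bad one. A minimal sketch of the arithmetic, assuming only the figures quoted above:

```python
# Report card for a detector that treats "scowl" as "angry", using the
# figures above: P(scowl | angry) = 0.35 and P(angry | scowl) = 0.50.
# Illustrative arithmetic, not a real benchmark.

p_scowl_given_angry = 0.35   # recall: angry people the scowl rule catches
p_angry_given_scowl = 0.50   # precision: flagged people who are actually angry

miss_rate = 1 - p_scowl_given_angry        # angry people it never flags
false_discovery = 1 - p_angry_given_scowl  # flagged people who weren't angry

print(f"misses {miss_rate:.0%} of angry people")                # misses 65%
print(f"{false_discovery:.0%} of its 'angry' flags are wrong")  # 50% wrong
```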
A 2018 study found emotion-recognition AI rated Black NBA players as angrier than their white teammates — even when they were smiling. The tech replicates the biases of its training data. This should surprise no one at this point, and yet companies keep buying it.
The Shitty Technology Adoption Curve
Writer Cory Doctorow coined the “Shitty Technology Adoption Curve”: extractive technologies come first to people in precarious circumstances (low-wage workers, gig-economy participants) before they’re refined, normalised, and pushed up the ladder to people with more power.
Emotion AI started in call centres and truck cabs. It’s moving to white-collar offices now. Your Slack messages analysed for sentiment. Your Zoom meetings tracked for attention and positivity. Your next job interview scored by an algorithm that thinks concentration looks like anger.
The pandemic accelerated this. Remote work put employees out of sight. Trust between employers and workers is tanking. AI is upending the job market. The tools currently surveilling call-centre staff may soon replace them entirely. In the meantime, corporations are laying off thousands and looking for other ways to squeeze productivity out of the humans they haven’t fired yet.
The EU Drew a Line. The US Didn’t.
With the AI Act, the European Union banned emotion AI in the workplace, except for medical or safety reasons. The regulation prompted MorphCast, which was founded in Florence, to relocate to the Bay Area. Because of course it did.
The US has no equivalent protection: federal law gives employers broad permission to monitor much of what an employee does on company time, property, and devices. A 2022 New York Times investigation found that eight of the 10 largest private employers in the United States track individual workers’ productivity. In one poll, 37 percent of employers said they’d used stored recordings to fire a worker.
At UnitedHealth Group, a monitoring program docked social workers for keyboard inactivity — even when they were offline because they were in counselling sessions with patients. The computer didn’t know the difference between “not typing” and “doing the actual job.” That’s the fundamental problem with reducing humans to data points.
What Actually Scares the Researchers
Here’s the twist. The more time researchers spend with this technology, the more they worry not about when it fails, but about when it works.
“I don’t need it made quite so literal,” one researcher told The Atlantic, referring to the way emotion AI makes workplace power viscerally explicit. If the robot in your Zoom call can accurately read your emotional state, then you now have a second job: performing the right emotions for the algorithm.
It’s one thing to be watched. It’s another to be graded on your happiness.
Karen Levy, a Cornell information scientist who studied biometric surveillance in the trucking industry, found that constant surveillance added its own form of stress without actually reducing crashes. Truckers had “a really notable degree of pride” and “a lot of autonomy to kind of do the work in the way that they saw fit.” That pride got chipped away as the computers began watching.
“There really is a pretty strong dignitary concern to being watched in some fairly intimate ways,” Levy said, “that have to do with people’s bodies and their spaces.”
🇳🇿 The NZ Angle
New Zealand doesn’t have specific legislation banning emotion AI in workplaces. The Privacy Act 2020 and the Privacy Commissioner’s guidance would treat facial-expression monitoring and biometric emotion tracking as significant privacy intrusions requiring clear justification and informed consent.
But “informed consent” in a workplace is a slippery concept. When your employer tells you that the Zoom app now tracks your “engagement” and your alternative is finding another job, that’s consent in the same way a hostage consents to giving up their wallet.
NZ unions have been relatively quiet on emotion AI specifically, though the Council of Trade Unions has flagged broader AI surveillance concerns. The EU’s ban provides a template. Whether NZ follows it or waits for the technology to arrive first is an open question.
🔍 The Bottom Line
Emotion AI is junk science wrapped in enterprise software, sold to managers who want to believe that managing people is a data problem. Hundreds of studies involving thousands of people show that emotions cannot be objectively read from faces or voices. The technology is biased, inaccurate, and deployed against workers who have no meaningful ability to opt out.
But the scarier scenario isn’t the bad technology. It’s the good technology — a world where the algorithm actually can read your mood, and now your job performance includes performing the right emotions at the right time. Where your facial expressions have bearing on your ability to feed your family. Where “I was just thinking” becomes a performance issue.
The EU banned this in workplaces. The US Senate is too busy requiring ID to use chatbots. And the $9 billion emotion-AI market keeps growing.
As Barrett put it when asked what she wished people knew: “I have been talking about this for a fucking decade.”
She’s not alone. We just haven’t been listening.
Sources
- The Rise of Emotional Surveillance — The Atlantic
- Lisa Feldman Barrett, Northeastern University
- Cory Doctorow, “The Shitty Technology Adoption Curve”
- Markets and Markets: Emotion AI Market Report
- EU AI Act Article 5