Daily Technology: March 24, 2026
UN warns AI is already reshaping working conditions, DoorDash couriers now train AI, content moderators face trauma, and algorithmic management drives safety risks.
The International Labour Organization (ILO) and International Telecommunication Union (ITU) hosted a landmark webinar exposing the human cost behind AI development. The panel revealed that content moderators and data labelers, the workers who make AI systems usable, often work under extreme psychological pressure, constant surveillance, and poverty wages.
- Content moderators review graphic violence, sexual abuse, and atrocities for hours daily, often with minimal psychological support
- Data labelers in India report reviewing hundreds of traumatic videos per day for less than $2/hour
- Algorithmic management creates impossible targets—UK delivery workers report fatal accidents linked to algorithmic pressure
- NDAs silence workers, preventing them from discussing working conditions even with family
"The key issue is not whether AI will transform work; it already is. The central issue is how to ensure that this transformation advances decent work and social justice."
— Sher Verick, ILO Coordinator for Digitalisation and AI
When tech executives talk about "AI safety," they're rarely talking about the safety of the humans who make AI possible. The industry's most dangerous work happens in Nairobi, Manila, and Hyderabad, not San Francisco. Two-thirds of UK delivery couriers report anxiety driven by algorithmic management. These aren't edge cases; they're structural features of how AI development works today.
DoorDash Launches "Tasks" App: Couriers Now Train AI for Extra Pay
DoorDash introduced a standalone "Tasks" app that pays its 8 million U.S. delivery couriers to submit videos and complete activities that train AI and robotic systems. Workers can earn money by filming everyday tasks—like washing dishes while wearing a body camera—to help train computer vision systems.
- Tasks include recording videos, taking photos of restaurant menus, and speaking in other languages
- Pay is shown upfront, determined by "effort and complexity of the activity"
- Data feeds DoorDash's in-house AI models and partners in retail, insurance, hospitality, and tech
- Not available in California, New York City, Seattle, or Colorado (regulatory concerns)
"This data helps AI and robotic systems understand the physical world... There are more than 8 million Dashers who can reach almost anywhere in the U.S. and who want to earn flexibly beyond delivery. That's a powerful capability to digitize the physical world."
— Ethan Beatty, General Manager, DoorDash Tasks
Gig workers are becoming the training layer for automation. DoorDash is paying couriers to train the systems that could eventually displace them—or displace other delivery workers entirely. The irony is stark: workers building their own potential replacements for a few extra dollars per task. Uber launched a similar program last year. The gig economy is evolving into the AI training economy, with the same workers in the same precarious positions.
The Hidden Human Cost of Making AI "Safe"
A year-long investigation into content moderation workforces in Nairobi and Manila revealed a class-based system where psychological trauma is exported to the Global South. Workers like "Grace" in Nairobi earn roughly $230/month reviewing beheadings, child abuse, and sexual violence—so that users in California never have to see it.
- 150,000+ workers in Sub-Saharan Africa alone do content moderation and data annotation
- 30-50 seconds average time per decision, with 800-1,200 decisions per shift; even at the fastest pace, 800 decisions at 30 seconds each amounts to nearly seven hours of uninterrupted judgment
- $2/hour was the rate OpenAI paid Sama workers in Kenya to label toxic content for ChatGPT's safety filters (TIME, 2023)
- "Wellness rooms" with beanbags are offered, but therapy apps start with chatbots—the same AI being trained on workers' trauma
"The people who build AI systems, who write the research papers, who give the TED talks... earn six or seven figures. The people who perform the foundational labor of making those AI systems less toxic... live in cities where $230 a month is a living wage only if you share a room with two other people."
— Silicon Canals investigation
This isn't a bug; it's the architecture. Tech companies outsource content moderation through layers of BPO firms (Sama, Majorel, Teleperformance) specifically to diffuse liability and keep costs low. Workers sign NDAs that prevent them from discussing their work. When a moderator develops PTSD from reviewing child abuse material, there is no equity stake, no voice in the product, and often no legal recourse. The ILO is calling for international regulation, but enforcement remains the challenge.
Workers Selling Their Identities to Train AI
The Guardian reported on a growing phenomenon: workers in Cape Town, India, and elsewhere selling their biometric identities—voice recordings, facial scans, personal histories—to AI training companies. In Cape Town, workers earn around $14 per hour. In India, some make $50 in a week. The data trains AI systems that may never benefit these communities.
- Identity data includes voice recordings, facial scans, personal histories, and behavioral patterns
- Workers often unaware of how their data will be used or who will profit from it
- No long-term benefit—workers receive one-time payments while companies build permanent assets
- Ethical concerns about informed consent in communities with limited AI literacy
AI development is extracting more than labor; it's extracting identity. Voice data, facial geometry, and personal histories become permanent training assets for corporations, while workers get one-time payments. The industry frames this as "participation in the AI economy," but it's extraction dressed as opportunity. Workers selling their biometric data today may find that same data used to train systems that displace workers tomorrow.
What This Means for Workers
The pattern is clear: AI development depends on human labor that is concentrated in the Global South, psychologically hazardous, and economically essential, yet structurally invisible. When Silicon Valley talks about "AI safety," it usually means existential risk from hypothetical future AGI, not the very real psychological harm happening to workers in Nairobi today.
Three shifts to watch:
- Gig platforms like DoorDash and Uber are pivoting their workforces into AI training pipelines—workers building their own potential replacements
- The UN and ILO are building regulatory frameworks, but enforcement requires international cooperation that doesn't yet exist
- Workers are organizing: UNI Global Union is building a global alliance of content moderators pushing for safe-work protocols
Sources
- UN News: "How AI is already reshaping working conditions" (March 2026)
- TechCrunch: "DoorDash launches Tasks app" (March 19, 2026)
- Silicon Canals: "I spent a year inside the content moderation workforce" (March 2026)
- The Guardian: "Thousands of people are selling their identities to train AI" (March 21, 2026)
- Bloomberg: "DoorDash's new paid tasks turn couriers into AI trainers" (March 2026)
- TIME: "OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic" (January 2023)