Answer-First Lead
Meta is training AI on its own employees’ keystrokes while cutting 14,000 positions — your workflow data is now training your replacement. Two London superintelligence startups raised $1.75 billion combined in one week, both founded by DeepMind alumni. Baidu’s ERNIE 5.1 hit 4th globally at 6% training cost. The US, EU, and China can’t agree on how to regulate AI companions even as millions use them. And robots are learning to “touch dream” — a technique that improved dexterous manipulation success rates by 90.9%. The human-AI relationship is getting weirder, more intimate, and more surveilled, all at once.
🔍 THE BOTTOM LINE
Technology’s relationship with people this week is one of replacement, surveillance, and unexpected intimacy. The tools getting the most investment aren’t the ones that help us — they’re the ones that watch us, replicate us, and stand in for us.
📰 Stories
1. Meta’s Keystroke Surveillance: Your Workflow Data Trains Your Replacement
Meta is rolling out a workplace surveillance program that captures employee keystrokes to train AI agents — the same agents that will eventually automate their work. The program was disclosed alongside the 8,000-person layoff announcement (14,000 including cancelled open roles). Remaining staff are being reorganised into AI-focused “pods” with new role categories like “AI builder” and “AI pod lead,” while executives received stock options worth up to $921 million each.
The AV Club headline captured the mood: “Meta is cutting 14,000 jobs and using the survivors as AI training dummies.”
Why it matters: This is the nightmare scenario privacy advocates warned about — not just surveillance, but surveillance explicitly designed to make you redundant. The psychological message to remaining staff is unmistakable: teach the machine everything you know, then leave. Meta’s $115-135 billion AI infrastructure spend this year makes it clear where the money is going. If Meta normalises keystroke training as “performance improvement,” it will spread.
Sources: Business Insider, The Next Web, New York Magazine / Intelligencer, The AV Club
2. $1.75 Billion for Superintelligence in One Week: Ineffable Intelligence and Recursive Superintelligence
Two London-based startups founded by DeepMind alumni now account for a combined $1.75 billion in superintelligence funding. Ineffable Intelligence (David Silver, DeepMind RL lead) raised a record-breaking $1.1 billion seed round in April and this week announced an engineering-level partnership with Nvidia. Silver’s team is building “superlearners”: AI systems that learn from experience via reinforcement learning rather than from human data. Jensen Huang called it “the next frontier of AI.”
Recursive Superintelligence (Tim Rocktäschel, ex-DeepMind) emerged from stealth with a $650 million raise led by GV and Greycroft, following a $500 million round at a $4 billion valuation in April. They’re focused on self-improving AI systems.
Why it matters: The shift from LLMs to RL-based superintelligence is attracting the biggest checks in AI history. Silver’s work on AlphaGo and AlphaZero proved that RL can produce superhuman performance without human data — Ineffable is trying to industrialise that approach. Nvidia’s partnership provides the compute that’s the real bottleneck for RL at scale. London is becoming the capital of post-LLM AI.
Sources: CNBC, Tech.eu
3. Baidu’s ERNIE 5.1: Top Performance at 94% Less Training Cost
Baidu released ERNIE 5.1, which ranks 4th globally on LMArena Search while costing only 6% of what comparable frontier models spend on training. It uses “multi-dimensional elastic pre-training”: extracting an optimised sub-network from ERNIE 5.0 and compressing parameters to one-third. For post-training, MOPD (Multi-Teacher On-Policy Distillation) avoids the “seesaw effect,” where improving one skill degrades another.
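The announcement doesn’t spell out the MOPD loss, but the core idea of distilling from per-skill specialist teachers rather than one blended target can be sketched in a few lines of numpy. Everything below (the skill weighting, the KL form, the function names) is an illustrative assumption, not Baidu’s published method:

```python
import numpy as np

def softmax(logits, temp=1.0):
    # Temperature-scaled softmax, shifted for numerical stability.
    z = logits / temp
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def multi_teacher_distill_loss(student_logits, teacher_logits_by_skill,
                               skill_weights, temp=2.0):
    """Weighted KL(teacher || student), one specialist teacher per skill.

    Keeping each skill's teacher as a separate loss term (instead of
    averaging teachers into a single target) is one plausible way to avoid
    the "seesaw effect": no single blended target can pull the student
    toward one skill at the expense of another.
    """
    p_s = softmax(student_logits, temp)
    loss = 0.0
    for skill, t_logits in teacher_logits_by_skill.items():
        p_t = softmax(t_logits, temp)
        kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)
        loss += skill_weights[skill] * kl.mean()
    return loss
```

In the real on-policy setting the student would generate the samples being scored; here the loss is shown on fixed logits purely to illustrate the per-teacher weighting.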
Why it matters: China’s efficiency game is getting serious. ERNIE 5.1 mirrors the DeepSeek R1 moment — same “do more with less” message, but on the training side. If you can achieve top-5 global performance at 6% of the cost, the economics of frontier model training fundamentally change. The MOPD approach to avoiding skill degradation is genuinely novel and likely to be studied closely by Western labs.
Sources: Decrypt
4. The Great AI Intimacy Split: US, EU, China Can’t Agree on Companion Regulation
The US, EU, and China are taking fundamentally different approaches to regulating AI companions and intimate AI relationships, against a backdrop where the WHO has declared loneliness a global health threat. Three US states have active AI intimacy laws; a federal bill is languishing in committee. The EU’s AI Act provisions take effect in August 2026. Australia is already blocking specific companion apps. China has published one of the world’s first laws governing the emotional bond between humans and AI — worried about social withdrawal and dependency.
Why it matters: AI companions are being used by millions of people globally, including vulnerable populations (elderly, isolated, neurodivergent). The regulatory divergence means a company building an AI companion must navigate three completely different frameworks — and users in less regulated jurisdictions (including NZ) have no protection at all. The WHO’s loneliness framing is important: this isn’t just about tech, it’s about public health.
NZ Lens: New Zealand has no specific regulation of AI companions. As loneliness and social isolation remain significant public health issues here, AI companions will find a market — but with no rules about data privacy, emotional manipulation, or age restrictions. The Commerce Commission’s current digital platform work doesn’t cover AI companions.
Sources: Asia Times, Carnegie Endowment, DHC, Hello China Tech
5. White Circle Raises $11M for “AI Circuit Breaker” — Backed by OpenAI, Anthropic, DeepMind Alumni
Paris-based AI control platform White Circle raised $11 million in seed funding from senior figures at OpenAI, Anthropic, DeepMind, Hugging Face, Mistral, Datadog, and Sentry. The platform monitors AI models in production and can intervene when they behave unexpectedly — an “AI circuit breaker” that detects and stops rogue agent behavior.
Founder D. went viral for exposing a safety flaw in major AI models before founding the company. White Circle claims more than one billion API requests served and counts Lovable and two of the world’s largest digital banks as customers.
Why it matters: The AI safety conversation has moved from research papers to production monitoring. The fact that senior people from OpenAI, Anthropic, and DeepMind are personally investing in AI control tells you something: they know their own models can go off the rails. White Circle’s “circuit breaker” concept is the enterprise version of what safety researchers have been calling for — a kill switch for autonomous agents.
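The circuit-breaker pattern itself is simple enough to sketch. This is a generic illustration of the concept, not White Circle’s product; the windowed failure count and the `anomaly_check` hook are assumptions:

```python
import time

class AICircuitBreaker:
    """Illustrative circuit breaker around a model call (a sketch of the
    pattern, not any vendor's implementation). After `max_failures`
    anomalous responses within `window` seconds, the breaker opens and
    blocks all further calls until a human resets it."""

    def __init__(self, model_fn, anomaly_check, max_failures=3, window=60.0):
        self.model_fn = model_fn            # callable: prompt -> response
        self.anomaly_check = anomaly_check  # callable: response -> True if anomalous
        self.max_failures = max_failures
        self.window = window
        self.failures = []                  # timestamps of recent anomalies
        self.open = False

    def call(self, prompt):
        if self.open:
            raise RuntimeError("circuit open: agent halted pending review")
        response = self.model_fn(prompt)
        now = time.monotonic()
        # Keep only anomalies that fall inside the sliding window.
        self.failures = [t for t in self.failures if now - t < self.window]
        if self.anomaly_check(response):
            self.failures.append(now)
            if len(self.failures) >= self.max_failures:
                self.open = True
            raise ValueError("anomalous response intercepted")
        return response
```

The design choice worth noting: the breaker sits outside the model, so it works the same whether the agent misbehaves through a bug, a jailbreak, or drift, which is exactly why it’s pitched as a production-layer control rather than a model-layer one.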
Sources: Fortune, The Next Web, BusinessWire, SecurityWeek
6. “Touch Dreaming” Helps Robots Handle Fragile Objects
Researchers developed a technique called “touch dreaming” that lets humanoid robots imagine tactile sensations to improve object manipulation. The approach achieved a 90.9% higher success rate on five tricky manipulation tasks — handling eggs, wine glasses, and other fragile objects that require precise force control.
Why it matters: Robots learning to “imagine” touch is a genuinely novel approach to the tactile feedback problem in robotics. Most manipulation is either vision-guided (looking at the object) or force-sensing (measuring pressure). Touch dreaming combines both: the robot generates internal tactile models based on visual input. This is the kind of progress that matters for real-world robotics — not bipedal walking demonstrations, but “can my robot pick up an egg without crushing it?”
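A toy sketch of the idea, assuming a learned vision-to-touch predictor. The linear map, the 4-D fingertip force reading, and the fragility threshold below are all hypothetical stand-ins for the paper’s actual models:

```python
import numpy as np

# "Touch dreaming" in miniature: predict a tactile signal from visual
# features, then use the *imagined* contact force to choose a safe grip
# before the robot ever touches the object.

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8)) * 0.1   # hypothetical learned vision->touch map

def dream_touch(visual_features):
    """Predict a 4-D tactile reading (e.g. fingertip normal forces, in N)."""
    return W @ visual_features

def safe_grip_force(visual_features, fragility_limit=2.0):
    """Scale the commanded grip so no imagined fingertip force
    exceeds the object's fragility limit (the egg test)."""
    predicted = dream_touch(visual_features)
    peak = float(np.max(np.abs(predicted)))
    scale = min(1.0, fragility_limit / (peak + 1e-9))
    return scale * predicted
```

In the published work the predictor would be a trained network and the controller a full manipulation policy; the sketch only shows why an internal tactile model lets the robot be gentle in advance rather than reacting after contact.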
Sources: TechXplore / Phys.org
7. Hello Robot Unveils Stretch 4
Hello Robot released Stretch 4, a mobile manipulator designed for home and workplace assistance — actively helping people rather than replacing them. The robot is lightweight, affordable by robotics standards, and designed for close human interaction. It’s explicitly positioned as “a robot that puts people first.”
Why it matters: In a week dominated by layoffs and surveillance, a robot designed to assist rather than replace is a welcome counterpoint. Stretch 4 won’t take your job — it might help you do your job better or help someone with mobility issues maintain independence. The “people first” design philosophy should probably get more attention than it does.
Sources: BusinessWire
8. Google’s Gemini Android Takeover
Google announced agentic Gemini features across Android 17, including the ability to understand screen context and complete multi-step tasks — building shopping carts, booking reservations, and composing messages across apps. “Vibe-coded widgets” let users describe a widget in natural language and Gemini builds it.
Why it matters: Google is putting an AI that watches everything you do onto the most widely used mobile operating system on Earth. The surveillance-vs-convenience tradeoff is getting starker. Yes, Gemini booking your dinner reservation is convenient. But the model needs to see what you’re seeing, read what you’re reading, and know where you are. With 3 billion active Android devices, this is the largest-scale AI agent rollout in history.
Sources: Bloomberg, TechCrunch, CNBC
9. EU AI Act Omnibus
The EU’s AI Act simplification deal delays high-risk AI restrictions by 16 months (to December 2027) and makes compliance easier for smaller firms. The ban on nudification apps — generating non-consensual intimate imagery — was the most politically charged addition, driven by Parliament after the Grok nude-generation scandal.
Why it matters: For people, the delay means less protection from AI-driven hiring discrimination, biometric surveillance, and automated decisions in essential services. Those high-risk categories — employment, education, law enforcement, border management — are where AI affects real lives. The delay helps companies; it doesn’t help citizens. The nudification ban is genuinely important, but it’s one bright spot in a package that mostly pushes accountability down the road.
Sources: The Next Web, POLITICO, Computerworld
10. New Zealand’s AI Governance Reality
Health NZ released guidance on generative AI and LLM use covering privacy, bias, and data security — but it’s guidance, not enforceable regulation. The NZDF is still drafting an AI directive a year after rolling out Copilot across all devices. Meanwhile, the AI Blueprint for Aotearoa (May 2026) sets a vision for 2030, and RNZ raised the question: is NZ’s policy framework “Pollyanna policy” — overly optimistic?
Why it matters: NZ’s approach to AI governance is guidance-first, regulation-second. That works when technology moves slowly. AI does not move slowly. The gap between having a policy and having enforceable rules is where harm happens — biased algorithms in hiring, privacy breaches in health AI, autonomous decisions in government without accountability. The AI Blueprint is ambitious, but ambition without enforcement is a wish.
Sources: RNZ, Digital Watch, AI Forum NZ, Scoop
🔍 THE BOTTOM LINE
This week laid bare the three faces of AI’s relationship with people: the replacement economy (Meta, Cloudflare, GitLab), the companion economy (AI intimacy apps meeting public health needs), and the control economy (White Circle, MDASH, security AI). Each is evolving at a different regulatory speed. The people caught in the middle — workers, companion users, citizens — have the least power to shape the outcome.
❓ Frequently Asked Questions
Q: How do I know if my employer is doing keystroke surveillance? Check your employment agreement and IT acceptable use policy. Most employers disclose monitoring in fine print. If you’re in NZ, the Privacy Act requires you to be told about monitoring — but the rules were written before AI training was the use case. Expect legal challenges.
Q: Are AI companions safe to use? Depends on the jurisdiction. In NZ, there are no specific rules. Check what data the app collects, whether it’s encrypted, and whether it stores intimate conversations. The EU’s August 2026 rules will require safety assessments, but that doesn’t protect NZ users right now.
Q: What’s the “AI circuit breaker” and should I care? It’s a production monitoring system that can intervene when an AI model behaves unexpectedly — like a circuit breaker for electrical systems. If you’re deploying AI agents in business, yes, you should care. Autonomous agents without oversight are a liability.
📰 Sources
- Business Insider
- New York Magazine / Intelligencer
- The AV Club
- Asia Times
- Carnegie Endowment
- Fortune
- The Next Web
- SecurityWeek
- TechXplore / Phys.org
- BusinessWire
- Bloomberg
- TechCrunch
- CNBC
- POLITICO
- Computerworld
- RNZ
- Digital Watch
- AI Forum NZ
- Scoop
- Hello China Tech
- Decrypt
- Tech.eu
- DHC