💰 Amazon Invests Up to $25 Billion More in Anthropic
Amazon has agreed to invest up to $25 billion in Anthropic, on top of the $8 billion it has already poured into the AI startup. The announcement came alongside Anthropic confirming it will use AWS as its primary cloud provider — deepening a partnership that now totals $33 billion.
Why it matters: Amazon is making the largest single corporate AI investment in history to lock in Anthropic as a cloud customer and strategic partner. The deal gives Amazon skin in the frontier model game without building its own frontier models, and gives Anthropic the compute war chest to compete with OpenAI’s Microsoft-backed infrastructure. The cloud-AI alliance era is here.
🪒 Atlassian Cuts 1,600 Jobs, Shifts Savings to AI
Atlassian plans to lay off approximately 1,600 employees, redirecting savings toward AI development and enterprise sales. CEO Mike Cannon-Brookes called it “the right decision for Atlassian” — joining Snap, Disney, and Oracle in the growing roster of companies cutting headcount to fund AI investment.
Why it matters: Atlassian isn’t struggling — it’s profitable. These cuts are proactive, not reactive. When profitable companies lay off workers to fund AI, the signal is unmistakable: AI investment now takes priority over human capital, even when the business is healthy.
🔓 NVIDIA Red Team Exposes OpenAI Codex Vulnerabilities
NVIDIA researchers demonstrated how malicious dependencies can exploit OpenAI’s Codex AI coding agent, injecting harmful code through package supply chain attacks. The red team found that Codex will execute code from untrusted packages without sufficient validation — a significant security gap for a tool increasingly used in production environments.
Why it matters: AI coding agents are being deployed faster than their security models can keep up. If an AI agent blindly trusts package imports, the supply chain attack surface expands from developer workstations to every system the agent touches. NVIDIA’s disclosure is a warning the industry needs to hear.
⚖️ US Court Rules Against Pentagon in AI Dispute
A US court has ruled against the Pentagon in the ongoing dispute over its attempt to bar contractors from using Anthropic’s technology. The Department of Defense had ordered contractors to drop Anthropic after the company refused to allow military use of its models — a stance that triggered an earlier executive order restricting federal agencies from using Anthropic products.
Why it matters: The court ruling suggests the executive branch overstepped its authority in penalising a company for exercising its rights over how its technology is used. If upheld, it could reshape how governments negotiate with AI companies over military applications — and validate Anthropic’s strategy of setting boundaries on deployment.
🛡️ Mythos Sparks Cybersecurity Fears
Anthropic’s Mythos model has governments and companies worried that it could outpace current cyber defences and turbocharge hacking operations. Security researchers warn the model’s advanced capabilities could be used to identify vulnerabilities, craft sophisticated phishing campaigns, and automate attack chains at unprecedented speed.
Why it matters: Every frontier model raises dual-use concerns, but Mythos’ demonstrated capabilities have moved the conversation from theoretical to urgent. The same reasoning power that makes Mythos valuable for defence also makes it dangerous in the wrong hands — the classic security dilemma, now at AI scale.
🔍 THE BOTTOM LINE
Today’s theme: the AI alliance-and-friction pattern. Amazon bets $25B on Anthropic while courts push back on the Pentagon’s attempt to punish the same company. NVIDIA exposes security holes in OpenAI’s tools while Atlassian cuts humans to fund AI. Every move forward creates new friction — and the system hasn’t found equilibrium yet.