[Image: Dark server room with a glowing red access warning on a screen, representing an AI security breach]
Breaking News

Mythos Breached: Discord Group Accesses Anthropic's Most Dangerous AI on Launch Day

Anthropic built a model so dangerous it refused to release it publicly. Then a Discord group got in anyway — on launch day, through a vendor, by guessing a URL.

Anthropic · Mythos · cybersecurity · AI safety · data breach


A private Discord group gained unauthorized access to Claude Mythos Preview — Anthropic’s restricted cybersecurity AI that the company deemed too dangerous for public release — on the same day the controlled launch was announced. The breach, first reported by Bloomberg, occurred through a third-party vendor environment, not through a direct attack on Anthropic’s systems.


How They Got In

The access method was strikingly simple. According to TechCrunch, the group used a combination of two vectors:

  1. Contractor credentials: One member of the group works for a third-party contractor that does business with Anthropic, which gave the group legitimate login access to Anthropic’s vendor systems.

  2. URL guessing: The group made “an educated guess about the model’s online location based on knowledge about the format Anthropic has used for other models.” They simply worked out the endpoint URL from Anthropic’s established naming conventions.
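The second vector is a well-known attack class sometimes called forced browsing or predictable resource location. The sketch below illustrates the general idea with entirely hypothetical endpoint names and patterns (nothing here reflects Anthropic’s actual URL scheme): if past model endpoints follow a consistent template, a new model’s endpoint can be enumerated rather than discovered.

```python
from itertools import product

# Hypothetical illustration only: suppose past endpoints followed a template
# like https://api.example.com/v1/models/{family}-{name}-{channel}.
# An attacker who knows the convention can generate plausible candidates
# for a newly announced model without probing the provider's systems.
BASE = "https://api.example.com/v1/models"
families = ["claude"]          # known model family from prior releases
names = ["mythos"]             # new model name from the public announcement
channels = ["preview", "latest", "internal"]  # release channels seen before

def candidate_urls():
    """Yield every combination of the known naming-convention parts."""
    for family, name, channel in product(families, names, channels):
        yield f"{BASE}/{family}-{name}-{channel}"

urls = list(candidate_urls())
```

With only a handful of candidates to try, a valid contractor credential turns this from guesswork into a short checklist, which is why predictable naming on its own is not an access control.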

The group, which operates a Discord channel dedicated to finding information about unreleased AI models, has been using Mythos regularly since gaining access. They provided Bloomberg with screenshots and a live demonstration of the software as evidence.

They claim their motivation was curiosity — exploring new technology, not causing harm. But the method of access reveals something far more troubling than a few enthusiasts playing with a new model.


The Real Problem: The Perimeter Failed on Day One

Anthropic’s entire strategy for Mythos rests on controlled access. The model, which can autonomously find and chain zero-day vulnerabilities in every major operating system and web browser, was restricted to a hand-picked coalition of defenders through Project Glasswing. Apple, Google, Microsoft, Amazon, Nvidia, CrowdStrike, and 40+ other organizations were given access.

The logic was sound: get the model into the hands of defenders before adversaries could develop similar capabilities. Anthropic committed $100 million in usage credits and $4 million in direct donations to open-source security foundations.

But the perimeter failed within hours of the first public announcement — before most Glasswing partners had even begun their work.

If a small Discord group could guess the endpoint URL from Anthropic’s known patterns, nation-state actors and organized criminal groups can certainly do the same. This wasn’t a sophisticated zero-day exploit against Anthropic’s infrastructure. It was pattern recognition applied to a predictable naming convention, combined with a contractor’s credentials.


This Is the Third Information Control Failure at Anthropic

The Mythos Discord breach is not an isolated incident. It’s the third significant information control failure at Anthropic in recent weeks:

  1. March 2026: The Claude Code source leak exposed 512,000 lines of unobfuscated TypeScript through a missing .npmignore entry.

  2. March 26, 2026: A draft blog post describing Mythos as “by far the most powerful AI model” ever built at Anthropic was left in a publicly accessible data store. This was actually Mythos’s first public exposure, resulting from what Anthropic called “human error in its content-management configuration.”

  3. April 2026: The Discord breach through a third-party vendor environment.

Three failures in four weeks. Each differed in mechanism, but all share the same root cause: the gap between Anthropic’s stated security posture and its operational reality.


The CISA Irony

Perhaps the sharpest edge of this story is institutional. The Cybersecurity and Infrastructure Security Agency — the US government agency whose entire mandate is protecting critical infrastructure — reportedly does not have access to Mythos. CISA, the organization most in need of a tool that finds zero-day vulnerabilities in critical systems, is on the outside.

Meanwhile, the NSA is using Mythos despite the Department of Defense — which oversees the NSA — formally designating Anthropic as a “supply chain risk.” The military is broadening its use of Anthropic’s tools while simultaneously arguing in court that those tools threaten US national security.

The entity designed to defend critical systems can’t get in. A Discord group can.


What Anthropic Said

“We’re investigating a report claiming unauthorized access to Claude Mythos Preview through one of our third-party vendor environments,” an Anthropic spokesperson said. The company stated it found no evidence that its own systems were affected.

This is a factually careful statement. It is also a familiar shape: acknowledge the narrow claim, deny the broader implication. The vendor’s systems were compromised. The model was accessed by unauthorized users. Whether Anthropic’s core infrastructure was touched is almost beside the point — the model was never supposed to be accessible outside the Glasswing perimeter, and it was.


The Vendor Problem Nobody Has Solved

Enterprise AI deployments at frontier capability levels require trust chains that extend across dozens of organizations. Anthropic’s 40-organization Glasswing rollout means 40 distinct security postures, 40 sets of contractors, and 40 potential lateral entry points for anyone who knows what they’re looking for.

This isn’t unique to Anthropic. Every frontier AI company that restricts access to powerful models will face the same challenge. The more capable the model, the more incentive for unauthorized access, and the more damage potential when perimeter controls fail.

CEO Dario Amodei acknowledged this in the original Glasswing announcement: “More powerful models are going to come from us and from others, and so we do need a plan to respond to this.” The breach suggests the plan needs work.


What Changes Now

The immediate consequences are limited. The Discord group appears to have used Mythos for simple tasks and hasn’t caused documented harm. Anthropic has presumably revoked the compromised vendor access.

But the structural implications are significant:

  • URL predictability is a solvable problem, but it’s the kind of basic operational security mistake that shouldn’t happen at a company building the world’s most dangerous AI models
  • Third-party vendor management is a well-known attack surface in enterprise security, but the stakes here are qualitatively different from a data breach — this is a model that finds zero-day vulnerabilities autonomously
  • The precedent matters: if the access controls for the most restricted AI model in existence can be defeated by URL guessing and a contractor’s credentials on day one, the entire framework of controlled AI deployment needs rethinking
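On the first point, the standard mitigation is to make restricted endpoints non-derivable: append a high-entropy token so the path cannot be reconstructed from naming conventions. A minimal sketch, with hypothetical names (this is defense-in-depth only — unguessable URLs complement, never replace, real authentication and authorization):

```python
import secrets

def unguessable_endpoint(base: str, model: str) -> str:
    """Build an endpoint path that cannot be derived from naming conventions.

    token_urlsafe(32) draws 32 random bytes (~256 bits of entropy), making
    the path infeasible to enumerate even if the base and model name leak.
    """
    token = secrets.token_urlsafe(32)
    return f"{base}/{model}/{token}"

# Hypothetical usage — base URL and model name are illustrative.
url = unguessable_endpoint("https://api.example.com/v1/models", "restricted-model")
```

The design trade-off is operational: high-entropy paths must be distributed out of band to each partner, which is exactly the kind of per-organization secret management a 40-partner rollout makes hard.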

The UK’s National Cyber Security Centre warned that Mythos-class capabilities would be in the hands of bad actors within months. It took days, not months. And it wasn’t a sophisticated adversary — it was a Discord group that guessed a URL.


Sources

  • Bloomberg: “Anthropic’s Mythos Model Is Being Accessed by Unauthorized Users”
  • TechCrunch: “Unauthorized group has gained access to Anthropic’s exclusive cyber tool Mythos”
  • Silicon Republic: “Anthropic probing reported Mythos leak on Discord”
  • The Register: “Mythos found 271 Firefox flaws”
  • Axios: “NSA Anthropic Mythos Pentagon” / “CISA Anthropic Mythos AI Security”