Anthropic's Mythos Model Exposed by CMS Misconfiguration
A default-public setting in Anthropic's CMS accidentally exposed 3,000 unpublished assets, including a draft blog post revealing Claude Mythos - a new flagship model the company says poses serious cybersecurity risks.

On Thursday, March 26, Roy Paz at LayerX Security stumbled across something that wasn't meant to be public. By following a predictable asset URL pattern on Anthropic's content management system, he found approximately 3,000 unpublished blog drafts, internal documents, and corporate materials - all quietly sitting behind publicly accessible links because nobody had clicked a single checkbox.
The most consequential document in that cache was a draft blog post announcing Claude Mythos, internally codenamed Capybara. According to the draft, Mythos sits above the Opus tier in Anthropic's model lineup. The company describes it as "by far the most powerful AI model we've ever developed." The draft also warns, at length, about what that means for global cybersecurity.
Anthropic confirmed the model's existence to Fortune, which broke the story, calling Mythos "a step change" in AI performance. A company spokesperson attributed the exposure to "human error in the CMS configuration."
TL;DR
- A default-public access setting in Anthropic's CMS left ~3,000 unpublished assets exposed via guessable URLs
- Among the leaked drafts: an announcement for Claude Mythos (codename Capybara), a new model tier above Opus
- Mythos reportedly leads all models on coding, academic reasoning, and cybersecurity benchmarks
- Anthropic's own draft warned the model poses serious cybersecurity risks and had briefed U.S. government officials before the leak
- Cybersecurity stocks fell sharply on March 27 - CrowdStrike down 6.73%, Palo Alto down 4.30%
How the Leak Happened
The technical cause was mundane: Anthropic's CMS assigned every uploaded asset a publicly accessible URL by default. Materials remained discoverable unless a contributor explicitly switched them to private. Nobody had done that for a sizable chunk of draft content.
The kind of configuration that caused the issue looks like this in practice:
```yaml
# CMS asset default settings (reconstructed from public reporting)
access: public                # default - applied to all new uploads
require_auth: false           # no authentication required to fetch the asset URL
url_pattern: /assets/{uuid}/  # predictable URL structure
```
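In code terms, the failure mode is an upload path that applies the public default unless a caller explicitly opts out. The toy sketch below illustrates the pattern; it is not Anthropic's actual code, and the class and field names are hypothetical:

```python
import uuid

class CMSAssets:
    """Toy asset store illustrating a public-by-default upload path."""

    def __init__(self):
        self._assets = {}

    def upload(self, content, access="public"):
        # The dangerous part: 'access' defaults to "public", so every
        # upload is reachable at its URL unless the contributor opts out.
        asset_id = str(uuid.uuid4())
        self._assets[asset_id] = {"content": content, "access": access}
        return f"/assets/{asset_id}/"

    def fetch(self, url, authenticated=False):
        # Public assets are served with no auth check: anyone holding
        # (or guessing) the URL can read them.
        asset_id = url.strip("/").split("/")[-1]
        asset = self._assets[asset_id]
        if asset["access"] == "private" and not authenticated:
            raise PermissionError("authentication required")
        return asset["content"]

cms = CMSAssets()
# Nobody passed access="private", so the draft inherits the public default
draft_url = cms.upload("DRAFT: Announcing Claude Mythos")
print(cms.fetch(draft_url))  # readable with no credentials at all
```

The safer design inverts the default (`access="private"`), so a forgotten flag fails closed instead of open.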
Anthropic said the issue was "unrelated to Claude, Cowork, or any Anthropic AI tools" and confirmed it "did not involve our core infrastructure, AI systems, customer data, or security architecture." Alexandre Pauwels, a cybersecurity researcher at the University of Cambridge who independently verified the exposed materials for Fortune, found the same cache via the same method as Paz.
Anthropic secured the data shortly after Fortune notified the company on March 26.
What the Drafts Contained
The Mythos Announcement - The draft blog post detailed a model Anthropic describes as "larger and more intelligent than our Opus models - which were, until now, our most powerful." The internal name Capybara refers to the product tier; Mythos is the public-facing brand name. The draft called it "a generational leap" and said it "narrows the gap between human and machine software engineering." Specific capability claims include dramatically higher scores in software coding, academic reasoning, and cybersecurity assessments, plus what the draft called "recursive self-fixing" - autonomous vulnerability identification and patching within its own code.
The Cybersecurity Warning Inside the Launch Post - The same draft that announces Mythos also warns about it. One passage described the model as "currently far ahead of any other AI model in cyber capabilities" and said it "presages an upcoming wave of models that can exploit vulnerabilities in ways that far outpace the efforts of defenders." Another section stated that Mythos "can run agents that work on their own with wild sophistication and precision to penetrate corporate, government and municipal systems." Anthropic's own language was that the model poses cybersecurity risks of serious scale - phrasing that, once leaked, triggered the market reaction discussed below.
The CEO Retreat Invitation - The cache also contained a PDF about an invite-only retreat at an 18th-century English manor that CEO Dario Amodei was scheduled to attend, as well as employee parental leave documentation and various draft images and graphics. Anthropic noted these were "early drafts of content considered for publication."
Cybersecurity equity indices declined across the board on March 27 as markets absorbed the effects of Mythos's claimed offensive capabilities.
The Cybersecurity Irony
There's a particular sharpness to having a model described as a cybersecurity threat exposed via a basic configuration oversight. Anthropic had been briefing U.S. government officials about Mythos's risks before any public disclosure, warning that large-scale cyberattacks "become far more likely once models at Mythos's capability level reach wide distribution."
The federal government had already granted Anthropic clearance to handle classified material, with most military applications running inside secure environments. A U.S. judge had also previously blocked the Pentagon's attempt to bar Anthropic's Claude from government contracts. Pentagon appetite for Mythos access, per reporting by Gizmodo, appeared strong before the leak.
Anthropic's stated plan was to release Mythos first to cybersecurity organizations for defensive testing, then expand to enterprise security teams via the Claude API, before any broader rollout. That sequencing reflects the same logic behind AI safety researchers' arguments about capability disclosure - that the people most likely to understand and defend against new attack vectors should see the tool first.
The concern is real. A Dark Reading poll conducted in early 2026 found that 48% of cybersecurity professionals now rank agentic AI as the top attack vector for the year, above deepfakes and traditional social engineering. Anthropic acknowledged this tension explicitly in the draft, noting that earlier Claude models had already been repurposed into malware development tools by threat actors.
Anthropic's leaked draft warned that Mythos-level models can run autonomous agents capable of simultaneously executing multiple hacking campaigns.
Market Reaction
The leak landed on Thursday evening. By Friday close on March 27, cybersecurity equities had absorbed the impact:
| Company | Ticker | Price | Change |
|---|---|---|---|
| CrowdStrike | CRWD | $366.21 | -6.73% |
| Zscaler | ZS | $134.19 | -5.17% |
| Palo Alto Networks | PANW | $149.64 | -4.30% |
| Global X Cybersecurity ETF | BUG | - | -4.50% |
| iShares Tech-Software ETF | IGV | $78.23 | -1.91% |
The Global X Cybersecurity ETF reached its lowest close since November 2023, pushing its year-to-date decline beyond 21%. Broader market weakness contributed - the S&P 500 fell 1.74% and the Nasdaq dropped 2.38% on the same day, with Deutsche Bank attributing part of the move to geopolitical factors.
Wedbush Securities analyst Dan Ives argued in a note that Anthropic's capability "validates cybersecurity's importance" and that Mythos would "complement rather than replace traditional vendors like Palo Alto, CrowdStrike, and Zscaler." Whether markets agreed depended on your time horizon.
The IPO Angle
Bloomberg reported before the leak that Anthropic is targeting an October 2026 IPO at a $380 billion valuation, guiding toward about $26 billion in annualized revenue by the end of 2026. The accidental disclosure positioned Anthropic as the technical frontier leader without the company having to say so directly.
Analysts at Proactive Investors noted that Capybara is described internally as a "research trophy" - a model too expensive for commercial scaling in its current form. That creates a real tension: Anthropic needs investors to believe it commands the frontier, but the revenue path from a compute-intensive model that can't be widely rolled out remains unclear ahead of the IPO.
The irony compounds. Anthropic has staked its brand on responsible AI development and the careful handling of model capability disclosures. Having its most sensitive unreleased model revealed via a misconfigured default setting is a different kind of disclosure than the company intended.
The model itself - Mythos - may well be everything the draft claims. Anthropic says it's in limited testing with early access customers via the Claude API. What the leak confirms is that Anthropic believes it's built something substantially more capable than current Claude Opus models, and that the company is aware of the dual-use risks that come with that capability level. The draft didn't hide either of those things. The CMS just made sure everyone could read it.
What to do if you're in security:
- Audit your own CMS and asset storage defaults - "public by default" configurations are more common than teams realize
- Update your threat model to cover agentic AI attack vectors, particularly autonomous agents chaining API calls without human approval loops
- Watch Anthropic's Claude API for any new tier announcements; early access to Mythos may come via direct application to Anthropic's security partner program
- Cross-reference with the AI-powered cybercrime patterns already emerging from earlier Claude and GPT models - Mythos capabilities will accelerate these trajectories
- Review your insider-AI risk posture; the leaked draft specifically named "shadow AI" - employees connecting consumer AI tools to work systems - as a distinct threat vector the model enables
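For the first item, the audit can start as simply as walking your asset metadata and flagging anything readable without authentication. A hypothetical sketch - the `access`, `require_auth`, and `published` field names mirror the reconstructed config earlier in this piece and are assumptions, not a real CMS schema:

```python
def find_exposed_assets(assets):
    """Return URLs of unpublished assets that are fetchable without auth.

    `assets` is an iterable of metadata dicts. Field names are assumed;
    adapt the keys to whatever your CMS's asset API actually returns.
    """
    exposed = []
    for asset in assets:
        # Treat a missing 'access' field as public - that is exactly the
        # default-to-public behavior this audit exists to catch.
        public = asset.get("access", "public") == "public"
        unauthenticated = not asset.get("require_auth", False)
        unpublished = not asset.get("published", False)
        if public and unauthenticated and unpublished:
            exposed.append(asset["url"])
    return exposed

inventory = [
    {"url": "/assets/a1/", "access": "public", "require_auth": False, "published": True},
    {"url": "/assets/b2/", "access": "public", "require_auth": False, "published": False},
    {"url": "/assets/c3/", "access": "private", "require_auth": True, "published": False},
]
print(find_exposed_assets(inventory))  # → ['/assets/b2/']
```

Published public assets are expected; the signal worth alerting on is the combination of unpublished content, public access, and no auth requirement - the exact combination behind this leak.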