
Here's what's next as Anthropic's most powerful AI model leaks via unsecured data cache


Anthropic is testing the most powerful AI model it has ever built, and the world wasn't supposed to know yet.

A data leak reported by Fortune on Thursday revealed that the AI lab behind Claude has trained a new model called "Mythos," which it internally describes as "by far the most powerful AI model we've ever developed."

The model was discovered in a draft blog post left in an unsecured, publicly searchable data cache, alongside nearly 3,000 other unpublished assets, according to cybersecurity researchers who reviewed the material.

Anthropic confirmed the model's existence after Fortune's inquiry, calling it "a step change" in AI performance and "the most capable we've built to date." The company said it is being trialed by "early access customers" and acknowledged that a "human error" in its content management system caused the leak.

The draft blog post introduced a new model tier called "Capybara," described as larger and more capable than Anthropic's existing Opus models, which were previously its most powerful.

"Compared to our previous best model, Claude Opus 4.6, Capybara gets dramatically higher scores on tests of software coding, academic reasoning, and cybersecurity, among others," the draft said.

It's the cybersecurity dimension that matters most for the crypto industry. The draft blog post said the model "poses unprecedented cybersecurity risks," a framing that has direct implications for blockchain security, smart contract auditing, and the escalating arms race between attackers and defenders in DeFi.

This week alone, Ripple announced an AI-driven security overhaul for the XRP Ledger after an AI-assisted red team uncovered more than 10 vulnerabilities in its 13-year-old codebase. Ethereum launched a dedicated post-quantum security hub backed by eight years of research.

And the Resolv stablecoin lost its peg after an attacker exploited a minting contract with no oracle checks and single-key access control, the kind of infrastructure failure that more capable AI tools could potentially identify before an attacker does, or exploit faster than defenders can respond.
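The two missing guardrails described above, a price-oracle sanity check and multi-key access control on the mint path, can be sketched in a few lines. The following is a minimal illustrative model in Python, not Resolv's actual contract code; every name here (`MintGuard`, the signer keys, the deviation threshold) is hypothetical:

```python
class MintGuard:
    """Illustrative sketch of two mint-path guardrails the exploited
    contract reportedly lacked. All names are hypothetical, not Resolv's
    actual code."""

    def __init__(self, signers, oracle_price, max_deviation=0.02):
        self.signers = set(signers)          # authorized keys (quorum-based, not single-key)
        self.oracle_price = oracle_price     # reference price from an external feed
        self.max_deviation = max_deviation   # allowed drift from the oracle price

    def mint(self, amount, quoted_price, approvals):
        # Access control: require at least two distinct authorized signers,
        # so compromising one key is not enough to mint.
        if len(self.signers & set(approvals)) < 2:
            raise PermissionError("insufficient signer approvals")
        # Oracle check: reject mints whose price deviates too far from
        # the reference feed, blocking mispriced-mint attacks.
        if abs(quoted_price - self.oracle_price) / self.oracle_price > self.max_deviation:
            raise ValueError("quoted price deviates from oracle")
        return amount  # tokens minted


# A mint with quorum approval at the oracle price succeeds; a mint with one
# key, or at a price far from the feed, is rejected before any tokens move.
guard = MintGuard(signers={"keyA", "keyB", "keyC"}, oracle_price=1.00)
guard.mint(1_000, quoted_price=1.00, approvals={"keyA", "keyB"})
```

In the exploited contract, by this account, both checks were absent: one key could call the mint function, and nothing compared the mint price against an oracle.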

For the AI token market, the leak raises a different question. Bittensor's decentralized network recently released Covenant-72B, a model that competes with Meta's Llama 2 70B, triggering a 90% rally in TAO and driving subnet tokens to a combined market cap of $1.47 billion.

A "step change" from a centralized lab like Anthropic resets the benchmark that decentralized AI projects need to match. The competitive distance between what a well-funded corporate lab can build and what a permissionless network can produce just got wider.

Anthropic said it is "being deliberate" about the model's release given its capabilities. The draft blog noted the model is expensive to run and not yet ready for general availability. The company removed public access to the data cache after Fortune contacted it.

The leak itself is its own cautionary tale. A company building what it describes as an AI model that poses unprecedented cybersecurity risks left the announcement of that model in an unsecured, publicly searchable data store due to human error. The irony needs no elaboration.