
In short:
- OpenMythos is a from-scratch rebuild of the Claude Mythos architecture, built solely from public research papers and educated guesses.
- Claude Mythos is Anthropic’s most powerful model, locked inside Project Glasswing after it independently discovered 271 Firefox vulnerabilities and completed a 32-step network attack simulation.
- The repo is theoretical scaffolding: code without trained weights. It follows a separate effort by Vidoc Security that reproduced the Mythos vulnerability findings using off-the-shelf models.
If Anthropic doesn’t show you what’s inside its most dangerous AI, someone on GitHub will guess it.
A developer named Kye Gomez posted OpenMythos, an open-source reconstruction of what Claude Mythos is believed to look like under the hood. The repo picked up more than 10,000 GitHub stars within a few weeks of release, and comes with a comprehensive readme full of equations, citations, and a polite disclaimer that it has nothing to do with Anthropic.
It’s speculation. But it’s organized speculation, in code.
Here’s a quick refresher on what Mythos is: it leaked into public view in late March, when Anthropic mistakenly published draft materials describing it as the company’s most capable model yet, a level above Opus. The follow-up, Mythos Preview, proved too capable at cybersecurity to release publicly.
According to Anthropic, Mythos found 271 vulnerabilities in Firefox during Mozilla testing. It became the first AI model to complete the company’s 32-step network attack simulation. Anthropic locked it inside Project Glasswing, a vetted alliance of about 40 partners, including Microsoft, Apple, Amazon and the National Security Agency.
The public can never touch it. So Gomez tried to figure out how it worked.
The central conjecture of OpenMythos is that Mythos is a recursive depth transformer, also called a looped transformer. Standard models stack hundreds of unique layers; looped models take a smaller stack and run activations through it several times per forward pass.
In other words, the same weights get more iterations: deeper computation in continuous latent space before any token is emitted.
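The loop idea fits in a few lines of code. This toy NumPy sketch uses a residual tanh map as a stand-in for a full transformer block; the names and sizes are illustrative, not from the OpenMythos repo:

```python
import numpy as np

def block(x, W):
    # One shared "layer" with a residual connection; a stand-in for a
    # real transformer block (attention + MLP).
    return x + np.tanh(x @ W)

def looped_forward(x, W, n_loops):
    # The same weights W are applied n_loops times: more compute per
    # token, zero extra parameters.
    for _ in range(n_loops):
        x = block(x, W)
    return x

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(8, 8))   # one block's worth of parameters
x = rng.normal(size=(4, 8))              # a small batch of token activations

one_pass = looped_forward(x, W, n_loops=1)
deep_pass = looped_forward(x, W, n_loops=8)  # "deeper" model, same weights
```

Both calls use the identical parameter matrix `W`; only the iteration count changes, which is the whole trick.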
The repo argues that this would explain two of Mythos’ strangest qualities: it reasons through novel problems no other model can tackle, yet its raw factual recall is uneven. That is the architectural fingerprint of looping: computation traded against storage.
OpenMythos cites Parcae, an April 2026 paper from UC San Diego and Together AI that tackled long-run training instability in looped models: a 770-million-parameter Parcae model matches 1.3-billion-parameter constant-depth transformers in quality, with predictable scaling laws for the number of loops. The repo also borrows DeepSeek’s multi-head latent attention for memory compression, and a mixture-of-experts setup to scale across domains.
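The memory win from latent attention comes down to shapes. A hedged NumPy sketch, assuming a low-rank down-projection of the key/value stream (the dimensions and projection names here are invented for illustration; the real MLA learns these projections during training):

```python
import numpy as np

rng = np.random.default_rng(1)
d_model, d_latent, seq_len = 64, 8, 16   # illustrative sizes, not the repo's

# Low-rank projections; in a trained model these are learned jointly.
W_down = rng.normal(scale=0.1, size=(d_model, d_latent))
W_up_k = rng.normal(scale=0.1, size=(d_latent, d_model))
W_up_v = rng.normal(scale=0.1, size=(d_latent, d_model))

h = rng.normal(size=(seq_len, d_model))  # per-token hidden states
c = h @ W_down           # this small latent is all that gets cached
k = c @ W_up_k           # keys reconstructed from the latent on the fly
v = c @ W_up_v           # values reconstructed from the latent on the fly

cache_naive = seq_len * d_model * 2      # entries for a full K+V cache
cache_latent = c.size                    # entries for the latent cache
```

With these toy sizes the latent cache holds 16x fewer entries than a naive key/value cache, at the price of two extra matrix multiplies at decode time.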
What it doesn’t have is weights, so it’s essentially an architecture on paper.
OpenMythos is theoretical. The code defines model configurations from 1 billion to 1 trillion parameters, but you would have to train them yourself; the readme mentions a 3-billion-parameter training script on FineWeb-Edu and a Chinchilla-adjusted 30-billion-parameter target, the kind of compute bill that runs into the hundreds of thousands of dollars on H100s. No one has done that yet.
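That dollar figure is easy to sanity-check with the standard Chinchilla rules of thumb: roughly 20 training tokens per parameter, and about 6·N·D training FLOPs. The GPU throughput and rental price below are assumptions for the sketch, not quotes from the repo:

```python
# Back-of-the-envelope Chinchilla training budget. Assumptions (not from
# the repo): ~20 tokens per parameter, ~6*N*D total FLOPs, an H100
# sustaining ~4e14 FLOP/s after utilization losses, ~$3 per GPU-hour.
def chinchilla_budget(n_params, flops_per_gpu_s=4e14, usd_per_gpu_hr=3.0):
    tokens = 20 * n_params
    flops = 6 * n_params * tokens
    gpu_hours = flops / flops_per_gpu_s / 3600
    return tokens, gpu_hours, gpu_hours * usd_per_gpu_hr

tokens, gpu_hours, usd = chinchilla_budget(30e9)  # the readme's 30B target
```

Under those assumptions the 30-billion-parameter target needs about 600 billion tokens and lands in the low hundreds of thousands of dollars, which matches the article’s ballpark.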
So why does it matter?
Because it’s the second time in a month that someone has breached the wall around Mythos. The first was a study from Vidoc Security, which reproduced many of Mythos’ most troubling vulnerability findings using GPT-5.4 and Claude Opus 4.6 inside an open-source harness: no Glasswing access required, at under $30 per scan. Different angle, same result: the moat around Mythos may be thinner than the marketing suggests.
OpenMythos and the Vidoc study do different things. Vidoc reproduced Mythos’ output, the vulnerability discoveries themselves, using existing models. OpenMythos attempts to reproduce the architecture: the actual machine that produces that output. One says you don’t need Mythos to find the bugs Mythos finds. The other says you might eventually be able to build something like Mythos yourself.
Anthropic has never publicly confirmed any of Gomez’s architectural guesses, and many of the design choices in OpenMythos are outright hedges; the readme is careful to frame itself as one plausible approach, repeatedly saying “possible,” “suspected,” and “almost certain.” The real Mythos may not be a looped transformer at all. Or it may be one, with details Gomez hasn’t reverse-engineered.
What OpenMythos shows is that the research literature already contains most of the pieces. Looped transformers, mixture-of-experts, multi-head latent attention, adaptive computation time, the Parcae stability fix: none of it is proprietary. More than anything, the repo is an inventory of what’s publicly known about how to build a Mythos-class model.
The repo is MIT-licensed and already has 2,700 forks. The training scripts are out there, waiting for someone with a GPU cluster and a thesis to prove.