Artificial intelligence giant OpenAI has nearly finalised an AI model with advanced cybersecurity capabilities, in a bid to compete with a dangerous new offering from its rival, Anthropic.
Earlier this week, Anthropic unveiled a preview version of Mythos, the latest and most intelligent addition to its Claude family of AI models.
Anthropic said the new model's capabilities are “substantially beyond those of any model we have previously trained”, adding that it is too powerful to be made publicly available just yet.
What is the hype about?
While reports speculate that Anthropic CEO Dario Amodei's warnings about the formidable nature of his own AI model are partly intended to turn heads and benefit the company, what he has said about Mythos is certainly alarming to some degree.
The reports say that one of Mythos's main abilities is finding deep vulnerabilities in software. That ability can be used to defend software, but in the wrong hands it can just as easily be used to break in.
Notably, the new model has exposed flaws in “every major operating system and web browser”, including one that had gone undetected for 27 years, according to an Economist report.
This has not only stoked considerable fear about the ethics of such a model—a scenario straight out of countless sci-fi films about technology running rampant—but has also prompted a number of tech giants to join Project Glasswing.
"If Project Glasswing works, it could disarm many of America’s cyber-weapons," the report added, offering an idea of the scope of Mythos.
Project Glasswing is Anthropic's effort to offer a trial version of Mythos to companies, both to boost their cyber defences and to flag issues with the new model before a public release.
What OpenAI is doing
OpenAI now plans to build a similar model to Mythos, an Axios report said, citing a source in the know.
OpenAI's new cybersecurity-focused model is also said to be headed for a staggered release similar to Project Glasswing, but it is not yet known whether it will be released to the public.
This comes after OpenAI introduced the 'Trusted Access for Cyber' pilot programme back in February 2026, following the release of GPT-5.3-Codex, which the company called its "most cyber-capable frontier reasoning model to date".
The programme lets certain invite-only organisations use GPT-5.3-Codex to "accelerate legitimate defensive work", aided by $10 million in API credits.