A new report has revealed that the U.S. National Security Agency (NSA) is already using Anthropic’s highly restricted “Mythos Preview” artificial intelligence model, despite an ongoing conflict between the company and the Pentagon over national security concerns.
According to the reporting, the NSA has gained access to Mythos, one of the most advanced AI systems yet developed for cybersecurity tasks. Anthropic deliberately withheld the model from public release over fears that its capabilities could be misused for offensive cyberattacks.

Despite this, sources say the NSA is using the system internally, primarily to scan networks and identify exploitable vulnerabilities before adversaries can act.
The development is striking because the NSA operates under the U.S. Department of Defense, which had earlier labeled Anthropic a “supply-chain risk” and restricted its use across military systems.
This contradiction highlights a growing divide within the U.S. government: while the Pentagon has raised concerns about relying on Anthropic’s technology, intelligence agencies appear unwilling to ignore the strategic advantage the AI provides.
Mythos itself is part of a tightly controlled initiative that grants access to only a small group of organizations, including major tech firms and select government bodies. Its core strength lies in its ability to detect previously unknown software vulnerabilities, sometimes faster than they can be patched.
That capability is exactly what makes it both valuable and dangerous.

Cybersecurity experts warn that tools like Mythos could reshape digital warfare by accelerating both defense and attack strategies. In the wrong hands, the same system that protects infrastructure could be used to exploit it at scale.
For intelligence agencies like the NSA, the trade-off is clear: access to cutting-edge AI outweighs internal policy disagreements, especially as cyber threats grow more sophisticated and global.
The situation also signals a potential thaw in relations between Anthropic and U.S. authorities. Recent high-level discussions between company leadership and government officials suggest efforts are underway to redefine how such powerful AI systems can be safely deployed across federal agencies.
Still, the broader implications are hard to ignore.
This isn’t just about one AI model—it’s about who controls the most powerful digital tools in the world, and how governments balance security risks with the need to stay ahead in an increasingly AI-driven battlefield.
