Artificial intelligence security experts, policymakers, and industry leaders have gathered in Washington, D.C., to confront rising concerns about the safety of advanced AI systems, as a new model from Anthropic known as Mythos triggers fresh debate over how vulnerable modern digital infrastructure has become.
The high-level discussions come at a time when AI systems are being rapidly integrated into critical business operations, government services, and sensitive data environments, even as security frameworks struggle to keep pace with their capabilities and risks. The emergence of Mythos has intensified these concerns, with experts warning that it highlights how easily advanced AI tools could be exploited for malicious purposes.
Security professionals at the meeting included representatives from key global standards bodies such as SANS, NIST, OWASP, and CoSAI, organisations that shape cybersecurity best practices worldwide. However, participants acknowledged that despite their influence, existing frameworks are increasingly fragmented and struggling to address the unique challenges posed by artificial intelligence.

Rob van der Veer, chief AI officer at Software Improvement Group and a contributor to OWASP’s AI security initiatives, warned that models like Mythos are fundamentally changing the cybersecurity landscape by accelerating the discovery of system weaknesses. He noted that vulnerabilities in both traditional software and AI systems are now being identified at unprecedented speed, often before developers are even aware they exist.
According to him, this shift is tilting the balance toward attackers. “Weaknesses in AI systems can now be found faster and at scale—often before developers are aware of them,” he said, adding that this reduces the margin for error and increases exposure to potential attacks.
One of the central concerns raised at the meeting is that AI systems are not static technologies. Unlike traditional software, where vulnerabilities can often be patched once identified, AI models require continuous monitoring and adaptation. Experts emphasized that security must be treated as an ongoing process rather than a one-time fix.
A key issue under discussion is the lack of a unified approach to measuring AI security. Current evaluation methods often focus on how well systems perform tasks such as detecting threats or responding to prompts, rather than assessing how secure the systems themselves are against manipulation or attack.
Gary McGraw, co-founder of the Berryville Institute of Machine Learning, highlighted this gap, noting that the industry still lacks proper benchmarks for understanding whether AI systems are truly secure. He pointed out that this mismatch could leave organisations vulnerable as they increasingly rely on AI tools in high-stakes environments.
McGraw, who has been warning about machine learning security risks for years, said the industry is now reaching a critical moment. He argued that organisations must distinguish between AI systems that appear to perform well and those that are actually resilient to adversarial attacks.

Another major concern is the rise of adversarial threats targeting AI systems directly. Experts from the National Institute of Standards and Technology (NIST) warned that no fixed set of safeguards can fully protect against evolving attack methods. Instead, organisations must adopt a dynamic security approach that continuously adapts to new vulnerabilities.
This includes ongoing testing, internal “red teaming” exercises where systems are intentionally challenged to expose weaknesses, and rapid updates to defensive measures. The goal, according to NIST researcher Apostol Vassilev, is to make exploitation increasingly difficult and costly for attackers, even if it cannot be fully eliminated.
Despite these concerns, many experts remain cautiously optimistic. They argue that the cybersecurity industry has successfully adapted to previous technological shifts, such as the rapid expansion of software systems in the 1990s, and that similar adaptation is possible for AI.
McGraw compared the current situation to that earlier transition, noting that industries once overwhelmed by software complexity eventually developed stronger security practices. However, he also warned against overly optimistic narratives promoted by leading AI companies, suggesting that the reality of AI security is still far more complex than public messaging implies.
A key takeaway from the discussions is the need for greater coordination across the global AI security ecosystem. Experts believe that aligning standards and reducing fragmentation across different frameworks is essential for creating a coherent security strategy.

Without such coordination, organisations risk implementing inconsistent protections that leave gaps in their defenses. A unified approach, participants argued, would allow businesses and governments to deploy AI systems more safely while maintaining operational speed.
The emergence of Anthropic’s Mythos model has therefore become more than just a technological milestone. It has become a focal point for broader concerns about how society can safely manage increasingly powerful AI systems.
As adoption accelerates across industries, the challenge facing policymakers and security experts is clear: ensuring that innovation does not outpace the safeguards needed to protect it.
The discussions in Washington reflect a growing consensus that AI security is no longer a niche technical issue, but a foundational concern for governments, businesses, and society at large.