AI Just Solved the Wrong Half of Cybersecurity

Anthropic’s Project Glasswing and Claude Mythos just proved what many of us in enterprise security have quietly feared for years: the attacker’s advantage in the AI era isn’t theoretical anymore. It’s here. And the question isn’t whether we should be alarmed. It’s whether alarm is enough.

Let me be direct. When I read that Claude Mythos Preview autonomously discovered a 27-year-old vulnerability in OpenBSD, a system practically synonymous with security hardening, without any human guidance after the initial prompt, I didn’t feel wonder. I felt the specific dread of someone who has spent years building enterprise data security programs and knows exactly how long patching backlogs sit untouched.

This is not a capability preview. It is a before-and-after moment. And I think the industry is only halfway processing what it means.

The headline numbers: thousands of zero-days found in weeks; fewer than 1% of findings patched at launch; the oldest bug surfaced was 27 years old.

What Mythos actually demonstrated

Anthropic was unusually candid about something that deserves more attention: they did not explicitly train Mythos Preview to be a vulnerability discovery engine. These capabilities emerged as a downstream consequence of general improvements in code, reasoning, and autonomy. The same model that helps you write better Python found root-level remote code execution bugs in FreeBSD and critical flaws in every major OS and browser.

That “emergent” framing matters because it means we cannot treat this as an isolated security tool. The next capable model from any frontier lab, whether Anthropic, OpenAI, Google, or a state-sponsored program, will carry these capabilities whether its developers intend it to or not. The window to prepare is not years wide. It may be months.

“The window between a vulnerability being discovered and being exploited has collapsed. What once took months now happens in minutes with AI.” – CrowdStrike, Project Glasswing partner

Project Glasswing itself is a genuine and commendable initiative. Bringing AWS, Apple, Cisco, CrowdStrike, Google, Microsoft, and NVIDIA into a coordinated defensive effort with $100M in model credits and $4M in donations to open-source security foundations is exactly the kind of pre-competitive industry coordination we rarely see. But I want to be honest about what it is and what it isn’t.

The discovery-to-patch gap is the real crisis

Here is the uncomfortable truth I keep returning to: at the time Anthropic made its announcement, fewer than 1% of the vulnerabilities Mythos had found were patched. Let that settle. The model surfaces thousands of critical flaws across infrastructure the entire internet depends on, and our remediation capacity, largely volunteer-driven open-source maintainers working at human speed, cannot absorb the volume.

This is the Glasswing Paradox. The thing that can see everything cannot fix anything. Vulnerability discovery has always been constrained by the supply of skilled researchers. AI just eliminated that constraint, while leaving the other side of the equation untouched: remediation still depends on skilled humans who understand the code well enough to safely fix what gets found.

From an enterprise data security product perspective, this reshapes the problem statement fundamentally. We have spent years building programs around mean-time-to-detect. We optimized for finding things faster. Now the detection problem is largely solved at the code level, and the bottleneck has shifted entirely to prioritization and remediation velocity. Your CISO’s job just changed, and nobody updated the job description.
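To make the bottleneck shift concrete, here is a toy backlog model. Every number in it is an illustrative assumption, not a figure from the announcement: the point is simply that when AI-scale discovery adds findings faster than human-speed remediation clears them, the open backlog grows without bound no matter how good detection gets.

```python
# Toy model: the open-vulnerability backlog when discovery outpaces
# remediation. All rates below are illustrative assumptions.

def backlog_over_time(discovery_per_week: int,
                      remediation_per_week: int,
                      weeks: int) -> list[int]:
    """Track the open-vulnerability backlog week by week."""
    backlog, history = 0, []
    for _ in range(weeks):
        backlog += discovery_per_week                  # AI-scale discovery
        backlog -= min(backlog, remediation_per_week)  # human-speed patching
        history.append(backlog)
    return history

# Assume 1,000 findings/week against 50 patches/week over one quarter:
print(backlog_over_time(1000, 50, 13)[-1])  # backlog grows ~950/week
```

Improving mean-time-to-detect changes nothing in this model; only the remediation rate, or ruthless prioritization of what enters the queue, bends the curve.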

What enterprise security teams should actually do

First: treat open-source dependencies differently starting today. Not eventually. Today. Mythos surfaced ancient vulnerabilities in projects maintained by tiny volunteer teams. If your enterprise depends on OpenBSD, FreeBSD, or any of the hundreds of libraries the Glasswing consortium is scanning, you need direct line of sight into your dependency graph and your patch lag on each node. SBOMs are table stakes. What matters is having a prioritization framework that accounts for the new discovery rate.
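One way to operationalize that prioritization framework is a simple score over each node in the dependency graph. This is a hedged sketch: the fields, weights, and example packages are invented for illustration, not a standard scoring model.

```python
from dataclasses import dataclass

@dataclass
class Dependency:
    name: str
    cvss: float          # severity of worst known open vuln (0-10)
    patch_lag_days: int  # days a fix has existed without being applied
    exposed: bool        # reachable from untrusted input?

def priority(dep: Dependency) -> float:
    """Illustrative score: severity, scaled up by patch lag and exposure."""
    lag_factor = 1 + dep.patch_lag_days / 30   # roughly +1x per month of lag
    exposure = 2.0 if dep.exposed else 1.0
    return dep.cvss * lag_factor * exposure

# Hypothetical nodes from a dependency graph:
deps = [
    Dependency("openssl-fork", cvss=9.8, patch_lag_days=90, exposed=True),
    Dependency("internal-util", cvss=7.5, patch_lag_days=10, exposed=False),
]
for d in sorted(deps, key=priority, reverse=True):
    print(d.name, round(priority(d), 1))
```

The specific weights matter less than the discipline: patch lag must be a first-class input, because under the new discovery rate a moderate bug you have ignored for a quarter can outrank a critical one you found yesterday.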

Second: prepare for AI-augmented adversaries now, not when you see evidence of it in the wild. Anthropic itself disclosed last year the first documented case of a cyberattack largely executed by AI. A Chinese state-sponsored group used AI agents to autonomously infiltrate roughly 30 global targets, with AI handling the majority of tactical operations independently. The capability Glasswing is racing to put in defenders’ hands first is already, in some form, in adversarial hands. The asymmetry we need to close is time, not access.

Third, and this is the piece I see least discussed: we need to start treating AI models themselves as part of our threat surface modeling. Not just as tools we use, but as systems that hold credentials, consume APIs, write to production environments, and take autonomous actions. The same autonomy that makes Mythos remarkable at finding bugs makes any sufficiently capable model running in your environment an entity whose permissions, reach, and failure modes have to be governed like any other privileged principal in your stack.
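As a concrete sketch of what “govern the model like a privileged principal” can mean in practice, here is a minimal, default-deny policy gate that an agent’s tool calls would pass through. The tool names and scopes are hypothetical, invented for illustration; the pattern, allowlist plus audit trail, is the point.

```python
# Minimal policy gate for an autonomous agent's tool calls.
# Tool names and scopes are hypothetical, for illustration only.

ALLOWED = {
    "read_repo":   {"scope": "read",  "audit": True},
    "open_ticket": {"scope": "write", "audit": True},
    # Note: no "deploy_to_prod" entry -- denied by default.
}

def authorize(tool: str, audit_log: list[str]) -> bool:
    """Default-deny: only explicitly allowlisted tools pass, and
    every decision is recorded for later review."""
    policy = ALLOWED.get(tool)
    if policy is None:
        audit_log.append(f"DENY {tool}")
        return False
    if policy["audit"]:
        audit_log.append(f"ALLOW {tool} ({policy['scope']})")
    return True

log: list[str] = []
assert authorize("read_repo", log)
assert not authorize("deploy_to_prod", log)
print(log)
```

The same review you would apply to a new service account, least privilege, explicit scopes, immutable audit, applies to the model; the only difference is that this principal can also reason about how to use what it is given.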

The part that keeps me up at night

Anthropic has chosen, deliberately, not to release Mythos generally. They are betting that keeping it restricted to trusted partners gives defenders more runway than attackers. I think that’s the right call, and I respect the transparency with which they’ve explained the dual-use calculus. But it is a bet, not a guarantee. Competitors, both domestic and state-sponsored, may not make the same choice. A model that costs billions to train will face enormous pressure toward monetization.

Project Glasswing is a starting gun, not a finish line. The $4M donated to the Apache Software Foundation and OpenSSF through the Linux Foundation is meaningful and symbolically important. But sustaining the human expertise needed to actually fix what AI will keep finding requires the industry to treat open-source maintainers as critical infrastructure workers, with compensation, tooling, and organizational support that reflects that status.

We are right now in the gap between the discovery capability arriving and the remediation capability catching up. What we do in this window will define the security posture of enterprise systems for the next decade. AI just solved the discovery problem. Nobody has solved the fixing problem yet. That’s the half that matters.
