The Future of Cybersecurity
Is the Assumption Underneath Our Security Programs Breaking?
I’ve spent the past few weeks doing something uncomfortable. I’ve been questioning whether the fundamental model my cybersecurity program is built on still works.
The team and the tools are fine. What I’m questioning is the assumption underneath all of it.
The cybersecurity industry has always assumed that when a software vulnerability is publicly disclosed, defenders have days or weeks to respond before an attacker can weaponize it. That gap is what gives us time to scan, prioritize, schedule a maintenance window, test a patch, and deploy it. Every vulnerability management program in every industry, and the CVE model as a whole, is built on some version of this assumption.
The gap we’ve relied on has been steadily shrinking, and its full disappearance is on the visible horizon.
In 2018, the average time between a vulnerability being disclosed and a working exploit appearing was over two years. By 2024 it had shrunk to 56 days. By 2025, 23 days. Today it is under one day.
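To make the scale of that compression concrete, here is a minimal sketch using the figures above (the only assumption is approximating "over two years" as 730 days):

```python
# Time-to-exploit figures cited in this article, in days.
# "Over two years" in 2018 is approximated as 730 days (an assumption).
ttx_days = [("2018", 730), ("2024", 56), ("2025", 23), ("today", 1)]

# Compression factor between each pair of data points.
for (label_a, days_a), (label_b, days_b) in zip(ttx_days, ttx_days[1:]):
    factor = days_a / days_b
    print(f"{label_a} -> {label_b}: {days_a}d -> {days_b}d (~{factor:.0f}x compression)")
```

Run end to end, that is roughly a 700x compression of the defender's window in under a decade.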
While not the only driver, AI has accelerated this compression. The UK AI Security Institute confirmed that current AI models can autonomously execute multi-step cyberattacks against vulnerable systems (with caveats). A separate research team showed that publicly available models could exploit the majority of the known exploited vulnerabilities they tested, most in under an hour. And a coalition of the who’s who in cybersecurity published joint guidance calling this a structural change that requires immediate action.

I was skeptical at first. I still am, to a degree. AI companies and the people around them have a financial incentive to hype their models. Every new release comes with breathless announcements about how everything has changed, which are then parroted by a horde of “LinkedInfluencers.” I’ve watched this cycle enough times to know that what we actually get in the real world ends up being far less.
But the independent validations keep stacking up. Government evaluations. Industry coalitions with no obvious commercial interest. And the underlying data on time-to-exploit compression, which predates any single model announcement, keeps pointing in the same direction.
So I stopped asking “is this real?” and started asking “what breaks if it is?”
What Breaks
A lot of things within our security program depend on that time gap.
The most obvious is vulnerability management. The entire workflow assumes you have a window to triage, prioritize, and deploy fixes. If exploitation happens in hours, your scan-prioritize-patch cycle doesn’t complete before the threat is already at your door.
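A back-of-the-envelope budget makes the mismatch obvious. This sketch uses hypothetical stage durations (none of these numbers come from the article; they are placeholders for your own program's metrics):

```python
# Hypothetical stage durations for a conventional patch cycle, in hours.
# These are illustrative placeholders, not measurements from any real program.
patch_cycle_hours = {
    "scan": 24,                # next scheduled scan picks up the new CVE
    "triage": 8,
    "prioritize": 4,
    "maintenance_window": 72,  # wait for an approved change window
    "test_and_deploy": 12,
}

time_to_exploit_hours = 24  # "under one day", per the trend discussed earlier

cycle_hours = sum(patch_cycle_hours.values())
gap_hours = cycle_hours - time_to_exploit_hours
print(f"patch cycle completes at hour {cycle_hours}; "
      f"exploitation possible at hour {time_to_exploit_hours}")
print(f"exposure window: {gap_hours} hours exploitable but unpatched")
```

Even with generous assumptions, the cycle finishes days after the exploit window opens; tightening any single stage doesn't close a gap that large.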
Then there’s the CVE model itself. The backlog is already so large that the system is drowning in it. By the time a vulnerability goes through the disclosure process, gets assigned a CVE, gets loaded into your scanner, and you start prioritizing it, anyone who was going to exploit it probably already has. CVEs are quickly becoming a compliance artifact and a historical record rather than a meaningful first layer of your defense strategy.
Incident response carries a similar dependency. We’ve always relied on breakout time, the delay between initial compromise and lateral movement. That window has been shrinking for years. Some threat actors are already below 30 minutes. If AI-augmented attackers push that below five minutes, most of our human-dependent containment workflows simply can’t keep up.
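The same budget logic applies to containment. A minimal sketch, with hypothetical workflow timings, of whether a response process beats a given breakout time:

```python
def containment_keeps_up(detect_min: float, decide_min: float,
                         contain_min: float, breakout_min: float) -> bool:
    """True if detection + human decision + containment completes
    before the adversary's breakout (lateral movement) time."""
    return detect_min + decide_min + contain_min < breakout_min

# Illustrative figures only. A human-in-the-loop workflow against a
# 30-minute breakout, then a largely automated one against 5 minutes.
print(containment_keeps_up(detect_min=5, decide_min=20,
                           contain_min=10, breakout_min=30))   # False
print(containment_keeps_up(detect_min=0.5, decide_min=1,
                           contain_min=1, breakout_min=5))     # True
```

The point of the toy comparison: once breakout drops below five minutes, the decision step has to shrink from tens of minutes to seconds, which in practice means automation, not faster humans.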
Your supply chain is exposed too. Third-party risk management assumes your vendors will patch their own systems within a reasonable timeframe and that you can respond when you learn of an attack on them. If timelines compress for them the same way they’re compressing for you, your supply chain risk profile changes fast. I’ve already seen this play out numerous times over the past few years and witnessed first-hand the operational impact on my own organization.
This doesn’t even begin to touch on AI “employees” operating within your organization.
How I’m Thinking About It
I don’t have a finished answer. I’m honest about that, and I think more security leaders should be. The ground is shifting. Despite nearly every vendor claim to the contrary, I don’t think anyone can see exactly where it’s going to settle.
But not everything breaks. Some things get more important. The mental model I keep coming back to is an evolution of defense in depth: four layers, each one independent, so that if any layer fails, the next one catches it.
Proactively exploring, finding, and disrupting attack paths in your own environment before anyone else does. Not scanning for CVEs or replaying known attack playbooks the way existing breach and attack simulation (BAS) systems do, but actually testing your environment the way an attacker would, continuously, and disrupting the paths you find at machine speed. This is the biggest gap I’ve seen in most security programs and vendor solutions.
Behind that, remediating known weaknesses. Patching, misconfigurations, rule tuning. All of it still matters, but none of it is your primary line of defense anymore. Think of it as hygiene rather than strategy.
Then detecting adversaries who got past those first two layers and containing them before they can spread. If you can spot an intrusion in seconds and contain it in under a minute, you can still prevent serious damage even when the initial exploitation happens instantly. This is where most mature programs have invested heavily in detection and segmentation, and those investments are paying off. But the response velocity we will need to meet may force a wholesale rethink of the centralized logging and detection model.
And finally, recovering quickly when the worst happens. Clean backups that can’t be compromised by the same attack. Tested restoration procedures and operational resilience. Hours to restore, not weeks.
None of this is revolutionary. The fundamentals of a layered defense strategy are the same as they have been for centuries. What has changed is how fast each layer needs to operate.
The Uncomfortable Part
This is the most uncomfortable I’ve been in my career. I’ve always prided myself on seeing around the bend, and right now I’m having trouble seeing very far at all. Things are changing faster than I expected.
What makes it harder is that despite the claims from the expo floors, the vendor market hasn’t caught up. The problems I need to solve don’t have mature, proven solutions yet. That means making bets on teams and approaches that are still developing. As someone who has always told my team “we don’t buy roadmaps,” that’s a difficult position to be in.
I’m managing it by structuring commitments carefully. Limited initial spend, contractual exit ramps, performance milestones. If a bet doesn’t pay off, I lose a year of modest investment, not a multi-year millstone.
The security leaders who wait for the dust to settle before making changes are going to find themselves behind. The organizations that adapt now, even imperfectly, will be in a better position than the ones that wait for certainty.
Certainty isn’t coming. The question is whether you can make good decisions without it.
Michael Meis is a security leader with a passion for architecting security programs, leading people, and developing world-class security teams.
During his career, Michael partnered with the USDA CISO to develop one of the largest consolidations of security services in the federal government. Michael also led the H&R Block Information Security team through a transformation of their GRC operations to instill quantitative cyber risk management practices. Michael currently leads The University of Kansas Health System Cybersecurity team as they protect the critical systems, data, and people that provide lifesaving patient care.

