Think of seafood left out in the sun: the longer it sits, the less likely you are to have an enjoyable experience later. The same is true for vulnerabilities.
The Ponemon Institute’s 2018 Vulnerability Response Study, sponsored by ServiceNow, found that of the nearly fifteen hundred respondents who reported suffering a security breach in recent months, 57% said the breach was the result of an unpatched vulnerability. Even more distressing, 37% were aware of the vulnerability before the breach occurred but had failed to address it.
As soon as a vulnerability is publicly disclosed, attackers and researchers inevitably begin picking apart its details, first churning out a proof-of-concept and ultimately automating its exploitation, increasing the threat to your organization with each step.
Unfortunately, we often have quite the opposite cadence on the blue team. When a new vulnerability hits the news, it quickly becomes all the rage in the boardroom. Everyone wants to know what it is and what we're going to do about it, but that attention quickly fades. If remediation is going to be costly and complex, it gets relegated to the bottom of the 'do it later' pile to fester. Meanwhile, the odds of that vulnerability’s exploitation increase exponentially over time. Zero-day vulnerabilities aren’t usually what bite us; it’s the ones that have been hanging around for more than a year, just stinking up the place, waiting for their chance to cause chaos.
Why does this keep happening? The answer lies in everything that happens after identification. The information security industry has spent a great deal of time, money, and effort improving detection capabilities, reporting, and timely patching by software vendors. Unfortunately, not nearly enough attention has been dedicated to improving the remediation process. The challenge comes down to motivation and accountability.
Vulnerability analysts are motivated to analyze vulnerabilities. The more complex and nuanced a discovery, the more impressive it is; the more voluminous the report, the better job they must be doing. They are accountable for ensuring that every vulnerability is identified. Infrastructure teams, on the other hand, are motivated by uptime and cost reduction. They are accountable for delivering five nines under budget.
When you consider those motivations, you can see why infrastructure is naturally averse to applying patches. Patching takes time and money, and change is the antithesis of stability. The traditional vulnerability management model therefore encourages delaying patches, strangling update cycles with red tape, and holding endless meetings without measurable results.
After all, if a patch goes bad, who usually ends up shouldering the blame? Most of the time, it’s the person who pushed out the patch, not the analyst who identified the vulnerability. But if the patch is never applied and the vulnerability never exploited, who gets hurt? No harm, no foul, right? Except we all know what happens when that gamble goes bad, as it inevitably will.
We need to take a hard look at who is ultimately accountable for the risk a vulnerable asset creates. What business service relies on that asset? Does its business value proposition account for the maintenance required to keep it viable throughout its life-cycle? If not, is the owner of that service, the risk owner as defined by ISO, willing to sign off on the risk of leaving it vulnerable, or would they rather sign off on the risk incurred in patching it? This paradigm shifts risk acceptance to where it belongs: with the owner and beneficiary of the business service.
To get there, though, we need processes and supporting systems that allow for the tracking, assignment, exception processing, and reporting of vulnerabilities and their associated remediation. Vulnerabilities need to be prioritized and assigned to asset owners. If the asset owner doesn’t feel they can mitigate the vulnerability, they need an exception process that shifts accountability to the risk owner and tracks their decision. We need effective reporting that reflects risk by business service so that leadership can account for the total cost of those services and make intelligent decisions about their future. We need realistic reporting on our vulnerability remediation capability and capacity so that we can budget appropriately to fit the organization’s risk appetite. Finally, we need to ensure our vulnerability scanners, change management process, CMDB, and incident response are all interconnected so that our vulnerability management process accounts for all of the data available across the organization.
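To make that workflow concrete, here is a minimal sketch of what such tracking could look like. The names and structure (Finding, Asset, request_exception, risk_by_business_service) are hypothetical illustrations, not a reference to any particular tool, but they show how a single record can carry both the asset owner responsible for remediation and the risk owner who inherits accountability when an exception is granted, and how unremediated risk can roll up to the business-service level for reporting.

```python
from collections import defaultdict
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class Status(Enum):
    OPEN = "open"
    REMEDIATION_ASSIGNED = "remediation assigned"
    EXCEPTION_ACCEPTED = "exception accepted by risk owner"
    REMEDIATED = "remediated"


@dataclass
class Asset:
    name: str
    business_service: str   # the business service this asset supports
    asset_owner: str        # team responsible for applying fixes
    risk_owner: str         # service owner who accepts residual risk


@dataclass
class Finding:
    cve_id: str
    asset: Asset
    severity: int            # e.g. CVSS base score, rounded
    discovered: date
    status: Status = Status.OPEN
    accountable: str = ""    # who currently holds the risk
    notes: list[str] = field(default_factory=list)

    def assign_remediation(self) -> None:
        """A prioritized finding is assigned to the asset owner for patching."""
        self.status = Status.REMEDIATION_ASSIGNED
        self.accountable = self.asset.asset_owner

    def request_exception(self, justification: str) -> None:
        """Asset owner cannot patch; accountability shifts to the risk owner
        and the decision is recorded for later reporting."""
        self.status = Status.EXCEPTION_ACCEPTED
        self.accountable = self.asset.risk_owner
        self.notes.append(f"Exception: {justification}")


def risk_by_business_service(findings: list[Finding]) -> dict[str, int]:
    """Roll unremediated severity up to the business service so leadership
    can see the total risk each service is carrying."""
    totals: dict[str, int] = defaultdict(int)
    for f in findings:
        if f.status is not Status.REMEDIATED:
            totals[f.asset.business_service] += f.severity
    return dict(totals)


if __name__ == "__main__":
    billing = Asset("billing-db-01", "Billing", "infra-team", "vp-finance")
    finding = Finding("CVE-2017-0144", billing, severity=8,
                      discovered=date(2018, 1, 15))
    finding.assign_remediation()
    finding.request_exception("Patch window conflicts with quarter-end close")
    print(finding.accountable)                     # vp-finance now owns the risk
    print(risk_by_business_service([finding]))     # {'Billing': 8}
```

In a real deployment this record would live in a ticketing or CMDB-integrated system rather than in code, but the essential design choice is the same: the exception path never closes the finding, it only moves accountability to the person who benefits from the service.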
Root out the broken processes in your organization's vulnerability management life-cycle and bring them to light, using risk accountability as your driver. Educate executive leadership on why this is necessary to support their desired risk posture. Use metrics and reporting to show how the program improves over time when accountability lands where it should. After all, no one in their right mind takes a gamble on old seafood; why should we treat old vulnerabilities any differently?