Despite an unprecedented wave of threats over the last year, many organizations still aren’t patching vulnerabilities in a timely manner, if at all. And infosec experts say there’s no easy fix for the problem.
One of the largest security events of this year has undoubtedly been the exploitation of on-premises Microsoft Exchange servers via ProxyLogon, the name given to a server-side request forgery (SSRF) zero-day vulnerability tracked as CVE-2021-26855.
When it disclosed the flaw on March 2 of this year, Microsoft announced that ProxyLogon and three other closely associated vulnerabilities had been patched, and that they were being exploited in “limited and targeted attacks” by a state-sponsored Chinese threat actor named Hafnium.
The fallout from these zero-days was immense, and in some ways rivaled the earth-shattering SolarWinds supply-chain attack that was disclosed a few months prior. Even though patches were released for Exchange servers on the day of disclosure, the number of threat actors exploiting the vulnerabilities and the number of victims being exploited continued to rise.
RiskIQ, an intelligence vendor that worked with Microsoft to track the number of unpatched Exchange servers (and recently reached an agreement to be acquired by Microsoft), found that of the 400,000 on-premises servers that needed to be updated on March 2, 82,731 remained vulnerable as of March 11. By late April, that number had dropped to around 18,000. As of June 21, 15,100 Microsoft Exchange servers remained vulnerable to ProxyLogon.
The idea of exploitation continuing after patches come out is far from new. For example, Fortinet’s FortiGate VPN had a vulnerability that was disclosed and patched in 2019; despite the fix being two years old, there were reports as recently as April that the vulnerability was being exploited by ransomware threat actors.
A decade ago, infosec experts hoped that as enterprise security programs matured and awareness of cyberthreats and critical vulnerabilities continued to increase, patching rates and average time-to-patch would improve. However, that hasn’t happened, and recent research suggests that patching vulnerabilities has become more challenging. For example, Kenna Security recently analyzed all 100,000 CVEs published over the last 10 years. In 2011, there were 4,819 CVEs published, but last year the number was more than double at 11,463.
While Kenna found that the number of exploited vulnerabilities that led to breaches had fallen significantly over the last decade, the number of flaws organizations face has exploded. Meanwhile, organizations are still struggling to apply patches in a timely manner.
In 2019, the Department of Homeland Security issued a directive to improve vulnerability management within the federal government and bring the average time-to-patch for critical vulnerabilities to 20 days — down from 149 days.
Finding the root cause behind organizations’ failure to patch vulnerabilities is difficult, in part because the issue itself takes multiple angles to even define. Once it is defined, however, certain trends start to emerge, along with possibilities for how the tech industry as a whole can improve patching rates.
The scope of the problem
Steve Stone, Mandiant senior director of advanced practices, told SearchSecurity that defining the scale of the “patching problem” is impossible.
“I’m not sure we can give you a perspective on what the world looks like,” Stone said. “I actually think part of the challenge is that I don’t think anybody can. I don’t think any organization anywhere can tell you how large or how small the problem is, because I don’t think anyone has that visibility. I actually think that’s indicative of how challenging of a problem this is.”
There are too many products, too many vulnerabilities, and such a varying level of visibility that the problem cannot be quantified in any reliable way. In fact, even in issues where there is some visibility — like in the case of RiskIQ and ProxyLogon-vulnerable servers — getting a complete picture of what any known statistic means is far from easy.
RiskIQ director of threat intelligence Steve Ginty explained that even though 18,000-or-so unpatched servers may seem like a big number nearly two months after ProxyLogon was patched, there were other factors to consider. Some of those are likely honeypots, he said.
“We know from some other research around web shells that there is a significant chunk of IP space that we scan that has this purpose. [Researchers’] servers are purposely out there to understand this type of activity,” Ginty said.
Moreover, while 18,000 (now 15,000) servers would be nothing to scoff at, the potential for honeypots makes what was already a small fraction of the total number of Exchange servers even smaller.
On the other hand, one doesn’t need exact numbers to know that many organizations don’t have exemplary patching rates. Many vulnerabilities receive patches at the same time they’re disclosed, yet threat actors continue to exploit those flaws at extreme levels.
SearchSecurity asked multiple researchers and vendors for their impressions regarding how reliable organizations are about patching.
Cisco principal engineer Omar Santos, who works in the company’s Product Incident Response Team, told SearchSecurity that on an industry scale, it’s “all over the place.” However, he added that Cisco’s patch communication process has been effective in spurring patches due to its transparency and multi-pronged approach, which includes vulnerability reports and machine-readable advisories.
F5 Networks’ Brian McHenry, vice president of security product management for BIG-IP and NGINX, said that it’s “highly variable” on an industry basis, but noted that there are higher levels of success with regular Patch Tuesday-style updates and when automation is utilized.
Mandiant’s Stone said that it varies based on the scale of the organization and what it’s doing from a security angle, but added that Mandiant has had “excellent” patching experiences when working directly with clients who are in, for example, incident response situations. McHenry said that F5 has likewise had a positive response with direct outreach, as did RiskIQ’s Ginty.
Ginty said that there are more issues with organizations below the Fortune 500 or Forbes Global 2000, because that’s where security resource constraints become more serious. He called the patch cycle “relentless” for organizations without a large vulnerability management program.
Jake Kouns, CEO and CISO of intelligence vendor Risk Based Security, said that overall, “people are still horrible about patching,” and that the issue hasn’t improved for the last decade.
Setu Kulkarni, vice president of corporate strategy and business development at NTT Application Security (formerly WhiteHat Security), told SearchSecurity that the intent for improved security has significantly increased, and that “security has become a board-level conversation.” However, there are issues in translating the intent into practice.
Obstacles and roadblocks
The most glaring issue with staying on top of patching is that even when the intent is there, a lack of resources can make it challenging to maintain complete awareness all of the time.
Among the experts SearchSecurity talked to about this topic, a common theme came up: Many organizations, especially smaller organizations, lack the resources to stay on top of all the patches and updates that need to be applied. Organizations need to prioritize due to the sheer hardware and software surface area they have to cover, and certain vulnerabilities then can slip through the cracks.
Mandiant’s Stone illustrated this with a question: “How many different vendors are used in your mobile device?” He explained that even on a personal level, when using a smartphone that isn’t connected to any corporate network, it takes many, many vendors to build both the hardware and software that results in the device held by the end user.
“Now do that at an organizational scale,” he said.
Stone added that simply asking why a customer doesn’t patch after a vendor puts out a blog doesn’t represent the full picture of why people don’t always patch everything. He called it, “at best, the midpoint rather than the starting point.”
“You as an organization know that that’s what you should be doing,” he said. “And I think that’s part of the challenge. Organizations have to do at least two things before they get to that point [of thinking about specific big vulnerabilities and patching them]. One is to understand all the types of technology used by all of their users. And the second is understanding what those are used for and therefore what the prioritization can be. Then you can get into a discussion about the specific vulnerability with a specific act of exploitation, and is that being prioritized appropriately?”
McAfee CTO Steve Grobman told SearchSecurity that another issue comes into play when a lack of resources is combined with technical debt and decentralized technology infrastructure.
“There’s often decentralization of monitoring and managing technology infrastructure,” Grobman said. “So business units and application groups might have autonomous control over various technology resources. And a lot of times, their business goals or objectives don’t make cybersecurity hygiene the top priority.”
Grobman said that it’s important for organizations to ensure that they’re applying enough resources to retire technical debt and build the capabilities for good cyberhygiene practices like patch management or even applying mitigations.
One problem there, he mentioned, is that “There are many organizations that have awful patch hygiene and don’t have a major incident or breach.” As such, “it reinforces a false conclusion that patching isn’t important.”
The interview where SearchSecurity asked Grobman about patching was during RSA Conference 2021 in May. There, he gave a keynote on the importance of using data to make better cyber-risk decisions.
Staying on top of patches gets even more complicated because there’s little standardization in how patches are delivered. Some patches can be automated either through SaaS products or automation services, but some can’t. More vendors are applying a Patch Tuesday-style update format, but many aren’t. Patches are also communicated in many different ways, and combined with multiple types of updates (like full updates vs. hotfixes), prioritization can become even more challenging.
Proper communication and awareness of security updates can sometimes be an issue for enterprises. For example, the Reserve Bank of New Zealand said software vendor Accellion failed to properly notify the bank of a patch for a zero-day vulnerability exploited by threat actors. More often than not, however, customers do receive email alerts and notifications about critical bugs and urgent updates, but many fail to act on them.
Managed security service providers (MSSPs) can alleviate some of these issues for some organizations, especially SMBs, but every organization has different needs, and there’s no one-size-fits-all for what it will take to improve one’s own patching rates.
Instead, experts say it takes the right combination of solutions for each organization — but vendors may have their work cut out for them, too.
How to patch the patch rate
Infosec experts had several answers for how organizations without far-reaching budgets can begin to increase their patch coverage and improve time-to-patch today and in the near future.
NTT Application Security’s Kulkarni said that for SMBs, it can start at the buying decision level.
“Somebody is making a buying decision. So when you do buy software, first thing, check if it is available as a service. If it is, prioritize using the service. Second, if you have decided that the service version is not good enough and you’re going to buy on-premises, make sure when the vendor comes in that you push the vendor to provide some kind of an architectural blueprint, some kind of an inventory that you can put in a file and store somewhere,” he said. “And I think it’s absolutely all right to push your vendors to provide you that inventory and blueprint architecture to say, hey, here are all the systems that need to be patched.”
Kulkarni also argued that when SMBs use partners or consultants to come in for product implementation, the SMB should “push and ask for more when you’re paying $1,800 to $2,000 a day to have a consultant implement your systems.”
“Don’t just hold them accountable for functionality,” Kulkarni said. “Hold them accountable for security as well. And one of the things you should push them to do is give you an architecture, give you a document and give you an inventory.”
F5’s McHenry advocated for organizations to prioritize asset and inventory management, and to “make sure you know what you have and make sure you know what version it’s on” in order to make better decisions. Following this point, McHenry had a piece of advice about automation.
“Automate everything; automate all the things. That’s going to make it a lot easier for any organization at any scale,” he said.
Automation in security can cover a number of different practices, including continuous security monitoring and automated patch management. Every organization has different needs, but automation can help lessen the burden for teams, especially those with limited resources.
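The simplest form of that automation is comparing an asset inventory against known fixed versions so that unpatched systems surface without manual review. Below is a minimal, illustrative sketch of that idea; the inventory entries, product names and “first fixed” versions are hypothetical examples, not real advisory data.

```python
# Sketch: flag inventory entries running versions below a known fixed release.
# All inventory and advisory values here are hypothetical illustrations.

def parse_version(v):
    """Turn a dotted version string like '15.2.9' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def find_unpatched(inventory, advisories):
    """Return (host, product, installed, fixed) for each vulnerable entry.

    inventory:  list of dicts with 'host', 'product' and 'version' keys
    advisories: dict mapping product name -> first fixed version
    """
    findings = []
    for item in inventory:
        fixed = advisories.get(item["product"])
        if fixed and parse_version(item["version"]) < parse_version(fixed):
            findings.append((item["host"], item["product"], item["version"], fixed))
    return findings

inventory = [
    {"host": "mail01", "product": "exchange", "version": "15.0.1"},
    {"host": "mail02", "product": "exchange", "version": "15.2.9"},
]
advisories = {"exchange": "15.2.0"}  # hypothetical "first fixed" version

for host, product, installed, fixed in find_unpatched(inventory, advisories):
    print(f"{host}: {product} {installed} is below fixed version {fixed}")
```

A real pipeline would pull the inventory from an asset management system and the fixed versions from vendor advisories, but the core comparison looks much like this.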
As an enterprise vendor, F5 has a significant focus on automation. McHenry said his company works with customers who haven’t previously automated processes and helps them implement automation for the first time just to see the impact. He said that once that first automation happens, “it’s kind of like a domino effect.”
“In my experience, almost everything can be automated with scripting and with tools that many vendors provide out of the box, F5 among them,” McHenry said. “So, automation, automation, automation all the way down.”
Kulkarni was likewise pro-automation, and Cisco’s Santos advocated for automation-adjacent practices like using cloud applications when possible, which are centrally updated by the cloud provider and don’t require patching by the customer.
But automation, and orchestration by extension, isn’t a “set it and forget it” type deal in many cases. It makes some things easier and can take resources further, but it’s by no means a full replacement for a security staff. It can also be expensive depending on the organization’s needs and what’s being automated.
McHenry recommended information sharing practices between organizations to improve cybersecurity on both an individual and industry-wide level.
“Learn about what your peers are doing, even though you may be competitors,” he said. “It’s actually quite common in the banking industry for practices to be shared. But I think it’s something that more organizations can do — talk to their peer organizations. Even competitors, because we all benefit from better security on the internet.”
The vendor role in the patching equation is arguably the most important. Customers have benefited from vendors implementing automation practices, offering as much information as possible to users and providing direct outreach when appropriate.
But vendors can’t directly reach each of their 25,000 customers, and implementing automated processes for receiving and acting on vulnerability advisories is easier said than done. To that end, some vendors are supporting standardized ways to deliver advisories that customers can digest and act on more quickly. Cisco’s Santos chairs the OASIS Common Security Advisory Framework (CSAF) Technical Committee, a group dedicated to furthering a standard for machine-readable advisories. Cisco publishes advisories in this format through channels like its openVuln API.
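The point of a machine-readable advisory is that tooling can triage it without a human reading prose. As a rough illustration, here is a heavily trimmed JSON document loosely following the CSAF shape being parsed programmatically; the advisory ID, CVE and product IDs are invented examples, and real CSAF documents carry far more fields (product trees, scores, remediations).

```python
import json

# A trimmed, invented advisory loosely modeled on the CSAF JSON shape.
advisory_json = """
{
  "document": {"title": "Example Advisory", "tracking": {"id": "EXAMPLE-2021-001"}},
  "vulnerabilities": [
    {
      "cve": "CVE-2021-0000",
      "product_status": {
        "known_affected": ["CSAFPID-0001"],
        "fixed": ["CSAFPID-0002"]
      }
    }
  ]
}
"""

advisory = json.loads(advisory_json)

# Because the structure is standardized, a script can answer "am I affected?"
# style questions without anyone reading the advisory text.
for vuln in advisory["vulnerabilities"]:
    affected = vuln["product_status"].get("known_affected", [])
    print(f"{vuln['cve']}: {len(affected)} affected product ID(s)")
```

In practice, a consumer would match those product IDs against its own inventory, which is exactly the kind of automated pipeline standardized advisories are meant to enable.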
Then there are the vulnerabilities themselves. Risk Based Security’s Kouns said that a substantial amount of blame should be placed on those who release code with severe vulnerabilities in the first place.
“Vulnerabilities are going to happen. But if it’s an XSS vulnerability or SQL injection vulnerability, those are things that should not be happening anymore, yet they still happen all the time,” Kouns said. “So I think in general to say, yeah, there should never be a vulnerability ever again. That’s silly. But there’s some obvious patterns and some vulnerabilities that if someone invests in a security development lifecycle, they don’t happen.”
Risk Based Security’s main offering is its vulnerability intelligence platform, which is built to assist users in understanding vulnerabilities while rating vendors on their overall risk, including code maturity. Kouns said that when working with customers, he instructs them to work with vendors that “take security seriously” and “reduce the burden of what you have to patch.”
“It’s a little bit of a different shift of looking at it. Instead of sitting there going, ‘let’s get great at patching,’ let’s start selecting vendors that actually care about producing secure code,” Kouns said. “Then you don’t have to patch as much.”
Staying on top of patching as an organization can be hard for many reasons, including limited resources, a large surface area and decentralized environments.
However, closing some — if not all — of the distance between an organization and the burden of staying on top of vulnerabilities is possible. Automation, information sharing, choosing reliable vendors, and taking inventory can all result in a more secure organization. Vendors can likewise close the distance with transparency, direct outreach, substantial advisory options and further efforts to reduce the number of vulnerabilities that customers need to patch.
As Cisco’s Santos put it, “It’s an industry effort.”
Alexander Culafi is a writer, journalist and podcaster based in Boston.