20 years of Patch Tuesday: it’s time to look outside the Windows when fixing vulnerabilities

For two decades we have been patching our Windows machines every second Tuesday of the month, devoting time and resources to testing and reviewing updates before rolling them out, confirming they will do no damage. This may be a reasonable approach for key equipment with no backup, but is the process still worthwhile in this day and age of phishing and zero-days, or should resources and security dollars be reprioritized?

Twenty years after Microsoft first introduced Patch Tuesday, I’d argue that we need to move some of our resources away from worrying so much about Windows systems and toward everything else in our networks that needs firmware updates and patching. From edge devices to CPU code, nearly everything in a network needs to be monitored for potential security patches or updates. Patching teams should still be concerned about Microsoft’s Patch Tuesday, but it’s time to add every other vendor’s releases to the schedule. I guarantee you that attackers know more about the patches your network needs than you do.

The plan for applying patches to workstations

First, let’s consider workstations. In a consumer setting, where the user typically has neither redundancy nor spare hardware, a blue screen of death or a failure after an update is installed means they are without computing resources. In a business setting, however, you should have plans and processes in place to deal with patching failures, just as you would plan for recovery after a security incident.

There should be a plan in place for reinstalling, redeploying, or reimaging workstations and a similar plan to redeploy servers and cloud services should any issue occur. Where there are standardized applications, deploying updates should be automatic and done without testing.

Unanticipated side effects should trigger a standard process: either uninstall the deployed update and defer it to the following month (under the assumption that vendors will have found and fixed the issues) or, if the failure is catastrophic, reimage and redeploy the operating system. Testing for Windows workstations and servers should be kept to a minimum. The goal for these systems is to have a plan in place to deal with any failure, conserving resources for use elsewhere.

Today’s attacks call for better monitoring and logging

Testing before the deployment of patches should be reserved for those systems that cannot be quickly redeployed or reimaged. Some systems, such as special-purpose equipment controlled by Windows machines in healthcare situations, should be treated with more care and testing and, if possible, isolated.

Update and patching resources should also reflect the fact that many of today’s attacks come not from vulnerabilities but from attackers using “living off the land” techniques: finding a way into a network, then taking advantage of binaries and code already on Windows machines that are not flagged as malicious. That’s why additional resources should be spent monitoring “normal” activity and on logging and flagging when operating systems start behaving abnormally.
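One simple way to operationalize this kind of monitoring is to watch for trusted Windows binaries being launched by parents that rarely have a legitimate reason to spawn them. The sketch below is illustrative only: the event format and the watchlists are assumptions, not any vendor’s schema, and a real deployment would draw events from Sysmon or an EDR feed.

```python
# Minimal sketch: flag parent/child process pairs that often indicate
# "living off the land" activity. Watchlists and event format are
# illustrative assumptions, not a vendor schema.

# Binaries that ship with Windows but are frequently abused by attackers
LOLBINS = {"certutil.exe", "mshta.exe", "regsvr32.exe",
           "rundll32.exe", "bitsadmin.exe"}

# Parents that should rarely spawn the binaries above
SUSPICIOUS_PARENTS = {"winword.exe", "excel.exe", "outlook.exe"}

def flag_events(events):
    """Return the process-launch events worth an analyst's attention.

    Each event is a dict with 'parent' and 'child' executable names.
    """
    alerts = []
    for ev in events:
        parent = ev["parent"].lower()
        child = ev["child"].lower()
        if child in LOLBINS and parent in SUSPICIOUS_PARENTS:
            alerts.append(ev)
    return alerts

if __name__ == "__main__":
    sample = [
        {"parent": "explorer.exe", "child": "notepad.exe"},
        {"parent": "WINWORD.EXE", "child": "certutil.exe"},  # macro-abuse pattern
    ]
    for alert in flag_events(sample):
        print(f"ALERT: {alert['parent']} spawned {alert['child']}")
```

The point is not the specific pairs but the approach: baseline what “normal” parent/child relationships look like in your environment, then alert on deviations rather than waiting for a signature match.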

Patching non-Windows assets is just as important

If you are still using the same local administrator password on internal workstations, you are long overdue to spend time and energy rolling out a solution to randomize these passwords. LAPS is now built into Windows 10 and 11 workstations and no longer requires you to manually deploy the legacy LAPS toolkit. In addition, you can integrate the built-in LAPS with Intune.

You shouldn’t be focusing only on the patching status of your Microsoft assets. You need an inventory of your digital assets, with critical public-facing infrastructure flagged. If you cannot identify a key risk and patch it within 24 hours, you are set up for future failure. Organized crime gangs lie in wait inside your network and have often mapped your network assets better than you have.

They know which hardware firmware is out of date. They know the switches, firewalls, and routers that have weaknesses better than you do. They know the cloud platforms that are misconfigured better than you do. Furthermore, they don’t have to wait four weeks for a change request to be approved.
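That 24-hour window can be tracked mechanically once an asset inventory exists. The sketch below is a minimal illustration, assuming a hand-rolled list of asset records; in practice the inventory would come from a CMDB or vulnerability scanner, and the asset names shown are hypothetical.

```python
from datetime import datetime, timedelta

# Minimal sketch: flag internet-facing assets whose critical patch has
# been outstanding longer than a 24-hour SLA. Asset records here are
# illustrative; a real inventory would come from a CMDB or scanner.

PATCH_SLA = timedelta(hours=24)

def overdue_assets(assets, now):
    """Return names of public-facing assets past the patch SLA."""
    overdue = []
    for asset in assets:
        if not asset["public_facing"]:
            continue  # internal assets are tracked, but on a slower cadence
        age = now - asset["patch_available_since"]
        if age > PATCH_SLA:
            overdue.append(asset["name"])
    return overdue

if __name__ == "__main__":
    now = datetime(2023, 11, 20, 12, 0)
    inventory = [
        {"name": "vpn-gw-01", "public_facing": True,
         "patch_available_since": datetime(2023, 11, 18, 9, 0)},
        {"name": "file-srv-07", "public_facing": False,
         "patch_available_since": datetime(2023, 11, 1, 0, 0)},
    ]
    print(overdue_assets(inventory, now))  # only the VPN gateway is flagged
```

Even a crude report like this makes the gap visible: if the list of overdue public-facing assets is never empty, attackers have a standing head start.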

Recently, many attacks have come through VPNs, proxies, or gateways. For example, the recent CitrixBleed vulnerability impacted Citrix NetScaler ADC and NetScaler Gateway. As CISA noted, “The affected products contain a buffer overflow vulnerability that allows for sensitive information disclosure when configured as a gateway (VPN virtual server, ICA Proxy, CVPN, RDP Proxy) or AAA virtual server. Exploitation of this vulnerability could allow for the disclosure of sensitive information, including session authentication token information that may allow a threat actor to ‘hijack’ a user’s session.”

The vulnerability was first patched on October 10, 2023, but more than a month later we are still seeing attacks using this exploit. LockBit ransomware in particular is using publicly available exploits to breach the systems of large organizations, steal data, and encrypt files. Many large firms in finance, legal, and business services are being targeted in these attacks.

Shift resources to patching outward-facing entry points

The resources you once devoted to workstation testing for Patch Tuesday deployments should be shifted instead to the outward-facing software and hardware used for remote access to your network. It’s on these outward-facing entry points that attackers are focusing their time and energy, identifying vulnerabilities and ways in. You should review your patching resources accordingly and move teams to be more proactive about updating and patching these edge devices.

Furthermore, if your antivirus and security solutions cannot identify lateral movement inside your network, you need to review your tools and resources. Recently at Ignite, Microsoft announced a repositioning of its Defender product, now called Microsoft Defender XDR. Combining the ability to monitor endpoints, cloud applications, identities, and email, it is designed to automatically disrupt attack sequences, in particular lateral movement inside a network. As more of us add cloud services to our networks, protecting identity is key to fending off the growing number of attackers who get in by abusing our credentials.
