IT's Security Dilemma: To Patch or Not to Patch

An out-of-band security fix from Microsoft put administrators in a familiar but tough spot: potentially damned if they patched and damned if they didn't.

Security administrators faced a familiar if uncomfortable position: just one day after Microsoft Corp. released an out-of-band patch to fix vulnerabilities in several versions of Windows, exploit code appeared in the wild.

The latest vulnerability -- with its near-same-day exploit code -- was too serious to go unpatched, experts warned. That left administrators damned if they patched and damned if they didn't.

Industry watcher Gartner, for example, urged IT administrators to bite the bullet and patch their systems. "Microsoft made the right decision to issue an out-of-cycle patch for this vulnerability, given the evidence of active attacks against the Windows Server service and the ease [with] which exploits can spread across the majority of Windows systems," wrote Gartner analysts John Pescatore and Neil MacDonald. "Although Microsoft reports that these privately reported attacks have been limited and targeted, [it] is being very aggressive in pushing these patches out to prevent large-scale downtime for businesses around the world."

Gartner argued that the stakes were just too high for IT administrators to do otherwise.

IT pros have been in this situation before. For example, during the summer of 2001, Microsoft's Windows franchise was rocked by the first of the truly blockbuster exploits: Code Red. In June of that year, media outlets alerted users to a dangerous new vulnerability that affected Windows NT 4.0 and Windows 2000 systems running Microsoft's IIS 4.0 and IIS 5.0 Web servers. Microsoft itself did its part, e-mailing alerts to the more than 250,000 subscribers of its security mailing list and urging them to install a patch to fix the problem. Redmond also announced that it had dispatched representatives to many of its largest corporate customers to urge them to install the patch, too.

The result, everyone now knows, was Code Red: a malware worm that infected tens of thousands of vulnerable, unpatched systems. Just why did so many systems remain unpatched, in spite of Microsoft's counsel?

A closer look at Microsoft's history, particularly when it comes to software patches, helps answer that question. Earlier that same summer, after all, Microsoft required not one or two but three attempts to successfully patch a serious vulnerability in its Exchange 2000 Outlook Web Access Component. Prior to that, Microsoft had botched at least two service pack releases (Windows NT 4.0 SPs 2 and 6), more than a few NT 4.0 hotfix updates, and a variety of maintenance releases (Office 97 SR-1, Office 2000 SR-1). In many cases, Microsoft software patches have severely damaged systems; in some cases (Windows NT 4.0 SP2), they've rendered certain systems altogether unbootable.

This helped inculcate a prejudice among IT managers (who, understandably, tend to be risk averse): many viewed the prospect of installing an untested Microsoft software update as too risky.

Much has changed since that summer. To its credit, Microsoft has significantly improved its software patching process. Gartner conceded as much, citing "Microsoft's investment in a secure software development life cycle process" -- along with the increased use of network intrusion protection software in many enterprise shops -- as two factors that have "greatly reduced the success rate of attacks trying to exploit these types of vulnerabilities."

All the same, risk-averse IT administrators probably still approach the prospect of installing an untested Microsoft patch with some degree of trepidation. More to the point, the systems management and software development life cycles in most shops simply aren't designed with rapid (that is, untested) patch deployments in mind.

A Certified Information Systems Security Professional (CISSP) with consultancy and services firm Booz Allen Hamilton puts it best. "The threat that comes from the outside is an unknown. In a case like this, it might never materialize. [So] if your systems go down because of an external threat ... you're not off the hook, but it's more understandable than if you install [a patch] and it takes them down," this security pro says.

"If in the commission of updating and applying a patch you broke your own system, you're kind of stuck on the hook for that. Chances are that [management] won't buy it if you said you were only doing it [i.e., installing the patch] to protect your systems."

That's why this security pro says it's still important to test -- even in cases where exploit code is in the wild. Call it quick-and-dirty testing.

"The best you can do is compress your testing schedule. The most immediate threat is that the patch could break your application, so you have to test it, even if the testing is compressed," this CISSP observes.
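In practice, a compressed testing pass often boils down to an automated smoke test of the critical paths an application must still serve after the patch lands. Here is a minimal, hedged sketch of that idea; the staging URLs, latency thresholds, and fetcher are hypothetical stand-ins, not part of any real environment or product:

```python
import time
from typing import Callable, List, Tuple

# Hypothetical critical-path checks to run against a patched staging box
# before wider rollout: (name, url, max acceptable latency in seconds).
SMOKE_CHECKS: List[Tuple[str, str, float]] = [
    ("login page", "http://staging.example.com/login", 2.0),
    ("search API", "http://staging.example.com/api/search?q=test", 3.0),
]

def run_smoke_tests(checks, fetch: Callable[[str, float], int]):
    """Run each check through fetch(url, timeout), which returns an HTTP status.

    Returns (all_passed, results), where results is a list of
    (name, passed, detail) tuples -- one per check.
    """
    results = []
    for name, url, max_latency in checks:
        start = time.monotonic()
        try:
            status = fetch(url, max_latency)
            elapsed = time.monotonic() - start
            passed = status == 200 and elapsed <= max_latency
            detail = f"status={status}, {elapsed:.2f}s"
        except Exception as exc:  # timeout, connection error, etc.
            passed, detail = False, str(exc)
        results.append((name, passed, detail))
    return all(p for _, p, _ in results), results

if __name__ == "__main__":
    # Stand-in fetcher for illustration; a real one might wrap urllib.request.
    fake_fetch = lambda url, timeout: 200
    ok, results = run_smoke_tests(SMOKE_CHECKS, fake_fetch)
    for name, passed, detail in results:
        print(f"{'PASS' if passed else 'FAIL'}: {name} ({detail})")
    if not ok:
        raise SystemExit("Smoke tests failed -- hold the patch rollout.")
```

Injecting the fetcher keeps the harness testable offline; the point of the sketch is simply that "compressed" testing still gates the rollout on the application's most immediate failure mode, which matches the CISSP's advice above.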

It's a rule for which few, if any, exceptions are made. "When SQL Slammer appeared [in January of 2003], that was the only time in my life when I know that a patch was installed immediately. Other than that, the management process of the application system in question dictates how quickly a patch will be applied," this person concludes. "You have to weigh the risk of the patch taking down your system versus whatever damage could be done by the exploit."

Approaches to patching can vary from organization to organization, too, this CISSP points out. Shops that are certified under the rigorous Capability Maturity Model Integration (CMMI) tend to be much charier about how or when they install patches.

"A CMMI organization would very much prefer to run a full regression test rather than just throw it on the server and hope for the best," the CISSP concludes, "so the typical approach [in CMMI shops] is to thoroughly test it and then to apply it -- but to apply it not right away but during the next scheduled patch cycle."