In-Depth

Best Practices: Patch Management

To more rapidly test patches and keep network configurations from drifting, keep a closer watch on every device’s configuration

What’s worse, the vulnerability or the patch? Of course, no security manager can fully predict the effect of a new patch or an altered configuration on an enterprise’s infrastructure, given the range of operating systems, devices, and patch levels in play. Worse, many organizations don’t have a clear picture of all the devices in their enterprise (and in what configuration), so predicting a patch’s effect, and testing it, becomes less intuitive and more trial-and-error.

To more rapidly test patches and keep network configurations from drifting, keep a closer watch on every device’s configuration, advises Drew Williams, vice president of corporate development at Configuresoft. Only then can organizations begin to predict not only the effect of new network devices, but also the impact patches might have on the environment. To discuss patch and configuration management best practices, Security Strategies spoke with Williams.

What’s the state of patch and configuration management today?

For the first time … we can actually remove risks, not just reduce them … We’re even dealing now with technologies that take an assessment of a system before problems surface.

Aren’t patches still a rather big unknown variable?

[Yes] and it’s one thing to just throw a patch on a system, [but] if you don’t know whether it’s going to affect key areas of your business, you don’t want to just throw it on there. [First,] you need a lot of background information on your settings, the devices themselves … and then you can make more intelligent decisions about where you focus your planning … and not have it cost you in productivity or open you up to security risks.

The onus is on making sure you collect enough data to make the decision about what it is you need to change, then, based on data and facts, knowing what the consequences of any change are going to be in your environment. Today most folks are taking [too] small a snapshot of what they think is going on.
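As a rough illustration of that point, scoping a patch from inventory data might look like the following minimal Python sketch. The record layout, hostnames, and version criteria here are hypothetical examples, not a depiction of any vendor’s product:

    # Hypothetical device inventory; a real one would come from discovery
    # or agent data, not a hard-coded list.
    inventory = [
        {"host": "web01", "os": "Windows 2003", "patch_level": "SP1", "role": "web"},
        {"host": "db01",  "os": "Windows 2000", "patch_level": "SP4", "role": "database"},
        {"host": "app01", "os": "Windows 2003", "patch_level": "SP2", "role": "app"},
    ]

    def affected(devices, os_name, fix_level):
        """Devices the patch targets: same OS, not yet at the fix level."""
        return [d for d in devices
                if d["os"] == os_name and d["patch_level"] < fix_level]

    # Which Windows 2003 boxes still need the SP2 rollup, and which
    # business roles would a bad patch put at risk?
    for d in affected(inventory, "Windows 2003", "SP2"):
        print(f'{d["host"]}: {d["role"]} server at {d["patch_level"]}')

Even a toy query like this shows the dependency Williams describes: the patching decision is only as good as the inventory behind it.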

Why don’t more organizations collect more data?

The problem is, [patch management] isn’t as sexy as an attack going off and blowing up something in your network. So the CxOs of the world don’t necessarily see how … this can reduce the cost of doing business, and reduce the cost of buying tools you don’t need all the time. A lot of times people just don’t know what they don’t know about their environments, and a lot of the time they end up being exploited.

Is this a problem IT managers understand?

The IT administrators realize there is a problem, and the extent of the problem varies from network to network. Sometimes people don’t realize that as soon as you deploy a new box, something has changed in your environment; as soon as you deploy an application on that box, something has changed in your environment … and without the details associated with those new implementations, and without a current snapshot of your environment, you’re pretty much just shooting an arrow up into the sky and hoping it doesn’t hit anybody.

How can organizations create better snapshots of network and device settings?

Use a [software] manager with an agent-based architecture, because that’s where the greater level of detail is. These tools collect data at a very granular level, and once the agents have taken a snapshot of the network, any deltas are then noted. The agents remain dormant, so you can maximize [available] bandwidth … until something has changed, when they become active, make note of the change, and pass it up to the manager, and you can identify the deltas from where you were versus where you are currently.
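The dormant-until-delta behavior Williams describes can be sketched in a few lines of Python. The watched paths and the print-based “report to the manager” below are stand-ins; a real agent would take its scope and reporting transport from the management server:

    import hashlib
    from pathlib import Path

    # Watched paths are illustrative only; run with privileges to read them.
    WATCHED = [Path("/etc/ssh/sshd_config"), Path("/etc/sudoers")]

    def snapshot(paths):
        """Fingerprint each watched file; absence is itself a config fact."""
        return {str(p): (hashlib.sha256(p.read_bytes()).hexdigest()
                         if p.exists() else None)
                for p in paths}

    def deltas(baseline, current):
        """Return only the entries that changed since the baseline."""
        return {path: (baseline.get(path), digest)
                for path, digest in current.items()
                if baseline.get(path) != digest}

    baseline = snapshot(WATCHED)
    # ... the agent sleeps; later, woken by a timer or filesystem event ...
    for path, (old, new) in deltas(baseline, snapshot(WATCHED)).items():
        print(f"DELTA {path}: {old} -> {new}")  # in practice, sent to the manager

Because only deltas travel up to the manager, the network cost stays near zero until something actually changes, which is the bandwidth point Williams makes.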

To what extent can these scanning tools automate patch and configuration management, especially in a regulated environment?

Compliance is 90 percent process, 10 percent an application sitting on top of 100 percent technology to watch everything. So that’s where the human intervention comes in. People have to make sure that everyone understands the processes being deployed, and that their behavior—how they interact with the computing environment—is just as consistent as the way the computing environment has been configured.

Compliance, policy, and all these federal mandates were born out of the fact that people just generally don’t operate like computers, and that’s why regulations are all process-driven. For example, a regulation says you will ensure that networks will be hardened, or that … users’ administration rights will be kept to a minimum.

So policy and compliance is where you’ll see more human involvement, but again, to successfully effect a level of compliance in a computing environment, you have to have something that can collect enough information.

When it comes to patch management capabilities, what needs to improve?

You still have vendors producing code that requires a lot of maintenance, like Microsoft. However, … once you get over the hurdle of gathering a lot of information about a computing environment, you can get to the next step … which for our customers is stabilizing their computing networks … Gathering data and maintaining data enables them to stabilize their resources.

What do you mean by network stabilization?

It’s maintaining levels of operational control. Remember, as soon as something new is introduced into an environment, drift occurs immediately. So … it’s also knowing how something will behave once it’s introduced. It’s almost a cyber-biological environmental control system.
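Measuring drift, in the simplest terms, is diffing observed settings against a known-good baseline. The setting names and values in this sketch are hypothetical examples, not an actual hardening policy:

    # A hardened baseline; the setting names and values are hypothetical.
    BASELINE = {
        "PermitRootLogin": "no",
        "PasswordRequired": "yes",
        "MaxAuthTries": "3",
    }

    def drift(baseline, observed):
        """Settings whose observed value departs from the baseline."""
        return {k: {"expected": v, "observed": observed.get(k)}
                for k, v in baseline.items()
                if observed.get(k) != v}

    # A newly deployed box reports its settings; anything missing or
    # altered counts as drift from the moment the box goes live.
    observed = {"PermitRootLogin": "yes", "PasswordRequired": "yes"}
    for name, vals in drift(BASELINE, observed).items():
        print(f"DRIFT {name}: expected {vals['expected']!r}, got {vals['observed']!r}")

Note that a setting the box never reports is flagged alongside one that was changed, which matches Williams’ point that drift begins the instant something new is introduced.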

You mentioned maintenance-intensive operating systems. What do vendors need to do better?

They just need to start taking a little more time and trying to produce code that works, rather than just producing revenue. At the end of the day, … the money-making process still drives the development process. Look at it now: Microsoft has been sailing along just fine making its gazillion dollars, but finally … large-scale NT 4, 2000, 2003, and XP users in these larger computing environments have generated enough frustration and angst that Microsoft is actually taking a much more detailed and responsible look at how their product is coded …

And when you look at the new players on the scene, like Linux—especially the open-source movement—and also at HP-UX and its contemporaries, technology runs a lot more smoothly on those backbones. And Microsoft is feeling the heat … [saying] we will compromise a couple of degrees of productivity so our environment runs more safely and more efficiently.

You make it sound like an end-user nightmare.

You have patches for patches, for crying out loud.

Related Articles

Solving the Patch Management Headache
http://www.esj.com/news/article.aspx?EditorialsID=852

Patch or Perish: Symantec Notes Dramatic Increase in Threats
http://www.esj.com/Security/article.aspx?EditorialsID=1136

Human Error Tops List of Vulnerabilities
http://esj.com/news/article.aspx?EditorialsID=920

About the Author

Mathew Schwartz is a Contributing Editor for Enterprise Systems and writes its Security Strategies column, as well as being a long-time contributor to the company's print publications. Mr. Schwartz is also a freelance security and technology writer.
