In-Depth

Q&A: Preventing “Applications Gone Wild”

Software can establish a baseline of "normal" application activity, then sound the alarm when an app behaves erratically

Normal may be boring—unless you’re a security administrator. Then, applications operating within established parameters are a boon; that’s the kind of normal you crave.

Taking that a step further, software exists to create a baseline of normal activity, then sound the alarm when applications behave outside this "normal" zone, often indicating an attack, compromise, or infection.

To discuss this approach to securing applications, PCs, servers, and operating systems, Security Strategies spoke with Matthew Williamson, senior research scientist at Sana Security Inc. in San Mateo, Calif.

Sana gained a reputation for applying biological principles to security. Exactly what does that mean?

Sana is a host-based intrusion prevention software company, started in 2000, primarily by a guy called Steve Hofmeyr, who did a Ph.D. on biological approaches to computer security. From that, they built an intrusion-prevention product, PrimaryResponse … which is [like the part of the immune system] that kicks in when your body is dealing with things that are new. One power of the approach is that the [software] has a learning mechanism; it will learn the [operating] behaviors of the executable, then detect the deviations … One cause of these deviations could be an attack, a buffer overflow, or [someone] breaking into that system … [Notably,] you can detect things that are known or unknown.

So this approach enforces correct application behavior?

To be more precise, it’s enforcing normal application behavior. It learns the profile of the application running on the system, and our experience has been that different production systems have different characteristics based upon usage. Sana picks up on that and learns what each one is doing.

Now that’s the server product … [There’s] a desktop intrusion prevention system as well, and that has similar learning motivations, in terms of using learning mechanisms to improve manageability. Servers do very repeatable things, while desktops do lots of weird and wonderful things.
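To make the learn-then-detect idea concrete, here is a minimal sketch in Python, patterned on the system-call-sequence profiling this line of research is known for. Everything in it, from the class name to the toy call traces, is invented for illustration; it is a sketch of the general approach, not Sana's code. The profile records every short window of system calls seen during a known-good training run, then flags windows it has never seen:

    class SyscallProfile:
        """Learn what 'normal' looks like, then flag deviations from it.
        (Illustrative sketch only, not Sana's implementation.)"""

        def __init__(self, window=3):
            self.window = window   # length of each system-call sequence
            self.normal = set()    # sequences observed during training

        def train(self, trace):
            """trace: a list of system-call names from a known-good run."""
            for i in range(len(trace) - self.window + 1):
                self.normal.add(tuple(trace[i:i + self.window]))

        def anomalies(self, trace):
            """Yield positions where live behavior departs from the profile."""
            for i in range(len(trace) - self.window + 1):
                gram = tuple(trace[i:i + self.window])
                if gram not in self.normal:
                    yield i, gram

    # An exploited process typically issues system calls in an order
    # that never occurs in normal operation:
    profile = SyscallProfile()
    profile.train(["open", "read", "write", "close", "open", "read", "close"])
    print(list(profile.anomalies(["open", "read", "execve", "socket"])))

A real product profiles far more than call order, and, as Williamson notes above, learns a separate profile per machine, because different production systems behave differently.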

What happens when your desktop software detects abnormal behavior? Does it ask the end user what to do?

We’ve made the decision that asking the user to make the right decision, when they don’t know either, [isn’t optimal]. So, in fact, the desktop product doesn’t have a user interface; it’s entirely [managed] through an administrator console.

The second question is, how accurate is it going to be? We’ve done a lot of work on that … A lot of it is aimed at Trojans and key loggers … that [sort of] information loss is a primary concern … Later releases [of the desktop product] will [extend] to other forms of malicious code.

What defenses do you have against malware that tries to deactivate your software?

You’re running in the same environment as the malicious code, so we do quite a lot of things to [ensure] we’re not easily removable. But the Trusted Computing standard will help with a large aspect of that. I personally feel like the Trusted Computing Group is very much in the [older] style of security—raising the bar on the enforcement mechanisms—rather than this nouveau way of thinking about it, which is detecting deviations from normal behavior.

If you think about a firewall, it’s there to protect you, but you have to open ports to do [things]… and you have no way of telling whether the traffic through that port is good or bad … It just so happens that there’s abuse of what you’re allowing it to do.

So one approach reacts to reality, while another tries to enforce a top-down security model?

Yes. And I think they have a good thing—you have to remember that security is always an arms race … [but] the firewall hole is a good example. That firewall hole is either open or closed. It doesn’t matter how narrow that hole is; if you can get something through to exploit what’s beyond it, then [it can do damage]. The problem isn’t making the hole smaller or bigger; the problem is controlling the behavior, really.

Is this similar to HP’s recent initiative to throttle down network connectivity for PCs infected by a virus or worm, to slow their spread?

That’s kind of part of the inspiration for throttling, which is saying that a machine, when it’s normally running, has this sort of behavior, and a machine, when it’s trying to spread a virus, looks totally different … So we can gradually detect, slow, and stop that, because it’s so different from normal behavior … That break from a binary [judgment] of good or bad to a gray area (with some scale in between)—this is the way this has to start going.
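The throttling idea lends itself to a short sketch. The Python below is a hypothetical rendering of the published mechanism, not HP's code, and every parameter value is illustrative: connections to recently contacted hosts pass immediately, connections to new hosts wait in a queue that drains at a slow fixed rate, and a long queue is itself the alarm.

    import time
    from collections import deque

    class VirusThrottle:
        """Slow connections to unfamiliar hosts; a long backlog is the alarm."""

        def __init__(self, working_set_size=5, per_second=1.0, alarm_len=100):
            self.working_set = deque(maxlen=working_set_size)  # recent hosts
            self.delay_queue = deque()     # pending connections to new hosts
            self.per_second = per_second   # release rate for new hosts
            self.alarm_len = alarm_len     # backlog that signals an outbreak
            self.last_release = time.monotonic()

        def request(self, host):
            """Return True if a connection to host may proceed immediately."""
            if host in self.working_set:
                return True                # familiar host: normal behavior
            self.delay_queue.append(host)  # unfamiliar host: make it wait
            if len(self.delay_queue) > self.alarm_len:
                raise RuntimeError("too many new hosts at once; possible worm")
            return False

        def tick(self):
            """Call periodically; releases one queued host per interval."""
            now = time.monotonic()
            if self.delay_queue and now - self.last_release >= 1.0 / self.per_second:
                self.working_set.append(self.delay_queue.popleft())
                self.last_release = now

Normal use barely notices the delay, because people return to the same few hosts again and again; a worm trying to reach hundreds of new addresses a second fills the queue almost immediately. That is the gray scale Williamson describes: slow first, alarm and stop second.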

Why isn’t throttling already in wide use?

It turns out that one weakness of it is that, while it’s quite effective at limiting the traffic [from one machine], it’s harder to slow the spread unless you have it deployed everywhere. And really 80, 90, or 100 percent [coverage] is needed; at 50 percent you’re just getting started.
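A toy simulation illustrates why partial coverage disappoints. The model below is hypothetical and its numbers are arbitrary: each infected machine contacts a few random machines per time step, or only a trickle if it runs a throttle, and throttling limits what an infected machine sends out, not what it receives.

    import random

    def simulate(n=10000, coverage=0.5, steps=10, contacts=3.0, throttled=0.3):
        """Fraction of n machines infected after `steps` rounds of spreading."""
        random.seed(1)
        has_throttle = [random.random() < coverage for _ in range(n)]
        infected = [False] * n
        infected[0] = True
        for _ in range(steps):
            hits = []
            for i, inf in enumerate(infected):
                if not inf:
                    continue
                rate = throttled if has_throttle[i] else contacts
                k = int(rate) + (random.random() < rate % 1.0)  # fractional rate
                hits.extend(random.randrange(n) for _ in range(k))
            for j in hits:
                infected[j] = True
        return sum(infected) / n

    for cov in (0.0, 0.5, 0.9):
        print(f"coverage {cov:.0%}: {simulate(coverage=cov):.0%} infected")

With these made-up parameters, the unthrottled half at 50 percent coverage still carries the outbreak to near saturation; only at very high coverage does the spread stall within the outbreak window, consistent with the 80-to-100-percent figure above.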

Are viruses so dangerous today because they can route around defenses?

Particularly for an Internet worm like Blaster or Slammer, a single machine can make connections or send packets to so many machines in such a short period of time … Really the first third of infected machines are doing the bulk of the propagating work, so intuitively that’s why you need to get good coverage. Even if you can take out three quarters of the machines, you still have a lot of [conduits].

The speed thing makes [antivirus] signatures less effective. It causes everyone major headaches because of network congestion … and really, if you could just calm it down, the effect would be less … So if you can contain and slow the spread, you would achieve a much more peaceful virus outbreak, because the network wouldn’t be falling over; there wouldn’t be so much traffic on it. You’d be getting gradual alerts as this happened—not just dysfunctional alerts as pieces of your infrastructure broke … And if fewer machines get infected because it’s spreading more slowly, that’s also good from a cleanup point of view.

Is it difficult to maintain an environment with throttling controls on every PC and server?

It shouldn’t be difficult; it’s more just a question of deployment. What other piece of software is deployed on 100 percent of machines? Maybe the operating system, but there are many flavors of that. Or antivirus. If you wanted to roll this out, you’d have a hard time getting full coverage … That’s why people have started talking about putting it in networking equipment. … The Achilles heel of all this is not getting it on every computer.

What are organizations doing in the meantime?

The best defense we have at the moment is patching, but patching is arduous … and it’s hard work to roll out patches in a large enterprise, particularly if you have an IT policy that’s relatively relaxed. That means something like PrimaryResponse, which deals with things that are unknown, helps, as does something like virus throttling, which gives you a little bit of network defense. So do automatic updates.

Are there just too many ways for machines to connect to each other today to realistically slow the spread of malware?

There are too many ways to connect to machines. What I think is, security has to be fitted around people’s usage, otherwise you get the tail wagging the dog … One example would be that you have to zip up an executable if you want to deal with it these days. And the reason you can’t send executables is because of viruses. Well, that solution affects productivity.

Again, it’s the binary thing: you can only accept zips or not accept zips; you can’t deal with the real problem. But the world will become a better place once we have some better generic controls on what programs can do.

Related Articles

HP Throttles Viruses, Cracks OpenView Identity
http://esj.com/enterprise/article.aspx?EditorialsID=1223

Tips for Spyware Eradication
http://www.esj.com/Security/article.aspx?EditorialsID=1215

About the Author

Mathew Schwartz is a Contributing Editor for Enterprise Systems and writes its Security Strategies column, as well as being a long-time contributor to the company's print publications. Mr. Schwartz is also a freelance security and technology writer.
