
Which Bugs Will Bite? Vulnerability Predictions for 2004

Heterogeneous attacks, a voice-over-IP security shakedown, and prime-time Web services easing the security burden: predictions from an eminent security researcher for 2004 and beyond.

What’s the state of information security for practitioners today? A 2003 Greatest Hits list would surely focus on the overwhelming number of viruses and vulnerabilities security managers must keep track of, while also noting the increasing resources for doing so. In fact, most research points to security spending continuing to increase at least through the near future.

What, however, of threats to come? To forecast those challenges, Security Strategies spoke with Gerhard Eschelbeck, chief technology officer and vice president of engineering for security audit and vulnerability management firm Qualys. Eschelbeck, on the team of experts that creates the annual SANS top 20 list of vulnerabilities, last year tapped the data Qualys generates from watching customers’ networks to create the RV10 list, a constantly updated, real-time list of the top 10 vulnerabilities.

The thinking: every year, half of the vulnerabilities remain and half go away, so why not peg the new ones as soon as possible? Eschelbeck’s RV10 does just that, and he extrapolates from its results to the future.

Are your predictions for 2004 based on what you saw on the RV10?

Yes … six months after the initial release of the RV10, we did a follow-up on what’s happened since and on some predictions for the upcoming year.

What was big in 2003?

A significant increase in RPC-based [remote-procedure-call-based] vulnerabilities. For September, October, [and] November, Microsoft RPC vulnerabilities accounted for about 30 percent of all vulnerabilities. That number is quite troublesome, because RPCs are not just Microsoft-based—though so far this year what we've mostly seen are Microsoft-based [versions].

What accounts for so many RPC vulnerabilities?

RPCs are pretty much the heart of client/server computing, and pretty much every OS [operating system] has RPCs—from Windows to Unix to Linux. If you print a document under Windows, you use RPCs; if you share a drive on Unix through NFS [Network File System], you use RPCs explicitly. They're really at the core of client/server computing, so I really do expect some significant findings of RPC security holes in 2004.
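To make the remote-call pattern concrete, here is a minimal sketch using Python's standard-library xmlrpc module; the port (8000) and the trivial add() procedure are illustrative choices, not anything named in the interview. A procedure registered on a server can be invoked from another machine as if it were a local function call—exactly the convenience, and the exposure, Eschelbeck describes.

```python
# Minimal RPC sketch (illustrative only): a server exposes one procedure
# over the network using Python's standard-library XML-RPC support.
# The port (8000) and the add() procedure are arbitrary choices for the example.
from xmlrpc.server import SimpleXMLRPCServer

def add(a, b):
    """A trivial procedure that remote clients can invoke."""
    return a + b

server = SimpleXMLRPCServer(("0.0.0.0", 8000))
server.register_function(add, "add")
server.serve_forever()  # any client that can reach port 8000 may now call add()
```

A client on any operating system can then run `xmlrpc.client.ServerProxy("http://server:8000/").add(2, 3)` and get 5 back over the wire—the cross-platform convenience, and the exposed attack surface, the interview describes.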

We'll most likely also see heterogeneous attacks exploiting RPC vulnerabilities next year. Unlike Blaster, which was homogeneous, a worm exploiting RPCs could target multiple operating systems, because RPCs are cross-platform.

Are cross-platform RPC attacks a write-once, attack-multiple-times proposition?

There are some subtle differences in how you form the request [across operating systems], but it's not like apples and oranges. It’s easier than that.

So the number of RPC vulnerabilities will rise?

I do expect that by summer next year, 50 percent of the RV10 vulnerabilities will be RPC-based.

So half of the 10 worst vulnerabilities. Why are RPC vulnerabilities suddenly a problem now?

RPCs are widely used, but they're always a little on the edge because they're so powerful. And RPCs have been a troublesome spot for five or six years; security researchers are always finding vulnerabilities there. Again, this covers several different operating systems. I wouldn't be surprised if vendors are coming out in a month or two with big, big patches covering RPC vulnerabilities; I'm sure some big vendors are doing research now on RPCs.

Are RPCs so vulnerable because they’re inherently a connection from a system to something external to it?

The intention of RPCs when they were developed was to make it easy to build client/server applications—to make it easy for a Windows machine to talk to a Unix machine and say, "I want to execute some code on the Unix machine." That’s what they were made for: ease of use. That [all] has some implications from a security perspective.

What can companies do now to mitigate future RPC vulnerabilities?

Obviously we have patches available for some that are known today, and I think the most critical [thing] we've seen with worms like Blaster was that we had good protection on the outside—Blaster wasn't a big problem on the Internet—but not on the inside. Companies put a tremendous amount of effort into the perimeter and patching their systems quickly, but on the inside there’s a lot of work to be done to bring internal networks up to the same quality as the perimeter.

What can companies do to improve internal security?

There are so many entry points to a company's network—VPNs [and] wireless access points, just to name two—and those are the entry points through which Blaster brought down networks internally. Blaster was mostly an internal issue. For example, many companies have told me they were infected by a laptop that picked up the worm elsewhere, was brought into the office the next day, and was connected; from there it spread through the internal network. So I recommend companies make the same effort securing their internal networks as they do their external networks. That means vulnerability scanning … and also taking advantage of the respective patches, for the network, laptops, and so on.
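As a toy version of the kind of internal audit he recommends, the sketch below probes a subnet for hosts answering on TCP port 135—the Windows RPC endpoint mapper that Blaster (MS03-026) exploited. The 192.168.1.0/24 subnet and the timeout are assumptions for illustration; a real vulnerability scanner verifies actual patch levels rather than just open ports.

```python
# Toy internal-exposure check (a sketch, not a vulnerability scanner):
# probe a subnet for hosts answering on TCP 135, the Windows RPC endpoint
# mapper that Blaster (MS03-026) exploited. The 192.168.1.0/24 subnet and
# 0.5-second timeout are assumptions; real scanners verify patch levels,
# not just open ports.
import socket
from ipaddress import ip_network

RPC_PORT = 135  # Windows RPC endpoint mapper

def port_open(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for addr in ip_network("192.168.1.0/24").hosts():
    if port_open(str(addr), RPC_PORT):
        print(f"{addr}: RPC endpoint mapper reachable -- verify patch level")
```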

In 2004 will it be easier for security managers to cope with the endless stream of vulnerabilities?

Well … users need the security industry to help them prioritize issues, because every week you get 10 or 15 new vulnerabilities, and it's really hard for someone to keep up with all the information they're receiving. And … that's why we're so focused on keeping the RV10 up to date.

What other predictions do you have for 2004?

From a vulnerability standpoint, next year I think we'll see some of the first issues with voice over IP vulnerabilities.

Is that because voice over IP technology hasn’t been given a security shakedown yet?

Very true; if you look at the history, it's always been the case that new technologies only receive scrutiny once they reach prevalence. Take Microsoft—the reason so many vulnerabilities are found is that there are so many security researchers looking at it now. The same with voice over IP: it's a very important technology now gaining prominence … and from a security perspective the research has to be done as well, so we at least achieve the same level of quality and security that we have with today's phones.

Are organizations getting better at designing in security testing?

We have to get security as a quality measure into the software development process; that's something I hope we will accomplish at some point. In the past it was always about features and functionality, and today it's all about making security part of the development process—and not only development, but testing as well, so a product isn’t shipped if security measures aren't met.
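One small, hedged example of what "security as a gate" can look like in a test pipeline: a script that fails the build while C sources still call functions with a long history of buffer-overflow bugs. The banned list, the src/ directory, and the grep-style matching are all assumptions for the sketch; production gates use real static analysis rather than pattern matching.

```python
# Toy "security gate" for a build pipeline (a sketch of the idea only):
# fail the build while C sources still call functions with a long history
# of buffer-overflow bugs. The banned list, the src/ directory, and the
# grep-style matching are assumptions; production gates use real static
# analysis rather than pattern matching.
import pathlib
import re
import sys

BANNED = re.compile(r"\b(gets|strcpy|strcat|sprintf)\s*\(")

def scan(root: str = "src") -> list[str]:
    """Return every line in *.c files under root that calls a banned function."""
    hits = []
    for path in pathlib.Path(root).rglob("*.c"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if BANNED.search(line):
                hits.append(f"{path}:{lineno}: {line.strip()}")
    return hits

if __name__ == "__main__":
    problems = scan()
    print("\n".join(problems))
    sys.exit(1 if problems else 0)  # non-zero exit blocks the release step
```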

Could many of today’s vulnerabilities have been fixed if spotted during development?

All security vulnerabilities are the result of past years when the focus was on features and functionality, not on security. And there's a mind shift now. Many companies are putting intense effort into improving their products [and] educating their development teams to put security at the front of their development process, and that has to continue.

What else do you foresee for 2004?

More on the software side, I see a continued evolution of applications moving toward Web services, for many reasons. We as an industry have realized that the distributed client/server model we've been using in the past has many challenges, and Web services take away some of the burden of maintaining infrastructure and systems.

Will Web services make some application vulnerability problems disappear?

I do see a significant trend here. The adoption of Web services will have a [good] pace in 2004. It's like the utility model—you use water, electricity, [and the] phone without worrying how they come to your house. And it's the same thing here: you don't worry how the services are being delivered. So … it takes away a lot of the significance of today's patch process, where you have to patch 50,000 or 100,000 computers. In the Web services model, the application is patched in one location, and the patch is available to everyone because of the shared model.

So the centralized nature of Web services could make life easier for companies?

Because of the provider's centralized model, you can put a much different emphasis on security than when you have to handle it as part of your own business on a regular basis.

To see the RV10 list, visit http://www.qualys.com/services/threats/current.html

About the Author

Mathew Schwartz is a Contributing Editor for Enterprise Systems and writes its Security Strategies column, as well as being a long-time contributor to the company's print publications. Mr. Schwartz is also a freelance security and technology writer.
