Finding Security Holes in Your Web Applications
The dot-com boom’s legacy: buggy code. How do you find those bugs? Instead of trying to do code reviews with tools that were meant for developers, it's time to do them with tools meant for security analysts.
In June, the FTC slammed clothing retailer Guess Inc. for its poor Web site security, which resulted in a well-known attack disclosing sensitive, personal information about Guess clients. While the move was highly visible, Guess is hardly alone in harboring buggy code. In fact, according to the National Institute of Standards and Technology, software bugs cost businesses $60 billion per year. At least part of that, of course, is due to poorly written applications. Since most studies peg the number of bugs per 1,000 lines of code at 10-15, and with today’s applications running to many thousands of lines, all of the vulnerabilities add up.
What’s an organization to do with applications already in production that need their bugs excised? To talk about the state of Web application security and development today, and what organizations can do to strengthen it, Security Strategies spoke with Jeff Williams, CEO of Aspect Security, a firm that tackles application security.
How buggy are today’s Web applications?
We find major security flaws in just about every Web application we look at. This is a really huge issue for companies. The basic problem is that people have all this code, it got written during the dot-com boom, probably by some guys in a garage or in college, and now they have tons of lines of code and they don't know whether they have vulnerabilities or not.
Did the need to have code, any code, outweigh applying strong enough security?
No question that the code written during the boom, and most code, was not written by people who know about security. Most developers come out of school without ever having heard word one about security. As a result, we have an enormous problem in this country. There are hundreds of thousands of developers, yet very few know about security.
Did the Internet change all that?
Even years ago, there was a good chance that your code wouldn’t be used on a network. But yes, there's been a change; if it's not network-enabled today, someone's going to write a Web front end for it eventually. People always assumed "it's an internal system, it's never going to be connected to the Internet."
So even little one-off applications, like time entry, are a threat?
It's your online timecard application, or your human resources system, or your internal trouble ticket system. All code now is essentially accessible via the Internet. And the only reason that's important is that now there's a whole new range of attacks that weren't in the assumptions when that code got designed and written. So it's not built to withstand that level of heat.
How does a company know the extent of the problem?
Well, if you're a program manager, or a C-level in some company out there, you have an interesting problem. You’ve got all this code and you're not sure if it's strong enough to connect to the Internet, and your developers don't know either. Meanwhile, the security firms are jumping up and down and saying you have all this code full of holes, but they can't point to any, because you have all this custom code.
So how do companies assess their systems?
Most companies I talk to are relying on vulnerability scanners to take care of this for them. There are some application-layer tools that will scan for flaws in applications, but here’s the problem: scanners are pretty good at finding known flaws, because they have a database of signatures they search against. But most of this custom code is yours. It's a one-off, so the scanners only find the flaws that are common across all applications. Even if they're really simple flaws, if the scanner doesn't know to look for them …
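The mismatch described here can be sketched in a few lines. This is a hypothetical, deliberately simplified model of a signature-based scanner (the signature database and the code sample are invented for illustration), showing why a one-off flaw in custom code slips past it:

```python
# Hypothetical sketch: a signature-based scanner can only flag patterns
# that are already in its database of known bugs.
KNOWN_SIGNATURES = {
    "unsafe_eval": "eval(request",        # known-dangerous pattern
    "cmd_injection": "os.system(request",
}

def scan(source: str) -> list[str]:
    """Return the names of any known flaws found in the source text."""
    return [name for name, pattern in KNOWN_SIGNATURES.items()
            if pattern in source]

# A custom, one-off flaw the database has never seen goes undetected,
# even though it is a textbook SQL injection:
custom_code = 'query = "SELECT * FROM users WHERE id=" + user_input'
print(scan(custom_code))                # -> []  (invisible to the scanner)
print(scan('eval(request.args["x"])'))  # -> ['unsafe_eval']
```

Real scanners are far more sophisticated than this, but the limitation is the same: they can only match against what they already know.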
The good thing is the scanners are pretty fast, and you can basically use your existing security staff to run the scanner tools and get a decent idea of where the flaws are.
What do companies do beyond vulnerability scanning?
The companies that have critical applications out there, the financial services firms and so on, are relying on penetration testing to find the flaws. And penetration testing is good, because it's custom, and if there are easy, known flaws, penetration testing will find [them] eventually. The downside is that penetration testing is only as good as the team doing it, and it also doesn't examine the entire site. I think of it as throwing rocks at a building from the outside; you don't know if you're doing damage on the inside. These applications have a very broad perimeter, they expose an awful lot, so it's often hard to know if you’ve damaged something.
So what are all the options a company has for rooting out bugs in Web applications?
Let's say you're charged with finding all the SQL flaws in an application. You could scan them all … the scanner would try all the known flaws against the application and see if any hit. The problem is, unless the site blows up, the scanner isn't going to know it found one, and there's a lot of potential for false alarm[s]. It's real spotty. So what you get is a list of possible SQL injections, and then you have to go through and check them manually. It’s very time-consuming to do that checking.
Penetration testing—you could do that to test SQL problems, but it's very time-consuming if you have to go to every form in the Web site and see if it can do SQL injection. And in a lot of cases, you might not even know it works, because the code may or may not execute, because there's no way to find out—you don't know if you blew it up or not; you’re typing “delete *” in a table.
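For readers unfamiliar with the flaw class under discussion, here is a minimal, self-contained sketch (Python and SQLite chosen purely for illustration; the table and names are invented, not from the interview) contrasting a string-built query with a parameterized one:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

def lookup_unsafe(name: str):
    # Vulnerable: attacker input is spliced directly into the SQL text.
    return conn.execute(
        f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def lookup_safe(name: str):
    # Safe: the driver binds the value, so input can't change the query.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"           # classic injection string
print(lookup_unsafe(payload))     # matches every row: the flaw fires
print(lookup_safe(payload))       # no match: input is treated as data
```

The unsafe version turns the query into `... WHERE name = '' OR '1'='1'`, which is true for every row; the parameterized version treats the whole payload as an ordinary string value.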
Then you get to line-by-line code review, which is a fairly slow process, and you’re not going to know if a database query, say, got scrubbed anywhere along its path. There's a place where the application takes in parameters, and then there could be 30 or 40 places where there could be calls, and in order to find out whether a SQL injection would be successful, you'd have to trace through all of those. Automated code tools have the same problem: they're more like string searching. They'll find all the database transactions for you, but they're not going to trace the code all the way through.
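The tracing problem described here can be illustrated with a toy example (all function names are hypothetical): the same query-building call is reached by two data paths, and only following the data backwards reveals which path scrubs the input:

```python
def sanitize(value: str) -> str:
    # One path scrubs the input on the way in (crudely: drop quotes)...
    return value.replace("'", "")

def format_owner(value: str) -> str:
    # ...another path merely reformats it, leaving the quote intact.
    return value.strip()

def build_query(value: str) -> str:
    # A string search finds this call, but can't tell the paths apart.
    return f"SELECT * FROM tickets WHERE owner = '{value}'"

user_input = "bob' OR '1'='1"
safe_query = build_query(sanitize(user_input))        # scrubbed upstream
unsafe_query = build_query(format_owner(user_input))  # still injectable

print(safe_query)
print(unsafe_query)
```

A tool that only greps for `execute` or `SELECT` flags both call sites identically; deciding which one is a real flaw requires tracing the value back through `sanitize` versus `format_owner`, which is exactly the backwards walk described next.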
The last approach, which I recommend, is to use some kind of tool that helps the security analyst browse through the code. The way we do it, you can find all the potential injection problems in 30 seconds. Then the tool allows you to work backwards through the code to see whether something represents a true SQL injection problem or not.
Instead of trying to do code reviews with tools that were meant for developers, it's time to do them with tools meant for security analysts.
So that semi-automated tool is Aspect’s?
Yes. It’s a tool-assisted human review of the source code. What we try to do is take the best of all these approaches and put them together, because you can never take the human out of the loop; we're decades away from being able to fully automate looking at source code.
Where can companies start?
If a company was focusing on the OWASP [Open Web Application Security Project] "Top Ten" list of the most important Web application vulnerabilities, they’d be so far ahead of their competition. (For details, visit http://www.owasp.org/.)
Mathew Schwartz is a Contributing Editor for Enterprise Systems and writes its Security Strategies column, as well as being a long-time contributor to the company's print publications. Mr. Schwartz is also a freelance security and technology writer.