Regulations, Fear Driving More-Secure Code Development
To counter security threats, developers can reverse-engineer their products, or take a less expensive and more effective approach
Are your applications secure? According to Gartner, 70 percent of attacks today target the application layer—not the network or operating system. While insecure applications can be reverse-engineered to improve their security, a less expensive and more effective option is simply to ensure developers write secure code.
To learn more, we spoke with Dr. Herbert Thompson, an applied mathematician who is the chief security strategist of application security services provider Security Innovation in Wilmington, Mass., and author of a new reference book for developers, The Software Vulnerability Guide.
When in the development process does security testing typically occur?
It’s interesting because about two years ago, whenever people would ask for security in general, or security audits, it would be right at the end of the development lifecycle. It was, “Hey we’re about to go live in two weeks. Can you check it out to make sure we don’t have any problems?” And that’s a very expensive way of doing it, because when you find problems, there may be architectural [repercussions].
Now, however, many companies are trying to move security much earlier in the development lifecycle. For example, we’re seeing a lot more code reviews during development, and a lot more design reviews. We’re seeing a ton of demand for developer training—going in and teaching courses to developers on how to make their applications more secure. So much of that—getting developers to write secure code—is an educational issue.
Beyond training developers to write more-secure code, are organizations still bringing in experts to assist with code reviews?
Yes. … Over the last year, the growth has been huge in code reviews, and even in design reviews. So it’s a definite move by the industry, that myself and many others are seeing, where people are starting to take security as a very serious, sustained, software-development-lifecycle problem.
What’s finally caused companies to view security this way?
Well, where was the value argument three or five years ago? If you’re a manager at a large financial organization, and people are telling you about security, but you also see the price tag associated with it—external audits, and I have to get my people trained—one of the biggest challenges was, how do I justify that I’m not just spending money but spending on something that’s maintaining revenue?
But a lot of the recent legislation has made the value argument very clear to companies. It’s crazy—just California Senate Bill 1386 alone has caused companies to disclose potential breaches that would never have been disclosed three years ago. It would have been suicide for the company to come out and say, “Hey you know, guys, I think we may have exposed 40 million credit cards, but we think it will be okay.”
Now it’s “We have to disclose it to the country,” and that’s changed the argument for preventive security measures substantially, because the risk is so much greater. Maybe the risk of us getting [hacked] is about the same, but maybe the costs are so much greater.
Are regulations driving more-secure code?
They’re certainly driving fear, and fear drives budgets, and budgets are leaning towards almost this panic type of thing: I’ve just seen my peers in my industry be decimated because they’ve had to disclose a vulnerability or a penetration.
So there’s fear on the part of the CEO, which is always bad, and he passes that down to the CIO and the CSO, and then the question is, how can we be certain this doesn’t happen to us? If this was three or four years ago, the response would be that we need better network defenses—if we just added seven firewalls, instead of five firewalls, we’d be in better shape. But the problem is that more attackers are turning to application attacks.
How widespread are application-layer attacks?
Gartner’s recent estimate is that 70 percent of attacks are coming against the application layer. If you’re a network-defense type of CIO, that should be pretty interesting to you … [because] the only way to shore up those systems is to write the application with security built in. In a big, high-level picture sort of way, it’s about developers thinking “abuse case,” instead of just “use case.” It’s thinking, as I deploy this new feature, not about “Which enabling things will I allow my customer to do?” but rather, “How can this be abused by someone who wants to?” That’s a very different mindset.
One of the things we do for our customers when they’ve written an application, or have written an application and are about to go live, is to write a detailed threat model—here’s a list [of potential abuse]. …
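The “abuse case” mindset Thompson describes can be sketched in code. The example below is purely illustrative (the report-download feature, directory path, and function name are all hypothetical): the use case is a customer fetching their own report, while the abuse case is a caller supplying a crafted name such as `../../etc/passwd` to read arbitrary files (path traversal). A developer thinking only in use cases ships the join; one thinking in abuse cases adds the check.

```python
import os

# Hypothetical report-download feature. The assumed report location is
# illustrative only.
REPORT_DIR = "/srv/reports"

def safe_report_path(name: str) -> str:
    """Resolve a report name, rejecting path-traversal attempts."""
    candidate = os.path.normpath(os.path.join(REPORT_DIR, name))
    # Abuse-case check: after normalization, the resolved path must
    # still live under REPORT_DIR, or we refuse to serve it.
    if not candidate.startswith(REPORT_DIR + os.sep):
        raise ValueError(f"rejected suspicious report name: {name!r}")
    return candidate

# Use case: a legitimate name resolves inside the report directory.
print(safe_report_path("q3-summary.pdf"))

# Abuse case: a traversal attempt is refused instead of served.
try:
    safe_report_path("../../etc/passwd")
except ValueError as e:
    print(e)
```

The point is not this particular check but the habit: for each input, ask what a hostile caller could make it do, not just what a customer is meant to do with it.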
Given the threat models you’ve created, ultimately where do most bugs creep in during development?
Most security bugs are in the extra side-effects of applications performing their functions. So I provide input A, the application produces result B, but as it’s producing B, it also does C, D, E, and F—unanticipated side effects of getting the job done. For example, a side effect might be that I write your data to a temporary file on the side. And it looks okay, but on the back end I’ve exposed your information to the file system.
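The temporary-file side effect Thompson mentions can be made concrete with a short sketch (the file name and data here are invented for illustration). Both versions below “get the job done,” but the careless one writes the data to a predictable path with whatever permissions the process umask allows—typically readable by other local users—while Python’s `tempfile` module creates the file with an unpredictable name and owner-only (0600) permissions.

```python
import os
import stat
import tempfile

secret = "card=4111111111111111"  # stand-in for sensitive user data

# Careless: predictable path, created with the process umask, and
# therefore often readable by other local users.
careless_path = os.path.join(tempfile.gettempdir(), "scratch.txt")
with open(careless_path, "w") as f:
    f.write(secret)

# Safer: tempfile gives an unpredictable name and 0o600 permissions,
# so only the owning user can read the scratch data.
with tempfile.NamedTemporaryFile("w", delete=False) as f:
    f.write(secret)
    safe_path = f.name

# Inspect the permission bits, then clean up the side effect either way.
modes = {}
for path in (careless_path, safe_path):
    modes[path] = stat.S_IMODE(os.stat(path).st_mode)
    print(path, oct(modes[path]))
    os.remove(path)
```

Either way the feature appears to work; the difference only shows up when an attacker goes looking for the side effect on the file system.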
About the Author
Mathew Schwartz is a Contributing Editor for Enterprise Systems and writes its Security Strategies column, as well as being a long-time contributor to the company's print publications. Mr. Schwartz is also a freelance security and technology writer.