Q&A: The Shift in Data Breaches
Breaches are targeting applications because that's where the data is.
With anti-virus and firewall protection in place at most enterprises, attackers have changed their strategy; they now go after the applications themselves, which offer a gateway to the data IT is protecting. Applications can be an easy target because developers have historically not focused on security. To learn more about this dilemma, we contacted Chris Eng, senior director of security research with Veracode (a cloud-based application risk management firm). Chris explains how data breach attacks are shifting more to the software side -- security's weakest link. He also shares how enterprises can shift their focus to application security and testing.
Enterprise Systems: How are data breaches shifting, and what's driving this trend?
Chris Eng: Breaches are overwhelmingly shifting to the application layer because applications are directly connected to the data that attackers are after. A fundamental rule of security is that a system is only as strong as its weakest link. No matter how tightly the networks are locked down, your applications have to remain accessible to end users, customers, partners, and so on. An attacker just needs to find one coding flaw or logic error that he can exploit to gain access to valuable data. That data may be credit card numbers, personal health records, or even proprietary company information. For example, in the Aurora attacks that took place earlier this year, the attackers were interested in gaining access to source code.
Why are applications now the weakest link?
What’s really magnified the application security problem is the fact that the attack surface has grown significantly over the past five to ten years. These days, companies have hundreds -- sometimes thousands -- of applications of varying business criticality and exposure. Not only that, it’s nearly impossible to find someone who truly understands the application portfolio.
The other contributor to the increasing attack surface is the disappearance of the perimeter. That’s not to say that a logical network perimeter doesn’t exist; the firewalls haven’t gone away. However, client-side attacks are becoming more common -- so-called “drive-by” attacks triggered when a user visits a malicious web site that exploits a vulnerability in the web browser or a browser plug-in. There’s no need to breach the network perimeter -- you simply attack the users on the internal network and use those compromised workstations as a launch point for further attacks.
In addition to web browsers, there’s also been a surge in mobile devices over the past few years. Smartphones are being allowed to connect to corporate networks, making them another attractive target for compromise. People download and run applications on their phones without considering whether those applications are trustworthy.
Aren't current anti-virus/protection software products adequate to address these breaches?
Anti-virus software is better at detecting known risks than unknown risks. Once a virus or a piece of malware is identified, AV vendors write signatures for that malware and from that point forward, the product is theoretically able to detect it. The challenge is that AV products can be easily fooled by anyone with a basic understanding of how they work. Some AV products use heuristics in addition to signatures, essentially looking for certain behaviors rather than certain bits. This can increase effectiveness in some situations and may help protect against some of the drive-by attacks I mentioned earlier.
In terms of detecting exploitable vulnerabilities in your own custom software, AV provides no benefit. It’s simply not going to detect coding errors made by developers.
Has IT shifted its focus to application security (and testing), and if not, why not?
Traditionally, IT departments have not been very involved with application security. They are responsible for maintaining the supporting infrastructure, which may include applying system patches, maintaining SSL certificates, or configuring network equipment such as load balancers and firewalls. Most application security initiatives are driven either by a central security group or by an executive sponsor such as a CSO or CISO.
How has IT been addressing this problem?
IT departments may be introducing equipment targeted at detecting or preventing application-layer vulnerabilities. For example, there are devices called Web application firewalls (WAFs) that attempt to block common web attacks such as SQL injection or cross-site scripting, among others. The problem is that most companies configure their WAFs to detect but not block attacks, because they’re afraid of impacting legitimate end users by mistake. In practice, WAFs are best applied as a Band-Aid solution -- you can configure them to block vulnerabilities that you already know about, buying you more time to find the root cause and remediate the offending code.
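To make the detect-versus-block trade-off concrete, here is a minimal, hypothetical sketch of WAF-style signature matching in Python. The signature patterns and the `inspect` function are illustrative inventions, far simpler than real rule sets such as those shipped with production WAFs; the point is only to show why many deployments run in detect-only mode, where a matching request is logged but still allowed through.

```python
import re

# Hypothetical, highly simplified signatures for two common web attacks.
# Real WAF rule sets are far more extensive and carefully tuned.
SIGNATURES = {
    "sql_injection": re.compile(r"(?i)('\s*or\s*'?\d|union\s+select)"),
    "xss": re.compile(r"(?i)(<script\b|onerror\s*=)"),
}

def inspect(param_value, block_mode=False):
    """Return (verdict, matched_rules) for one request parameter.

    In detect-only mode (block_mode=False) the request passes through
    even when rules match, mirroring how many WAFs are deployed to
    avoid accidentally blocking legitimate users."""
    hits = [name for name, rx in SIGNATURES.items() if rx.search(param_value)]
    if hits and block_mode:
        return "blocked", hits
    return ("logged" if hits else "allowed"), hits

print(inspect("1' OR '1'='1"))                     # detect-only: logged, not stopped
print(inspect("<script>alert(1)</script>", True))  # block mode: stopped
print(inspect("hello"))                            # benign input: allowed
```

The false-positive risk Chris describes lives in those regexes: a legitimate value that happens to match a signature (say, a customer named O'Brien followed by a digit) gets flagged too, which is exactly why operators hesitate to flip `block_mode` on.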
There are other products targeted at IT departments to secure applications, but most of them at their core are anomaly detectors. This means that they need to be constantly tuned or baselined to understand what “normal” traffic looks like, otherwise they don’t know what constitutes an anomaly. This is similar to the process of training an IDS system. The difference is that applications evolve at a more rapid pace, so what constitutes “normal” web application traffic tends to change frequently.
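The baselining process Chris mentions can be sketched with a toy anomaly detector. This hypothetical example learns the typical length of one request parameter during a training window and flags large outliers; real products model many more features, but the core problem is the same: the moment the application changes, "normal" changes, and the baseline must be retrained.

```python
import statistics

class ParamLengthBaseline:
    """Toy anomaly detector: learns typical lengths of a request
    parameter, then flags values far outside that baseline.
    A hypothetical illustration of baselining, not a real product."""

    def __init__(self, threshold_sigmas=3.0):
        self.samples = []
        self.threshold = threshold_sigmas

    def train(self, value):
        # Record observed lengths during the training window.
        self.samples.append(len(value))

    def is_anomalous(self, value):
        mean = statistics.mean(self.samples)
        # Guard against a zero standard deviation in uniform training data.
        stdev = statistics.stdev(self.samples) or 1.0
        return abs(len(value) - mean) > self.threshold * stdev

baseline = ParamLengthBaseline()
for v in ["alice", "bob42", "carol", "dave", "erin8"]:
    baseline.train(v)

print(baseline.is_anomalous("frank"))    # typical length: not anomalous
print(baseline.is_anomalous("x" * 500))  # oversized payload: anomalous
```

Note what happens if the application is redeployed with a new field that legitimately accepts long values: every real user suddenly looks anomalous until the baseline is rebuilt, which is the tuning burden described above.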
The irony about adding equipment to protect applications is that doing so just increases the attack surface. For example, now the attacker can find and exploit vulnerabilities in the WAF itself.
What best practices can you recommend for application security and testing?
The biggest mistake is trying to bolt on security testing without integrating it into the development process. Testing is just the final phase in a robust secure development life cycle (SDLC) that ideally incorporates developer education, design reviews, and threat modeling in addition to code reviews, automated security scans, and penetration testing. There are no silver bullets; each component of the SDLC fills a niche, and each testing approach has strengths and weaknesses.
There is also a cultural aspect. In most enterprises, MBOs for developers boil down to “write functional code as quickly as possible.” Product specs rarely outline specific security requirements that must be met. QA efforts focus on basic positive and negative test cases but not adversarial testing. Ultimately, we need to do a better job of teaching developers about secure coding practices, and we need to hold them individually accountable for producing secure code.
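As one small example of the secure coding practices developers need to learn, consider SQL injection, mentioned earlier. The sketch below (using Python's standard `sqlite3` module; the table and function names are invented for illustration) contrasts the classic mistake of splicing user input into a query string with the parameterized alternative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def find_user_unsafe(name):
    # VULNERABLE: attacker-controlled input is spliced into the SQL string,
    # so crafted input can rewrite the query's logic.
    query = "SELECT secret FROM users WHERE name = '%s'" % name
    return conn.execute(query).fetchall()

def find_user_safe(name):
    # SAFE: the driver binds the parameter; input is always treated as data,
    # never as SQL syntax.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "x' OR '1'='1"
print(find_user_unsafe(payload))  # the OR clause matches every row: data leaks
print(find_user_safe(payload))    # no user is literally named the payload: empty
```

The fix is a one-line change, which is precisely why developer education pays off: a developer who knows to reach for parameterized queries by default never writes the vulnerable version in the first place.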
Finally, it’s important to realize the risks associated with the software supply chain. Your organization could theoretically write perfectly secure code but still be vulnerable to security flaws in third-party libraries or other components that you integrate but didn’t write yourself. Off-the-shelf software that you’ve purchased, such as your billing system or your HR portal, also increases your exposure to security breaches. A comprehensive application security program should take third-party risk seriously, particularly as software development becomes increasingly decentralized.
What products or services does Veracode have that address the issues we've discussed?
Veracode’s cloud-based application risk management services platform provides several application security testing services. Our core service is static binary analysis, which you can think of as an automated code review that doesn’t actually require the source code. We can verify the security of every application developed internally or supplied by a third party. We also provide automated dynamic analysis (i.e., web application scanning) and manual penetration testing.
All of these services fit into the testing and verification phase of the SDLC or the acceptance phase for third-party software. For a highly critical application, you’d want to apply as many different testing methodologies as possible to ensure the best possible coverage. For an internally facing web application that doesn’t handle personal information, static analysis alone is probably sufficient.
Finally, our cloud-based services include developer training and certification in secure development practices.
James E. Powell is the former editorial director of Enterprise Strategies (esj.com).