In-Depth
Web 2.0 Pushes Need for Defensive Application Development
The Web 2.0 paradigm shift is fraught with both promise and danger
The Web 2.0 paradigm shift is fraught with both promise and danger. It gives users easy access to highly interactive, highly configurable, and highly collaborative applications. But there's also a dark side: security vulnerabilities, particularly in cases where internal applications are combined with (or in turn consume) external sources. In many cases, external controls simply aren't (or won't be) available to provide acceptable security assurances.
What should a Web 2.0-savvy organization do? Market research firm Gartner Inc. advises the application development equivalent of defensive driving: organizations should have developers build firewall-like features into their applications that enable them to detect attacks or abuse and terminate malicious or damaging sessions.
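In practice, such an in-application guard might look something like the following sketch. (The class name, scoring rules, and threshold here are illustrative assumptions, not a design Gartner prescribes.)

```typescript
// Illustrative sketch only: names, thresholds, and rules are assumptions,
// not a Gartner-specified design.

type Verdict = "allow" | "terminate";

interface SessionState {
  score: number;        // accumulated suspicion score
  terminated: boolean;  // set once the session has been killed
}

class AbuseDetector {
  private sessions = new Map<string, SessionState>();

  // Hypothetical scoring rules: each matched pattern adds to the score.
  private rules: Array<{ pattern: RegExp; weight: number; label: string }> = [
    { pattern: /<script\b/i,             weight: 5, label: "possible XSS payload" },
    { pattern: /('|--|;)\s*(or|and)\b/i, weight: 5, label: "possible SQL injection" },
    { pattern: /\.\.\//,                 weight: 3, label: "path traversal attempt" },
  ];

  constructor(private threshold = 10) {}

  // Inspect one request's input; decide whether the session may continue.
  inspect(sessionId: string, input: string): Verdict {
    const state = this.sessions.get(sessionId) ?? { score: 0, terminated: false };
    if (state.terminated) return "terminate";

    for (const rule of this.rules) {
      if (rule.pattern.test(input)) {
        state.score += rule.weight;
        console.warn(`[abuse-detector] session=${sessionId}: ${rule.label} (score=${state.score})`);
      }
    }
    this.sessions.set(sessionId, state);

    if (state.score >= this.threshold) {
      state.terminated = true; // "terminate malicious or damaging sessions"
      return "terminate";
    }
    return "allow";
  }
}

// Usage: the application calls inspect() before handling each request.
const detector = new AbuseDetector();
console.log(detector.inspect("sess-1", "name=alice"));                 // allow
console.log(detector.inspect("sess-1", "q=' OR 1=1 --"));              // allow, score rises
console.log(detector.inspect("sess-1", "q=<script>steal()</script>")); // terminate
```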
The firm also advises enterprises to take defensive Web 2.0 application development further by incorporating features that detect code and license tampering, along with self-repair features that can recover from tampering or other malicious activities.
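A tamper-detection feature with self-repair could be as simple as verifying a known-good digest at startup and restoring a pristine copy on mismatch. The sketch below is one hedged interpretation of that idea; the file paths and reference digest are hypothetical placeholders.

```typescript
// Illustrative tamper check, not a vendor implementation. File paths and the
// reference digest are placeholders (this one is the well-known SHA-256 of "").
import { createHash } from "crypto";
import { readFileSync, copyFileSync } from "fs";

// Known-good SHA-256 digest of the protected module, recorded at build time.
const EXPECTED_DIGEST =
  "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855";

function digestOf(path: string): string {
  return createHash("sha256").update(readFileSync(path)).digest("hex");
}

// Detect tampering; "self-repair" by restoring a pristine copy on mismatch.
function verifyOrRepair(modulePath: string, backupPath: string): boolean {
  if (digestOf(modulePath) === EXPECTED_DIGEST) return true; // intact
  console.error(`[integrity] ${modulePath} failed verification; restoring`);
  copyFileSync(backupPath, modulePath); // recover from tampering
  return digestOf(modulePath) === EXPECTED_DIGEST;
}

// Usage at application startup (paths are hypothetical):
// if (!verifyOrRepair("dist/licensing.js", "backup/licensing.js")) process.exit(1);
```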
"The same Web 2.0 characteristics that enable creativity, productivity and collaboration also make the Web 2.0 ecosystem prone to successful attacks and theft," write Gartner analysts John Pescatore and Joseph Feiman. "The Web 2.0 global ecosystem increases the vulnerability of distributed software and exposes it to piracy and abuse, especially in places known for intellectual property … neglect."
There's a further wrinkle, according to Pescatore and Feiman: Web 2.0's usability -- which, from the perspective of many enterprise adopters, is one of its strongest selling points -- also opens it up to attack.
"User-friendly development technologies enable masses of individuals to become developers, while their secure application development expertise stays minimal," the pair points out. "This can lead to an explosion of defenseless applications that also serve as intermediaries for attacks on enterprises. With communal software ownership, hackers have easy and free access to attack and theft enablers. In such a world, reliance on external security controls, such as network security and identity and access management, alone is insufficient."
Pescatore and Feiman urge that business-critical software have security features built in. Over time, they predict, most enterprise IT organizations will require commercial software vendors to build security detection, protection, auditing, and self-healing features into their software packages and service-oriented software services.
The Gartner analysts also point out that many existing methods -- including network traffic analysis, which can be used to detect and prevent cross-site scripting (XSS) or SQL injection attacks -- are largely ineffective because they lack insight into application logic and context. As a result, Pescatore and Feiman argue, the decision to terminate malicious sessions based solely on the conclusions of network traffic analysis software (without endpoint intelligence integration) can result in false blocking that disrupts business operations.
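The distinction is easy to see in code. A context-free traffic filter must guess from raw bytes, while an in-application check knows which field it is validating and how the value will be used. The field rules and sample inputs below are invented for illustration.

```typescript
// Sketch contrasting context-free traffic inspection with an in-app check.
// The field rules and example inputs are illustrative assumptions.

// A network-style filter sees only raw bytes: any SQL-ish keyword is suspect.
function networkFilter(rawBody: string): boolean {
  return /\b(select|drop|insert|union)\b/i.test(rawBody); // true = block
}

// An in-application check knows which field it is validating and how the
// value will be used, so it can apply per-field rules instead.
const fieldRules: Record<string, RegExp> = {
  userId: /^\d+$/,             // numeric key: anything else is malicious
  comment: /^[\s\S]{0,2000}$/, // free text bound as a SQL parameter, so
                               // keywords in prose are harmless
};

function inAppCheck(field: string, value: string): boolean {
  const rule = fieldRules[field];
  return rule ? !rule.test(value) : true; // true = block
}

// A legitimate comment that merely *mentions* SQL:
const comment = "The outage happened when someone ran DROP TABLE by mistake.";
console.log(networkFilter(comment));         // true  -> false positive, user blocked
console.log(inAppCheck("comment", comment)); // false -> allowed; value is parameterized

// A genuinely malicious userId is still caught in-app:
console.log(inAppCheck("userId", "1 OR 1=1")); // true -> blocked
```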
"Although it is possible to tune intrusion prevention systems to eliminate false positives with older-style static applications, the more dynamic world of Web 2.0 makes such tuning difficult to impossible for many enterprises. Network traffic encryption protects data in transit, but once a data stream reaches its destination -- for example, an application server or a browser -- it must be decrypted so that the code is executable once again, and data readable. Hence, it becomes open to attacks once again."
Because Web 2.0 introduces client-side code, they warn, the opportunities for data misuse are greatly increased.
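A short sketch illustrates why: any check that runs in client-side code can be bypassed outright, so the server must re-validate everything it receives. The discount scenario and its limits here are invented for illustration.

```typescript
// Client-side check (runs in the browser; trivially skipped with curl or dev tools):
function clientValidateDiscount(discount: number): boolean {
  return discount >= 0 && discount <= 0.5;
}

// Server-side re-validation: the only check an attacker cannot skip.
function serverApplyDiscount(price: number, discount: number): number {
  if (!Number.isFinite(price) || price <= 0) throw new Error("invalid price");
  if (discount < 0 || discount > 0.5) throw new Error("invalid discount");
  return price * (1 - discount);
}

console.log(clientValidateDiscount(0.99));  // false in an honest browser...
console.log(serverApplyDiscount(100, 0.1)); // 90 -- legitimate request succeeds
// serverApplyDiscount(100, 0.99)           // ...and the server rejects it regardless
```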
"Enterprises, even today, cannot rely solely on network protection measures … such as firewalls or traffic encryption," Pescatore and Feiman write. "The situation will only worsen with more enterprises' applications and content exposed to a global audience equipped with sophisticated tools." They continue by asking -- rhetorically -- how it's possible to build firewalls around applications and content used in numerous third-party mashups.
"Individuals are exposed themselves, but they also expose to attacks those enterprises with which they have business interactions," they note. "Web 2.0 is an ecosystem of myriad online individuals who, typically, have limited or rudimentary protection. … That protection is significantly weaker than the typical due-diligence level found in enterprises."
The attack and abuse detection features they have in mind outstrip the capabilities of many of today's tools -- in part, they say, because such features will have (by definition) better insight into internal application logic.
"Attack and abuse detection should be built into or injected into application code's vulnerable points to detect and analyze runtime access attempts to those points, decide whether attempts are benign or malicious, report on attempts and (optionally) terminate malicious sessions," they counsel.
"Such detectors become part of an application; therefore, their analysis and decisions are based on the knowledge of the particular application's logic. These detectors can be more accurate than ones made by traditional network-based application firewalls, which do not have access to the internal structure of an application and are limited to inspecting network traffic to/from the application."
About the Author
Stephen Swoyer is a Nashville, TN-based freelance journalist who writes about technology.