Getting in Front of Security

Two efforts—one public, one private—create diametrically opposed solutions.

As you read this, there's a pretty good chance that I'm buying a Middle Eastern rug in Tulsa, car parts in Sarasota, and perhaps some cookware in Seattle. If I'm particularly unlucky, maybe I'm buying tickets to a rap concert. You see, in addition to writing and my IT career, I also teach at California State University. A recent security breach at the data center that processes California state employees' paychecks compromised personal information, including my name, Social Security number, and payroll deduction information.

I found it a bit ironic that, despite not catching the hacker, officials said there was no indication that the data would be used for unlawful purposes. That's like saying someone stole my checkbook, but there's no reason to suspect they'll write a check. Come on, folks: people who steal often do bad things.

It's these bad things that drive our need for security solutions. The immediate threat, of course, is identity theft. I'll be watching my credit reports carefully over the next year or two to see if any loans or credit cards have been taken out in my name. My existing credit cards are at risk, too, since it would be easy to change the billing address with the information the hackers gained in the theft.

My personal frustration got me thinking about the state of computer security. From network intrusion to virus detection, the basic security paradigms haven't changed much over the years. Today's predominant model uses a two-pronged approach: secure known issues and use software to detect known attacks. From SYN attacks to the Melissa worm, computers are almost always one step behind the bad guys: a new attack is created, and anti-virus companies and computer professionals scurry to patch holes and write detection software.

However, there are some interesting efforts afoot to change that model, or at least add a third, more proactive prong to computer security. The efforts aren't by any means mutually exclusive, but they are diametrically opposed. On one hand, there are simulation tools that attempt to help identify weaknesses before they're exploited. On the other hand, there's the concept of end-to-end trusted code, in which malicious code is never allowed to execute with the privileges necessary to do damage.

Simulation and Easel
The first approach is represented by CERT's Easel tool. CERT is a federally funded program run out of Carnegie Mellon University (www.cert.org); it tracks Internet vulnerabilities and computer security incidents, and publishes alerts and information to help keep the Internet secure. Easel, now in beta, is a general-purpose simulation tool that lets everything from networks to seismic activity be modeled.

While Easel is very flexible, it is also very abstract; it's not going to tell you that a particular piece of infrastructure is vulnerable to a particular attack. What it will do is help you understand the implications if a given piece of infrastructure is compromised.

For example, one of the demos on CERT's site is a model of virus infection and removal. The model allows adjustment of the rate of infection and rate of inoculation, which visually shows the thresholds needed for a rampant virus infection—or for successful virus removal.
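The threshold behavior that demo illustrates is easy to sketch outside of Easel. The toy Python listing below is not Easel code (Easel uses its own modeling language); it simply steps a crude infection-and-inoculation model whose parameter names are my own invention, and shows that an outbreak either fizzles or runs rampant depending on which rate wins.

    def simulate(infection_rate, inoculation_rate, hosts=1000, infected=1.0, steps=50):
        # Crude compartment model: susceptible hosts catch the virus in
        # proportion to contact with infected hosts; infected hosts are
        # cleaned up at the inoculation rate.
        susceptible = hosts - infected
        for _ in range(steps):
            new_infections = min(infection_rate * infected * susceptible / hosts,
                                 susceptible)
            susceptible -= new_infections
            infected = max(infected + new_infections - inoculation_rate * infected, 0.0)
        return hosts - susceptible  # hosts ever hit by the virus

    # Threshold behavior: when inoculation outpaces infection the outbreak
    # fizzles; when it doesn't, nearly every host is eventually hit.
    print(simulate(infection_rate=0.4, inoculation_rate=0.6))   # stays tiny
    print(simulate(infection_rate=0.6, inoculation_rate=0.2))   # close to 1000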

Palladium's Potential
The second approach is on the trusted-code front, where Microsoft recently disclosed an initiative called Palladium (interestingly enough, it's also the name of a precious metal).

Palladium is an initiative aimed at securing computers and networks through a web-of-trust approach. The theory goes like this: If every step where software is introduced to computers is controlled carefully, security policies will allow large networks to simply reject malicious code.
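Palladium's actual enforcement would live in hardware and the operating system, but the policy idea is simple enough to sketch. The Python listing below is only a loose analogy, not anything from Microsoft's design: it stands in for cryptographic signatures with bare hash digests and refuses to hand unrecognized code to the loader. Every name in it is hypothetical.

    import hashlib

    # Hypothetical allowlist: in a real trusted-computing design this would
    # be a set of verified signatures anchored in hardware, not bare hashes.
    APPROVED_BUILD = b"print('approved build')"
    TRUSTED_DIGESTS = {hashlib.sha256(APPROVED_BUILD).hexdigest()}

    def is_trusted(code_bytes: bytes) -> bool:
        # True only if the code's digest appears on the trusted list.
        return hashlib.sha256(code_bytes).hexdigest() in TRUSTED_DIGESTS

    def run_if_trusted(code_bytes: bytes) -> None:
        # Refuse to pass untrusted code to the loader.
        if not is_trusted(code_bytes):
            raise PermissionError("untrusted code rejected by policy")
        exec(code_bytes)  # stand-in for the real loader

    run_if_trusted(APPROVED_BUILD)           # runs
    # run_if_trusted(b"malicious payload")   # raises PermissionError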

That all sounds good, of course, but the flip side is that Palladium is an end-to-end DRM (digital rights management) system. What that means is that the centralized and remote nature of Palladium's security will also allow centralized and remote management of applications and content—that is, much tighter controls on how users can use content and applications.

Ultimately, Microsoft's goal with Palladium is probably more about stopping piracy and maximizing revenue from content. The potential anti-virus implications are more of a silver lining than actual intent.

Linux advocates are also worried that Palladium may be an effective way for Microsoft to eliminate the threat that software licensed under the GNU General Public License (GPL), including Linux, poses to Microsoft. Because Palladium will involve digitally signing every piece of software, and because Microsoft (or an ally) will presumably control the key authority, it will be difficult and expensive to sign GPL'd code. That, in turn, would break the community development model that has been so successful for open source software in general and Linux in particular.

Still Looking
Both simulation tools and end-to-end trusted code probably have a place in the future of computer security, but I'm not entirely impressed by Easel. It's an interesting academic exercise that probably won't lead to any concrete innovations in computer security. And while I'm impressed by Palladium's potential to reduce the threats posed by viruses and DDoS tools, I worry that the trading-freedom-for-security philosophy behind the system will ultimately do more harm than good.

Surely there's a solution more practical than academic simulations and less threatening to the free market than Palladium. I haven't found one yet, though—have you?

About the Author

Laura Wonnacott is VP of Business and Technology Development for Aguirre International, and a California State University system instructor.