
The (Ugly) Year in Security

For security administrators, the hack RSA disclosed was just one of many serious breaches in a very bad year.

Information security almost always entails a confrontation with the malicious, the threatening, and the unexpected. Some years are, on balance, better than others, and from this perspective, what kind of year was 2011?

Ugly. Very ugly.

That much is clear when one of the most trusted physical authentication schemes in all of information security gets hacked. That's precisely what happened in March, when RSA disclosed that hackers had compromised its SecurID token system. The SecurID breach resulted in successful attacks on at least one prominent defense contractor, and possibly two: Lockheed Martin and (rumored but unconfirmed) L-3 Communications. By June, RSA acknowledged that it planned to replace almost all of the 40 million extant SecurID tokens.

And so it was in 2011, as prominent data breaches dominated the headlines, even as the widespread malware outbreaks that used to snag most of the attention went mostly missing in action. During the same period, we learned that all of us have already been hacked, even though a lot of us don't yet know it; that foreign governments and organized criminal groups are dabbling in cyber-crime, cyber-espionage, and cyber-warfare as never before; and that IT organizations still aren't doing a good enough job of following information security best practices.

The Almost Unthinkable: SecurID Hacked

Prior to this year, RSA's SecurID hardware token system had been regarded as theoretically vulnerable but as practically unassailable.

Yes, SecurID -- like any other security mechanism -- could be compromised. If IT organizations followed common best practices and adhered to RSA's own recommendations, however, SecurID was thought to be practically invulnerable.

Until March, when attackers managed to steal an internal RSA database that contained information about the serial numbers associated with specific SecurID tokens. The database matched the serial number of each token with the master "seed" used to generate its one-time codes. It had seemed like a far-fetched scenario, but if an attacker could match the serial number of a specific token with the company in which it was deployed, she could conceivably "clone" a SecurID credential and use it to gain unauthorized access to a system.
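RSA has never published SecurID's internal algorithm, but the openly specified time-based one-time password (TOTP) construction of RFC 6238 illustrates the same principle: anyone who holds a token's secret seed can compute the very codes the physical token displays. The Python sketch below illustrates that principle only; the seed value and its tie to a token serial number are hypothetical, not RSA's actual format.

```python
# Illustrative sketch only: RSA's SecurID algorithm is proprietary.
# This uses the openly specified TOTP construction (RFC 6238) to show
# why possession of a token's secret seed lets an attacker reproduce
# ("clone") the one-time codes that the physical token displays.
import hashlib
import hmac
import struct
import time

def totp(seed, timestamp=None, step=60, digits=6):
    """Derive a one-time code from a secret seed and the current time."""
    if timestamp is None:
        timestamp = int(time.time())
    counter = timestamp // step                        # 60-second time window
    msg = struct.pack(">Q", counter)                   # 8-byte big-endian counter
    digest = hmac.new(seed, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                         # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# A stolen seed database maps token serial numbers to seeds; with the seed,
# the attacker's "clone" and the victim's token display the same codes.
stolen_seed = b"hypothetical-seed-for-token-000123456"
print(totp(stolen_seed))
```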

That's just what happened, resulting in a confirmed attack on defense contractor Lockheed Martin and a rumored attack on L-3 Communications in May. Attackers used targeted e-mail attacks to phish for information about tokens. Needless to say, they were successful. "There have been malware and phishing campaigns in the wild seeking specific data linking RSA tokens to the end-user, leading us to believe that this attack was carried out by the original RSA attackers. Given the military targets, and that millions of compromised keys are in circulation, this is not over," wrote Rick Moy, CEO of security testing specialist NSS Labs Inc., in a May posting on his blog.

RSA disclosed the breach in March but waited until June to announce its intent to replace almost all of the 40 million SecurID tokens in use. Just like that, SecurID, an authentication mechanism used by government agencies throughout the world, to say nothing of private enterprise customers such as Lockheed Martin, shed its aura of invulnerability.

Brazen New World

The details of the RSA breach underscore another, altogether scarier, aspect of information security in the cyber era: the likelihood -- the inevitability -- of state-sponsored information warfare, espionage, or cyber-crime.

The attacks on RSA and Lockheed Martin weren't the work of a lone individual or a small group, nor of an organized criminal hacking outfit (although organized criminal hacking has, in its own right, emerged as a serious and worrisome trend, too).

The RSA-related attacks are believed to have been orchestrated by a national government, with experts citing the Chinese military as the most likely actor.

Nor was the RSA attack the first or even the most spectacular case of government-sponsored hacking or information warfare. Last year, for example, saw the emergence of Stuxnet, the first worm designed to target embedded industrial control systems. When Stuxnet was first detected (in June of 2010), it was described as a destructive "industrial" worm because it targeted SCADA control systems manufactured by German giant Siemens. By August of last year, however, security pros -- and national security experts -- had developed a new appreciation for Stuxnet: according to Symantec Corp., for example, fully three-fifths of the systems it infected were located in Iran.

Then, toward the end of last year, Iranian officials confirmed that Stuxnet had caused "limited" damage to Iran's uranium enrichment effort. This year, as more details came to light, Stuxnet came to be seen as one of the most sophisticated worms ever developed. It's now believed to be the product of a joint U.S.-Israeli cyber warfare effort; although its full impact still isn't known, experts believe that Stuxnet critically hindered Tehran's uranium enrichment program, perhaps delaying Iran's nuclear capability by several years.

The lesson, according to security professionals, is that hacking and cracking have become less of a nuisance -- less, even, of an unorganized or tactical problem -- and more of an organized, strategic enterprise.

One upshot is that it's fast becoming the province of entities with greater resources (such as organized criminal hacking groups) or with greater resources and bigger budgets still (such as national governments).

Last year, for example, research firm the Ponemon Institute reported that organized crime was becoming more involved in cyber-criminal efforts.

This year, Symantec highlighted a flourishing "crimeware" marketplace, suggesting that cybercriminals were trading attack kits, information, and cracking services. "While some of these kits have relatively simple capabilities -- containing limited exploits that target a specific program or operating system -- many kits are considerably more robust and include a number of tools with multiple exploits that target a range of applications across various operating systems," said Symantec's Report on Attack Kits and Malicious Websites.

Just this month, the Office of the National Counterintelligence Executive published a report in which it accused China and Russia of orchestrating cyber industrial espionage efforts against the United States.

"The computer networks of a broad array of U.S. Government agencies, private companies, universities, and other institutions -- all holding large volumes of sensitive economic information -- were targeted by cyber espionage; much of this activity appears to have originated in China," the report concluded.

IT Sloppiness, Not Cracking Smarts, To Blame for Most Attacks

This year also produced "Morto," the first worm designed to spread via Microsoft Corp.'s Remote Desktop Protocol (RDP).

Morto caused a huge spike in RDP-related traffic and actually managed to compromise several thousand Windows systems.

Its success was said to underscore that most intractable of information security problems: administrators frequently don't take simple, commonsensical steps to secure their IT environments. Put simply: Morto shouldn't have been an issue. Its success had nothing to do with a flaw -- endogenous or otherwise -- in Microsoft's RDP implementation. Nor did it involve an as-yet-unpatched RDP vulnerability. Its attack used a single username -- "Administrator" -- and a list of 30 common passwords, including "admin," "administrator," and the aptly-descriptive "letmein." For this and other reasons, Marc Maiffret, chief technology officer (CTO) with eEye Digital Security, called Morto "silly."

In a blog post (1999 Called, It Wants Its Morto Worm Back), Maiffret said that Morto reminded him of automated worms such as CodeRed, SQL Slammer, Sasser, and Blaster -- with a key difference. "[A]t least most of those were actually leveraging a software vulnerability to exploit and gain control of a system. Morto on the other hand appears to simply attempt to compromise systems by trying [approximately] 30 common passwords ... over RDP," wrote Maiffret, who co-discovered the original "Code Red" worm over a decade ago.
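Morto's method is simple enough to summarize in a few lines of code. The sketch below is not the worm's actual code; it shows the defensive counterpart of its technique, checking whether an administrator password would fall to a short, Morto-style dictionary. The password list is illustrative, not the worm's real dictionary.

```python
# Minimal sketch (not Morto's actual code): screening a candidate password
# against the kind of short common-password list Morto reportedly carried.
# The specific entries below are illustrative, not the worm's dictionary.
COMMON_PASSWORDS = {
    "admin", "administrator", "letmein", "password", "password1",
    "123456", "12345678", "qwerty", "server", "welcome",
}

def is_trivially_guessable(password):
    """Return True if the password would fall to a Morto-style dictionary run."""
    return password.lower() in COMMON_PASSWORDS

if __name__ == "__main__":
    for candidate in ("letmein", "C0rrect-Horse-Battery-Staple!"):
        verdict = "guessable" if is_trivially_guessable(candidate) else "resists the list"
        print(f"{candidate!r}: {verdict}")
```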

If IT shops simply followed common security best practices, Morto's impact could have been lessened, if not completely blunted -- and that's the rub. As we saw in other cases in 2011, IT organizations just aren't doing these simple things. Microsoft underscored this problem with its publication of the 11th edition of its Microsoft Security Intelligence Report (MSIR).

In most cases, Microsoft researchers claimed, malware attacks can and should be blocked from the get-go. For example, almost half (44.8 percent) of all malware attacks in 2011 required a user to complete an action before a computer could effectively be compromised. A full third of malware attacks exploited the Windows Autorun facility. Windows security best practices require that organizations enable User Account Control (UAC) -- which prompts a user when an application asks to run at an elevated privilege level -- and disable Autorun.
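Both settings can be audited programmatically. The following Python sketch reads the commonly documented registry locations for UAC (EnableLUA) and Autorun (NoDriveTypeAutoRun) on a Windows machine; the paths and expected values are assumptions drawn from Microsoft's public documentation, not from the MSIR itself.

```python
# Windows-only sketch: audit the two settings the MSIR guidance highlights.
# Registry paths and values are the commonly documented locations; treat this
# as an illustrative check, not an official Microsoft compliance tool.
import winreg

def read_dword(path, name):
    """Return a DWORD value from HKLM, or None if the key/value is absent."""
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path) as key:
            value, _ = winreg.QueryValueEx(key, name)
            return value
    except OSError:
        return None

# UAC: EnableLUA = 1 means User Account Control is turned on.
uac = read_dword(r"SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System",
                 "EnableLUA")
# Autorun: NoDriveTypeAutoRun = 0xFF disables Autorun on all drive types.
autorun = read_dword(r"SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer",
                     "NoDriveTypeAutoRun")

print("UAC enabled:     ", uac == 1)
print("Autorun disabled:", autorun == 0xFF)
```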

In too many cases, the MSIR suggests, shops just aren't following best practices.

Every Enterprise At Risk

This year might well go down as the year in which we all lost our innocence.

If the increased profiles of organized criminal groups and foreign governments in cyber-crime, espionage, and all-out information warfare don't drive home the point forcefully enough, a study published in August by security researcher (and Intel Corp. subsidiary) McAfee certainly should. In the report (Revealed: Operation Shady RAT), Dmitri Alperovitch, McAfee's vice president of threat research, distinguished between two different kinds of Global 2000 companies: those that have been hacked and know it, and those that have been hacked and don't yet know it.

"I am convinced that every company in every conceivable industry with significant size and valuable intellectual property and trade secrets has been compromised ... or will be shortly," wrote Alperovitch.

Most victims, he continued, don't know, and won't ever discover, that they've been compromised. The upshot, he claimed, has been an "unprecedented" transfer of knowledge and wealth.

"What we have witnessed over the past five to six years has been nothing short of a historically unprecedented transfer of wealth -- [i.e.] closely guarded national secrets ... including from classified government networks ... source code, bug databases, [e-mail] archives, negotiation plans, and exploration details for new oil and gas field auctions, document stores, legal contracts, [and] design schematics," Alperovitch wrote.

"If even a fraction of [this information] is used to build better competing products or beat a competitor at a key negotiation ... due to having stolen the other team's playbook ... the loss represents a massive economic threat not just to individual companies and industries but to entire countries."
