In-Depth
Quantifying the Threat from Insiders
Your greatest security risk may not come from outside attacks but from your own employees. Setting policies and procedures isn't enough to stop the problem, but new security event management technology can help.
Are malicious insiders—employees or hackers posing as employees—stealing your intellectual property? Loss due to insiders is a persistent problem, according to reports such as the annual CSI/FBI Computer Crime and Security Survey. Yet when it comes to combating insider threats, many organizations falter. One issue is knowing how to quantify the problem, then how to apply resources to lower the risk.
New security technologies should help. They provide organizations with a picture of what insiders are actually doing. Instead of just setting policies and procedures, organizations can set a baseline of employees’ activities, then adjust policies to compensate.
Security Strategies discussed insider-monitoring technology, and how to quantify risk, with Kris Lovejoy, vice president of technology and services for security vendor Consul. Lovejoy has worked with such organizations as the National Security Agency, EDS, and TruSecure, developing insider-risk metrics along the way.
How rampant is insider theft today?
[Recently] I was doing a four-country, four-day tour of Central America, and I visited with 16 prospects, and of the 16, six admitted that internal employees had been transacting business under [the guise of] clients and for the purposes [of stealing].
What exactly were the employees doing?
One of the banks in particular had a situation where there was a branch, and the manager and two employees had sort of colluded, and … were logging into client accounts [as the client] and … stealing small sums of money. After six months they’d managed to amass a quite-hefty sum, but because the transactions were quite small, it was very hard to gauge.
Don’t banks have software to catch such discrepancies?
There are money-laundering technologies out there. They would identify real-time patterns of behavior but would not be able to identify [small-scale transactions].
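As a rough illustration of that gap, here is a minimal Python sketch; the thresholds, names, and amounts are entirely hypothetical. A per-transaction rule of the kind Lovejoy describes never fires on small withdrawals, while a simple per-actor aggregation over the review window surfaces the same activity.

```python
from collections import defaultdict

# Hypothetical illustration: per-transaction thresholds miss small-scale
# theft, while aggregating per actor over a window surfaces it.
PER_TXN_THRESHOLD = 10_000   # single-transaction flag, AML-style
AGGREGATE_THRESHOLD = 5_000  # cumulative amount worth investigating

transactions = [
    # (employee_id, account, amount) -- fabricated sample data
    ("emp42", "acct-001", 75.00),
    ("emp42", "acct-002", 120.00),
    ("emp42", "acct-003", 90.00),
] * 30  # many small withdrawals over six months

# Real-time rule: nothing fires, because every transaction is small.
flagged = [t for t in transactions if t[2] >= PER_TXN_THRESHOLD]
print("per-transaction flags:", len(flagged))  # 0

# Aggregate rule: sum per employee across the review window.
totals = defaultdict(float)
for emp, _, amount in transactions:
    totals[emp] += amount
for emp, total in totals.items():
    if total >= AGGREGATE_THRESHOLD:
        print(f"review {emp}: cumulative {total:.2f}")  # fires on emp42
```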
Why isn’t software already looking for that?
One of the things we find within an organization is organizations don’t know what they don’t know. An example: I was in Canada [recently] and I was sitting down with the CIO of a fairly large telecom, and I [asked] … [if] you have a customer called John Smith, and I want to know everything he’s done in the last week, could you answer that question? And he pulled in the auditor [who] said, I’m not sure, then looked uncomfortable and said, it’s the IT guy, I asked him to activate the audit [features].
So planning can only take you so far; at a certain point you have to analyze what’s really going on?
Correct. Planning can only take you so far. We know that people do bad things. We know that people will steal accounts, for instance. I don’t think the public understands the extent to which the theft of private data actually occurs. I remember being horrified when I went to do an assessment of a … large retailer. It had 6000 retail outlets throughout the U.S. and … their database administrator (DBA) had gotten fired but on his way out downloaded the client database, and had everyone’s information and the credit card data, and was holding it hostage. The fact is … they never made it public, because they actually paid this guy off.
To combat such insider threats, what questions should organizations be asking?
Tell me about all failed logins in the last week, tell me about who’s logged onto the database server. If these are questions you want answered, it [also] helps IT auditors know what is the optimal IT infrastructure to have enabled, and it helps … [IT] collect data they know is going to be [useful].
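A minimal sketch of what answering those two questions might look like, assuming audit records are already being collected; the field names and sample data here are illustrative, not any particular product’s schema.

```python
from datetime import datetime, timedelta

# Hypothetical audit records -- illustrative fields only.
audit_log = [
    {"user": "jsmith", "event": "login_failed", "host": "db-server-01",
     "time": datetime(2006, 3, 1, 9, 15)},
    {"user": "dbadmin", "event": "login_success", "host": "db-server-01",
     "time": datetime(2006, 3, 2, 22, 5)},
]

now = datetime(2006, 3, 3)
week_ago = now - timedelta(days=7)

# "Tell me about all failed logins in the last week."
failed_logins = [r for r in audit_log
                 if r["event"] == "login_failed" and r["time"] >= week_ago]

# "Tell me who's logged onto the database server."
db_logins = {r["user"] for r in audit_log
             if r["host"] == "db-server-01" and r["event"] == "login_success"}

print(failed_logins)
print(db_logins)
```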
So what counts as an insider?
An insider is any user or system that is working within the organization, or which is using systems, services, applications, etc., within the organization. That can be within a perimeter network, extranet, or internal network … operating under any level of authorization.
In other words, [metaphorically speaking] once a human being has gotten through the door of [a] building and received a visitor ID, that visitor is an insider.
What’s the typical approach to combating insider risk?
What most organizations would do is attempt to … [enact] a series of policies and procedures and awareness programs … to prevent the likelihood that someone would do something bad. Unfortunately, what I discovered from my years of [working in the field was]… on the threat side, policies, procedures, and awareness programs tend not to be effective within organizations, because they’re not quantifiable.
Organizations can’t measure the reduction in insider threats after policies, procedures, and training?
[Often they know] it’s going to cost a lot of money, but they’re not sure what they’re going to get out of it … and [because] those programs were usually not carried out by the IS organization … companies tend not to make the right investments in policies, procedures, and education. [Then] on the vulnerability management side, the better the controls people integrate, the harder it is for people to do their jobs.
[Hence organizations need] a more effective way to control authorized user behavior and access to critical information assets, by implementing an infrastructure in which insider behavior … and bad things can be monitored. Or organizations can receive alerts [when] things that should not be happening are. So when it comes to controlling insider behavior … security event management technology [is] the solution to the problem.
What’s your formula for quantifying insider risk?
Risk equals vulnerability times threat times cost. It’s fairly simple, and what it [says is] there is no risk if there is a vulnerability but no threat that [the] particular vulnerability could be exploited. There would be no risk if there were both a vulnerability and a threat but no cost associated with that vulnerability being exploited.
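As a worked example of that formula, here is a small Python sketch; the 0-to-1 likelihood scores and the dollar cost are hypothetical, chosen only to show that a zero in any factor zeroes the risk.

```python
# Minimal sketch of the risk formula Lovejoy describes: scores here are
# hypothetical 0-1 likelihoods plus a dollar cost, purely for illustration.
def risk(vulnerability: float, threat: float, cost: float) -> float:
    """Risk = vulnerability x threat x cost; any zero factor zeroes the risk."""
    return vulnerability * threat * cost

# A flaw with no credible threat carries no risk...
print(risk(vulnerability=0.8, threat=0.0, cost=250_000))  # 0.0
# ...and a threatened flaw with no cost of exploitation also carries none.
print(risk(vulnerability=0.8, threat=0.6, cost=0))        # 0.0
# All three factors present: an expected loss of 120,000.
print(risk(vulnerability=0.8, threat=0.6, cost=250_000))  # 120000.0
```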
What’s Consul’s approach to applying tools toward that paradigm?
Consul’s solution … identifies, inside an organization, who are the users or systems who are acting under some level of privilege. It identifies the critical information assets, then it provides you [with] an infrastructure to tell you who is doing what actions on which particular information assets—file, folder, database, etc.—on which system, where they’re coming from, which desktop or workstation they’re using, where they were attempting to go, and when they were doing it.
What this allows organizations to do is implement a security policy that says my users … are permitted to do these things under these conditions … Or on a more reactive note, it will give the organization a good understanding of the kinds of behavior that have been going on in the organization.
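To make that "who did what, on which asset, from where, and when" model concrete, here is a hedged Python sketch; the record fields and the example policy are assumptions for illustration, not Consul’s actual schema.

```python
from dataclasses import dataclass
from datetime import datetime

# Illustrative record of the kind of event Lovejoy describes; field
# names are assumptions, not Consul's product schema.
@dataclass
class AuditEvent:
    who: str        # user or system acting under some privilege
    action: str     # e.g. "read", "download", "delete"
    asset: str      # file, folder, database, etc.
    system: str     # host the asset lives on
    source: str     # desktop/workstation the actor came from
    when: datetime

# A hypothetical policy: only DBAs may export the client database,
# and only during business hours.
def violates_policy(e: AuditEvent, dba_users: set[str]) -> bool:
    if e.asset == "client_db" and e.action == "download":
        return e.who not in dba_users or not (9 <= e.when.hour < 18)
    return False

event = AuditEvent("jdoe", "download", "client_db",
                   "db-server-01", "ws-114", datetime(2006, 3, 2, 23, 40))
print(violates_policy(event, dba_users={"dbadmin"}))  # True: off-hours, not a DBA
```

On the reactive side the same records can simply be replayed and filtered after the fact, which is the "good understanding of the kinds of behavior that have been going on" Lovejoy mentions.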
When organizations assess what’s really happening, what do they often find?
People do dumb things that affect [security]. Developers, for example, disable their antivirus protection so they can compile more quickly. [Or] IT administrators have a user within the organization who’s just annoying—can’t get into this or that, or a salesperson who needs to use chat or install a program—so [administrators give] these users power-user or administrator privileges … They do this to promote efficiency, but it’s dumb because they’re handing out the keys to the kingdom, and organizations don’t know where these holes are.
[In another situation] an administrator was following the RIM [Research In Motion] guide for BlackBerries … installed [BlackBerry server software] into a Notes environment, … and he created a loop and brought down their mail system. What [administrators] do can cause havoc in the organization.
Sometimes when you do things by the book, it still creates problems?
Had an organization really instituted those critical policies, procedures, and standards, whereby any new software had to be tested within a test environment [it would have prevented this situation] … [But] from a technology perspective, what’s faster and easier is if you have a solution that told you who did what, where, and when. You’d be able to go back and tell. The problem is, in most organizations, an individual does [something] and doesn’t realize the effect that [that something] has had on that particular system or server. In this particular case, the infrastructure was down for almost two days, because they thought it was a problem with the operating system.
Should automated monitoring get more focus than, say, education, with its hard-to-measure ROI?
I am by no means saying that proactive, preventative health programs implemented in an organization don’t provide dividends … I highly advise our customers to implement those kinds of programs. But … [most organizations] have a hard time getting their arms around users. A paradigm is needed … to give them the tools to see who did what action and within what time frame. What I’m excited about is the fact that this technology is now available [and] … we’re going to be more effective because these technologies are available.
---
Related Story:
Cloaking Assets With Identity-Level Firewalls
http://info.101com.com/default.asp?id=6741