In-Depth

The Enemies Within: Building a Multi-Layered Security System

A system security administrator at a large multinational corporation recently described his network as a "Tootsie Roll Pop: hard on the outside and soft on the inside." Once intruders break through the hardened perimeter of the network, they are free to roam inside with little threat of detection. Likewise, trusted individuals such as employees and contractors can freely access and browse servers, employee workstations and hosts with little fear of being noticed. This network is among the best from a security standpoint, yet it still lacks a significant number of safeguards.

The FBI estimates that 95 percent of intrusions are never detected, and a recent Computer Security Institute/FBI study indicates that roughly 75 percent of intrusions come from trusted individuals. A hardened perimeter strategy is clearly important, but by itself it fails to detect most intrusions. A common example of insider data theft is the employee who, planning to leave, begins browsing and copying sensitive data on the way out.

For intruder detection to be effective, the security administrator should develop a security strategy with multiple layers of detection. An intruder or insider may be able to elude or thwart one or two of these defenses, but will likely get tripped up in a mesh of multiple detection points. To create a fully layered detection framework, deploy the following layers of security:

• Hardened perimeter

• Monitored hosts and servers

• Monitored network infrastructure for unauthorized changes

• Monitored computer-to-computer conversations, especially at the workstation level

• Statistical analysis of how users behave on the network

• Vulnerability assessment of the perimeter

• Cryptographic tools for remote users, such as employees, suppliers and customers

Firewalls, intrusion detection systems, network management systems, computer misuse detection, statistical analysis and cryptography are key weapons in securing networks.

Layer 1: Hardened Perimeter

The first and most important security layer is a hardened perimeter. To harden the perimeter, a firewall and an intrusion detection system should be installed. The definition of "perimeter" must be expanded as well. The perimeter is any location outside the main data center, including WAN connections, remote access, intranets, extranets, vendor connections, etc. A frame relay WAN connection may be secure, but the branch office on the other end is typically lax in physical security and adherence to security policies. Hackers will often exploit modem back doors and other access points at remote locations as a means to get around perimeter defenses.

An Internet firewall should be the first security product deployed. Firewalls should be configured by an experienced individual. Consider engaging a consultant or one of the many managed firewall outsourcers. Moreover, the firewall configuration should be updated constantly to thwart new hacking techniques. A poorly configured firewall provides little more than a false sense of security.

In addition to the firewall itself, the firewall configuration must be monitored for "holes." This is best achieved with a pattern-recognition intrusion detection system. These systems "listen" to all traffic on a segment and alert system personnel when a packet contains a pattern of bits that could signal an attack. Attack patterns fall into several classes, including denial of service and operating system exploits.
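
To make the pattern-matching idea concrete, here is a minimal Python sketch; the signatures and the sample payload are hypothetical placeholders, not actual attack patterns from any product:

```python
# Toy signature-based intrusion detection: compare each packet payload
# against a table of known attack byte patterns (placeholders here).
SIGNATURES = {
    b"\x90\x90\x90\x90": "possible NOP sled (buffer overflow attempt)",
    b"/etc/passwd":      "attempt to read the password file",
}

def inspect(payload: bytes) -> list[str]:
    """Return a description of every signature found in the payload."""
    return [desc for pattern, desc in SIGNATURES.items() if pattern in payload]

# A packet carrying a suspicious request trips the second signature.
for alert in inspect(b"GET /../../etc/passwd HTTP/1.0"):
    print("ALERT:", alert)
```

A real system applies thousands of such signatures at wire speed, but the core operation is this same pattern match.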

Layer 2: Host and Server Monitoring – Computer Misuse Detection

A hardened perimeter does not address internal security breaches or the hacker that has managed to break through the perimeter. Host and server monitoring uses a computer misuse detection system that employs a small software agent installed on each monitored device. This agent activates itself every minute or so and reads the audit and security logs generated by most operating systems, looking for signs of a break-in or attempted access. The log data is sent to a central monitoring console that further analyzes the data and alerts system personnel to a potential misuse or break-in. The console can also provide extensive reporting and prosecution file building.

These tools monitor for several operating system-level issues like login and file read failures, attempts to gain root access, changes in administrative configuration, attempted modification of critical system files, uploaded files, file opens and file executions. When the agent daemon or process has collected the audit log data, it is usually scrubbed for duplicates, compressed, encrypted and sent to the monitoring console for further processing, alarming and reporting.
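
A rough sketch of one agent cycle, assuming a simple line-oriented audit log; the log format, keywords and event names are illustrative assumptions, not any vendor's format:

```python
# Sketch of a host-monitoring agent cycle: read new audit-log lines,
# scrub duplicates, keep suspicious events and compress them for the
# console. Encryption and transport are omitted for brevity.
import zlib

SUSPICIOUS = ("LOGIN_FAILURE", "SU_ROOT", "FILE_PERM_DENIED")  # assumed names

def collect(log_lines):
    seen, events = set(), []
    for line in log_lines:
        if line in seen:                 # scrub duplicates
            continue
        seen.add(line)
        if any(key in line for key in SUSPICIOUS):
            events.append(line)
    return zlib.compress("\n".join(events).encode())

payload = collect([
    "12:01 LOGIN_FAILURE user=guest",
    "12:01 LOGIN_FAILURE user=guest",    # duplicate, dropped
    "12:02 FILE_READ user=alice file=report.txt",
])
print(len(payload), "bytes queued for the central console")
```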

The value of these systems is that they monitor for both external break-ins and internal misuse. Keep in mind that roughly 75 percent of all intrusions originate from internal sources. By deploying host and server monitoring, unauthorized attempts to access critical data resources can be located and thwarted effectively.

Layer 3: Infrastructure Monitoring

Monitoring a network’s infrastructure provides clues in the event of an intrusion. By watching for new MAC addresses, IP addresses, host names, system descriptions and response times, or for changes in the relationships among these data, an administrator can determine whether someone is spoofing other users or trying to hide their identity.

Infrastructure monitoring tools provide a snapshot in time of the network’s configuration. These tools use standard SNMP MIBs (MIB II and RMON) and Ping to discover and collect data from the devices attached to the network. The collected data is placed in a relational database for analysis and reporting. For example, after discovering devices, an accurate accounting of the MAC address, IP address, system description and host name of each system on the network is available. It is also possible to track the relationships among these data, looking for telltale changes in the pairing of MAC address, IP address and host name (a potential spoof) and locating new, and possibly unauthorized, MAC addresses. The key enabling technology is the relational database, which can comb through thousands of records quickly, looking for exceptions. Without a database, this task is nearly impossible.
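
As a sketch of how the database does this, the following uses SQLite as the relational store; the addresses and host names are invented examples:

```python
# Compare a fresh snapshot against the baseline: the same IP/host pair
# appearing with a different MAC address is a potential spoof.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE baseline (mac TEXT, ip TEXT, host TEXT)")
db.execute("CREATE TABLE snapshot (mac TEXT, ip TEXT, host TEXT)")
db.execute("INSERT INTO baseline VALUES ('00:A0:C9:01:02:03', '10.0.0.5', 'payroll1')")
db.execute("INSERT INTO snapshot VALUES ('00:A0:C9:FF:FF:FF', '10.0.0.5', 'payroll1')")

query = """
    SELECT s.ip, s.host, b.mac AS old_mac, s.mac AS new_mac
    FROM snapshot s JOIN baseline b ON s.ip = b.ip AND s.host = b.host
    WHERE s.mac <> b.mac
"""
for ip, host, old_mac, new_mac in db.execute(query):
    print(f"possible spoof: {host} ({ip}) moved from {old_mac} to {new_mac}")
```

A second query that looks for snapshot MAC addresses with no baseline record would flag new, and possibly unauthorized, devices the same way.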

SNMP’s MIB II also includes objects for collecting data on a host’s listening and open TCP/UDP ports. Most server platforms support the TCP MIB, which lists the ports on which they are configured to listen. The first task after installing an infrastructure monitoring tool is to identify and reconfigure any UNIX or Windows host listening on the Telnet, FTP, Rlogin, Rsh or Finger ports. For active ports, the TCP MIB provides the source and destination IP addresses and ports used in the communication. Infrastructure monitoring tools take constant snapshots of the network and host configurations, comparing them with a baseline. Any exceptions are reported to a management console.
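
A minimal sketch of the policy check applied to port data already retrieved from a host's TCP MIB (the SNMP walk itself is assumed to happen elsewhere); the host name and port set are made up:

```python
# Flag hosts listening on services the article recommends disabling.
RISKY_PORTS = {21: "FTP", 23: "Telnet", 79: "Finger", 513: "Rlogin", 514: "Rsh"}

def audit_listeners(host: str, listening_ports: set[int]) -> None:
    for port in sorted(listening_ports & RISKY_PORTS.keys()):
        print(f"{host}: listening on {port} ({RISKY_PORTS[port]}) - reconfigure")

audit_listeners("unix01", {22, 23, 79, 8080})
# -> unix01: listening on 23 (Telnet) - reconfigure
# -> unix01: listening on 79 (Finger) - reconfigure
```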

The value in using SNMP is data collection efficiency. Most vulnerability scanners transmit two packets for each TCP and UDP port they probe. With roughly 128,000 possible ports per host across TCP and UDP, millions of scanning packets must be transmitted to cover a typical infrastructure, far too much overhead for continuous monitoring. By using SNMP Get and Getnext, these tools query only the active TCP/UDP ports, cutting network traffic to a small fraction of that generated by other scanning methods.
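
A back-of-envelope comparison makes the difference in overhead plain; the host count and the number of active ports per host below are assumptions for illustration:

```python
hosts = 500                      # assumed network size
ports = 2 * 65_536               # TCP plus UDP port spaces (~128,000)
sweep_packets = hosts * ports * 2        # two packets per port probed
active_ports = 40                # typical server, assumption
snmp_packets = hosts * active_ports * 2  # Get/Getnext walks live entries only

print(f"full port sweep: {sweep_packets:,} packets")  # roughly 131 million
print(f"SNMP walk:       {snmp_packets:,} packets")   # roughly 40 thousand
```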

Layer 4: Conversation Monitoring

Watching IP-layer conversations is another key strategy. Conversation monitoring watches for conversations that should not be occurring. Conversation monitoring tools use RMON2 to watch IP conversation pairs, TCP port usage and byte symmetry. Byte symmetry (the balance of bytes flowing to and from a host) is the key element because it reveals which device served data to the other. Any conversation between an internal address and an outside address (for example, fetching a Web page) should show net data flowing into the network. The only exception should be a mail server’s connection to the Internet.
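
A sketch of the byte-symmetry test itself; the record layout and sample values are assumptions for illustration:

```python
# Flag internal-to-external conversations where net data leaves the
# network from a device that is not an authorized server.
def check_symmetry(conv: dict) -> None:
    net_outbound = conv["bytes_out"] - conv["bytes_in"]
    if conv["dst_external"] and net_outbound > 0 and not conv["authorized_server"]:
        print(f"ALERT: {conv['src']} sent {net_outbound:,} net bytes "
              f"to external host {conv['dst']}")

check_symmetry({
    "src": "10.1.1.42", "dst": "192.0.2.9", "dst_external": True,
    "bytes_out": 48_000_000, "bytes_in": 12_000, "authorized_server": False,
})
```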

A practical example of a dangerous conversation is any FTP conversation between a device inside the firewall and an outside address in which net data leaves the network. This is permissible if the device is the network’s FTP server, but any other situation is cause for concern. Likewise, any Telnet session between an internal device and an outside address or unauthorized user is potentially dangerous.

Conversation analysis is also useful in spotting internal misuse and hacking. User-workstation-to-user-workstation conversations should be a rarity. Unfortunately, the proliferation of Windows 95 desktops with file sharing enabled poses a large security risk, because executives, high-level managers and salespeople often keep local copies of strategic data on their laptops; a workstation quietly browsing a peer’s file shares is exactly the kind of conversation this layer is designed to catch.

Layer 5: Statistical User Profiling

Distinguishing between normal and troublesome uses of the network, hosts and servers is a critical element in detecting both misuse and break-ins, especially socially engineered attacks and employees preparing to change jobs. Over time, users settle into a predictable pattern of use. Deviations from normal usage patterns, or a user whose pattern differs distinctly from that of their peer group, are often cause for concern.

In the future, expert systems will likely be the core of many intrusion detection systems. Statistical user profiling is an example of employing expert system technology to correlate the thousands of data points necessary to spot deviations, an impossible manual task for even a small network.

Implementing user profiling involves monitoring the operating system audit logs of shared servers and hosts. For a period of 60 to 90 days, the expert system collects usage statistics for each user, including the number of logins, actual and attempted file opens, program executions, day and night usage, hours spent on a particular system and vacation usage. The audit logs are collected and fed to the expert system, which correlates current usage patterns with the baseline.

Any statistically significant deviation from the norm generates a warning or alarm. These systems generally allow the security administrator to increase the level of monitoring on suspicious users, building prosecution files for later use.
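
One simple way to score a deviation is to compare today’s activity against the baseline in standard deviations; a minimal sketch, with invented baseline numbers and a threshold that is a policy choice rather than a fixed rule:

```python
import statistics

baseline_file_opens = [110, 95, 120, 105, 98, 115, 102]  # daily counts, invented
mean = statistics.mean(baseline_file_opens)
stdev = statistics.stdev(baseline_file_opens)

today = 310  # today's file opens for this user
z = (today - mean) / stdev
if abs(z) > 3:  # alarm threshold is a policy choice
    print(f"ALARM: {today} file opens is {z:.1f} standard deviations from the norm")
```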

Statistical user profiling systems are ideal tools for detecting socially engineered attacks and users preparing to leave an employer. A socially engineered attack will typically show up as a significant increase in nighttime activity for a user. Likewise, a significant increase in file browsing by a user, especially accesses to sensitive servers, is cause for heightened concern and increased surveillance.

Layer 6: Vulnerability Assessment

While deep scanning of large infrastructures is not practical, there are still critical uses for vulnerability assessment technology. The majority of the scans these products perform tend to be too invasive for the network infrastructure itself, but they are critical for evaluating perimeter defenses. Vulnerability scanners are typically updated monthly with newly discovered hacking applications and techniques. Running them against screening routers, firewalls, Internet servers and VPN gear helps keep a network’s security infrastructure up to date and helps ensure that vulnerabilities are kept to a minimum.

If the network is already being monitored using the infrastructure analysis discussed in Layer 3, the need to perform large-scale vulnerability scans of the infrastructure is minimized.

Layer 7: Cryptography and Remote Users

Remote sessions, whether from dial-in employees, suppliers or customers, should be cryptographically authenticated, and if they traverse a public network, they should also be encrypted. Using cryptography here yields two key benefits. First, each user is authenticated with a high degree of assurance that the actual individual is attaching to the system.

Second, it becomes nearly impossible for an intruder to clone or twin a session. By deploying cryptographic technology, network managers gain a strong defense against attack.

Understanding cryptographic terminology can be daunting, but there are two key concepts to grasp. The first is two-factor authentication, which requires both a token of some sort (a physical element that cannot be duplicated) and a password. The best analogy is an ATM card and a cash machine: to withdraw money, one must possess both factors, the card and the PIN. Likewise, a remote user should need more than just a password to access a system remotely.
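
A simplified sketch of a two-factor check; the code-generation scheme is a stand-in for a real token algorithm, and the secret is invented:

```python
# Access requires both a password (something known) and a one-time code
# derived from a token's secret (something possessed).
import hashlib
import hmac
import time

def token_code(secret: bytes, interval: int = 60) -> str:
    """Derive a short code from the token secret and the current time slot."""
    slot = str(int(time.time() // interval)).encode()
    return hmac.new(secret, slot, hashlib.sha256).hexdigest()[:6]

def login(password_ok: bool, presented_code: str, secret: bytes) -> bool:
    return password_ok and hmac.compare_digest(presented_code, token_code(secret))

secret = b"per-user-token-secret"               # invented for illustration
print(login(True, token_code(secret), secret))  # both factors -> True
print(login(True, "000000", secret))            # wrong token code -> False
```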

Two-factor authentication provides a high degree of assurance that users are who they say they are. The second concept is packet authentication. The contents of each packet should be cryptographically hashed so the receiver can verify that the packet is unaltered and did not come from a different source. The cryptographic device automatically drops any packet that fails authentication, which prevents cloned or twin sessions. This is a critical point often overlooked by systems administrators.
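
A minimal sketch of packet authentication using an HMAC tag; key management is simplified to a single shared session key, assumed to be established during the handshake:

```python
import hashlib
import hmac

KEY = b"session-key-from-handshake"  # assumption for illustration

def seal(payload: bytes) -> bytes:
    """Append a 32-byte HMAC-SHA256 tag to the payload."""
    return payload + hmac.new(KEY, payload, hashlib.sha256).digest()

def accept(packet: bytes):
    """Return the payload if the tag verifies; drop the packet otherwise."""
    payload, tag = packet[:-32], packet[-32:]
    expected = hmac.new(KEY, payload, hashlib.sha256).digest()
    return payload if hmac.compare_digest(tag, expected) else None

packet = seal(b"transfer request")
assert accept(packet) == b"transfer request"
tampered = packet[:-1] + bytes([packet[-1] ^ 1])   # flip one bit of the tag
assert accept(tampered) is None                    # altered packet is dropped
```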

Remote user cryptography should be evaluated with an eye toward both the safety of the cryptosystem and the end-user experience. If the cryptosystem can be compromised or disabled, it is worthless.

Users do not want to interface with cryptography directly. Any cryptographic system should be totally transparent to the end user, both in usability and in throughput performance. If either is lacking, there may be a revolt.

Implementing a layered architecture provides a high degree of assurance that intrusions will be detected, illuminating any flaws in the network’s defensive architecture and usually thwarting attacks as they happen.

No solution is 100 percent effective, but this strategy goes much further than the vast majority of networks and systems today, while imposing minimum administration and network overhead burden.

About the Author:

Steve Schall is Security Product Manager for ODS Networks.
