
Unleash the Cyberhounds!

Follow these five steps to a good computer forensics program.

Alarms sound, sirens blare. Or, to be more exact, your intrusion detection system indicates some kind of network break-in. A hacker may have slipped past your security, gotten root access and carted away data. Or perhaps he's leaving Trojan programs, the infamous back doors that will let him in quietly next month.

Your first impulse might be to kill the power. Cut this guy off at the knees.

Don't.

Post-invasion lockdown is a natural impulse, but what you really need is enough good evidence to stand up in court. And even if there's no question of taking legal action, you'll still need usable data and logs of the attack so your security people can diagnose the breach. That way you can install new measures that prevent this from happening again. And armed with this data, you'll get a much clearer picture of any damage or theft.

Time is of the essence here; data surrounding the breach must be meticulously captured without fundamentally altering it—or worse still, just plain overwriting it. Of course, none of this can happen if you aren't prepared—physically, electronically and psychologically—for a breach. Once you've been attacked, it's far too late. A good diagnostics system, more commonly known as a forensic program, can provide that preparation.

Forensic analysis establishes the facts surrounding the crime, identifies the damage and lays the groundwork to contain it, and packages the information for legal purposes. Instead of dusting for fingerprints, though, computer security professionals try to capture as much digital information as possible—Web logs, system files, audit logs—and safeguard it for later study. Given the "volatile" nature of most operating systems—some system file or other is constantly being rewritten as the computer is used for business—speed is critical.

Taking a "snapshot in time" of an active enterprise isn't easy. And unfortunately there are no really good tools on the market that automate the whole process and procedure in one fell swoop. Companies that want a good forensic program need to have five things in place long before the incursion:

  1. A good security plan.
  2. An environment that's been optimized for collecting forensic evidence.
  3. A procedure for collecting that evidence when necessary.
  4. Training in the tools necessary to complete the procedure.
  5. Understanding of cost and cleanup.

Once the forensic work is done, you can harden and restore the affected system, then communicate what's happened to stakeholders within the organization.

Some companies can't afford publicity, and therefore don't report attacks to law enforcement; that's common procedure in many financial institutions. To combat the secrecy, the FBI and Secret Service are pushing new requirements that force companies to disclose major incursions. If they go into effect, organizations will need a robust forensic response plan.

1) Build a good security plan.
First things first, and this is point number one for anything involving security: have a security plan. A good plan details what employees can and can't do with their computers, and clearly defines criteria for accessing different network resources. It specifies operating procedures for dealing with any security intrusion. The plan should be clearly written, accessible to the employees involved, and tested and refined through mock attacks that verify both its feasibility and how well it lends itself to collecting evidence.

The plan should include a section devoted to intrusion detection, which can be an extremely difficult problem. You'll probably want to include elements of an emerging class of software known as an intrusion detection system (IDS). An IDS is designed to quickly diagnose and report on network intrusions.

Create a Solid Trail
Devices and services you should log on your network:

  1. Firewall
  2. IDS
  3. DNS
  4. Router
  5. Proxy Servers
  6. DHCP Servers
  7. Dial-up Servers
  8. VPN

Source: John Tan

An IDS works by treating the various pieces of hardware on the network—servers, routers, firewalls—as sensors. When a "sensor" changes unexpectedly, the IDS sounds an alert. For instance, if a hacker rewrites a kernel, the kernel file's size or checksum may change. If that happens, the IDS flags the change for security administrators. They, in turn, can examine the affected elements for changes (such as kernel replacement). Warning bells will also go off if someone tries to alter an audit file, an obvious attempt to hide signs of an incursion.

An IDS isn't perfect. It must be accurately "tuned" to the particular system it's protecting. This can be difficult, exacting work, so it's not surprising that many companies, especially those without full-blown security staffs, have opted to outsource their security and/or intrusion detection to a managed security provider (MSP).

As explained in the article "Handing Off Security" in the December 2001 issue, an MSP can manage either part or all of network security—from keeping firewalls configured correctly to implementing and maintaining a complete IDS.

The advantage of an MSP is an economy of scale—an MSP's core competency is preventing and then responding to attacks, and it can probably do that for less money than it costs to hire, train and support an entire security staff. And an MSP has the advantage of collecting data from many client incursions, which it can use proactively to protect other clients in the future.

An MSP also operates 24x7, because attacks can occur at any time. This is important, because speed matters when a break-in is discovered. To get data and logs good enough for use in tracing the criminal or presenting findings in a court of law, affected servers need to be discovered and isolated from the network quickly, and data must be captured as fast as possible.

But there's an enormous and obvious downside to an MSP as well: To be effective, it must handle and store sensitive corporate data. It's possible that the MSP could fall victim to a criminal attack—or have its data subpoenaed—which could put client data at risk.

2) Create an environment optimized for collecting forensic evidence.
Regardless of how intrusion detection is handled, when there's evidence of an attack, steps need to be taken to quickly secure evidence: in this case, a bit-level copy of every affected system, Web log and system file. In addition, many files should be copied immediately to provide a snapshot in time of how the network looked.

You'll want to track all file system changes—when files were created, last altered or accessed, and what's been deleted—as well as all data produced by devices on the network. This collective picture lets security experts analyze the type of attack and discern what was affected, so that the company can take steps to prevent similar attacks in the future.
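To make this concrete, here is a minimal sketch, assuming Python is available on the collection workstation, of reading the three file timestamps investigators care about without modifying them:

```python
import os
from datetime import datetime, timezone

def mac_times(path):
    """Return a file's modified/accessed/changed times as UTC ISO strings.

    Reading metadata with os.stat() does not itself alter these
    timestamps, so it is safe to record them before imaging the disk.
    """
    st = os.stat(path)

    def iso(t):
        return datetime.fromtimestamp(t, tz=timezone.utc).isoformat()

    return {
        "modified": iso(st.st_mtime),  # last change to file contents
        "accessed": iso(st.st_atime),  # last read
        "changed": iso(st.st_ctime),   # metadata change (Unix) or creation (Windows)
    }
```

Recording these values for every file of interest, before any copying tool touches them, is what preserves the "snapshot in time."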

In addition, the snapshot can help the investigators trace the attack back to a specific individual, account or server, and take steps to halt it. The trail may not lead to an individual directly—in the distributed denial-of-service attacks of recent years, the culprit was often a company that hadn't realized one of its servers had been infiltrated and programmed to attack other servers. In many cases, tracing an attack all the way back to its real source just won't happen.

But if a company does attempt to prosecute someone, it needs to have well-documented and rigorously followed procedures in place for collecting, storing and analyzing evidence. Remember that one day, you may have to present findings to a court of law. Keep detailed notes of everything. Sign and date them all.

John Tan, who's written an excellent guide to forensic readiness, suggests that administrators track the RAM, registers, raw disk and logs of the victim system or systems, as well as the attacker's system or systems, if available, to get an accurate picture of what was happening on the network at the time of attack. Surveillance of the actual physical environment is also invaluable for assessing whether or not the cybersecurity breach was also a physical facilities breach.

"Centralized logging is the key to efficient IDS and forensic strategies," writes Tan. By saving all server logs to a secure, centralized server, even if a server is irreparably compromised, you still have a place to begin the investigation.

The trick here is uniformity. Logs should all be sent in the same format, so that they can be analyzed and processed more quickly. Tan recommends using the syslog protocol, native on Unix devices and achievable through third-party products on Windows. In addition, administrators should record IP addresses, not domain names, since domain names can be spoofed or may change between the time the evidence is recorded and the time it's analyzed.
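As an illustration, assuming a Python application on a monitored host (the server address below is a placeholder for your central log server), forwarding events over syslog can be as simple as:

```python
import logging
import logging.handlers

# 127.0.0.1:514 is a placeholder; point this at your central log server.
handler = logging.handlers.SysLogHandler(address=("127.0.0.1", 514))
handler.setFormatter(logging.Formatter("%(name)s: %(message)s"))

audit = logging.getLogger("audit")
audit.addHandler(handler)
audit.setLevel(logging.INFO)

# Record the raw IP address, not the (spoofable, changeable) domain name.
audit.info("login failure for root from 192.0.2.17")
```

Because every host emits the same format to the same place, an investigator has one uniform body of evidence to work from even if the originating server is destroyed.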

Time is a complex issue when saving logs to a centralized server. If different time zones are involved, time-stamping can be misleading. Your team should pick one reference—Greenwich Mean Time, for example—and make sure all servers log in it.

It's also tough to analyze incursion sequences—i.e., construct a timeline of the break-in—if the clocks on all devices aren't synchronized perfectly. Recording when the data arrives at the logging server isn't effective, because it could be delayed. "The more devices on the network, the less possible it is to keep them all in sync. Without synchronized times, reporting will be confusing," suggests Tan.

Somewhere, some point must serve as a standard reference. Perfectly synchronized (not "almost-synchronized") servers not only ease analysis, but they also look better in court. One way to get that level of synchronization is the Network Time Protocol (NTP, RFC 958), which runs natively on Unix and through third-party software on Windows networks.
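A small Python sketch shows why a single reference zone matters: two servers logging the same instant in different zones produce stamps that line up only if the offsets are recorded and normalized to one timeline.

```python
from datetime import datetime, timezone

# The same instant, logged by two servers in different time zones.
east_coast = datetime.fromisoformat("2001-12-01T09:30:00-05:00")
greenwich = datetime.fromisoformat("2001-12-01T14:30:00+00:00")

# Because both stamps carry their zone offsets, they compare as equal...
assert east_coast == greenwich

# ...and both normalize to the same entry on a single GMT/UTC timeline.
assert east_coast.astimezone(timezone.utc).isoformat() == "2001-12-01T14:30:00+00:00"
```

Stamps recorded without offsets (a bare "09:30:00") cannot be reconciled this way, which is why picking one zone up front is so important.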

And though the logs will have time stamps for when they were created, you may want to verify those stamps by using an online notary, particularly if you want the evidence to stand up in court. Subscribers to an online digital asset notary run software that makes a fingerprint of any file—recording the exact time it existed in that state—and then sends the fingerprint to the notary for safekeeping.

When you need to prove that a file you collected hasn't been tampered with or backdated, you can generate a new fingerprint of the data and compare it to the old. If it matches, the data is pristine. If anything was altered, the fingerprints won't match.
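In Python, for example, the fingerprinting step amounts to a cryptographic hash—SHA-256 here, as one common choice; a notary service's actual algorithm may differ:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Hex digest that serves as the evidence file's fingerprint."""
    return hashlib.sha256(data).hexdigest()

evidence = b"GET /admin.cgi HTTP/1.0 from 192.0.2.17"
notarized = fingerprint(evidence)  # the value deposited with the notary

# Later, before presenting the copy in court:
assert fingerprint(evidence) == notarized         # pristine copy: match
assert fingerprint(evidence + b"!") != notarized  # any alteration: mismatch
```

Even a one-byte change produces a completely different digest, which is what makes the comparison meaningful.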

3) Create an evidence collection procedure.
Attacks can rewrite data or files, delete files, even leave crippled kernels with built-in back doors for launching further attacks. Don't try to simply unplug the Internet connection and make quick copies of all the files with software on the system. If the attack was at all successful, then some of the core applications have probably been compromised.

Instead, prepare a reaction kit: a CD-ROM-based toolkit with trusted versions of the programs needed to capture data for analysis. The free Coroner's Toolkit for Unix is one such tool. Don't use standard backup programs for this process, since they can alter a file's "last accessed" timestamp and thereby destroy the vital evidence you need.

Start your collection with the most volatile data, collecting everything possible. "The rule is to collect now and examine later," writes Tan. Shutting down the computer at this point is a no-no, as valuable information could be lost.

It's best to save the information on write-once, read-many-times media, such as CD-R, as well as to the centralized logging server, if possible. Be sure to create hashes (in essence, tamper-evident checksums of the files) before and after sending any data, to verify that what went in is what came out. Remember the chain of custody: evidence has to be handled meticulously and documented thoroughly. Digital notaries that create a trusted, third-party time stamp on logs are equally important for verifying when disk images were collected and that they haven't been altered.

Once collected, secure physical evidence in a safe place. It might literally be a safe, or a room with key-card access control. And make sure that your investigators handle only copies of the evidence—never the real thing—and that any evidence uploaded to the computer is accessed in read-only fashion to avoid corruption.

Order of Volatility

The order in which data is collected, from most to least volatile, is crucial:

  1. Registers and cache
  2. Routing table, arp cache, process table, kernel statistics, memory
  3. Temporary file systems
  4. Disk
  5. Remote logging and monitoring data relevant to the system in question
  6. Physical configuration, network topology
  7. Archival media

Source: Dominique Brezinski and Tom Killalea
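The order above can be turned into a written collection plan ahead of time. The sketch below is illustrative only—the commands named are hypothetical Unix examples, not a definitive procedure, and any tools you do use should be trusted copies from your reaction kit:

```python
# Each tier of the order of volatility, paired with an example action.
COLLECTION_ORDER = [
    ("registers and cache", "capture CPU state with a trusted debugger, if feasible"),
    ("memory and kernel tables", "ps aux; arp -a; netstat -rn; dump RAM"),
    ("temporary file systems", "archive /tmp and swap to write-once media"),
    ("disk", "take a bit-level image, e.g. with dd"),
    ("remote logs and monitoring data", "pull records from the central log server"),
    ("physical and network configuration", "photograph hardware, record topology"),
    ("archival media", "catalog existing backups"),
]

# Print the plan as a numbered checklist for the response team.
for step, (tier, action) in enumerate(COLLECTION_ORDER, 1):
    print(f"{step}. {tier}: {action}")
```

Keeping the plan in a machine-readable form like this makes it easy to check off, log and time-stamp each step as it is performed.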

4) Train your security personnel.
Just how expensive will forensic analysis be? Like anything involving computers, it depends on whether you handle it in-house or outsource it. If you're already using an MSP, forensic analysis may be included as part of the service contract. If you're monitoring security in-house, then there are two options—bring in an outside consulting firm or dedicated computer forensics agency, or do things in-house.

Keeping things in-house is the most cost-effective solution, because you can train existing security staff, who already know your network, in forensic analysis. Computer forensics, at least where network attacks are concerned, is a very nascent field, but there are already some rather inexpensive courses. @Stake, for instance, offers a "Digital Forensic Analysis" course for $3,000 (less for government agencies) that teaches how to identify and analyze digital evidence.

There's also a growing community devoted to computer forensics; most of the applications and articles on the subject are free, and were written by a small group of security professionals trying to advance the state of computer forensics. It's easy to begin learning about this online.

5) Cost and cleanup.
If you're analyzing in-house, you must budget your security professionals' time. There aren't many hard numbers, but last year the Honeynet Project, which puts real servers with standard configurations onto the Internet and watches how hackers attempt to gain access to and subvert them, held a contest to see who could do the best forensic analysis on data from one particular intrusion.

The results showed just how expensive analysis can be. "The average time spent in investigation turned out to be about 34 hours per person," wrote contest judges. In other words, it took one expert nearly a week to deal with a mess that took an intruder perhaps a half-hour to create. Project leaders estimated that, given an investigator's $70,000 annual salary, investigation time for a single incident cost about $2,000.

If you've trained personnel in-house or retained an MSP, you'll be ready when there's an intrusion and you have to fix the damage. If you're relying on outside resources, now is the time to begin making contacts at consulting companies or security firms, so they're in place should you need their services.

