In-Depth

Choosing the Right Protocol Analyzer

Why hardware-based protocol analyzers are worth the extra cost and outperform their software-based counterparts.

by Steve Wong

A protocol analyzer is computer software and/or hardware that can interpret traffic passing over a network. As data streams flow across the network, the analyzer captures each packet, then decodes and analyzes its contents according to the appropriate protocol specifications.
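To make this concrete, the short sketch below captures a handful of packets and prints a one-line decode of each. It is only a rough illustration of what any analyzer does internally; it assumes the open-source Scapy library for Python, which this article does not otherwise reference, and it requires capture privileges on the machine running it.

from scapy.all import sniff, IP, TCP

def decode(pkt):
    # Print a one-line decode of each captured frame.
    if pkt.haslayer(IP):
        ip = pkt[IP]
        proto = "TCP" if pkt.haslayer(TCP) else str(ip.proto)
        print(f"{ip.src} -> {ip.dst}  proto={proto}  len={len(pkt)} bytes")

# Capture 100 packets from the default interface and decode each one
# (requires capture privileges, e.g. root on Linux).
sniff(count=100, prn=decode, store=False)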

Protocol analyzers are used to monitor network usage and resolve network problems; they can be especially helpful in identifying incidents of malicious software passing through a network. Take, for example, the recent Conficker worm that slithered onto computers on April 1 via peer-to-peer (P2P) network connections. By using an analyzer to detect P2P traffic running on a network, an organization can obtain a more accurate picture of what is happening on its network. After pinpointing the offending parties, the organization can mitigate the risk and institute policies to prevent recurrences.
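As a rough illustration of that kind of detection, the following sketch tallies traffic per source host on TCP ports commonly associated with BitTorrent. The port range, the capture file name, and the use of Python's Scapy library are assumptions made purely for illustration; real P2P detection usually requires deeper inspection, because P2P clients frequently hop ports.

from collections import Counter
from scapy.all import rdpcap, IP, TCP

# Ports commonly associated with BitTorrent -- an illustrative assumption only.
P2P_PORTS = set(range(6881, 6890))
bytes_per_host = Counter()

# "capture.pcap" is a hypothetical trace file saved by the analyzer.
for pkt in rdpcap("capture.pcap"):
    if pkt.haslayer(IP) and pkt.haslayer(TCP):
        tcp = pkt[TCP]
        if tcp.dport in P2P_PORTS or tcp.sport in P2P_PORTS:
            bytes_per_host[pkt[IP].src] += len(pkt)

# The heaviest talkers on suspect ports are the hosts to investigate first.
for host, nbytes in bytes_per_host.most_common(10):
    print(f"{host}: {nbytes} bytes on suspected P2P ports")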

Demand is also surging for high-value services, including VoIP, video telephony, and Web conferencing. Troubleshooting the problems that emerge within these services is becoming ever more complex for network administrators.

With the large number of protocol analyzer products on the market today, how can an organization decide which will provide the best value for its network? The decision ultimately comes down to a choice between a software-based protocol analyzer and a hardware-based one. The most popular protocol analyzer is software-based and runs on Windows or Linux PCs; its popularity stems in part from the fact that it is freeware.

As popular as this product is, it is not without limitations. We will explore several reasons hardware-based protocol analyzers are worth the extra cost and are superior to software-based ones.

MAC Errors

Software-only protocol analyzers are significantly limited in their ability to analyze and capture frames with MAC errors. Ethernet switches typically discard frames that have MAC errors because the frame information is no longer reliable and the errors must not be allowed to disrupt network services. Nonetheless, it is important to capture and analyze these frames because they can also provide clues about why a network is not performing well.

High Performance

Software-only protocol analyzers cannot capture and analyze all the frames on a highly utilized link, but a card specifically designed for this task can. NICs (network interface cards) are designed for client-server communication (Web surfing, file exchanges, email, database queries, etc.), but they perform poorly under the demands of protocol analysis. When performing protocol analysis, typical Gigabit NICs can start dropping frames when network utilization reaches 8-10 percent. Such utilization spikes can easily occur and go undetected because network trending tools average their measurements over the intervals they plot, effectively flattening any spikes.
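A back-of-the-envelope calculation shows why trending tools hide these bursts. In the hypothetical example below, a three-second burst to 85 percent utilization disappears entirely once the one-second samples are averaged into a five-minute plotting interval.

# 300 one-second utilization samples at a 5 percent baseline, with a
# three-second burst to 85 percent -- values chosen purely for illustration.
samples = [5.0] * 300
samples[150:153] = [85.0] * 3

peak = max(samples)
five_minute_average = sum(samples) / len(samples)

print(f"peak one-second sample: {peak:.1f}%")                  # 85.0%
print(f"five-minute plot value: {five_minute_average:.1f}%")   # 5.8% -- the spike is invisible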

On the other hand, hardware-based protocol analyzers typically employ FPGA accelerator technologies that can capture traffic on links running at utilization rates of up to 100 percent. The Utilization window in Figure 1 is monitoring the utilization of four separate links (four channels) in a live Gigabit Ethernet network. Notice how the values change over time. Because these values can be extremely high, approaching 80 percent (channels A and B) under some circumstances, software-based protocol analyzers will not be able to "keep up." In such a situation, the network simply overwhelms the software-based protocol analyzer, and the value of the data it captures is suspect.

[Figure 1. Utilization window monitoring four channels of a live Gigabit Ethernet network.]

Time-Stamping

Hardware-based protocol analyzers can accurately time-stamp frames in FPGA hardware as they are analyzed and/or captured. If the card is properly designed for network testing, it should offer double-digit nanosecond accuracy. A software-only protocol analyzer, by contrast, must time-stamp frames in software: before a frame can be time-stamped by the application, it has to traverse the card and the card's driver.

Time-stamping frames in software typically yields an accuracy of about one microsecond, because it relies on the clocking capabilities of the operating system, and the accuracy can fluctuate by tens of microseconds between frames when other system processes demand resources. For organizations carrying voice or video traffic, quality metrics such as mean opinion score and jitter/latency values are computed to help characterize VoIP quality. These metrics can only be calculated accurately if hardware-based time-stamping techniques are used.
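The sensitivity of jitter calculations to timestamp error can be illustrated with a rough sketch of the RFC 3550 interarrival-jitter estimate. The 20 ms packet spacing and the magnitude of the injected noise (tens of microseconds, as cited above) are assumptions for illustration: a perfectly regular stream should report zero jitter, yet software-level timestamp error alone produces a visible "phantom" jitter figure.

import random

def interarrival_jitter(arrivals, expected_spacing=0.020):
    # Smoothed jitter estimate per RFC 3550: J += (|D| - J) / 16, in seconds.
    j = 0.0
    for prev, cur in zip(arrivals, arrivals[1:]):
        d = (cur - prev) - expected_spacing   # deviation from the nominal spacing
        j += (abs(d) - j) / 16.0
    return j

# A perfectly regular 20 ms stream, then the same stream with +/-50 microseconds
# of timestamp noise (an assumed software time-stamping error).
ideal = [i * 0.020 for i in range(500)]
noisy = [t + random.uniform(-50e-6, 50e-6) for t in ideal]

print(f"jitter with ideal timestamps: {interarrival_jitter(ideal) * 1e6:.2f} microseconds")
print(f"jitter with noisy timestamps: {interarrival_jitter(noisy) * 1e6:.2f} microseconds")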

Real-Time Multi-Segment Analysis

Hardware-based protocol analyzers can perform real-time multi-segment analysis: using FPGA-based capture cards, they can correlate data from multiple network segments in real time. Real-time multi-segment analysis lets you view connections end-to-end to see how transactions propagate through the network. This makes it easier to see where there may be bottlenecks or other problems and is one way to gain end-to-end network visibility.

The same level of accuracy (double-digit nanosecond resolution) needed to prevent phantom jitter in an RTP stream (used to support multimedia communication) is also needed to maintain the alignment of merged trace files.
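The sketch below shows the basic idea behind merging trace files from two segments into a single timeline, again using Python and Scapy purely as illustrative assumptions: packets are interleaved by their capture timestamps, which is only meaningful if both capture points share an accurate common clock.

from scapy.all import rdpcap, wrpcap

# Hypothetical trace files captured simultaneously on two network segments.
segment_a = rdpcap("segment_a.pcap")
segment_b = rdpcap("segment_b.pcap")

# Interleave the packets by capture timestamp to build one end-to-end timeline.
# This only works if both capture points share an accurate common clock.
merged = sorted(list(segment_a) + list(segment_b), key=lambda pkt: pkt.time)
wrpcap("merged.pcap", merged)

print(f"merged {len(merged)} packets into a single timeline")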

Streaming to Disk

Perhaps one of the greatest benefits of using a hardware-based protocol analyzer is that such tools are often integrated with streaming-to-disk capabilities. This allows the analyzer to capture and record massive amounts of network data, which can then be used for "retrospective network analysis": the ability to review network events that took place in the past.

For example, if you know that the network suffered an outage or slowdown yesterday at 3:00 pm, you can, in effect, rewind time and review all the conditions that may be responsible for the problem. Software-based protocol analyzers simply do not support this important capability: capturing and storing network data over periods of days or months without losing any packets. Streaming to disk is useful for diagnosing intermittent problems, and it can also help spot network utilization spikes that occur during hours when no one is watching.
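As a rough illustration of what retrospective analysis looks like in practice, the following sketch pulls every packet recorded in a ten-minute window around yesterday's 3:00 pm slowdown out of a directory of rotated capture files. The directory layout, the file naming, and the use of Python's Scapy library are assumptions; a commercial streaming-to-disk product would provide its own query interface.

import glob
from datetime import datetime, timedelta
from scapy.all import rdpcap, wrpcap

# Ten-minute window around yesterday's 3:00 pm slowdown.
yesterday = datetime.now() - timedelta(days=1)
start = yesterday.replace(hour=14, minute=55, second=0, microsecond=0).timestamp()
end = yesterday.replace(hour=15, minute=5, second=0, microsecond=0).timestamp()

# "/captures/ring-*.pcap" is a hypothetical ring buffer of rotated trace files.
window = []
for path in sorted(glob.glob("/captures/ring-*.pcap")):
    window.extend(p for p in rdpcap(path) if start <= float(p.time) <= end)

wrpcap("outage-window.pcap", window)
print(f"extracted {len(window)} packets from the window of interest")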


Summary and Recommendations

Look for a network analyzer that has an intuitive user interface that highlights network problems in a way that requires little expertise to recognize. For example, look for color-coded icons that indicate the severity of a condition that might adversely affect network performance or cause loss of data. The ability to issue alerts via e-mail or mobile device (such as a pager or smart phone) or to run scripts to fix a problem is also an important feature.

The analyzer should have reporting capabilities that present network performance in a clear, concise form that executives can quickly understand. Products available today for a few thousand dollars can be used by almost any IT-oriented employee, but training an employee to use a product that is not intuitive could cost an organization significantly more.

An organization must be sure that the network analyzer can see what is happening at most of the seven layers of the OSI network model, and particularly at the application layer, because this layer is troublesome for many enterprises. Research has consistently shown that application failures are one of the key causes of network outages, yet problems at the application layer are also more difficult to troubleshoot because of the protocol complexity found there. Legacy products from just a few years ago are typically not application-aware and thus lack the capability to see and troubleshoot what is today a critically important area for enterprises. This is true of many software-based protocol analyzers.

In summary, even though corporate networks are more extensive and more complex than ever before, it is actually easier now for an organization to keep its network in optimal health. An investment in the appropriate level of network monitoring and management solutions will reduce the organization's total network operating cost and will greatly reduce the risks that might otherwise arise from poor network performance or a network outage.

Steve Wong brings nearly 10 years of test and measurement experience to his role as the vice president of marketing for ClearSight. He was the director of product marketing for Finisar, and was responsible for product management at Anritsu, where he drove initiatives to develop SONET/SDH and Ethernet/IP technologies for the company’s LAN/WAN analyzer and traffic generation test platforms and solutions. Steve holds a BA in computer science from New York University and an MBA from the Kellogg Graduate School of Management, Northwestern University. You can reach the author at swong@clearsightnet.com.
