Bringing Your Network Up to Speed: Using Network Arrays to Improve Performance and Availability of Fast Ethernet, Token-Ring and FDDI Networks
Data centers are expected to deliver 100 percent availability, 24 hours a day, 365 days a year -- especially when they provide mission-critical services for national security, public safety and telecommunications. These data centers depend upon redundant technologies to eliminate single points of failure. For example, data storage systems guard against single points of failure by configuring multiple disk drives into a redundant array of independent disks (RAID), gaining greater data throughput and protection from data loss through mirroring or striping. Using arrays of redundant network ports in mission-critical data centers mirrors the RAID approach, allowing organizations to incrementally improve network throughput while simultaneously protecting local area networks (LANs) from costly downtime.
Meeting the Challenge of Corporate Intranets
Maintaining reliable connectivity to mainframe computers, workgroup servers and data storage units is growing in importance as the use of corporate intranets continues to expand. To handle the growing needs of corporate intranets, organizations are reviewing the effectiveness of their network operations, modeling anticipated end-user and system requirements and producing service level requirements. These internal service level agreements (SLAs) characterize the metrics that are required for adequate network functionality. Analysts at International Network Services reported that network availability and performance were the leading metrics for all surveyed organizations.
Simply increasing a network’s data throughput will not meet the service requirements for growing intranets. Many organizations depend on the accessibility of time-tested technologies, such as network arrays, to maintain system availability and performance, and are not willing to experiment with newer technologies that have not been proven.
The most ubiquitous LAN topologies are Ethernet (10 Mbps), Fast Ethernet (100 Mbps), Token-Ring (4/16 Mbps) and FDDI (100 Mbps). Over 96 percent of data centers use one or more of these topologies to connect their intranetwork switches, hubs, bridges, gateways and routers. Many factors continue to drive the popularity of these topologies. Perhaps the most compelling reason for companies to continue expanding their networks with legacy technologies is the wide availability of mature, low-cost products with broad industrywide acceptance. Given the momentum of an installed infrastructure with trained and experienced service personnel, it is easy to understand why organizations choose legacy LAN topologies over newer technologies for mission-critical connectivity.
Several new network topologies have been developed to address the demand for greater network bandwidth. Gigabit Ethernet, Asynchronous Transfer Mode (ATM) and Fibre Channel are the leading topologies that offer bandwidth from 622 Mbps to 1000 Mbps (1 gigabit per second) and more (see Table 1).
Table 1: Comparing Gigabit Technologies -- Gigabit Ethernet, ATM and Fibre Channel compared by bandwidth (including Fibre Channel rates of 2.12 Gbps and 4.24 Gbps), applications (storage, network and video; SCSI, network and video) and supported media (copper and fiber for all three).
Gigabit Ethernet was designed to leverage the installed base of over 120 million desktops, workstations, servers and peripherals that are connected with Ethernet and Fast Ethernet. By retaining the same frame format, Gigabit Ethernet provides a faster physical layer service with only minimal changes to the transport and application layers. To deploy Gigabit Ethernet today, users need new fiber cable plants to support the higher speed; Gigabit Ethernet over copper is not expected to be standardized until late 1999.
ATM is a wide area network topology that provides support for quality of service (QoS) and fractional bandwidth. These features are especially attractive for audio/video applications. Earlier variations of ATM, running at 25 Mbps and 155 Mbps on the LAN, have not been well received.
Fibre Channel was optimized for use as a server-to-storage connection. As a high-bandwidth transport service that operates independently of the higher level protocols, it has the capability to support data, storage and multimedia applications. Today, Fibre Channel is used primarily for locally attached storage and storage clusters (network-attached storage).
With the ratification of the Gigabit Ethernet (IEEE 802.3z), Fibre Channel (NCITS T11) and ATM standards, products from multiple vendors are becoming more widely available. In addition to providing basic data connectivity, each of these standards promises to address future needs, including flow control and advanced network management. Unfortunately, these topologies do not provide intrinsic support for high availability network links or ports.
Early adopters will face significant challenges. Aside from the initial equipment cost, wide-scale deployment of any new technology means substantial training, installation, configuration and troubleshooting effort. Although these costs may be justified for some installations, driving network bandwidth to 1000 Mbps capacity may be overkill for most organizations that are just starting to face bottlenecks with 100 Mbps network links.
Understanding Network Arrays
Network arrays are conceptually similar to disk arrays built with redundant array of independent disks (RAID) storage technology. With RAID, multiple disk drives are configured in an intelligent disk array to deliver reliable, high-performance storage. With network arrays, multiple network ports are configured for high-availability, high-performance network connectivity. The redundant network ports provide the capacity for greater network throughput and reliability. Network arrays provide service levels analogous to RAID: disk mirroring corresponds to redundant network links with automatic failover and failback capabilities, and disk striping corresponds to link aggregation, where multiple network links provide a higher, aggregate capacity.
Network arrays can be configured to support greater network bandwidth. To add additional capacity, add more network ports and cabling. To double available bandwidth, just add a second network interface card (NIC) and corresponding network cables. To quadruple network bandwidth, install another two NICs and cables. The key to the array is the connecting software. Without the appropriate software, each additional NIC would create a new network segment and a bridge or router would be required to connect the segments together. With network link services software, all the NICs would constitute a single, logical array.
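As a rough illustration of this idea, the following Python sketch models several NICs presented as one logical array whose capacity grows with each added port. The class and port names are invented for illustration; real link services software operates at the driver level, not in application code.

```python
# Hypothetical sketch: link services software presenting several
# physical NIC ports as one logical network array.
from dataclasses import dataclass, field

@dataclass
class NicPort:
    name: str
    mbps: int          # nominal line rate of this physical port
    up: bool = True    # link state

@dataclass
class NetworkArray:
    """One logical link backed by multiple physical ports."""
    ports: list = field(default_factory=list)

    def add_port(self, port):
        # Each added NIC joins the array instead of creating a new
        # network segment that would need a bridge or router.
        self.ports.append(port)

    def aggregate_mbps(self):
        # Capacity scales with the number of active ports.
        return sum(p.mbps for p in self.ports if p.up)

array = NetworkArray()
array.add_port(NicPort("nic0", 100))
array.add_port(NicPort("nic1", 100))   # doubling bandwidth: add a second NIC
print(array.aggregate_mbps())          # 200
```

Adding two more 100 Mbps ports to the same array would quadruple the original capacity to 400 Mbps, exactly as described above.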
Two or more ports and links in an array provide additional bandwidth between points on the network. Intelligent data management algorithms balance the traffic over these aggregated links (or "trunks") using Layer 2 (Data link) and Layer 3 (Network) protocols. This distribution algorithm minimizes link saturation and transmission latency (see Table 2).
                          Fast Ethernet    Token-Ring    FDDI
Bandwidth with 2 Links    200 Mbps         32 Mbps       200 Mbps
Bandwidth with 4 Links    400 Mbps         64 Mbps       400 Mbps
Full Duplex Connection    200 Mbps         32 Mbps       200 Mbps
Full Duplex, 2 Links      400 Mbps         64 Mbps       400 Mbps
Full Duplex, 4 Links      800 Mbps         128 Mbps      800 Mbps
Table 2: Bandwidth of Network Arrays
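The distribution algorithm described above can be sketched as a flow-hashing function. This simplified Python example hashes the Layer 2 (MAC) addresses of a conversation to pick one link of the trunk; real implementations may also fold in Layer 3 (IP) addresses, and the function and variable names here are purely illustrative.

```python
# Illustrative sketch of flow-based load balancing: hash the source and
# destination addresses so every frame of a given conversation uses the
# same link (preserving frame order), while different conversations
# spread across all the aggregated links.
import zlib

def choose_link(src_mac: str, dst_mac: str, n_links: int) -> int:
    """Map a flow, identified by its MAC pair, to one of n active links."""
    key = (src_mac + dst_mac).encode()
    return zlib.crc32(key) % n_links   # deterministic per flow

# Distribute 32 hypothetical flows over a 4-link trunk.
links = [0] * 4
flows = [("00:a0:c9:%02x" % i, "00:a0:c9:ff") for i in range(32)]
for src, dst in flows:
    links[choose_link(src, dst, 4)] += 1
print(links)  # per-link flow counts; flows spread across the trunk
```

Because the hash is deterministic, a given conversation always lands on the same link, which is what keeps frames in order without per-packet reassembly.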
The natural successor to Fast Ethernet (100 Mbps) is Gigabit Ethernet (1000 Mbps). Until the network bandwidth requirement reaches 1000 Mbps, however, an array of multiple Fast Ethernet ports and links can be simpler to install, easier to manage and more cost-effective to deploy.
Network arrays are also ideal for legacy networks with no natural successor. When faced with saturated networks, many organizations contemplate migrating to another network topology or supporting multiple topologies. These "forklift upgrades" can severely disrupt network communities that depend upon mission-critical applications running on mainframe systems and servers. By delaying the deployment of new network technologies until a future date when migration costs are reduced and network support and management tools have matured, organizations have the flexibility to deploy the best technology with minimal disruptions and downtime.
For organizations with 16 Mbps Token-Ring LANs, adding two or more NICs in a network array can increase aggregate network bandwidth to 48 Mbps or more. Networks with FDDI backbones can scale to 200 or 300 Mbps or more with an FDDI-based network array. More sophisticated network arrays can manage mixed topologies (e.g., Ethernet, Fast Ethernet, Gigabit Ethernet, Token-Ring and FDDI) for each network node.
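The arithmetic behind these figures is straightforward multiplication of the per-link rate by the link count (doubled again for full duplex operation). A small sketch, with an illustrative function name, reproduces the numbers above:

```python
# Back-of-envelope aggregate bandwidth: per-link rate x number of links,
# doubled when the links run full duplex.
def aggregate_mbps(link_mbps: int, n_links: int, full_duplex: bool = False) -> int:
    return link_mbps * n_links * (2 if full_duplex else 1)

print(aggregate_mbps(16, 3))    # Token-Ring, 3 links -> 48 Mbps
print(aggregate_mbps(100, 2))   # FDDI, 2 links -> 200 Mbps
print(aggregate_mbps(100, 3))   # FDDI, 3 links -> 300 Mbps
```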
Advantages of Network Arrays
Network arrays offer compelling advantages for many organizations. In addition to scalable bandwidth, network arrays also provide redundant network connections -- effectively eliminating the network link and ports as single points of failure. By configuring networks to support redundant network links, organizations can design data centers for a very high level of system availability.
For every LAN, a network array delivers redundant network connections that can eliminate network downtime whenever an active network adapter/switch/router port or cable fails. To improve reliability, each array can support several network links that automatically detect link failures, transfer traffic to redundant links and provide immediate failback whenever the link is restored.
When used with cooperating intranetworking devices, network arrays provide greater network bandwidth and link redundancy simultaneously. Instead of keeping redundant links in "stand-by" per the IEEE 802.1d Spanning Tree protocol, network arrays carry traffic on all redundant links. In the event of a link failure, traffic is transparently diverted to the other redundant links (see Table 3).
Link aggregation (or "trunking")
Load balancing to maximize available bandwidth
Active redundancy minimizes downtime and failover latency
Unaffected by intranetwork failures (NIC, switch, hub, bridge, gateway, router)
Logical abstraction maintains API compatibility with upper level network and transport protocols
Topology independent (Ethernet, Fast Ethernet, Gigabit Ethernet, Token-Ring, FDDI)
Table 3: Attributes of Network Arrays
Deploying Network Arrays
Faced with the dual challenges of increasing network loads and greater demands for reliability, organizations have begun to deploy network arrays for mission-critical intranets and related services. Network arrays are very cost effective and easy to install when compared to deploying new network topologies.
Network arrays can be configured using a series of single-channel or multi-channel NICs. A typical network array supports multiple logical links (or "trunks"). For example, a network server may provide one logical link to a network switch and another link to a redundant switch or peripheral.
Typically, network link services software is required to configure and manage network arrays. This operating system-specific software virtualizes all supported NICs and provides one or more logical links to the upper level network and transport protocols. By integrating the array support into the driver and higher network layers, full compatibility is maintained with all legacy applications and service protocols; no additional software or modifications are required.
For maximum functionality, verify the link aggregation protocol between the network array (NICs and corresponding link services software) and the corresponding network switch. First-generation network arrays handled only a single, full duplex, Fast Ethernet connection; redundant network links ran only at half speed (half duplex). Second-generation network arrays support pre-standard IEEE 802.3ad link aggregation protocols with multiple full duplex connections.
The Future of Network Arrays
Network arrays are a powerful abstraction for network connectivity. Instead of administering a collection of point-to-point links and ports, arrays provide a more manageable series of logical links. Unique array services include link aggregation (for increased network bandwidth) and active redundancy (for high-availability, low-failover latency connections). Future array services can provide fault tolerance and connection-based security levels.
Managing network links for link aggregation and failure detection is inherently processor-intensive. To support each physical port and link with a 500 millisecond (ms) failure detection threshold, link management software must inspect each port/link at least every 250 ms; two consecutive failed inspections would cause the link to be deactivated, and traffic would be rebalanced across the remaining ports/links.
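This inspection schedule can be sketched in a few lines of Python. The sketch uses discrete inspection ticks rather than real 250 ms timers, and all class and variable names are invented for illustration:

```python
# Sketch of the failure detection scheme: inspect each link every 250 ms,
# so two consecutive misses equal the 500 ms detection threshold; a failed
# link is deactivated, and a restored link fails back immediately.

INSPECT_MS = 250
MISSES_TO_FAIL = 2          # 2 x 250 ms = 500 ms detection threshold

class MonitoredLink:
    def __init__(self, name):
        self.name = name
        self.active = True
        self.misses = 0

    def inspect(self, responded: bool):
        if responded:
            self.misses = 0
            self.active = True              # immediate failback on recovery
        else:
            self.misses += 1
            if self.misses >= MISSES_TO_FAIL:
                self.active = False         # deactivate after 500 ms silence

def active_links(links):
    # Traffic would be rebalanced across whatever this returns.
    return [l.name for l in links if l.active]

links = [MonitoredLink("nic0"), MonitoredLink("nic1")]
links[0].inspect(False)                     # first miss: still active
links[0].inspect(False)                     # second miss: deactivated
print(active_links(links))                  # ['nic1']
```

Running one such loop per port for every array in a busy server is exactly the per-link overhead that motivates the offloaded designs described below.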
Future implementations of network arrays should feature multi-channel NICs equipped with embedded processors and dedicated failure detection circuitry to provide the necessary functionality. Models with copious memory could buffer network traffic and minimize host CPU utilization, ensuring good host and network performance.
About the Author: Jim Hsia is the Vice President of Marketing at ZNYX Corporation (Fremont, Calif.). He can be reached at (510) 249-0800, or via e-mail at firstname.lastname@example.org.