In-Depth

Need to Add Capabilities to Your Network? Warm Up the Forklift, or Install an Intelligent Infrastructure

Predictability, reliability, stability and cost-effectiveness are the qualities a network administrator depends on in a network infrastructure, particularly when the network is called upon to perform a larger role. E-mail, intranet and Internet activities are all driving forces for greater productivity, but, unfortunately, all place a serious burden on network resources. If the network goes down or responds slowly, productivity drops, costs rise and opportunities may be missed. It's a challenging paradox: to be competitive, one has to add new applications to the network, yet the network may not be able to handle the new loads reliably without major expense and disruption. There is a solution to this contradiction: deployment of an ATM intelligent infrastructure. But first, it's worth exploring the evolution of networking to appreciate what resources have been expended to date. In so doing, a clear picture of Ethernet's development emerges, and one might pause to ask, "Can we afford to continue down this path?"

In The Beginning

In the beginning, there was Ethernet: a simple yet elegant protocol developed by Xerox to allow devices in a small network to communicate with one another. Unfortunately, there was no accepted standard for the protocol, and customers couldn't be guaranteed interoperability between devices from different vendors. The IEEE solved this by adopting the 802.3 standard, rendering the first generation of Ethernet devices obsolete.

At the time, an individual Ethernet segment could be no more than 500 meters long because the signal attenuated as it traveled down the cable. Ethernet repeaters were developed, which allowed segments to be extended up to two kilometers. Now that a larger network was possible, the cost of coaxial cable and connectors became a problem ... so 'thin net' cabling emerged. With lower material costs came increased deployment and usage. More and more workstations and servers were now moving traffic over the cable, and since every wire in the network carried all the traffic, even traffic no nearby device needed, congestion became a problem. The solution: the learning bridge. Bridges filtered out foreign packets, those aimed at receivers on other segments, keeping them off wires that didn't need them, and each segment became less congested. But bridges introduced a new problem: loops, and the resulting broadcast storms. Because bridges didn't keep track of a packet's start-to-finish path, broadcasts could be passed around between them endlessly. To prevent this, the Spanning Tree protocol was introduced. Spanning Tree allowed resilient Ethernet networks to be built, albeit at the cost of disabling all but one path between devices. Of course, this rendered the first generation of bridges obsolete.
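
To make the mechanism concrete, here is a minimal sketch of learning-bridge behavior (the class and port numbers are invented for illustration, not any vendor's implementation). The bridge learns source addresses, filters frames whose destination sits on the arrival segment, and floods anything it hasn't learned yet -- the flooding that loops turn into storms.

```python
# A minimal learning-bridge sketch; names and ports are illustrative only.
class LearningBridge:
    def __init__(self, ports):
        self.ports = ports          # e.g. [1, 2, 3]
        self.mac_table = {}         # MAC address -> port it was last seen on

    def handle_frame(self, src_mac, dst_mac, in_port):
        # Learn: remember which port the source address lives behind.
        self.mac_table[src_mac] = in_port
        if dst_mac in self.mac_table:
            out = self.mac_table[dst_mac]
            # Filter: drop frames whose destination is on the arrival segment.
            return [] if out == in_port else [out]
        # Unknown destination (or a broadcast): flood to every other port.
        return [p for p in self.ports if p != in_port]

bridge = LearningBridge(ports=[1, 2, 3])
print(bridge.handle_frame("aa:aa", "bb:bb", in_port=1))  # unknown -> flood [2, 3]
print(bridge.handle_frame("bb:bb", "aa:aa", in_port=2))  # learned -> [1]
```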

While bridging eliminates foreign unicast traffic from a particular segment, a bridge must still deliver every LAN broadcast. As networks grew, the broadcast load caused performance problems. The fix: divide the network into subnets and route traffic between them as needed. The Ethernet backbone became practical again, but the price was complexity, latency and continual management headaches as the subnets grew.
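
The subnet fix itself is easy to sketch. Assuming made-up addresses, a host simply checks whether a destination falls inside the local subnet: broadcasts and local traffic stay inside it, and everything else is handed to the subnet's router.

```python
import ipaddress

# Hedged sketch: a host decides whether to deliver locally or hand the
# packet to the subnet's router. Addresses are invented for illustration.
local_net = ipaddress.ip_network("192.168.10.0/24")
gateway   = ipaddress.ip_address("192.168.10.1")

def next_hop(dst):
    dst = ipaddress.ip_address(dst)
    # Same subnet: deliver directly; otherwise route via the gateway.
    return dst if dst in local_net else gateway

print(next_hop("192.168.10.55"))  # on-subnet  -> 192.168.10.55
print(next_hop("10.0.0.7"))       # off-subnet -> 192.168.10.1
```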

The Emergence of New Applications

As networks were more widely deployed, new applications emerged that put heavier burdens on them. Client/server applications and graphical user interfaces moved more and more traffic across the network. Response times and application performance slowed to unacceptable levels. The fix was Ethernet switching: giving each user a dedicated wire, with no other users on the segment. This was implemented in a new generation of hardware, which rendered the earlier learning bridges obsolete.

As users took advantage of dedicated bandwidth, the links between switches became overloaded. In Ethernet, simply adding more wires between the switches couldn't solve the problem: Spanning Tree would disable the additional links. So, a new generation of Ethernet was invented: Fast Ethernet, running at 100 megabits per second. For demarcation purposes, this brings us into a contemporary time frame.

As links within the LAN became faster, routing bottlenecks between LANs became more apparent. In Ethernet backbones, the proposed fix today is to install a new generation of hardware-assisted routers. But even hardware-assisted routers have a problem in large networks: their routing tables become too large. A new generation of routers and routing protocols, known as 'layer three switching,' is in the works. By adding route information to each packet, these protocols allow routers to make faster forwarding decisions and use smaller routing tables. A multitude of protocols has been put forth: IBM has promoted ARIS, Cisco has promoted Tag Switching, and the IETF is developing a standard under the name Multiprotocol Label Switching (MPLS). Doesn't the latter spell the demise of the proprietary solutions?
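
A minimal sketch of the label-swapping idea behind these proposals (the labels and port numbers below are invented for illustration): once route information rides in the packet as a label, each router needs only one exact-match table lookup instead of a search through a large routing table.

```python
# Illustrative label-swapping table, in the spirit of MPLS-style
# "layer three switching". Entries are invented for the sketch.
# Key: (in_port, in_label) -> (out_port, out_label)
label_table = {
    (1, 17): (3, 42),
    (2, 42): (4, 99),
}

def forward(in_port, in_label):
    # One exact-match lookup replaces a longest-prefix route search.
    out_port, out_label = label_table[(in_port, in_label)]
    return out_port, out_label   # swap the label, send out the port

print(forward(1, 17))  # -> (3, 42)
```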

Because Ethernet performance is determined by which broadcast domain a workstation is in, administrators need to perform lots of moves, adds and changes. To make this easier and more reliable, 'Virtual LANs' were developed. If you want to connect several switches together and create a VLAN across them, there are two choices: 802.10 encapsulation or 802.1q tagging. 802.10 is already obsolete for VLAN trunking -- vendors don't support it. And 802.1q isn't ready yet.
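
For the curious, here is a hedged sketch of what 802.1q-style tagging does to a frame, assuming the draft's four-byte tag format (a 0x8100 type value followed by priority and VLAN ID fields) inserted after the source MAC address; the surrounding code is illustrative only.

```python
import struct

# Sketch of 802.1q tagging: a four-byte tag (TPID 0x8100 plus a 3-bit
# priority and 12-bit VLAN ID) is inserted after the source MAC address.
def tag_frame(frame: bytes, vlan_id: int, priority: int = 0) -> bytes:
    tci = (priority << 13) | (vlan_id & 0x0FFF)
    tag = struct.pack("!HH", 0x8100, tci)
    # dst MAC (6 bytes) + src MAC (6 bytes), then the tag, then the rest.
    return frame[:12] + tag + frame[12:]

frame = bytes(12) + b"\x08\x00" + b"payload"   # dummy frame for the demo
tagged = tag_frame(frame, vlan_id=10, priority=5)
print(len(frame), len(tagged))  # the tagged frame is four bytes longer
```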

Different applications need different, guaranteed levels of service: data traffic can tolerate "bursts" and delays better than video and audio feeds can. Ethernet backbone vendors are starting to promise support for different service levels, allowing you to give one application or traffic type preference in case of congestion. Cisco's NetFlow switching promises this, but it's proprietary and not interoperable with other vendors' equipment. The IETF's Resource Reservation Protocol (RSVP) and the IEEE's 802.1p specification also aim to provide service levels. These standards aren't finished yet, and nobody knows when they will be; but one thing is known: they will make existing Ethernet products obsolete. They rely on tagged frames larger than the standard Ethernet maximum -- frames that current equipment is designed to reject.
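
The service-level idea itself is simple to sketch. Assuming a strict-priority policy (one possible choice; 802.1p defines eight priority values but leaves the scheduling algorithm to the implementer), a congested switch would drain higher classes first:

```python
from collections import deque

# Sketch of the service-level idea behind 802.1p: frames carry a priority,
# and under congestion the switch serves higher classes first.
queues = {p: deque() for p in range(8)}   # 802.1p defines priorities 0-7

def enqueue(frame, priority):
    queues[priority].append(frame)

def dequeue():
    # Strict priority: always serve the highest non-empty class.
    for p in range(7, -1, -1):
        if queues[p]:
            return queues[p].popleft()
    return None

enqueue("bulk-data", 0)
enqueue("video", 5)
print(dequeue())  # "video" goes first under congestion
```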

The Gigabit Ethernet Solution

Bandwidth requirements continue to mount -- and Ethernet backbones can't deal with them by simply adding links. Which brings us to the Gigabit Ethernet solution. Despite all the unresolved issues with Ethernet backbone technology, many people say they like the concept of Gigabit Ethernet because they already understand Ethernet, and they believe a Gigabit version will extend their networks in a predictable way. Well, the first generation of Gigabit Ethernet products has appeared over the last six months, and from what has been widely reported, unfortunately, they're right: Gigabit Ethernet is extending the Ethernet pattern -- churn and all -- just as predictably.

The first problem that was encountered was incompatibility between vendor implementations. Products wouldn't work together. So, the IEEE is developing a standard for Gigabit Ethernet: 802.3z. It isn't complete yet, but the standard changes the Gigabit design and will make the first generation of products obsolete.

The next problem for Gigabit Ethernet is distance limitations. Early designs were limited to a hundred meters or so. To extend beyond this, some designs adopted single-mode fiber links. But single mode is expensive. Gigabit vendors are hoping to invent a new multi-mode technology that could reduce future wiring costs, though perhaps at the expense of making the last generation of Gigabit equipment obsolete. The future direction isn't clear yet. And given that Fast Ethernet ran out of bandwidth in about 18 months, when Gigabit does the same, how do you accelerate beyond Gigabit rates?

Ethernet backbone technology, and its users along with it, has experienced a tumultuous history of development, one that shows no signs of changing. What's wrong? There are three underlying causes of these problems:

First, an Ethernet backbone has no end-to-end path intelligence. Instead of choosing a link-by-link path when a datagram enters the network, Ethernet floods each packet to every device; once a switch learns that a destination is attached to a particular port, it filters that destination's packets from the other links. One side effect is that Ethernet backbones must prohibit multiple paths between switches: spanning tree disables duplicate links so that the flooded traffic won't loop and consume all available bandwidth. The result of this flood-and-trim model is capacity crippled by spanning tree, and, in turn, a continuous need for faster and faster link speeds -- and technology replacement.
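
The capacity cost is easy to demonstrate. In this toy topology (switch names and links are invented), a spanning-tree-style loop check blocks every redundant link outright, however much bandwidth it could have carried:

```python
# Sketch: spanning tree keeps only enough links to connect the switches,
# so every redundant (potentially load-sharing) link is disabled.
links = [("A", "B"), ("B", "C"), ("C", "A"), ("A", "D"), ("B", "D")]

parent = {}
def find(x):
    # Union-find: follow parent pointers to the root of x's group.
    parent.setdefault(x, x)
    while parent[x] != x:
        x = parent[x]
    return x

active, disabled = [], []
for a, b in links:
    ra, rb = find(a), find(b)
    if ra == rb:
        disabled.append((a, b))    # would form a loop: blocked
    else:
        parent[ra] = rb            # join the two groups
        active.append((a, b))

print("forwarding:", active)    # 3 links carry all the traffic
print("blocked:", disabled)     # 2 links sit idle
```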

Second, an Ethernet backbone doesn't manage broadcast traffic effectively. Because of the flood-and-trim architecture, every broadcast packet must be delivered to every segment, and it arrives at each switch with no indication of its past route. If any loop survives -- through misconfiguration or a spanning tree failure -- broadcasts circulate and multiply, and you get a broadcast storm.

Third, datagrams and groups of datagrams are not tagged with information to specify a guaranteed service level. This information is needed in any network that supports voice or video traffic, or guaranteed bandwidth for a transaction-processing application. The IEEE and IETF are planning to address this by changing the lowest-level Ethernet packet structure. However, the RSVP working group has warned that the result may not provide the quality of service many users want, and the changes will make existing Ethernet equipment obsolete.

Today's routing protocols don't provide different service levels for different applications. Their support for quality voice and video is questionable, and they can't even provide guaranteed bandwidth levels for client/server applications. There are proposals to add new protocols allowing workstations to identify which traffic flows need bandwidth guarantees, but these still don't provide interoperable, managed bandwidth. That's because today's routing protocols don't keep track of bandwidth allocations or congestion, nor do they have any way to choose a suitable path for a voice flow. Open Shortest Path First (OSPF) still isn't fully deployed, and it's already being made obsolete by emerging needs. And none of this addresses the congestion of data leaving the local subnet: in the MPLS model, all clients still compete for access to a subnet's gateway router. The bottom line is that forklift upgrades will be required, and the subnets will have to be manually tuned again and again.

The ATM Intelligent Infrastructure

Businesses today require a communication infrastructure that is predictable, reliable and stable. In fact, it's mission-critical. Internally, e-mail is becoming the information backbone, and client-server applications rely on 24x7 network availability. Externally, Web commerce may be the link to your largest customer via an electronic data interchange (EDI) purchasing system, or serve as a vehicle to promote the company's goods and services. In either case, if the network isn't stable, the business is needlessly being placed at risk. Lamentably, IS budgets are not growing to address these critical needs.

Given the continuing network disruptions and the shortage of resources to address them, one might ask, "What else is there?" Imagine a network that, instead of flooding and trimming, calculated the routes for data to use and marked the data with those routes. This would not only make the network more reliable, it would enable load-sharing links to be built without spanning tree disabling them. If more capacity were needed, newer, faster technologies wouldn't be required; simply plugging in additional link cables would increase network capacity. Instead of unmanageable broadcast flooding, there would be a central broadcast distribution service, one that could identify heavy broadcast generators and meter and throttle their traffic accordingly. Instead of relying on unfinished and unproven proposals for marking traffic with priorities and bandwidth requirements, there would be a proven, standardized tagging methodology. Well, actually, there is such a solution: an ATM intelligent infrastructure can address all these problems today.

Calculated, directed routes are supported with PNNI (the Private Network-Network Interface), which distributes address and link information among ATM switches, and with UNI (the User-Network Interface), which enables workstations and Ethernet switches to request directed routes from ATM switches. These technologies have been shipping for years and form the basis for some of the largest ATM networks in the world.
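
As a rough illustration of the idea -- the topology, switch names and bandwidth figures below are invented, and real PNNI path selection is considerably more sophisticated -- the entry switch knows the links and their available capacity, so it can compute an entire path that satisfies a connection request before any data moves:

```python
# Hedged sketch of source-routed path computation in the spirit of PNNI.
# Topology and bandwidth figures (Mb/s) are invented for illustration.
links = {
    ("S1", "S2"): 155, ("S2", "S4"): 25,
    ("S1", "S3"): 155, ("S3", "S4"): 155,
}

def find_path(src, dst, need, path=None):
    # Depth-first search for a loop-free path with enough bandwidth
    # on every hop to satisfy the request.
    path = path or [src]
    if src == dst:
        return path
    for (a, b), bw in links.items():
        if src not in (a, b) or bw < need:
            continue
        nxt = b if a == src else a
        if nxt in path:
            continue
        found = find_path(nxt, dst, need, path + [nxt])
        if found:
            return found
    return None

print(find_path("S1", "S4", need=100))  # -> ['S1', 'S3', 'S4']; the S2 hop is too thin
```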

Central, manageable broadcast services are part of the LAN Emulation protocol, standardized in 1995. And LANE networks don't generate broadcast storms.

Proven, interoperable data tagging is the very basis of ATM. Each ATM cell carries a short Virtual Path ID and Virtual Circuit ID, and ATM switches map these IDs to both forwarding ports and service categories. Again, this is proven and interoperable.
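
A minimal sketch of that mapping (port and circuit numbers are invented; CBR and UBR are genuine ATM service categories): a single table lookup both forwards the cell and fixes how it will be serviced.

```python
# Sketch of ATM cell switching: the (port, VPI, VCI) of an arriving cell
# indexes both a forwarding decision and a service category.
switch_table = {
    # (in_port, vpi, vci): (out_port, new_vpi, new_vci, service_class)
    (1, 0, 100): (3, 0, 200, "CBR"),   # constant bit rate: voice
    (2, 1, 300): (4, 1, 400, "UBR"),   # unspecified bit rate: bulk data
}

def switch_cell(in_port, vpi, vci):
    out_port, new_vpi, new_vci, svc = switch_table[(in_port, vpi, vci)]
    # The cell leaves with rewritten identifiers and a known service class.
    return out_port, new_vpi, new_vci, svc

print(switch_cell(1, 0, 100))  # -> (3, 0, 200, 'CBR')
```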

The Future of Ethernet, The Reality of ATM

The future of Ethernet is likely to be fairly predictable if one considers past experience. Then again, users might seek another option given that past experience. Fast Ethernet ran out of bandwidth in about 18 months, but Gigabit technologies were on the horizon. When Gigabit backbones are exhausted in turn, there remains the challenge of how to accelerate beyond Gigabit rates. The technology churn in the world of Ethernet appears to be never-ending. New generations of products and protocols are more and more disruptive, triggering heavier one-time expenses, requiring more training and administration, and raising overall maintenance costs. And this doesn't even begin to address network service degradation, which is in fact more costly to business than hard down time.

As a proven, reliable, standards-based network architecture, the ATM intelligent infrastructure doesn't play the proprietary technology-churn game. If there is a need to increase link speeds and network capacity, simply plug in more ports and cables. If you need to run voice, video or teleconferencing over the network, you get quality delivery of those services. Moreover, the ATM intelligent infrastructure affords tremendous benefits to its users: lower administrative costs, fewer disruptions, and no technological barriers forestalling expansion of the network or of business opportunities. As stated before, Ethernet is an elegant protocol, one that meets the needs of many of today's organizations. However, given its sometimes-tumultuous past, people may be looking for another solution. There is one: the ATM intelligent infrastructure. One additional benefit is that the phrase "warm up the forklift" will once again refer to the manufacturing floor, not the MIS department.


About the Author:

Scott Smith is the Product Marketing Manager for FORE Systems' ForeView Network Management Software, ForeThought Intelligent Infrastructure Software, ForeView Accountant-ATM Usage-based Billing & Performance Application, and the ForeView network management partner and reference solution company suite of products.
