Q&A: Keeping Networks Simple Brings Big Rewards

The key to smart IT spending just may be investments that simplify your network infrastructure.

With IT budgets tightening and business users demanding more to stay competitive, it’s no wonder that IT wants to make the right investments. Where can IT get the biggest return on its investment? Mike Banic, vice president of product marketing, Ethernet Platforms Business Group at Juniper Networks, says that investing in the right network infrastructure is key. His advice: keep it simple.

Enterprise Systems: With the current global economic climate, what is the biggest IT issue facing your customers?

Mike Banic: The biggest IT challenge facing our customers is to remain relevant and strategic, and deliver real value to the organization in the face of shrinking budgets. This is nothing new; IT is always being asked to do more with less. What’s different now is the level of pressure applied to every department, and IT is no exception.

To overcome these obstacles, IT needs to be more innovative than ever -- adopting new technologies that help the company remain competitive in its market while quantifiably reducing total cost of ownership (TCO). To do that, of course, companies need to find the right strategic technology partner, one that delivers both innovation and efficiency.

Where do you see IT investing most of its dollars today in terms of network infrastructure?

Customers are investing in data center technologies in particular, not only to weather the current recession but to emerge from it stronger. We’re seeing focused investments in technologies that enable server consolidation, process automation, and network simplification -- all to lower the total cost of doing business and improve the customer experience.

What can enterprises do to advance the economics of networking in the data center?

Two words: eliminate complexity. Today’s data centers are far too complex, with multi-layer architectures and multiple devices, each offering discrete functionality that addresses very narrow needs. Compounding the problem, this device variety means IT typically runs several different operating systems, which requires multiple management systems, adds to the complexity, and drives up both capital and operational expense.

To truly advance the economics of networking in the data center, enterprises must deploy high-performance network devices. This may sound counter-intuitive, but think it through: high-performance network devices actually drive down network costs by collapsing layers, reducing the amount of hardware, and generally consolidating network operations, all of which have a material impact on capital and operational expenses. It definitely helps to implement high-performance solutions that build on a single operating system, which, in turn, provides a single application for all management and administrative functions.
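
To see how collapsing layers translates into fewer boxes, consider a rough back-of-the-envelope sketch in Python. The server count, port densities, and switch ratios below are illustrative assumptions, not figures from the interview; the point is simply that removing the aggregation tier removes an entire class of devices to buy, power, and manage.

```python
import math

# All figures are illustrative assumptions, not vendor data.
SERVERS = 2000                  # servers to connect
PORTS_PER_ACCESS_SWITCH = 48    # server-facing ports per access switch
ACCESS_PER_AGG_PAIR = 8         # access switches served by one aggregation pair
CORE_SWITCHES = 4               # fixed core layer in both designs

def three_tier(servers: int) -> int:
    """Device count for a classic access/aggregation/core design."""
    access = math.ceil(servers / PORTS_PER_ACCESS_SWITCH)
    aggregation = 2 * math.ceil(access / ACCESS_PER_AGG_PAIR)  # redundant pairs
    return access + aggregation + CORE_SWITCHES

def two_tier(servers: int) -> int:
    """Collapsed design: access switches uplink directly to the core."""
    access = math.ceil(servers / PORTS_PER_ACCESS_SWITCH)
    return access + CORE_SWITCHES

if __name__ == "__main__":
    print(f"three-tier devices: {three_tier(SERVERS)}")  # 58 with these assumptions
    print(f"collapsed two-tier: {two_tier(SERVERS)}")    # 46 with these assumptions
```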

How can IT simplify the network infrastructure?

Like the data center, the distributed enterprise suffers from excess: too many network layers, too much hardware, and too many operating systems. As with the data center, the answer is to reduce complexity by collapsing or consolidating layers and reducing the amount of equipment in use.

However, unlike the data center -- which is typically in a single location -- the network infrastructure is widely dispersed and decentralized. Newer technologies play a key role here by allowing multiple switches deployed on different floors, or even in different buildings, to operate as a single, logical device, dramatically reducing the administrative and management burden. The University of Exeter in England, for instance, utilizes this technology to reduce the number of managed devices by up to a factor of 10 -- a tremendous savings.
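
The arithmetic behind that factor-of-10 figure is straightforward. The sketch below assumes, purely for illustration, that up to 10 physical switches can be operated as one logical device; the switch counts are invented.

```python
import math

# Illustrative sketch: grouping physical switches into logical devices.
# The member limit and switch total are assumptions for illustration only.
PHYSICAL_SWITCHES = 400            # switches spread across floors and buildings
MEMBERS_PER_LOGICAL_DEVICE = 10    # physical switches managed as one logical switch

managed_before = PHYSICAL_SWITCHES
managed_after = math.ceil(PHYSICAL_SWITCHES / MEMBERS_PER_LOGICAL_DEVICE)

print(f"managed devices before grouping: {managed_before}")  # 400
print(f"managed devices after grouping:  {managed_after}")   # 40, a 10x reduction
```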

What’s the biggest impediment to this simplification, and what are the biggest mistakes IT staff make in simplifying their networks?

The greatest obstacle is perspective -- the IT equivalent of the old forest vs. trees analogy. It’s the problem of looking at devices rather than at architectures. When IT faces specific problems or needs, IT turns to point solutions. Pretty soon, there are far too many devices and far too many layers.

We believe IT needs to take a broader, more global view. Start with the architecture and use it as the blueprint. Remember to consider different vendors when building the network, or even adding to it -- it’s not about what one particular technology provider offers but what your company needs.

Why is the network operating system such a big issue in the data center?

It’s a problem everywhere, not just the data center. Ultimately, it’s a management, interoperability, and complexity issue.

Here’s a real-world example. A company typically has one operating system (OS) for the access layer switches, a different OS for the aggregation and core layer switches, yet another OS for the firewall service modules, and a completely different router OS. Not only does each class of device run a different operating system, but each layer may have multiple feature-specific versions of the OS. Can you imagine having to manage and maintain all those operating systems and versions? That’s a full-time job for a team of people, and businesses just don’t have the resources to support such an environment.

This sounds like an extreme case, but it’s not. An industry analyst recently spoke about a company that had more than 250 versions of an operating system running in its data center. Talk about complexity.

Forrester Research said it best in a recent report when it called multi-version network operating systems the silent killers of network efficiency. Businesses are better served by focusing on vendors that offer a single-version OS based on a single release train across all network components. Dozens of versions of even the same OS will inevitably lead to operational inefficiency.
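
One way to picture the burden is to tally the distinct OS images an operations team has to qualify, patch, and track. The short Python sketch below uses an invented device inventory with generic OS names; it is not tied to any vendor’s software.

```python
from collections import Counter

# Hypothetical device inventory; OS names and versions are invented for illustration.
inventory = [
    {"device": "access-sw-01", "os": "OS-A", "version": "9.2.1"},
    {"device": "access-sw-02", "os": "OS-A", "version": "9.4.7"},
    {"device": "access-sw-03", "os": "OS-A", "version": "9.4.7"},
    {"device": "agg-sw-01",    "os": "OS-B", "version": "12.1"},
    {"device": "core-sw-01",   "os": "OS-B", "version": "12.3a"},
    {"device": "fw-module-01", "os": "OS-C", "version": "4.0.8"},
    {"device": "edge-rtr-01",  "os": "OS-D", "version": "15.1"},
]

# Each distinct (OS, version) pair is one more image to qualify, patch, and track.
images = Counter((d["os"], d["version"]) for d in inventory)

print(f"{len(inventory)} devices, {len(images)} distinct OS images to maintain")
for (os_name, version), count in sorted(images.items()):
    print(f"  {os_name} {version}: {count} device(s)")
```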

What role does network management play here?

The current state of network management is best described as “swivel-chair management” -- an environment in which different applications are required to manage switches, routers, and security devices. Consequently, network administrators are constantly swiveling the chair to move from one screen to another to keep track of what’s going on -- a recipe for inefficiency.

Again, a single management application that offers a true single pane of glass, through which all network policies are pushed, makes life much easier. These policies may come through standard northbound interfaces from data center orchestration applications that manage physical servers and virtual machines, as well as virtualized storage and the network that connects them all. It’s a consolidated view of the entire network that provides significant operational efficiencies.
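
As a rough illustration of what a northbound push might look like, the sketch below sends one policy to a hypothetical REST endpoint. The URL, payload fields, and token are invented for this example and do not represent any particular management product’s API.

```python
import json
import urllib.request

# Hypothetical example: an orchestration tool pushes one network policy through a
# single northbound REST interface instead of touching switch, router, and firewall
# managers separately. Endpoint, schema, and token are placeholders, not a real API.
POLICY = {
    "name": "web-tier-qos",
    "match": {"vlan": 120, "application": "order-entry"},
    "action": {"dscp": "af41", "rate_limit_mbps": 500},
}

request = urllib.request.Request(
    url="https://nms.example.com/api/v1/policies",  # hypothetical endpoint
    data=json.dumps(POLICY).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer <token>",          # placeholder credential
    },
    method="POST",
)

# Will only succeed if pointed at a real management system exposing such an API.
with urllib.request.urlopen(request) as response:
    print(response.status, response.reason)
```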

What is the impact of virtualization on data center networks?

Simply put, it’s the need for higher throughput, lower latency, higher reliability, and even quality of service (QoS). Historically, average server utilization has been below 10 percent because each server has been dedicated to running a single application. Today, running a hypervisor allows a single server to run many applications on separate virtual servers and achieve much higher utilization. With higher utilization comes higher network traffic. Hypervisors also offer mobility for virtual machines (VMs) through live migration, which increases the demand for lower latency and greater reliability so that VMs -- which can be tens of gigabytes in size -- can be moved statefully, quickly, and reliably.
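
A quick back-of-the-envelope calculation shows why link speed matters for live migration. The VM size, link rates, and utilization factor below are illustrative assumptions, and the model ignores dirty-page re-copies and protocol overhead.

```python
# Rough estimate of the time to copy a VM's memory during live migration.
VM_MEMORY_GB = 32  # illustrative VM size, "tens of gigabytes"

def migration_seconds(vm_gb: float, link_gbps: float, efficiency: float = 0.7) -> float:
    """Time to move vm_gb of memory over a link at the given utilization."""
    bits_to_move = vm_gb * 8                 # gigabits
    effective_rate = link_gbps * efficiency  # usable gigabits per second
    return bits_to_move / effective_rate

for link in (1, 10, 40):
    t = migration_seconds(VM_MEMORY_GB, link)
    print(f"{VM_MEMORY_GB} GB VM over {link} GbE: ~{t:.0f} s")
```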

What is the impact of the data center network on applications?

Lowering network latency and increasing throughput and reliability all improve business processes that run on service-oriented architectures (SOA) because there is minimal delay between applications, and data moves quickly and reliably. SOA enables multiple applications to work together to form a business process such as checking inventory or placing an order. Simplifying the data center network architecture with high-performance devices -- collapsing layers and reducing the number of network devices -- has a visible impact on the response times of business processes because the applications are coupled more tightly together.
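
A toy model makes the latency point concrete. The service times, call pattern, and latency values below are illustrative assumptions rather than measured results.

```python
# Toy model of a business process composed of chained SOA service calls.
SERVICE_TIMES_MS = [5, 8, 3, 12]  # compute time of each service in the chain (ms)

def process_response_ms(one_way_latency_ms: float) -> float:
    """Total response time: service work plus one network round trip per chained call."""
    calls_between_services = len(SERVICE_TIMES_MS) - 1
    network_ms = calls_between_services * 2 * one_way_latency_ms  # round trip per call
    return sum(SERVICE_TIMES_MS) + network_ms

for latency_ms in (0.05, 0.5, 2.0):  # flat low-latency fabric vs. deeper multi-tier network
    print(f"one-way latency {latency_ms} ms -> process time {process_response_ms(latency_ms):.1f} ms")
```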

How do you see data center architectures changing in the future?

Now that data centers are being increasingly virtualized, businesses want more seamless connectivity between data centers -- an architecture that creates a single virtual data center, regardless of location. This means Layer 2 domains need to be extended across data centers, with technologies such as MPLS and VPLS enabling a dynamic environment between them.

About the Author

James E. Powell is the former editorial director of Enterprise Strategies (esj.com).
