In-Depth

Why Data Centers Need a Performance Review

By Dhritiman Dasgupta

Performance evaluations are a key productivity driver in any organization, but employees shouldn't be the only targets. Businesses should consider conducting performance evaluations on their IT infrastructure, too. An objective analysis can identify specific strengths and weaknesses and will help extract the highest possible level of efficiency and performance.

Your data center is a critical part of your IT infrastructure; it is the engine that drives and facilitates the delivery of applications and services throughout your organization. Here are the top five things IT managers and CIOs should ask themselves when evaluating their data center's performance.

1. Is your data center ready for the big data -- and even bigger data transfer -- explosion?

More data is created and shared globally than ever before: more e-mail messages, photos, and videos sent from more mobile devices running a wide variety of new applications. Is your data center network ready to handle the exabytes and zettabytes of traffic heading its way within the next two to three years? Probably not. You need to prepare your network for this data explosion, which inside the data center is compounded by 10 GbE servers rapidly replacing 1 GbE servers. Specifically, you'll need to deploy new high-performance switches, routers, and security devices as you refresh your servers to support these new bandwidth demands.
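
As a rough illustration of what that refresh means for the numbers, the sketch below (in Python) compares the oversubscription ratio of a top-of-rack switch before and after moving from 1 GbE to 10 GbE servers. The 40-server rack and the four 10 GbE uplinks are hypothetical figures chosen for the example, not data from this article.

    # Illustrative arithmetic only; server count and uplink sizing are assumptions.
    servers_per_rack = 40            # assumed top-of-rack density
    uplink_gbps = 4 * 10             # assumed four 10 GbE uplinks per rack switch

    def oversubscription(nic_gbps):
        """Ratio of server-facing bandwidth to uplink bandwidth."""
        return (servers_per_rack * nic_gbps) / uplink_gbps

    print(f"1 GbE servers:  {oversubscription(1):.0f}:1")    # 1:1
    print(f"10 GbE servers: {oversubscription(10):.0f}:1")   # 10:1

The same uplinks that comfortably carried a rack of 1 GbE servers become a tenfold bottleneck once those servers are refreshed, which is why the switching and routing tiers have to be upgraded alongside them.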

2. Does your network align with the flow of traffic in your data center?

Data center traffic has changed direction over the past few years, from predominantly north-south (into and out of the data center) to more east-west (server-to-server within the data center). This change is being driven by the evolution of applications, which are moving from being client-server-based to SOA-based. Also changing the landscape is the adoption of technologies such as server and storage virtualization. In the past, a "tree-shaped" data center network design, along with protocols such as Spanning Tree, was sufficient for supporting the north-south traffic flow. Now, with 80 to 90 percent of data center traffic traveling east-west, you must plan for high bandwidth, low latency, and predictable performance between servers.
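
To put rough numbers on that shift, here is a small back-of-the-envelope sketch in Python. The total load, the 85 percent east-west share, and the fraction of east-west traffic that crosses rack boundaries are all assumptions chosen only to illustrate why a tree built for north-south flows strains under east-west ones.

    # Hypothetical traffic mix; only the 80-90% east-west share echoes the text above.
    total_traffic_gbps = 1000        # assumed aggregate traffic inside the data center
    east_west_share = 0.85           # server-to-server share of that traffic
    inter_rack_share = 0.75          # assumed portion of east-west flows crossing racks

    north_south = total_traffic_gbps * (1 - east_west_share)
    east_west = total_traffic_gbps * east_west_share
    climbs_the_tree = north_south + east_west * inter_rack_share

    print(f"North-south traffic:        {north_south:.0f} Gbps")
    print(f"East-west traffic:          {east_west:.0f} Gbps")
    print(f"Traffic crossing agg/core:  {climbs_the_tree:.0f} Gbps")

In a tree design, nearly all of that inter-rack east-west traffic has to climb up through the aggregation and core layers and back down again, so links sized for the much smaller north-south component become the choke point.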

3. Is your network ready for the new world of server virtualization?

Server virtualization technologies from VMware, Microsoft, Citrix, and others, as well as open-source versions like KVM, have fundamentally changed the rules for modern data center design, throwing several commonly held assumptions to the wind. These include:

  • Each physical server supports one operating system and one application
  • Each physical server has one IP address and one MAC address
  • Once installed, a server never moves

Server virtualization has caused more havoc for data center management and security than any other technology, yet it's no surprise that most modern data centers have adopted it (or will soon) given its CapEx and OpEx benefits. If you're planning to introduce server virtualization into your data center, you need to look at how your network is configured. Are applications, address/capacity allocations, and policy management tied to a server's physical location? If so, you'll need to change the network so the location of servers -- physical or virtual -- doesn't matter. That way, you won't have to keep going back to the whiteboard and updating your capacity plans every time you add, move, or retire an application.
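
One way to picture the location-independent approach is the minimal sketch below. It is not any vendor's API; the VM names and policy fields are hypothetical, and the point is simply that policy keyed to a workload's identity follows the workload wherever it runs.

    # Minimal sketch: policy bound to VM identity, not to a physical port or host.
    policies = {
        "web-vm-01": {"vlan": 10, "acl": "web-tier", "qos": "gold"},
        "db-vm-01":  {"vlan": 20, "acl": "db-tier",  "qos": "silver"},
    }

    def policy_for(vm_name: str) -> dict:
        """Return the same policy wherever the VM happens to be running."""
        return policies[vm_name]

    # The VM migrates from one host to another; the lookup, and the policy, do not change.
    for host in ("host-a", "host-b"):
        print(host, policy_for("web-vm-01"))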

4. Agility: Is your data center designed for peak or steady-state?

Your data center is probably not operating under consistently high loads 24/7. Like most networks, the amount of data moving through your data center waxes and wanes throughout the day. Traffic patterns can be seasonal, too; for instance, Macy's servers are probably at their busiest in December, and the load on Intuit's servers is likely highest in April. Is your data center network agile enough to handle these constant changes?

The basic promise of cloud-based computing is the ability to deliver a geometric increase in efficiency by pooling compute resources. As your data center's traffic load ebbs and flows, can your network dynamically and automatically allocate capacity from a common compute pool to accommodate the peaks? When the load drops, can your network release compute resources to be used by other "lower priority" tasks? In other words, do you have an "on-demand" network? The key word here is "scale" -- not just large scale but the ability to expand or contract anytime, on the fly, as needed.
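
The toy sketch below illustrates that on-demand behavior. The pool size, watermarks, and load samples are hypothetical and stand in for whatever orchestration system actually manages the pool; the point is simply grow-on-peak, release-on-idle.

    # Hypothetical autoscaling loop: thresholds and load samples are made up.
    POOL_CAPACITY = 100          # shared compute units available to all workloads
    allocated = 20               # units this workload currently holds

    def rebalance(load_pct, allocated):
        """Grow toward the pool on high load, shrink back on low load."""
        if load_pct > 80 and allocated < POOL_CAPACITY:
            return min(POOL_CAPACITY, allocated + 10)   # scale out for the peak
        if load_pct < 30 and allocated > 10:
            return max(10, allocated - 10)              # release units for other tasks
        return allocated

    for load in (25, 85, 90, 40, 20):                   # a day's ebb and flow
        allocated = rebalance(load, allocated)
        print(f"load {load}% -> {allocated} units allocated")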

5. Have you eliminated redundant operations?

The easiest way to increase network performance is to provide "fatter pipes." To this end, the networking industry has raised link speeds by an order of magnitude with each new generation of Ethernet, pushing pipes from 10 Mbps all the way to 100 Gbps. However, there is a more fundamental way to increase network performance, one that arguably offers a bigger bang for the buck.

The trick? Removing redundant switching operations. Every time a packet enters a switch, the switch performs a "complete Ethernet lookup" in which it opens up the packet, reads the header, changes the bits as needed, and then forwards the packet to the next switch along the path. This process accounts for 80 percent of the time the packet spends in the switch.

One obvious way to reduce latency and speed the packet along is to reduce the number of switches the packet must traverse. This is accomplished with a flatter network.
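
Some back-of-the-envelope arithmetic shows why this lever matters. In the Python sketch below, the 80 percent lookup share comes from the paragraph above, while the per-switch delay and the hop counts are hypothetical.

    # The 80% lookup share is from the article; the delay and hop counts are assumptions.
    per_switch_delay_us = 2.0    # assumed time a packet spends in each switch
    lookup_share = 0.8           # portion of that time spent on the full Ethernet lookup

    def switching_delay(hops):
        return hops * per_switch_delay_us

    deep = switching_delay(5)    # e.g., access -> aggregation -> core -> aggregation -> access
    flat = switching_delay(3)    # e.g., a flatter leaf -> spine -> leaf path
    print(f"5-switch path: {deep:.1f} us total, {deep * lookup_share:.1f} us in lookups")
    print(f"3-switch path: {flat:.1f} us total, {flat * lookup_share:.1f} us in lookups")
    print(f"Flattening saves {deep - flat:.1f} us per packet, most of it lookup time")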

Conclusion

The explosion of content being created and moved globally shows no signs of abating, at least not in the foreseeable future. At the same time, technologies such as cloud computing, server virtualization, and mobility are here to help make moving these huge amounts of data easier and more efficient.

Everything in the data center -- servers, storage, and applications -- has evolved. Everything, that is, except the network. A regular performance evaluation of the network will ensure that your data center doesn't become a cost center but instead is utilized as a strategic resource to drive your competitive differentiation.

Dhritiman Dasgupta is a senior director of product marketing in the campus and data center business unit (CDBU) at Juniper Networks. He is responsible for product and technical marketing for Juniper's enterprise routing and switching products, including EX Series switches, MX Series routers, and the QFabric product lines. Dasgupta has 14 years of experience in the networking industry, with roles in product management, corporate marketing, software development, and customer support. Prior to Juniper, he worked at Cisco as a senior product line manager for campus and data center switching. He started his career at Nortel Networks in Canada, on the network management team. He holds a bachelor's degree in computer architecture and an MBA in marketing and international business. You can contact the author at ddasgupt@juniper.net.
