
Q&A: Data Center Consolidation Best Practices

With mergers and acquisitions and the growing need to cut data center costs, data center consolidation is on the minds of many enterprises. We look at the steps IT can take to ensure a successful consolidation project.

Enterprise Strategies: What’s driving data center consolidation? What are the benefits IT expects?

Charles Kaplan: There are two clear drivers at play with regard to data center consolidation: cost savings and regulatory compliance.

Whether through M&A or organic growth, many companies end up supporting dozens of data centers that are delivering the same applications. By identifying the application overlap, companies can consolidate their servers, making their data centers more efficient and reducing costs significantly.

Cost savings are always an impetus for change. By removing redundant servers, companies can reduce electricity costs, the complexity of delivering critical services, and the difficulty of retaining expert talent across so many locations. Additionally, with much greater CPU density per rack and much lower WAN costs than in the past, consolidation is an even more attractive option.

Regulatory compliance has become a real driver of consolidation when companies find themselves storing data in multiple places. Each site that houses regulated data (credit card information, Social Security numbers, financial records, product inventory, health care data, etc.) requires network segmentation of, and security controls around, that data. You'll also want to periodically audit the controls and data.

Because of compliance pressures, retailers are looking toward consolidation as an answer. Whether it’s a company that once stored data in hundreds of small local data centers or a multi-national organization that may have records stored in dozens of global data centers, consolidation simplifies compliance.

Do enterprises realize the benefits and value they expect?

A level of cost savings is eventually realized; however, several factors can lower the total ROI below what was originally expected -- higher-than-anticipated labor costs to prepare and execute the move, project delays, and unplanned outages all affect ROI. Obstacles such as these typically arise during the initial phase of the consolidation, but once the move is made, organizations see significant savings going forward.

What best practices should IT follow before consolidating data centers? What needs to be in place before you begin?

There is a standard process for consolidating data centers, but how it is executed can produce staggeringly different time-to-value results. This discrepancy can either give the IT team a black eye or a big pat on the back. The standard process includes reviewing system and application behavior (both historically and in real time), establishing dependencies, enacting freezes, verifying behavior prior to the move (making sure no new servers or applications are introduced), making the move, and then validating the availability of business services.

Best practices can be summed up with one word -- automation! It's important to have information on the server-to-server and client-to-server dependencies as well as dependencies between servers and outside resources (SOA, Web services, and so on).

You need to use technology to be alerted to changes that occur between your dependency mapping and the actual move, and you need a systematic way to benchmark availability, performance, and usage both before and after the move.
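To make that concrete, here is a minimal sketch of how a dependency map might be assembled from exported flow records and then re-checked for drift just before the freeze. The record layout, IP addresses, and function names are hypothetical illustrations, not any particular product's API:

```python
from collections import defaultdict

# Hypothetical flow records exported from the network:
# (client_ip, server_ip, server_port, bytes_transferred)
flows = [
    ("10.1.1.15", "10.2.0.5", 1433, 48200),   # app server -> database
    ("10.3.7.22", "10.2.0.9", 443, 12900),    # branch client -> web tier
    ("10.2.0.9", "10.2.0.5", 1433, 88000),    # web tier -> database
]

def build_dependency_map(flow_records):
    """Group observed client -> server:port conversations into a dependency map."""
    deps = defaultdict(set)
    for client, server, port, _ in flow_records:
        deps[(server, port)].add(client)
    return deps

def detect_drift(baseline, current):
    """Report clients talking to a server:port that were absent from the baseline."""
    drift = {}
    for edge, clients in current.items():
        new_clients = clients - baseline.get(edge, set())
        if new_clients:
            drift[edge] = new_clients
    return drift

baseline = build_dependency_map(flows)
# Re-run discovery just before the freeze and compare against the baseline:
later_flows = flows + [("10.9.9.9", "10.2.0.5", 1433, 512)]  # undocumented client appears
print(detect_drift(baseline, build_dependency_map(later_flows)))
```

Running the same comparison on a schedule throughout the freeze window is one way to catch new servers or applications sneaking in before the move.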

What approaches to taking inventory does IT typically take, and what are the pros and cons of each?

I have a table [editor's note: shown below] that focuses on key attributes for data center inventory and highlights the most popular and proven methods:


The other means of taking inventory is using a consultant. This method is frequently used because it is the fastest way to start the process, which many believe will result in completing the project quicker. On the contrary, this method often produces the least accurate data, takes the longest, offers no future value, and, in the end, costs the most.

What kinds of benchmarks do you suggest IT establish?

IT needs to be prepared to answer this question: How do we know it works? Just because a server is powered up, or even ping-able on the network, does not mean the consolidation and migration was a success. Metrics to truly measure the project’s success pre- and post-move include the number of connections per second (per application in aggregate and per server); the number of unique users (per application in aggregate and per server); total bandwidth consumption by application (per server and per application in aggregate); and response time by application by client site.
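As a rough illustration, those benchmarks could be tallied from per-connection records captured over a fixed window, once before the move and once after. The record fields, window length, and values below are hypothetical:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical per-connection records captured during a fixed benchmark window:
# (application, server_ip, client_site, user_id, bytes, response_time_ms)
records = [
    ("crm", "10.2.0.9", "chicago", "u101", 20000, 180),
    ("crm", "10.2.0.9", "dallas",  "u102", 35000, 240),
    ("erp", "10.2.0.5", "chicago", "u101", 90000, 310),
]
WINDOW_SECONDS = 300  # length of the capture window

def benchmark_by_app(conn_records, window_seconds):
    """Roll per-connection records up into per-application benchmarks."""
    apps = defaultdict(lambda: {"conns": 0, "users": set(), "bytes": 0, "rts": []})
    for app, server, site, user, nbytes, rt in conn_records:
        m = apps[app]
        m["conns"] += 1
        m["users"].add(user)
        m["bytes"] += nbytes
        m["rts"].append(rt)
    return {
        app: {
            "connections_per_second": m["conns"] / window_seconds,
            "unique_users": len(m["users"]),
            "total_bytes": m["bytes"],
            "avg_response_ms": mean(m["rts"]),
        }
        for app, m in apps.items()
    }

before = benchmark_by_app(records, WINDOW_SECONDS)
# Repeat the same capture after the move and compare against `before`;
# keying on (app, server) or (app, client_site) instead gives the
# per-server and per-site breakdowns mentioned above.
```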

Visibility and metrics that look for stragglers still using or trying to use decommissioned servers are another way to benchmark success. Looking for attempted connections to old server IPs can be an early warning tip-off to stragglers.
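A simple sketch of that straggler check, again assuming hypothetical flow records and IP addresses rather than any particular product's output:

```python
# Hypothetical set of IPs belonging to decommissioned servers:
DECOMMISSIONED = {"10.2.0.5", "10.2.0.9"}

# Hypothetical post-move flow records: (timestamp, client_ip, server_ip, server_port)
post_move_flows = [
    ("2009-04-01T08:02:11", "10.3.7.22", "10.8.1.40", 443),
    ("2009-04-01T08:05:37", "10.1.1.15", "10.2.0.5", 1433),  # straggler
]

def find_stragglers(flow_records, retired_ips):
    """Return clients still attempting connections to retired server IPs."""
    hits = {}
    for ts, client, server, port in flow_records:
        if server in retired_ips:
            hits.setdefault(client, []).append((ts, server, port))
    return hits

for client, attempts in find_stragglers(post_move_flows, DECOMMISSIONED).items():
    print(f"{client} still points at retired servers: {attempts}")
```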

How else can IT and business users prepare?

Notification and communication!

It isn't enough to just send an e-mail to the entire company announcing a pending change. IT teams should use automated discovery to identify job roles within the organization that may require special handling or, at a minimum, one-on-one user notification and acknowledgment of the pending change.

Identify users at each facility to validate performance after the change. Don’t wait until the morning after for the calls to start coming in -- have a realistic test plan that includes key users.

A significant data center change is also an opportunity to enact new IT policy (e.g., a new proxy server or restrictions on outbound e-mail). Unfortunately, many employees pay minimal attention to e-mail messages from IT, so while there is increased risk in making multiple changes at once, there is also likely to be better acknowledgment of the pending change if it is lumped in with other data center changes.

Why does IT fumble data center consolidations? What gets in the way of IT’s success?

Lack of real-time and continuous (across many months) visibility into on- and off-network dependencies is by far the number one root cause of consolidation problems. Stated another way, administrators simply don’t have any reliable way to figure out all the moving parts required to deliver a service to a user over the network.

Relying on "point-in-time network snapshots," feedback from administrators, and "as-designed" documentation is a surefire way to fail because the IT infrastructure can change dramatically from day to day. Factors such as updates and patches, temporary fixes, and undocumented changes made by contractors who no longer work for the company make these information sources unreliable. Automated discovery and dependency mapping is the only way to go into a consolidation project armed with the most accurate information on the current IT infrastructure.

How do data center consolidations that originate from mergers or acquisitions complicate your suggestions?

The process in the case of a merger or acquisition is largely the same except the IT team is dealing with documentation from both their own company and from the company they are acquiring. Further, there may be greater corporate visibility and pressure to complete the consolidation fast and flawlessly. These drivers only serve to reinforce the need for automated, systematic efforts.

Acquisitions do pose one potential additional challenge. Often the asset acquired is only one business unit of a larger entity. Determining how to divest that unit from the network without negatively affecting any other unit can be difficult. Shared resources, regulatory concerns, and different architectures all come into play.

What products or services does Mazu Networks offer for data center consolidations?

Mazu Networks offers Mazu Profiler, a leading application performance management solution based on network behavior analysis (NBA). The solution uses network flow data to provide automated discovery and dependency mapping. With Mazu Profiler, companies can increase efficiency and productivity by eliminating the manual inventory of assets, minimize risk with accurate, comprehensive, and up-to-date data, and understand the impact of change on business services.

In addition, they can improve return-to-service times by moving all of the resources required to provide business services together, with continuous visibility before, during, and after the move. Users can resolve service restoration problems faster and validate that service levels are restored.
