Q&A: Private Cloud Helps Virtualized Data Center Cut Costs in Half

How a cloud solution solved performance and complexity issues and returned an impressive ROI.

There’s a lot of buzz about the benefits of cloud, but are the benefits real? In the case of Kroll Factual Data, the answer is a resounding “yes.” We spoke to Galen Wiles, the firm’s IT director, about their cloud project.

Enterprise Strategies: What was the problem your organization was trying to solve?

Galen Wiles: The project started out as a normal three-year server refresh. We had planned to implement blades, but that was looking difficult because the I/O bandwidth and number of I/O connections needed for our virtualized hosts are not easy to achieve with blades. Because we had been consolidating onto virtual servers for several years, we thought that adding virtual I/O could solve the bandwidth and scalability problems. Xsigo provided a virtual I/O answer that gave us six times more bandwidth to each blade chassis and more scalability in the number of connections. What we got from I/O virtualization was the ability to condense our physical server infrastructure even further, as well as an increase in our overall I/O bandwidth.

Can you explain more about the performance and management complexity problems that were causing you problems? For example, what performance was poor, and in what area(s) of IT were you trying to reduce complexity? How critical were these issues to your company’s success?

As we virtualized our servers, more of our network traffic became server-to-server traffic that had to cross the 1 Gb Ethernet production network. This created I/O congestion issues that limited performance. We could run only about 10 VMs per server, far fewer than we thought was possible, so our efficiency was limited, too. These issues were also hard to fix because the infrastructure was complex and all I/O connections were locked down in hardware.

The other problem was how to consolidate into a blade environment and deal with the increased I/O bandwidth. Moving to 10 Gb Ethernet was one option but would have required a significant investment, most likely a complete replacement of our internal data center switch infrastructure.

Why were you interested in adopting a cloud solution?

We had already virtualized our servers and storage, so in our minds we were two-thirds of the way to a private cloud. That last leg of the stool, the I/O, appeared to be holding us back. We believed we could make the entire environment more efficient and easier to manage by virtualizing that as well.

Why did you choose a private cloud rather than a public or hybrid model?

We have a significant amount of personally identifiable information (PII) that needs to be kept internal.

What benefits did you expect and what benefits did you actually realize?

We expected blue skies and sunshine, with no I/O bottlenecks. What happened instead was that we exposed an I/O limitation of the Hyper-V virtual switch. We originally had this set up with one NIC per VLAN on each host. We have since expanded that to two NICs, which did improve performance, but not to where we need it. We are still working through this issue with Microsoft.

Despite this, the overall results were still quite good. We’re now able to run 120 VMs per host, up from 10 VMs per host before. Furthermore, the flexibility of virtual I/O has made things easier to manage. We recently fixed an I/O issue in just a few hours that previously would have taken days to fix.

What drove you to choose Microsoft Hyper-V for this project? Didn’t implementing virtualization on top of cloud -- a two-technology project -- complicate matters?

We have been using Hyper-V since it was first released, and Virtual Server prior to that. Virtual servers work well with virtual I/O. It’s actually simpler than trying to optimize conventional I/O.

What was the ROI of your project? How did you measure that return (what metrics did you use)?

We were able to reduce the amount of I/O hardware we need by about two-thirds, so Xsigo virtual I/O ended up saving us about $15,000 per blade chassis and about 50 percent on the overall infrastructure. We needed more bandwidth from server to server, which is what we got with virtual I/O. Upgrading our entire environment to 10 Gb Ethernet would have been more expensive and would have delivered weaker performance.

Now that your project is complete, what recommendations do you have and what best practices can you recommend for colleagues in the same situation?

We are a learn-as-you-go kind of company. We struggled a bit learning the new technology, and had a few big “gotchas” mainly due to our lack of understanding. If I had to do it all over again, I would have done more up-front training.

Now that you have established a private cloud, where is this project headed next?

We have two big projects. The first is to create two active data centers, both with the same virtual servers, storage, and virtual I/O infrastructure. The second is to create what we call the “Dynamic Data Center.” This will allow us to respond quickly to the ups and downs of customer load. We want to be able to add more capacity (additional VMs) on demand. This has been a brainchild of my CIO for several years now, but we lacked all the pieces to make it a reality. With virtual I/O in place, we think we now have everything we need.
