Q&A: Data Center Remodel Up Close and Personal
A remodel of a data center created unique challenges. We examine the success factors of a project at the University of Nevada, Las Vegas.
Data centers are always changing -- but what if you had to build a data center to serve today’s needs by remodeling space vacated by another tech group? To learn firsthand what’s involved in this unusual data center remodel, understand the challenges, and understand the keys to the project’s success, we talked to Cam M. Johnson, manager of the OIT Operations Center at the University of Nevada, Las Vegas (UNLV). He will be giving a virtual tour of his new data center at the upcoming Data Center World conference, to be held March 18-21 in Las Vegas.
Enterprise Strategies: What was the initial configuration of your data center?
Cam M. Johnson: We did not have a true data center, so that is a little tough to answer. What we did have was space with racks, UPS units, etc. that was called a data center. The remodel we did was of space that had been vacated by another IT group.
We do lease space from another group on campus. Their facility is roughly 3,400 square feet, with a 750 kW generator, a 100 kVA UPS, a 160 kVA UPS, and 30 tons of cooling supplied by three older CRAC units.
Our other footprint on campus consisted of closets ranging from 100 to 400 square feet. Cooling was typically provided by house air or dedicated residential units, and the UPSes were small 5 kVA units. In one location we do have a 20 kVA UPS that supports five racks; this was our main facility outside the space we leased.
For the longest time, my organization was focused on supporting only our server infrastructure, in large part due to the state of our facilities. At one point we also had two separate server groups, so between the closets and the leased space there was no standardization and no real thought was put toward growth.
As for metrics, we are just now at a point where that is something we are starting to work on.
What factors drove your need (or desire) to remodel your data center? What were the key goals of the project?
The space we remodeled was an existing data center that used to be occupied by another organization on campus. When a new science building opened, they were able to secure new data center space, which left us getting their old space. We could not get the new space because we are not a research organization.
There were really two main goals. The first was to have a proper data center operated by my organization; the second was consolidation. Turning an IDF [intermediate distribution frame] into a data center was not sustainable and was the wrong approach. Consolidation was needed for several reasons -- the biggest was a push on campus to increase network security, which would drive more people to the data center. No more servers in your office.
Back to what I alluded to above -- up until the remodel, we did not have a location that could operate as a co-location data center. Any external customers we took on usually had a server or two. This has changed now that the remodel is complete and we are seeing more customers come to us with far more equipment.
What was the scope of the project? How long did you plan on taking and at what cost?
A top-to-bottom remodel of the space was required because most of the infrastructure in the room was inadequate, broken, or about to collapse.
Construction lasted one month and included the installation of a new floor, two new chillers, all the piping to support the cooling, 10 racks, an 80 kVA UPS, 6 in-row cooling units, 8 dedicated network zone racks, ladder rack, cabling, and so on. We also crammed an office remodel in there.
A rough timeline is as follows:
Nov 2010 - Final plans approved
Mar 2011 - Bids due
May 2011 - Construction start
Jun 2011 - Construction ends
Oct 2011 - Open house and site launched
The cost question is interesting because the 80 kVA UPS was something we had on hand from a purchase made almost six years ago, and the office remodel was also included in the total price. The total cost of the project came in at just under $600,000.
The title of your presentation at Data Center World includes "racquetball courts" as an option to house the new data center, but ultimately you remodeled. What was the biggest factor in your decision? What were the biggest challenges to your remodel project, and what problems did you encounter that you didn't expect?
The biggest factor that led to the remodel was that we got the correct space to remodel, and that the racquetball courts were such a bad idea even though everyone thought they were a viable solution. If the space we remodeled had not opened up, I am not sure we would have a data center today.
The biggest challenge had to be getting all the occupants out of the space so the remodel could occur. The day before demolition started there were production servers running in the room.
Space on a college campus is a rare resource. In order to pull off the data center remodel and associated office remodel, four or five groups had to have places to go and it seemed like each one had a unique need.
The one thing that I cannot stress enough is how important it was to get everyone on the same page. In trying to find a new space for our data center, several departments from campus were involved, and not one of them had ever had anything to do with a data center. This presented problems because spaces like the racquetball courts seemed, from their perspective, to fit the requirements, and there was also a fear that if we turned down space, it could be months before anything else opened up.
Getting everyone on the same page was very important. I did this mainly through data center tours and by showing a large collection of photos to those who could not attend the tours. Eventually, everyone saw that what we had -- small 400 square-foot, undedicated spaces -- was not what we wanted to replicate.
You mentioned the project took six years to complete. Why was it a drawn out project? What was the final cost to complete? What are the final specs on your new data center?
Finding an appropriate space is why this project took so long, along with some fear of the unknown around hot-aisle containment and in-row cooling.
Final specs for the facility are as follows:
1,600 sq. ft.
10 racks
80 kVA UPS
Two 25-ton chillers (N+1)
Power density of 8 kW per rack
Hot-aisle containment
6 in-row cooling units (N+1)
Managed PDUs
Network zone utilizing passive and active rack zones
Large amounts of horizontal and vertical cable management along with a good use of ladder rack and fiber trays
A stack of Cisco Catalyst 3750 switches provides connectivity
MRJ21 copper trunks from the network zone to the racks, with appropriate RJ45 cassettes
Fiber distribution via MPO trunks from the network zone to the top of each rack, with appropriate cassettes
Avocent KVM
Two-stage, dry-pipe, water-based fire suppression
Funding was not available at the time of the remodel for a generator, but funding for a 500 kW generator has been secured and the installation will be completed by May 2012.
Future plans include the addition of 18 more racks, two 25-ton chillers, and two 100 kVA UPS units.
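To put those specs in perspective, here is a rough back-of-the-envelope check of how the numbers line up. It is an editorial sketch rather than anything supplied by Johnson, and it assumes a UPS power factor of 1.0 and the standard conversion of about 3.517 kW per ton of refrigeration.

    # Rough sanity check of the published specs. The power factor of 1.0 and the
    # kW-per-ton conversion are editorial assumptions, not UNLV figures.
    RACKS = 10
    DENSITY_KW_PER_RACK = 8            # stated design power density
    UPS_KVA = 80
    CHILLER_TONS = 25
    KW_PER_TON = 3.517                 # 1 ton of refrigeration is roughly 3.517 kW

    it_load_kw = RACKS * DENSITY_KW_PER_RACK        # 80 kW at full build-out
    ups_capacity_kw = UPS_KVA * 1.0                 # assumes a power factor of 1.0
    chiller_kw = CHILLER_TONS * KW_PER_TON          # about 88 kW per chiller

    print(f"Design IT load: {it_load_kw} kW")
    print(f"UPS capacity at pf 1.0: {ups_capacity_kw:.0f} kW")
    print(f"One chiller: {chiller_kw:.0f} kW, so a single unit covers the load (N+1)")

In other words, the 80 kVA UPS and a single 25-ton chiller each roughly cover the 80 kW design load, which is consistent with the N+1 cooling described above.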
For your data center, you chose hot-aisle containment over traditional raised-floor cooling, and a dedicated end-of-row network zone was picked over a top-of-rack solution. How did you arrive at these conclusions? For example, were they your first choice or chosen because they were more cost effective?
I am not a big fan of raised-floor cooling -- just a personal bias. The room has only a one-foot raised floor, and I do see us needing to support density beyond what that can easily provide.
Another thing to keep in mind is that up until 2011 I was not sure what kind of space we would end up getting, so this helped push me toward hot-aisle containment with in-row cooling. Although I would have hated it, even the racquetball courts would have worked using that approach.
Once we knew a proper data center space had been secured, I do not think there was ever any thought of using traditional raised-floor cooling technology. Regardless of the vendor, I had drunk the Kool-Aid, and the numbers we saw indicated there was a power savings benefit as well. Hot-aisle containment with in-row cooling would be our approach.
The decision for a network zone came from trying to manage the mess I inherited and the high cost of adding one port to a top-of-rack solution. We really needed the network area to be clean, easy to manage, and scalable. From all our research, this is not something we could easily achieve using top-of-rack switches. We also had to account for customers that might need a special piece of networking gear, and the top-of-rack approach does not support that very well.
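To illustrate the marginal-port argument, here is a deliberately simplified sketch. The prices are placeholders chosen only to show the shape of the comparison, not UNLV's actual costs: with top-of-rack switching, the port that pushes a rack past its switch's capacity costs an entire new switch, while an end-of-row network zone that is already trunked only needs a patch cord.

    # Hypothetical comparison of the marginal cost of "one more port".
    # All prices below are placeholders, used only to illustrate the argument.
    TOR_SWITCH_PRICE = 5000        # placeholder cost of another top-of-rack switch
    TOR_PORTS_PER_SWITCH = 48
    EOR_PATCH_PRICE = 10           # placeholder cost of a patch cord to the network zone

    def next_port_cost_top_of_rack(ports_in_use):
        # If every existing switch port is taken, the next port means a new switch.
        return 0 if ports_in_use % TOR_PORTS_PER_SWITCH else TOR_SWITCH_PRICE

    def next_port_cost_end_of_row():
        # Trunks and cassettes are already in place; only a patch cord is needed.
        return EOR_PATCH_PRICE

    print(next_port_cost_top_of_rack(48))   # 5000: the 49th port needs a whole new switch
    print(next_port_cost_end_of_row())      # 10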
What best practices can you recommend to your colleagues facing a similar project? How, for example, did you minimize impact to existing users or processes?
I think the best advice I can give is get out there and see what others are doing. Too often, groups get used to doing things a certain way because that is how they have always done it. There are a ton of smart data center managers out there who are more than willing to share their experiences. It is also good to see how the things vendors present get implemented in the real world.
The remodel actually afforded us the opportunity to implement many of the policies and procedures that were only loosely followed over the past couple of years. We have done more than I think anyone ever imagined to provide a service that customers want and have grown to rely on. Even in a university environment, I think it is important to put a strong emphasis on customer service for both our internal and external customers.
We have also taken full advantage of our internal communication department to create brochures and signage in the data center. When we give a tour of the data center most people are impressed with the attention to detail and overall appearance. This goes a long way in building customer confidence in the service we offer. That is, it helps people erase from their minds the old facilities they might have seen.
What were the key elements of your success, and how did you measure success? What would you do differently if you had it to do all over again?
Success could be measured in three ways for this project. The first two are pretty easy in that we came in slightly under budget and on time. The third success factor is a little trickier to explain, as building confidence is not something that is usually listed as a goal.
As I alluded to before, people on campus were familiar with our other spaces, and this was not a good thing. I was also very honest about our capabilities, which were not flattering. Explaining that you have a generator, but that it only supports the UPS and not the cooling, does not make someone want to house their server in the facility. Keep in mind, this is the mess I inherited.
Because we knew that people had seen what we’ve done in the past, our remodel had to be impressive, and I think we achieved that. I am very fortunate to have a great group that really went the extra mile to make sure that things looked clean from top to bottom. We’ve had new customers come in that have never seen a data center before and they say that this is what they imagined a data center would look like. That is confidence being built and I love it.
A small footnote to our project is that after you get past the way something looks, it also has to operate correctly. I think we’ve put just as much time toward this as we did making the facility “look” proper. We have a complete operations guide and a great group of individuals who make themselves available to provide the best experience possible. We bring in other subject-matter experts (the network and server teams) when needed and even pull in vendors to help our customers out.
How did your background in customer service play into the remodel? How did you deal with taking certain systems offline?
I am very conscious of the customer experience and have a great group that is focused on customer service. We did an excellent job of listening to our customers before, during, and after the remodel, which helped us build to the right specifications.
Because the remodel was of a space we had not previously occupied, we did not have to worry about transferring services from one location to another or keeping services up during construction. The only exception to that would be two racks that supported a small fiber plant and some access layer switches. To be honest, we just hung those from the ceiling with anchor bolts and come-alongs while the floor was replaced.
What is the projected life expectancy on your new data center?
Undetermined is the easy answer. For the foreseeable future, we will continue to maintain and grow this facility, but there is always the chance a new building could come to campus or a better space could become available.