In-Depth
Data Center Under the Gun: Welcome to the Party
Data center managers have their hands full, from compliance costs (including the threat of fines and imprisonment for Sarbanes-Oxley violations) to front-office interference.
In the first installment of Bruce Willis' Die Hard franchise, the hero takes up a fighting position in, of all places, the corporate data center of his wife's employer, where he proceeds to engage the bad guys from behind some servers. Hollywood's penchant for staccato gunfire and flying glass is well served by the venue.
Although the cover was adequate, I wouldn't try it today. Data centers aren't what they used to be.
The plastic-and-tin rack-mounted components popular today probably couldn't withstand the hailstorm of hot lead that was unleashed on poor John McClane back in the late 1980s. Not only would today's data centers fail to withstand contemporary firepower, they increasingly fail to hold up under the scrutiny of a front-office auditor on a cost-cutting mission.
The economy is tight and cutbacks are taking place everywhere. Outsourcing companies are leveraging the situation to convince corporate executives to unload their costly homegrown IT operations and buy the same capabilities from service providers instead. This pitch is echoed by a growing cadre of cloud service and software-as-a-service (SaaS) vendors who claim that, in these changing times, online services can effectively replace in-house IT. They say that the conditions that killed the original application service providers (ASPs) back in the late 1990s have changed -- usually without explaining how.
The favorite mouthpiece of this group is Nicholas Carr, who has pronounced the data center DOA. Carr's iconoclastic characterization of the data center as a "commodity" is getting a lot of play in the media and in the boardroom, much like his earlier view that IT no longer matters to business.
Adding to the din are the virtualizers -- the VMwares, Citrix Xens, and others -- who correctly assert that the return on investment from all of that expensive data center hardware hasn't been delivered. They are using multimillion-dollar marketing budgets to press the mantra of consolidating physical devices into Jenga towers of virtual machines that will magically yield better resource utilization and simpler management of hardware and data.
Whatever the gaps in their reasoning, the statistics behind it are compelling. With allocation efficiency on storage behind distributed servers -- arguably the most expensive infrastructure component -- hovering below 20 percent of optimal, and Fibre Channel fabric switch ports delivering only about 15 percent of their rated load-bearing capacity, there is considerable waste in the data center. Moreover, management is lacking -- not only in storage, but also in servers and networks -- which keeps data center operators in constant firefighting mode and leaves them little time for work that would actually improve efficiency.
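To make the scale of that waste concrete, here is a back-of-the-envelope sketch. The utilization percentages are the figures cited above; the purchased-capacity numbers are hypothetical placeholders, not data from any real shop.

```python
# Rough estimate of stranded capacity implied by the utilization figures
# cited above. Purchased-capacity numbers are hypothetical examples.

def stranded(purchased: float, utilization: float) -> float:
    """Capacity that was paid for but is not doing useful work."""
    return purchased * (1.0 - utilization)

storage_tb = 100.0    # hypothetical: 100 TB of purchased disk
storage_util = 0.20   # ~20 percent allocation efficiency (cited above)

fc_ports = 256        # hypothetical: 256 Fibre Channel switch ports
fc_util = 0.15        # ~15 percent of rated load-bearing capacity (cited above)

print(f"Idle storage: {stranded(storage_tb, storage_util):.0f} TB of {storage_tb:.0f} TB purchased")
print(f"Idle fabric capacity: {stranded(fc_ports, fc_util):.1f} of {fc_ports} port-equivalents")
```

At those utilization rates, four-fifths of the storage and roughly 85 percent of the switch-port capacity sit idle -- which is the waste the virtualizers are pointing at.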
In their defense, the operators have received limited support from the bean counters to purchase and deploy infrastructure management tools. That is partly because the hardware vendors have been so successful in bypassing the IT department and selling directly to the front office. If a senior executive can be persuaded to buy only brand X hardware, the vendor argues, then on-board element management is all the shop needs. Although it sounds good, it is patently not true, and IT rarely gets a vote.
Coming down the pike is another big issue. According to Census Bureau numbers, channeled by CA VP and all-around smart guy Vince Re, the economy is expected to generate between 650,000 and 800,000 jobs over the next few years at a time when colleges and universities will graduate only about 100,000 qualified candidates. Re argues that this shortage may hit distributed systems, which require far more hands to support operations, even harder than it hits mainframe environments, which get by with far fewer personnel. (This flies in the face of the popular perception that mainframes suffer more than distributed environments from the mounting skills shortage.)
Bottom line: with these issues, and many others, confronting the contemporary data center, McClane's firefight with a gang of high-tech thieves in Die Hard seems like a simpler problem to surmount. Over the coming months, I will use this column to dissect some of these bigger issues and report on innovative steps practitioners and vendors are taking to address them.
Innovation is not invention. You don't need a sweeping new paradigm (such as SaaS) or product (such as VMware) to innovate. Sometimes you just need to go with what works. Take mainframes, for example.
For at least a decade, the demise of the mainframe has been foretold by every "respectable" analyst in the industry. CA's Re believes that this results, at least in part, from a fundamental misunderstanding of the market. He says that 80 percent of the MIPS in the mainframe world are located in the top 1000 sites where the multiprocessor systems are deployed. The other 20 percent of MIPS are generated by smaller shops, and many of those are moving away from the platform. IDC calls this a decline in mainframe usage, but Re argues that "the migration away from the mainframe in smaller shops is not representative of the market, which is seeing 30 percent plus growth in workload every year."
Mike Moser, who directs the mainframe services management business at BMC Software, agrees with Re based on survey data he has been collecting for a number of years. He sees the mainframe user base as two communities, in which small shops are getting smaller and big shops are getting bigger. He also sees a tipping point in the decline of the mainframe: "Shops that wanted to step off of the platform have done so. We are seeing more strength in mainframes today than ever before, and not just because databases are growing and more capability is being added, but because new applications and workloads are being moved over to the mainframe."
Moser's latest survey of 1,100 shops using mainframes, which can be viewed in its entirety here, suggests that 63 percent of those surveyed are adding new workloads to existing frames. Moser says this continues a trend that saw 52 percent adding workloads last year.
It is also notable that the number of companies working to retire their mainframes over the next few years has gone from 6 percent in 2007 to zero in 2008.
Mainframes have already trod the ground that is just now being explored by the VMwares of the distributed world, Moser said. "In my personal opinion, the virtualization folks are moving in the right direction, but mainframes have a 20-year head start."
If you do the math, resource allocation and hardware utilization efficiency -- not to mention proactive management -- in a disciplined mainframe shop are dramatically better than in distributed environments. Innovative IT folks would be well advised to take a hard look at what is working on the mainframe side of the house for clues about effective methods of driving out cost and driving up value on the distributed computing side.
The key question is whether IT innovators in distributed computing settings can design an infrastructure that is essentially the mainframe writ large while leveraging the capabilities of networks to drive out cost and complexity. The good news is that the front-office preoccupation with cost containment is rivaled by mandates to govern corporate information assets more effectively. Think about it: if you were a senior executive or director, what would scare you more -- the costs associated with IT operations or the threat of fines and imprisonment for Sarbanes-Oxley violations?
BMC's survey numbers suggest that two things are now taking place. First, organizations are spending more time exploring "shared information governance" methods that span both mainframe and distributed operations. Moser says that a surprising 29 percent of respondents to his survey said that the two sides of the data center were moving toward a consolidated governance program -- beginning with change management.
The second notable trend is greater attention to asset management in the distributed environment. According to CA's Re, the front office is gaining a more realistic appreciation of the expense of both human and technology assets in the distributed world as data is collected on pay trends and technology ROI. On the former point, mainframes require fewer operations personnel, and their pay, Re says, has barely increased since 1985 after adjusting for inflation. By contrast, both the number of operations personnel required to support distributed computing initiatives and their pay rates have risen substantially over the same timeframe.
Looking at nonhuman assets, the inefficiencies in distributed computing asset utilization are being laid bare by the virtualizers and external service providers. Companies are investigating whether this delineation of waste applies in their own shops so that they can begin taking measures to address it.
In short, companies are taking stock of their current position and, in the best-case scenario, will forgo the knee-jerk reaction to outsource everything in favor of a more measured strategy to contain IT costs and improve IT services through innovation and rightsizing. An invitation to this party should be the most coveted ticket in IT today. It will decide whether you work out the rest of your career in the profession for which you have trained or spend it flipping burgers in the exciting world of fast food.
I invite readers to participate. I would like to learn more about the situation in your shop and the innovative strategies you are developing to cope with the new realities confronting the data center. E-mail me at jtoigo@toigopartners.com.