A Five Billion Dollar Data Center Bailout Strategy
It's time to challenge your concept of how LAN and SAN connectivity should be instrumented, and save money in the process.
The word is that IT costs too much. Senior management is pushing the back office to find ways to contain costs -- both CAPEX and OPEX -- while maintaining or improving service levels. Every IT staffer I know is dreading his next pay envelope and the possible pink slip inside.
The industry is predictably responding to the current economic situation. Mainframes are enjoying a much-deserved resurgence in popularity, with InfoPro documenting a significant re-commitment to the highly available platform by companies that already have them installed. Despite their software costs, which can be huge, mainframes handle workloads elegantly and with far fewer administrative personnel than open systems. They are much more efficient in their use of resources, partly because all third-party vendors must adhere to de facto standards and partly because management services are built into the operating system. As one IT manager told me recently, his mainframe operation is run by five people and never has downtime; his open systems environment has over 50 administrators and seems to be down at least once a day.
If you aren't blessed with a mainframe, Cisco Systems, IBM, HP, and perhaps some of the cloud vendors want to provide you with something they say is comparable. Cisco has moved into the server space, giving it a soup-to-nuts stovepipe for a set of Cisco-proprietary protocols and network services. IBM has hatched a plan with Intel to support its virtualization scheme as well as other technology to deliver another proprietary stovepipe leveraging IBM wares. HP has been working with specific application vendors (such as Oracle) to provide application-tuned stovepipes for the most-used databases and other business wares. Everyone and his kid sister has a cloud or virtualized data center play, with IBM and Microsoft arguing over whose software stack is the real "open cloud" play -- whatever that means.
All the companies' brochures are strikingly similar: a unified software stack leveraging best-of-breed technology from the premier vendor in the universe that will drive costs out of IT by making operations more resource-efficient, more "green," and more labor-lean. Just like a mainframe.
Every one of these vendors, however, seems to have lost sight of the original mantra of open systems: the idea that user-driven technology maps IT investments to business needs better than technology administered by a centralized staff in a glass house, with all gear bearing one vendor's logo and a single interface to the business itself.
What has happened to the idea of open standards for managing increasingly simple and commoditized gear? When did vendors start trying to differentiate themselves from competitors by making sure standards are watered down just enough that two standards-conforming products absolutely will not plug and play with each other -- at least not without losing all of the sexy value-add functionality each vendor went to great pains to embed in its controller, switch, HBA, NIC, or other component? Looking back 25 years, I realize it has always been this way. Open systems were never open.
Here's my take: if you want a rock-solid monolithic computing platform, buy a mainframe. It provides a better virtualization platform than an x86 server layered with hypervisor code and lots of software. Mainframes use storage and networks more efficiently than any "open" servers available today, and the 10:1 advantage they hold over open systems in labor costs can't be beat.
The alternative is to return to the original thinking around open systems and get serious about achieving the goals originally set forth for distributed computing. That means deconstructing the monoliths of your favorite vendors.
Deconstruction is a scary word, mainly because the vendors who want to sell you their solutions have contextualized it as risky. The marketing around pre-integrated "one-stop-shop" solutions from trusted tech providers has been increasing in intensity for the past two decades, and it is coming at you loud and furious today. If you only buy Cisco, you can leverage all of the cool proprietary protocols it has invented to shape traffic, improve security, etc. Buying only IBM gets you IBM's cornucopia of intellectual property. EMC gets you … EMC stuff. The problems with these approaches are twofold.
First, you are locked in. There is little room to negotiate the price of technology sold on a per-port or per-TB basis when the only source of supply for goods and services is the primary vendor. Cisco wants you to double up on switches for resiliency and endorses a multi-NIC, multi-HBA server configuration to provide all of the necessary I/O. Deviating from this design, inefficient as it is, or using less-expensive NICs and HBAs from outside the Cisco "ecosystem" of suppliers, risks either an unsupported configuration or a platform shy of some of Cisco's vaunted networking functionality. Even if the technology added by third-party providers is superior to Cisco's offerings, you are locked into Cisco's model and locked out of the best tech your money can buy.
The second problem is linked to the first: the price tag. Vendors do their best to conceal the fact that everything they sell is fundamentally a commodity component. Those who remember the disaster movie Armageddon will recall the cosmonaut's line: "American components, Russian components … all made in Taiwan!" For just about every piece of hardware, the chipsets are made by a handful of vendors and sent to the finished-goods providers in the storage, switch, and server spaces.
To differentiate themselves, vendors have embedded "value-add" software functionality directly onto commodity gear, increasing its complexity, its management difficulty, and its cost to the consumer, who must pay for all of the value-add software whether he uses it or not. This becomes an even greater cost driver when you stand up different vendor monoliths for different needs: cross-platform management is a nightmare, so you must hire more people every time you deploy another vendor's gear.
Assuming that you aren't the sort of person who buys "love brands" but instead seeks the right technology for the right need, you already know the problems of monolithic single-vendor solutions. But who is in your corner, fighting the fight for openness in a world increasingly staked out by vendors with as much money invested in their logo art as in their technology?
Last week's bright spot was a conversation I had with a little-known Silicon Valley upstart called Xsigo Systems. Jon Toor, who directs marketing for the San Jose firm, intrigued me by promising that his wares could save enterprise data centers over $5 billion. While not as much as a government bailout, $5 billion is nothing to sneeze at, so I heard him out. You should, too.
He showed me the inevitable pitch deck illustrating how I/O is handled today: install redundant NICs and HBAs into servers, producing lots of clutter and wires spilling out of the backs of rack-mount servers. Using the example of a $14K 2U server from Dell, he demonstrated that the I/O cost for the configuration was $5,372. As application consolidation forced an upgrade to a $21K 4U model and the I/O plumbing grew to handle more applications and their resource requests, the expense of multiple I/O cards, more switches, more LAN and Fibre Channel ports (many unused, by the way), and management rose to $16,700.
Using numbers garnered from an actual Fortune 500 client running Cisco LAN and Brocade SAN gear, he demonstrated (conservatively) the multiplier effect of building an I/O platform in this manner. Data center server I/O costs at the company were approximately $2.14 million per year.
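To make those numbers concrete, here is a back-of-envelope sketch in Python using only the figures from Toor's deck; the dollar amounts are his, and nothing here was measured independently:

# Per-server I/O cost versus server cost, per the pitch deck figures.
small = {"server": 14_000, "io": 5_372}    # 2U Dell configuration
big = {"server": 21_000, "io": 16_700}     # consolidated 4U configuration

for label, cfg in (("2U", small), ("4U", big)):
    io_share = cfg["io"] / (cfg["server"] + cfg["io"])
    print(f"{label}: I/O is {io_share:.0%} of the total spend")

# How fast does each cost grow under consolidation?
print(f"I/O cost grew {big['io'] / small['io']:.1f}x; "
      f"the server itself grew {big['server'] / small['server']:.1f}x")

The arithmetic makes Toor's point for him: under consolidation, the I/O plumbing (3.1x) grows at twice the rate of the server it feeds (1.5x), which is how a single client ends up spending $2.14 million a year on server I/O alone.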
Toor proceeded to explain his company's wares, which consist of software and hardware components. First, Xsigo proposes to eliminate all of the NIC and HBA cards from your servers, substituting "virtual I/O cards" that are essentially software drivers, plus a single physical interface card -- or two for redundancy -- with one wire each trailing out of the server to something the company calls an I/O Director. If you are following me, you have one or two wires carrying all I/O from the server to one or two Directors. Inside the Directors are the same chipsets you would normally find in several NICs and HBAs, offloaded into an external box with a bit of on-board routing intelligence. The Director routes the I/O to the appropriate LAN or FC fabric.
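Since the topology is described only in prose, here is a minimal before-and-after cabling sketch with hypothetical per-server card counts (the deck implies "several" NICs and HBAs per server, not a specific number):

# Hypothetical counts -- adjust to taste; the shape of the result holds.
NICS_PER_SERVER = 4    # assumed redundant LAN cards
HBAS_PER_SERVER = 2    # assumed redundant Fibre Channel cards
DIRECTOR_LINKS = 2     # one converged wire per server, doubled for redundancy

servers = 100          # an illustrative rack row

conventional_cables = servers * (NICS_PER_SERVER + HBAS_PER_SERVER)
director_cables = servers * DIRECTOR_LINKS

print(f"Conventional: {conventional_cables} cables and switch ports")
print(f"I/O Director: {director_cables} cables out of the racks")

The chipsets don't vanish in the second case; they move into the Director, where presumably they can be shared across servers instead of sitting idle, one set per box.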
This is a brilliantly simple design enhancement -- your basic smart bus extension model -- and, Toor argues, it saves up to $20K per server in hardware, energy, and management expenses. It further enables you to consolidate not only servers but switches, too, giving you 33 percent less gear to manage and producing an additional cost savings of about $5,500 per server. Multiply that by the 990,000 servers deployed in 2009 and the total savings comes to, you guessed it, roughly $5B.
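The headline number survives a sanity check. Reconstructing the multiplication from the figures quoted above:

# Reconstructing the $5B claim from the quoted figures.
switch_savings_per_server = 5_500    # consolidation savings per server
servers_deployed_2009 = 990_000      # Toor's deployment figure

total = switch_savings_per_server * servers_deployed_2009
print(f"${total / 1e9:.1f} billion")    # roughly $5.4 billion

Note that the up-to-$20K-per-server figure sits outside this multiplication entirely; the $5B headline rests on the switch-consolidation savings alone.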
Toor delighted in making comparisons to Cisco Systems (a company that could doubtless buy and sell Xsigo many times over), showing radically lower price models for infrastructure built on Xsigo's I/O Director technology than for infrastructure built the Cisco way. Granted, you don't get Cisco's "five layers of software and management." Instead, you get a resilient and more open I/O infrastructure that is easier to manage and simpler to operate with fewer people.
The idea sounded to me like just what the front office ordered: doing more with less. I would argue that Xsigo's future is high and to the right if I didn't know how many Fortune 500s are doing deals directly out of the CFO's office with name-brand vendors. Too often, the front office is suckered into believing that brand names connote longevity and "ability to execute." They fall prey to well-honed sales and marketing pitches from vendors who never mention technology at all, but tout their solution as one to make certain business problems "go away." That's how EMC sells its Centera storage, how Cisco sells its Nexus switches, etc.
A lot of expensive and inefficient technology enters the infrastructure by way of the "Flashing 12s" -- Microsoft's term for front-office managers who can't even program the clocks on their VCRs, which forever flash "12:00," but who nonetheless make data center architecture decisions.
You owe it to yourself to take a look at Xsigo Systems if only to challenge your concept of how LAN and SAN connectivity should be instrumented. Hopefully, common sense will trump marketecture and you can get your piece of the $5B bailout.
Your comments, as always, are welcome. jtoigo@toigopartners.com.