Case Study: Breaking the One-Stop-Shop Rule
Even in the most "monolithic" of IT settings, the mainframe data center, there may be compelling justifications for violating the one-stop-shop approach that vendors are keen to promote these days.
I was asked recently to take part in a storage survey being administered by a trade press publication. The survey came down to a single question: did I prefer to buy my enterprise storage as a scalable platform from a single vendor, complete with software, hardware, and other componentry all integrated into a one-stop shop, or did I prefer to build my own storage solutions, taking responsibility for the integration and the headaches of vendor finger-pointing every time anything went wrong?
I’m not kidding when I report that the phraseology of the invitation was blatantly slanted toward the desired response: I was clearly supposed to choose "integrated, scalable, simple, one-stop shop, one-throat-to-choke solutions." Doing otherwise would label me as some sort of masochist—or worse yet, a do-it-yourself idiot who wanted to mess around with building storage.
The good news about the offer was that it helped crystallize for me the subject of this week’s column. Has the idea of "enterprise-class" storage been joined at the hip to centralized FC SANs populated with monolithic storage arrays, all purchased from a single vendor, with its cadre of sycophantic "ecosystem" partners? Listening to the sales pitches of leading storage vendors (or just scanning their Web sites), you might think so.
Network Appliance says that its products have a "unified storage architecture" that enables you to add components like building blocks under a common core operating system, ONTAP, and take advantage of key features and functions such as snapshots. EMC talks about uniform "tiered storage"—all inside one of its DMX arrays. IBM calls its offerings "TotalStorage for an on demand world." Hitachi Data Systems heralds its TagmaStore array as a "universal storage platform" (USP) and adds that it is "an entirely new paradigm" for managed storage. Meanwhile, Sun Microsystems talks about its "Project Blackbox," a totally virtualized IT department.
The common theme is clear: if you need "enterprise-class" storage, just deploy any one of these vendors' products (to the exclusion of all other vendors' products), and you have a "no muss, no fuss" solution for all your storage needs. The subtext of this message is clear, too: each vendor has cobbled together a full stack of software and hardware that defines a specific way of doing storage. Stick to its product line and road map, and it will serve you well today and going forward, freeing you to think about more important things than spinning rust and ruthenium.
Under the circumstances, why would you want to build storage infrastructure any other way? Are you concerned about early obsolescence? Are you worried that some components, whether software or hardware, may not be exactly what you need, or may be less than best of breed? Do you fear that commitment to a single vendor eliminates any bargaining leverage you may have when it comes time to reconsider or refresh past infrastructure decisions?
The top vendors have thick "playbooks"—consisting of dirt on competing products, ROI and TCO calculators, and user success stories—designed specifically to address these objections. Whatever the potential deficits of one-size-fits-most, you will be assured that the gains are overwhelming, even if they are measured only in non-events: the parts of the solution are pre-tested to work together, so there is no guesswork or inter-vendor squabbling when you need to fix a problem. Not having to deal with these issues saves you time, and hence money. In the long run, fielding this type of infrastructure will cost you far less than operating a "home grown" infrastructure composed of many best-of-breed products.
It’s a compelling argument. Some would say that it recalls the orderly world of the mainframe, where the dominance of a single vendor, IBM, ensured interoperability for the price of an annual maintenance agreement.
It Ain’t Necessarily So
The problem is, it doesn’t always work out to be so orderly—even in the mainframe shop. Take, for example, the experience of Ed Low, operations manager for Fidelity Information Services’ data center in Oahu, Hawaii. I chatted with Low about his tape backup and encryption solution and learned about the need for embracing third-party wares and creating your own solutions to knotty problems such as data protection.
Fidelity's data center is a true-blue IBM shop. It uses a z/OS mainframe, a z990-303 to be exact, to provide core processing services for BancWest, the parent company of Bank of the West and First Hawaiian Bank. The geographies served by these two financial institutions mean that Low's operation is a 24/7 endeavor, spanning day-to-day banking services plus ATM network processing and Internet banking.
In October 2005, driven by FDIC regulations requiring the encryption of all media leaving the security of the glass house, Low began a search for a tape-encryption solution. At the time, his tape backup load consisted of 12 to 15 cases of 3490E cartridges, at 50 cartridges per case, shipped daily to an off-site storage facility.
Low's first decision was to consolidate his tape backup processes by deploying a new 3494-B20 Virtual Tape System and a 3584-D22 Tape Library from his preferred vendor. The former provided the means to write tape backup streams to virtual 3490 tape cartridge images; the latter provided more capacious physical tape targets (3592 J cartridges), each sporting in excess of a terabyte of capacity. The idea was to store multiple 3490 images onto each 3592 cartridge, reducing the number of physical tapes from 600 to 750 per day to only eight per day. Along the way, he would also be able to leverage virtual tape to reduce backup time and to produce a virtual tape image that could be replicated over a network to his disaster recovery hot site in Philadelphia, PA.
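The consolidation arithmetic can be checked directly against the figures in the article. A quick sketch in Python (the cartridge counts come from the text; everything else is illustrative):

```python
# Figures from the article: 12-15 cases of 3490E cartridges shipped daily,
# at 50 cartridges per case, consolidated onto eight 3592 J cartridges
# once the virtual tape system stacks multiple 3490 images per tape.
CARTRIDGES_PER_CASE = 50

def daily_cartridges(cases: int) -> int:
    """Number of 3490 cartridge images generated per day for a given case count."""
    return cases * CARTRIDGES_PER_CASE

low_end = daily_cartridges(12)    # 600 images per day
high_end = daily_cartridges(15)   # 750 images per day
physical_per_day = 8              # physical 3592 cartridges shipped per day

# Implied stacking ratio: roughly 75-93 virtual 3490 images per 3592 cartridge
print(low_end // physical_per_day, high_end // physical_per_day)
```

In other words, each physical 3592 cartridge absorbs on the order of 75 to 93 virtual 3490 images, which is what turns a dozen-plus cases of daily shipments into a single small box.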
Staying within the family of IBM products, the implementation went smoothly. However, Low began looking for solutions for encryption and found his primary vendor’s offerings lacking. His desire to stack multiple 3490 images in each 3592 tape, for example, required the coding of special Job Control Language (JCL) for each tape. Checking around, he found that a third-party software offering from CA called "CopyCat" (officially, BrightStor CA-Dynam/TLMS Tape Management Copycat Utility) could do the job without the extra JCL. Problem solved.
CA also held out a solution to his burgeoning tape-encryption requirements: BrightStor Tape Encryption. While his primary vendor, IBM, had provided a fine facility for off-loading encryption processing so that it didn't drain the mainframe's central processor, Low says the vendor had no software that provided an easy way to encrypt his backups, or to decrypt them on the fly when they were needed for recovery.
"We were using ASG Software products for tape management, but when the requirements came down, they didn’t have an encryption product. We talked to another vendor and they had a solution, but it was very expensive. CA offered a comprehensive solution with CA-1 (tape management), CopyCat, and their Encryption product," Low reports.
The decision to go outside the established norm and deploy a third-party solution made operational and economic sense. Low says the implementation began in the first quarter of 2006 and was relatively smooth. "There was a lot of pre-planning involved, but since we run multiple LPARs [logical partitions], we were able to subject the products to parallel testing. When issues arose, CA worked with us to work them out."
The deployment was successful and the solution went live in just under three months. Low said that the resulting solution is top notch and presents no issues. Restoring the data from encrypted tapes requires only a client software component that decrypts the data back to a designated target.
Moreover, he said, he expects to extend the scope of the CA solution to include the UNIX servers maintained in his operation: "They use different tape devices today, but CA has promised to release UNIX agents soon that will allow us to centralize UNIX backups to our primary IBM tape library." Tape consolidation, enabled by the software, provided another operational savings that helped Low decide on the third-party approach.
Fidelity Information Services also benefits from automated encryption-key protection in the BrightStor product. More than serving as a central repository for the identification and storage of encryption keys, the Tape Encryption product provides full life-cycle key management, including key creation, monitoring, tracking, auditing, backup, and recovery. It also provides for automated expiration and removal of expired keys via integration with CA-1 or CA's other tape management system, CA-TLMS.
Low also has the peace of mind of knowing that corporate and personal data are automatically safeguarded from unauthorized access, and that he is helping the company avoid potential public relations nightmares, litigation, fines, and the remediation required to put customers and stakeholders at ease. Remediation carries its own costs beyond the campaign itself: decreased shareholder value, lost business, and churn among customers and partners. Through some out-of-the-box thinking, Low is delivering a lot of value to his company.
All in all, this example shows that even in the most "monolithic" of IT settings, the mainframe data center, there may be compelling justifications for violating the one-stop-shop approach that vendors are keen to promote these days. Enterprise-class storage doesn’t mean buying everything from a single provider. Arguably, it never did.
Your comments are welcomed: email@example.com.