In-Depth

Building Block Storage, Part 2


In my last column, I wrote about building block storage and its potential to help organizations construct manageable and scalable storage infrastructure custom-fit to business needs. Although I wanted to launch directly into a discussion of some enabling technologies -- specifically, data management software from firms such as Crossroads Systems, Novell, and SGI -- something happened this week that was relevant to the previous discussion.

I was contacted by an integrator who wanted my assistance in reasoning with a new customer, a health-care services firm that was growing by leaps and bounds and developing its IT infrastructure strategy for the first time. It is a rarity these days to have the opportunity to discuss infrastructure strategy in a tabula rasa context; usually, you become mired in a discussion of legacy gear and what to do with it. This case was unique, and I was delighted to use my erasable markers to fill the consumer's whiteboard with my thoughts on building block storage.

Only, the whiteboard wasn't so pristine. The consumer had already purchased blade servers from a prominent vendor and had only avoided inking a deal for the same vendor's storage gear because the costs seemed exorbitant to him (they were). He was then engaged by an EMC sales rep, who was trying to sell him a "Celerra SAN" to meet his initial need for about 10 TB of storage -- enough to hold the output from transaction-oriented (that is, SQL and Oracle database-driven) applications. He was almost ready to sign the contract.

While I hold no hostility toward EMC solutions, except for Centera -- the company's content-addressable storage product -- for reasons that we can cover in a later article, I was confused by the term "Celerra SAN." It sounded like very convoluted marketecture indeed: Celerra is a network-attached storage (NAS) platform with back-end storage connected to the NAS server head via Fibre Channel. Competitors (such as NetApp) and many smaller players have similar configurations, but they don't call their products SANs. Regular readers of this column know that the term SAN itself is an oxymoron when applied to Fibre Channel fabrics. Fibre Channel is a channel protocol, not a network protocol, so it stands to reason that Fibre Channel cannot be used to build a storage area network. The marketing term SAN is already an exercise in technical silliness; calling a NAS a SAN just makes it more so.

This is not, however, the discussion the consumer needed to have. He was being pushed toward an infrastructure choice by a silver-tongued sales rep who was not talking much about technology at all. Instead, the rep was emphasizing EMC's "king of the mountain" status and selling relationships rather than products -- the hallmark of Hopkinton's sales strategy, I am told by many former EMC salespeople. The consumer was close to buying in.

I began applying some of the principles covered in the previous column. First, I asked him about his business and applications and about his projected data workload -- mainly data characteristics, volumes and growth projections, and access requirements (how many folks would need concurrent access to the data and whether access would be through the application software itself). Once workload characteristics and access patterns had been outlined, it was obvious why he believed that he needed fast block storage that would scale over time in a very manageable way. He wanted a SAN, or what passes for one today.
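For readers who want to pencil out that kind of workload characterization, a minimal Python sketch of a growth projection follows. The 10 TB starting point is the customer's stated need from above; the 40 percent annual growth rate is purely an assumption for illustration.

def project_capacity(base_tb, annual_growth, years):
    # Compound a starting capacity by a yearly growth rate.
    return [base_tb * (1 + annual_growth) ** y for y in range(years + 1)]

# Assume the firm's initial 10 TB (from the discussion above) grows
# 40 percent a year -- a made-up rate standing in for real projections.
for year, tb in enumerate(project_capacity(10.0, 0.40, 5)):
    print(f"Year {year}: {tb:,.1f} TB")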

Next, I asked if he really needed NAS functionality at all, and if not, why was he paying for it? He said he was told that Celerra was a SAN. The simple truth, I pointed out, was that if connectivity to the platform is being provided via NFS or CIFS, he was being sold a NAS head, regardless of the back-end storage interconnect.

He countered with some architectural descriptions provided by the sales rep that clearly had him confused. Going forward, the NS120 (the Celerra model he was considering) would offer flash drives providing all of the speed he was seeking, complemented by larger-capacity magnetic drives, plus all the software needed to move data between the tiers. The speeds and feeds seemed to make it a good choice.

I had to counter with three questions. First, why would he want a platform sporting flash drives costing between $30,000 and $40,000 apiece and capable of sustaining, at best, 500,000 write cycles before memory wear required their replacement? If databases were going to be driving data writes, how soon might he encounter the need to replace the 146 GB flash drives? What would that mean operationally in terms of interruptions? Moreover, had he factored into his budget, say, $60,000 every couple of months to replace flash drives (assuming that the drives need to be replaced in pairs, which is very likely)?
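To see why these questions matter, here is a back-of-the-envelope wear calculation in Python. The 500,000-cycle endurance and 146 GB capacity figures come from the discussion above; the daily write volumes and write-amplification factors are invented, and the answer swings wildly with them -- which is exactly why the questions needed asking before anything was signed.

CYCLES = 500_000    # rated write cycles before wear-out (figure cited above)
CAPACITY_GB = 146   # drive capacity (figure cited above)

def wear_out_days(daily_writes_gb, write_amplification):
    # Days until rated cycles are exhausted, assuming wear leveling
    # spreads writes evenly across all cells.
    lifetime_gb = CYCLES * CAPACITY_GB / write_amplification
    return lifetime_gb / daily_writes_gb

# Two made-up workloads show how sensitive the estimate is to assumptions.
for workload_gb, amp in [(500, 3.0), (5_000, 10.0)]:
    days = wear_out_days(workload_gb, amp)
    print(f"{workload_gb} GB/day at WA {amp}: ~{days:,.0f} days ({days / 365:.1f} years)")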

Second, I inquired whether he had looked at the history of the product. How soon after the introduction of the NS20 was it replaced by the NS120? I noted that most storage "system" vendors discontinue models about every 18 months -- corresponding roughly to the interval between introductions of next-generation system main boards. Storage systems are typically just computer main boards with lots of embedded software talking to ranks of disk drives. Customers often complain that their storage vendors are always force-marching them to the next big thing, whether their products require an upgrade or not.

Third, I asked if he had vetted the product with other users. What problems have other consumers had with the products, if any? He hadn't checked. If he had, he might have found a post I had made the day before on my blog that resulted in many consumer responses -- some favorable toward Celerra, others not -- and that was just in a 24-hour period. Google might provide a bigger sampling of data from users, I suggested.

That's when I caught myself. One problem with EMC, I noted, is that the company does not subject its gear to testing by SPEC.org or the Storage Performance Council. The company also maintains a warranty-based gag order on its customers (some don't comply, of course), prohibiting them from discussing the performance they receive from EMC products lest they have their warranties voided.

He was sounding a bit overwhelmed at this point, so instead of interrogating him further, I asked whether he had considered a simpler building block model for storage. I spelled it out for him.

The first questions you need to ask are those that characterize your requirements: the size and shape of data and access. I complimented his diligence in analyzing these issues.

Next, you need to consider management. How will the storage you are fielding be managed, and what happens when you scale the infrastructure horizontally and vertically as your business grows? Will the management approach you are using also scale, and can it accommodate heterogeneity in a unified way? Management, I explained, is a key gating factor in cost of ownership. Since the days of IBM's mainframe system-managed storage (SMS), it has been clear that companies cannot afford to hire more bodies every time they deploy more capacity. Management is the hedge against this.
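The arithmetic behind that point is simple enough to sketch in Python. The terabytes-per-administrator ratios below are assumptions for illustration, not industry benchmarks.

from math import ceil

def admins_needed(capacity_tb, tb_per_admin):
    # Headcount required to manage a given capacity at a given ratio.
    return ceil(capacity_tb / tb_per_admin)

# Compare a poorly automated shop (25 TB per admin, assumed) with a
# well-managed one (200 TB per admin, assumed) as capacity grows.
for capacity in (10, 100, 500):
    print(f"{capacity} TB: {admins_needed(capacity, 25)} admins at 25 TB/admin, "
          f"{admins_needed(capacity, 200)} at 200 TB/admin")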

Next, you need to turn away from a myopic focus on primary storage and look holistically at the way that data will be handled across infrastructure. He hadn't yet considered archiving or data protection, two services that would need to be provided in any respectable storage infrastructure -- especially in health care with HIPAA and other regulatory requirements to preserve and protect data. He needed to have a clear view of a storage management strategy and a data management strategy, I suggested.

He agreed, so I pressed on. Key to designing storage infrastructure today is reducing storage gear to its building blocks. Disk is cheap and getting cheaper, so why not capture this trend (which has existed since the 1980s) to drive cost out of storage? This message played into the pitch he was already getting from his integrator, who was reselling multiple storage products, including Xiotech's Emprise arrays, which are building block arrays devoid of a lot of "value-add" embedded software.

He had already heard the claims from Xiotech about their platforms, he said. His EMC sales rep had explained to him that Xiotech is a marginal player in the industry, that there had been problems with disk failures and data loss, and that real men bought EMC.

I knew that EMC had played the fear, uncertainty, and doubt card prior to the call, so I checked with Xiotech's CTO Steve Sicola to assess the validity of the claims. His honesty warmed my soul. He said that very early in the Intelligent Storage Element (ISE) product introduction (now branded Emprise 5000 and 7000), there had, indeed, been some problems with how the ISE communicated with drive firmware on some models of drives. This had resulted in problems such as early timeouts. These issues were quickly resolved, however, and caused no significant inconvenience to any consumers. They also drove the development of enhanced monitoring of drive telemetry that is probably the most advanced of any vendor in the business.

Today, he observed, customer experience with the product has been living up to the company's bragging. Because of Xiotech's on-board drive refurbishing, customer-reported drive failure rates have been reduced by 37 percent compared with other storage arrays. (Drives typically fail at a rate of 3 to 4 percent every six months, according to Seagate; Xiotech's drives show a nominal 1 percent failure rate.)
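Those percentages are easier to appreciate as replacement counts. A quick Python comparison follows, using the failure rates quoted above and an assumed 200-drive shop purely for illustration.

DRIVES = 200   # assumed array population, for illustration only

# Six-month failure rates quoted above: the midpoint of Seagate's
# 3 to 4 percent figure versus Xiotech's nominal 1 percent.
for label, six_month_rate in [("typical", 0.035), ("Xiotech", 0.01)]:
    expected_per_year = DRIVES * six_month_rate * 2
    print(f"{label}: ~{expected_per_year:.0f} drive replacements per year")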

As for being a smaller player in the industry, everyone would agree. However, the economy is a great leveler. EMC's last quarterly earnings report was not stellar, and the company refused to offer guidance for the coming year. By contrast, Xiotech is reporting significant growth in revenues and is having a hard time keeping product on the shelves.

This was less a pitch for Xiotech than a reminder that an alternative model for building storage exists to the one the three-letter companies are selling. Products such as Xiotech's ISE/Emprise are modular building blocks whose performance -- documented by the Storage Performance Council -- is currently the best in the business. By layering atop these FC-connected boxes an extensible "software-based" storage controller such as DataCore Software's SANsymphony, or a hardware virtualization engine such as IBM's SAN Volume Controller (SVC), you can effectively scale to petabytes of storage that can be sliced and diced readily to accommodate application and end-user needs.
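To make the layering concrete, here is a conceptual Python sketch of the model: building-block arrays pooled under a virtualization layer that carves virtual volumes across them. The class and method names are mine, invented for illustration -- they are not DataCore's or IBM's actual interfaces.

from dataclasses import dataclass, field

@dataclass
class BuildingBlock:
    name: str
    capacity_tb: float
    allocated_tb: float = 0.0

    @property
    def free_tb(self):
        return self.capacity_tb - self.allocated_tb

@dataclass
class VirtualizationLayer:
    blocks: list = field(default_factory=list)

    def add_block(self, block):
        # Scale horizontally by dropping in another building block.
        self.blocks.append(block)

    def carve_volume(self, size_tb):
        # Spread a virtual volume across whatever blocks have free space.
        placement, remaining = [], size_tb
        for b in self.blocks:
            take = min(b.free_tb, remaining)
            if take > 0:
                b.allocated_tb += take
                placement.append((b.name, take))
                remaining -= take
            if remaining <= 0:
                break
        if remaining > 0:
            raise RuntimeError("pool exhausted -- add another building block")
        return placement

pool = VirtualizationLayer()
for i in range(3):
    pool.add_block(BuildingBlock(f"array-{i}", capacity_tb=16.0))
print(pool.carve_volume(20.0))  # the volume spans two boxes transparently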

So long as you stay on ISE, you manage the scaling infrastructure with ICON, a W3C Web services standards-based management system that is fast gaining popularity as a general-purpose infrastructure resource management paradigm -- one that squashes the Storage Networking Industry Association's ill-fated Storage Management Initiative Specification (SMI-S) of two or three years ago. If you want a heterogeneous infrastructure going forward, CA, Symantec, Tek-Tools, and a host of others offer management consoles with lots of bells and whistles that will plug and play.
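For a flavor of what a Web services standards-based management interface looks like in practice, here is a purely hypothetical Python sketch. The endpoint, resource path, and field names are invented for illustration; they are not ICON's actual interface.

import json
from urllib import request

def list_volumes(endpoint):
    # Query a hypothetical management service for its volume inventory.
    with request.urlopen(f"{endpoint}/volumes") as resp:
        return json.load(resp)

# "storage-mgmt.example.com" is a placeholder host, not a real service.
for vol in list_volumes("http://storage-mgmt.example.com/api"):
    print(vol["name"], vol["capacity_gb"])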

That leaves data management, and I am delighted to report that moving data between storage tiers for archiving and protection is being made much simpler courtesy of companies such as Crossroads Systems, Novell, and SGI, the subject of the next column here.

By the end of our chat, the customer and I were best buddies. He said I had given him a lot to think about and requested a follow-on meeting to deep dive into his architecture. I can't think of a better way to spend my time.

Your comments, as always, are welcome. jtoigo@toigopartners.com.
