Son of TagmaStore Targets Sweet Spot in Entry-Level Storage

HDS wants to take the market by storm

With some trepidation, I took a phone meeting with Hitachi Data Systems CTO Hu Yoshida to talk about the company’s recently announced products. Storage vendors aren’t known for their sense of humor, and my satirical presentation of HDS’ TagmaStore disk array at the SNW conference in Arizona a few months ago—which took the form of a mock movie poster showing a Godzilla-like robot bearing the moniker “TagmaStore: Last of the Big Iron Monsters”—might well have been taken as a slight by the technologists at HDS. If it was, I wholly expected to be admonished by Hitachi’s uberTech, but the subject was never raised.

Yoshida did correct me when I used the expression “Son of TagmaStore” as he began to describe the latest addition to the TagmaStore product family, the NSC 55. He said he preferred to think of it as a “lean and clean” TagmaStore. Aimed at the midlevel market, the NSC 55 features the same virtualization controller as its fire-breathing big brother, but with 32 processors around a 64 GB global cache capable of managing up to about 16 PB of storage. The product ships in a 5 TB configuration for about $150K, which Yoshida believes is within the sweet spot for entry-level storage in the middle tier of the market.

I wondered immediately how NSC 55 would fit with Yoshida’s core/edge paradigm for storage, discussed here in a previous column. Did the idea of customizing storage to meet different kinds of workloads still hold, or was this new product breaking with Yoshida’s architectural vision?

He responded that the new platform might be an “edge” device for large enterprises, but it was most certainly a “core” platform for the middle tier, providing better resiliency and non-stop performance than dual-controller solutions, including HDS’ own Thunder arrays. He rationalized that mission-critical applications in the middle tier required the same non-stop, reconfigure-on-the-fly, high-resiliency hosting as mission-critical apps in the large enterprise. The middle tier just needed them at the right price point, which NSC 55 delivers, in his view.

In addition to being leaner, the new platform is also cleaner, he added. It conforms to the European Union’s RoHS directive, which requires that vendors reduce hazardous substances (such as lead) used in the manufacture of their computer equipment. RoHS, which doesn’t take effect in Europe for a while yet, has its parallels in the U.S. market. California, for example, imposes a fee on the disposal of used electronics to offset the environmental cost.

Cleaner and leaner, TagmaStore Junior is also meaner than rivals, I learned as the conversation drifted onto the topic of virtualization. NSC 55, like TagmaStore Senior, features virtualization capabilities that enable, on the one hand, the attachment of up to 1024 servers to each of its 48 physical ports (a godsend in complex or switch port-constrained environments), and, on the other, the external connection and pooling of third-party storage arrays behind the TagmaStore front end. Simply put, according to Yoshida, the TagmaStore virtualization approach blows the socks off of IBM’s SAN Volume Controller or EMC’s newly announced Invista virtualization technology.

Yoshida became as emotional as I have ever seen him when he talked about Invista. He bristled at characterizations of the product by EMC as “out of band.” “EMC says Invista is out of band, stressing the point that it uses no cache and is not a state machine, but it isn’t out of band. Invista cracks packets [that pass through it] to provide routing. That is in-band and it isn’t very secure.”

Moreover, says Yoshida, EMC has virtualization backwards. “It’s not just about volume pooling,” he argues; “security and quality of service are now equally important. EMC just pools everything together. Then they require that you use extents to create specific volumes. We aggregate existing LUNs—and in some cases parts of LUNs—into pools based on the compatibility of the underlying devices. This is a much better approach in terms of volume recovery in the event of a failure.”

IBM’s SVC has some of the same foibles as the EMC product, Yoshida argues. Both, he notes, depend on failover clustering of devices to provide service guarantees. “That’s okay if only one SVC fails, but it offers no protection against multiple failures. Plus, I am told that with the SVC you need to stop the SAN to make any upgrades or changes to the SVC’s configuration. That introduces the idea of maintenance windows [and] downtime, which everyone wants to avoid.”

Then there is the matter of partitioning support: with the new TagmaStore, cache can be divided into a maximum of 32 partitions to support multi-tenancy on the storage platform. That means you can divide the platform between different hosts or applications, including both mainframe and open servers, supporting different quality-of-service levels in each partition. Invista has nothing like it, Yoshida says.

At 700,000 I/Os per second, the NSC 55 beats all comers. IBM SVC supports up to 140,000 IOPS, while EMC Invista delivers a paltry 30,000 to 40,000 IOPS, according to HDS-supplied comparisons. So, Yoshida asks, why would you want to pay double the price of the NSC 55 for the EMC solution?

Why indeed? At the same time, why buy all the virtualization and the FC fabric to go with it if your applications don’t require such a solution? The question was a kind of purity test for the CTO, a man whom I regard as a consummate technologist. His answer was telling: you don’t.

“We launched an initiative called Application Optimized Storage about a year ago. We are developing agents for popular applications to identify what kind of storage services they require. We have them for Oracle, SQL, Exchange, and Sybase and we are still developing them for DB2 and other applications. The idea is to match the right storage to the applications, based on their needs. In order to do that, we need to get information from applications, which requires agents,” he offered, sounding somewhat apologetic.

I understood his situation exactly. There are many IT professionals who were taught years ago that agents were a bad thing that stole expensive CPU cycles and precious network bandwidth. That was back when 10 Mbit-per-second networks and sub-Gigahertz processors were commonplace. As I listened to Yoshida, I wondered whether faster (and much underutilized) processors and networks haven’t changed the rules on agents. Clearly, to get to purpose-built storage, we need to know what applications need in the way of server, network, and storage resources.

Yoshida offered that, in addition to enhancing application awareness, HDS was proceeding on its commitment to support iSCSI as an optional interconnect on its platforms. Additionally, the company had just introduced new modular storage arrays, including a 5 TB, $20,000 SATA array with RAID 6 protection called Workgroup Modular Storage (WMS), and a couple of models of higher-end modulars called Adaptable Modular Storage (AMS), with FC or SATA drives, larger capacities, cache partitioning, and virtual ports (TagmaStore Lite?). RAID 6, he noted, is a necessary feature given the lengthy rebuild times that follow a disk failure on high-capacity SATA drives, since its dual parity protects against a second failure during the rebuild.

All in all, Yoshida told a good story, and now you are reading it here. HDS is making a formidable thrust into the midrange of the market with a product set that carries with it an enterprise pedigree, but is not simply a dumbing down of Big Iron wares.

It will be interesting to see whether the other majors will follow suit with anything like the platforms from HDS…and whether the pricey technology (by middle-tier standards) will resonate with consumers. Watch this space.

Comments are always welcome. In particular, if you use the HDS products mentioned in this column, please write and tell us about their performance.

About the Author

Jon William Toigo is chairman of The Data Management Institute, the CEO of data management consulting and research firm Toigo Partners International, as well as a contributing editor to Enterprise Systems and its Storage Strategies columnist. Mr. Toigo is the author of 14 books, including Disaster Recovery Planning, 3rd Edition, and The Holy Grail of Network Storage Management, both from Prentice Hall.