Hitachi Data Systems' Hu Yoshida Looks Ahead

The author of a foundational white paper is struggling to figure out what the storage utility model really is.

Hu Yoshida is someone I respect a great deal as a fellow writer and as a technologist. Years ago, he authored a white paper at Hitachi Data Systems describing the elements and activities involved in practical storage management; it became a foundational work in the industry.

How does a white paper become foundational? Simple. People steal it. They cut and paste Yoshida’s block diagrams into their own documents and call it their own work. Over time, the ideas and images become part of the general industry viewpoint, even if their author’s name is lost in the process.

Yoshida has never complained. Somehow, I don’t think it would be in character for this soft-spoken fellow who tends to eschew the limelight and focuses his time instead on building the future of HDS’s product set in storage.

Yoshida is currently consumed with two things: defining categories for his company's storage arrays (which do not fit easily within the analyst taxonomy of modular and monolithic) and figuring out what to make of the industry's current fascination with the so-called "storage utility." To capture his thinking on the former, I refer you to a white paper entitled "A Blueprint for Matching Network Storage Architecture to Business Applications Requirements from Core to Edge," which is attributed to Yoshida and co-author Carlos Soares.

While the paper contains a lot more marketing fluff than I would prefer, and includes irrelevant commentary from various analysts who say practically the same thing in every vendor's brochure, Yoshida's insights shine through. He would like to see the idiotic analyst categories "modular" and "monolithic" replaced by truly useful categories such as "core" and "edge" that link products back to specific application-focused usage patterns. It's a good idea, and another of Yoshida's foundational concepts—this time to support a truly utilitarian model for storage array acquisition based on application requirements.

The other issue on Yoshida’s mind is the so-called “storage utility.” In our conversation, Yoshida conceded that he is struggling to figure out exactly what the “storage utility” is so HDS can design an appropriate strategy to compete with other vendors who are touting its benefits. He says that the term has two meanings: one is technical; the other is financial.

The storage utility model, he says, is on the one hand about technologies such as HDS’s virtual ports—the capability to create virtual arrays that facilitate capacity aggregation and management. The vision, in his view, is an intelligent storage network with a virtual address space that separates the server view from the physical storage and provides resources based on application performance indicators and requirements.
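To make the virtualization idea concrete, here is a minimal conceptual sketch (my illustration, not HDS's actual implementation or product interface): a virtual LUN presents servers with one contiguous address space whose extents may live on different physical arrays, so capacity can be aggregated and remapped behind the scenes without changing the server's view. All names below are hypothetical.

```python
# Conceptual sketch of a storage virtualization layer: a virtual LUN
# whose block ranges map onto extents drawn from multiple physical
# arrays. Illustrative only; array and LUN names are made up.

class VirtualLUN:
    """Maps contiguous virtual block ranges onto (array, physical LUN) extents."""

    def __init__(self, name):
        self.name = name
        self.extents = []  # list of (start_block, length, array, phys_lun)

    def add_extent(self, length, array, phys_lun):
        # New extents are appended after existing capacity.
        start = sum(e[1] for e in self.extents)
        self.extents.append((start, length, array, phys_lun))

    def resolve(self, virtual_block):
        """Translate a virtual block address to its physical location."""
        for start, length, array, phys_lun in self.extents:
            if start <= virtual_block < start + length:
                return array, phys_lun, virtual_block - start
        raise ValueError("block outside provisioned capacity")

# A 300-block virtual LUN aggregated from two physical arrays:
vlun = VirtualLUN("app01")
vlun.add_extent(200, "array-A", "lun-7")
vlun.add_extent(100, "array-B", "lun-2")
```

The point of the separation is visible in the lookup: a server addressing block 250 of "app01" neither knows nor cares that the data actually lives at offset 50 of a LUN on a second array.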

I can already see the new block diagram coming together in his mind: on top of device platform management and storage resource management is process management, then service management—the next evolution in storage management with intelligent arrays providing an assist.

I would be cautious if I heard this mantra from any other array vendor, but Yoshida is passionate about two other things that are very important from a consumer's point of view. He says that the intelligent infrastructure will require a standard messaging bus for a single point of management. Moreover, he says that the Common Information Model (CIM) must happen to prevent what amounts to the creation of multiple vendor-specific approaches that will prevent storage utilities from ever being more than proprietary, vendor-specific lock-ins. Agreeing on CIM management standards and CIM SOAP messaging formats will not commoditize storage, he argues, but it will enable vendors to sell more storage that can be effectively managed by fewer personnel.
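To give a feel for what standardized CIM messaging looks like on the wire, here is a small sketch that builds a CIM-XML "EnumerateInstances" request of the general shape defined in DMTF's CIM Operations over HTTP specification. This is an assumed, simplified rendering for illustration, not a complete or validated request; CIM_StorageVolume is a standard CIM schema class, but the helper function and its parameters are my own invention.

```python
# Sketch of the kind of vendor-neutral message CIM management implies:
# an XML-encoded intrinsic method call asking a CIM server to enumerate
# instances of a class. Simplified; real requests carry more context.
import xml.etree.ElementTree as ET

def cim_enumerate_request(namespace, class_name, msg_id="1001"):
    cim = ET.Element("CIM", CIMVERSION="2.0", DTDVERSION="2.0")
    msg = ET.SubElement(cim, "MESSAGE", ID=msg_id, PROTOCOLVERSION="1.0")
    req = ET.SubElement(msg, "SIMPLEREQ")
    call = ET.SubElement(req, "IMETHODCALL", NAME="EnumerateInstances")
    ns_path = ET.SubElement(call, "LOCALNAMESPACEPATH")
    for part in namespace.split("/"):  # e.g. "root/cimv2"
        ET.SubElement(ns_path, "NAMESPACE", NAME=part)
    param = ET.SubElement(call, "IPARAMVALUE", NAME="ClassName")
    ET.SubElement(param, "CLASSNAME", NAME=class_name)
    return ET.tostring(cim, encoding="unicode")

xml_request = cim_enumerate_request("root/cimv2", "CIM_StorageVolume")
```

Because every conforming vendor answers the same request shape with the same model classes, one management console can interrogate arrays from many suppliers—which is exactly the single point of management Yoshida is arguing for.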

That’s the vision part, but Yoshida must also concern himself with the marketing definition of storage utility. He notes that the current economy has compelled some companies to consider outsourcing their storage infrastructure to a third party. JP Morgan Chase just agreed to a $5 billion “pay as you go” outsourcing deal with IBM Global Services, while HP just garnered $3 billion in a similar contract with Procter & Gamble.

Getting into this storage provisioning game requires deep pockets and broad competencies. If it becomes a trend, Yoshida worries, it will marginalize equipment suppliers and take away whatever affinity and brand recognition array manufacturers have cultivated with their customers. The financial case for outsourcing seems compelling enough: customers will no longer own depreciating assets. However, there are many potential deficits both for the consumer and for the vendor.

While Yoshida works to identify ways to package his company’s virtual ports and virtual arrays, we will watch to see whether storage utility-qua-outsourcing really gains a head of steam in the market. I would welcome hearing readers’ views on, and experiences with, IT outsourcing generally and storage services in particular.

About the Author

Jon William Toigo is chairman of The Data Management Institute, the CEO of data management consulting and research firm Toigo Partners International, as well as a contributing editor to Enterprise Systems and its Storage Strategies columnist. Mr. Toigo is the author of 14 books, including Disaster Recovery Planning, 3rd Edition, and The Holy Grail of Network Storage Management, both from Prentice Hall.