Storage Projections: The Lucky 7 for 2007

The top seven storage trends that mattered this year and may matter even more in 2007.

As 2006 winds to a close, it is useful to look at the storage trends that mattered and to wonder about those that may matter even more as 2007 unfolds. The large analyst houses, such as Gartner and IDC, have already had their say. Now it’s my turn. I call my projections the Lucky 7.

Number 1: Consolidation in the Data Center

The beginning of 2006 saw a brief surge of corporate IT spending, earmarked simply as “consolidation projects.” To reduce expenses for software licenses, to contain heat and energy costs, and for a host of other reasons, many companies consolidated servers and software using products such as VMware and various blade server technologies—a smart move.

At the same time, many “consolidated” their data, previously stored in far-flung branch office environments, into a central repository—which was not always such a good idea. According to storage vendor marketing, the latter strategy offered the “intuitive benefits” of (1) reducing labor costs for managing distributed IT, (2) enabling economies of scale and better leveraging of both platforms already deployed inside the data center and personnel required to manage them, and (3) exposing the newly re-centralized data to more disciplined data protection and archiving routines.

However, these intuitive gains too often produced unanticipated losses. After data re-centralization, help-desk operators in corporate IT often found themselves inundated with complaints from end users about application slowdowns and long waits for access to files.

Simply put, the cost of violating the 80/20 rule of networks as it applies to storage (which is to say that 80 percent of accesses to data are made locally by those who create them) was much higher than anticipated. Re-centralization created choke points in file access and imposed new workloads on networks and other infrastructure components.

In response, 2006 saw a “new” technology, called Wide Area File Systems or WAFS, go mainstream—ostensibly to solve the file-sharing problem. WAFS appliances cached copies of centrally stored data back out to the branch offices where the data had originated. In the end, the cost of WAFS technology often erased whatever cost advantages were supposed to accrue from re-centralization, making WAFS the greatest technology that no one should ever need to buy.

In 2007, I am hopeful that most companies will have learned their lesson about file re-centralization. With or without WAFS products, re-centralization isn’t the silver-bullet solution for data protection or regulatory compliance that many vendors claimed. In truth, WAFS itself is looking less and less like a “productizable” technology: 2007 should see delivery of long-awaited extensions to NFS version 4 that will provide most of the functionality of proprietary WAFS products, but in a universal, standards-based way, as part of NFS.

Number 2: Array Deconstruction

While the economics of disk continued their predictable curves in 2006—doubling in areal density over 2005 figures while halving in cost on a per-GB basis—“enterprise class” arrays continued to climb in price by 70 to 120 percent year over year. Driving the increases were (1) the industry-wide adoption of dual-ported memory in cache controllers (whether or not such technology was actually necessary from the standpoint of cache protection), and (2) the addition of value-add software to arrays (charged to consumers whether they used the features or not).

Typically, value-add software joined at the hip to a proprietary array was soft-pedaled by vendors as a “one-stop shop” solution with a “one throat to choke” invoice and warranty. In point of fact, most of the value-add software tended to create proprietary lock-in for the vendor’s product. The consumer who spent big bucks on proprietary wares would therefore be required to buy additional arrays or disk drive trays only from the vendor who sold them the original array—if they wanted to derive any value over time from their investment in value-add, that is.

Whether this marketing model will go the way of the dinosaur in 2007 is a matter of conjecture, of course. However, there are some promising signs.

During 2006, we saw many excellent disruptors of the monolithic array model come to market. First is the Universal Storage Platform (USP) from Hitachi Data Systems. Some would argue that the USP is just bigger iron, but I suspect that consumers are not buying it for that reason. I’m reasonably sure that CTO Hu Yoshida and the gang at HDS did not conceive of the USP, also known as TagmaStore, as a disruption of monolithic storage platform engineering. However, by deploying a USP, with its crossbar-switching head, and slaving all other storage, whether EMC, IBM, or no-name, off its back-end ports, HDS made it quite clear that, at the end of the day, everyone was just selling a box of Seagate hard drives.

Whether or not the high priced TagmaStore is your cup of tea doesn’t really matter; 2006 also saw a proliferation of other Big Iron busters, including new products from OnStor, Reldata, and a host of others, designed to deconstruct NAS technology and to front-end virtually anything from a few file servers to a flock of arrays in a Fibre Channel fabric or iSCSI network with a common NAS head. Somewhat like TagmaStore, these products demonstrated that you could physically pull the controller and interconnect from the monolithic array and place it into its own box, enabling disk to be purchased at list price, which is constantly falling.

Adding to the phenomenon of deconstruction was the ingenious separation of array functions from arrays themselves by vendors (mostly start-ups) who placed functionality onto software-based appliances and switches instead. Virtualization vendors paved the way, but Paul Carpentier, CTO of Caringo, helped reinvigorate the trend by creating a content addressable storage product (CAStor) that you could install on any server and that would turn any storage volume into a CAS platform. The approach made a lot of consumers start second-guessing the real value of an overpriced proprietary hardware solution like EMC Centera. Score one for the little guys.
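The content-addressing idea itself is simple enough to sketch in a few lines. The toy store below is my own illustration of the general CAS concept, objects named by a hash of their contents, and is not Caringo’s actual design; the `ContentAddressedStore` class and its methods are invented for the example.

```python
import hashlib

class ContentAddressedStore:
    """Toy content-addressable store: objects are retrieved by the
    hash of their contents rather than by a filename or block address."""

    def __init__(self):
        self._objects = {}

    def put(self, data: bytes) -> str:
        # The object's address IS its content hash, so identical
        # content always lands at the same address (free deduplication).
        address = hashlib.sha256(data).hexdigest()
        self._objects[address] = data
        return address

    def get(self, address: str) -> bytes:
        return self._objects[address]

store = ContentAddressedStore()
addr1 = store.put(b"quarterly report, final")
addr2 = store.put(b"quarterly report, final")  # duplicate content
assert addr1 == addr2                          # same content, same address
assert store.get(addr1) == b"quarterly report, final"
```

Because any commodity server can compute a hash, the technique has no intrinsic dependency on proprietary hardware, which is precisely the point CAStor made against Centera.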

Crossroads Systems also made an impression, this time entering the fray with a whole bag of new appliances based around their intelligent router engine. For Crossroads, it was time to reinvent a company whose fortunes, based on Fibre Channel to parallel SCSI bridging, had been waning for a few years. Following a new business plan, the company returned to the forefront of the market with appliances that did discrete security and virtual tape functions, promising to take a significant bite out of the profits of the big iron boys.

Even Microsoft, it seems, is a member of this party. Last week, Redmond announced a new software SKU in the form of the Windows Unified Data Storage Server. Blended with server hardware from OEMs such as Dell, the WUDSS is capable of serving as the iSCSI controller for a number of back-end storage platforms. If this product is ultimately integrated with Microsoft’s Storage Server R2, you can add CIFS and NFS file services, and WUDSS becomes a true multilingual storage controller. Cobble it together with Microsoft’s WinFS database-as-file-system replacement and its Office Open XML document format, which was just approved as an international standard for documents, and the next storage solution you buy might just have Microsoft written all over it.

Look out also for Gear6, an exciting start-up out of Menlo Park, CA. If they are successful, as they may well be, at speeding up all storage with an external common cache, the value proposition of big iron may become increasingly difficult to sustain.

Number 3: IP Storage Ascendant

Readers of this column are aware of the oversell that has always accompanied Fibre Channel. As a storage interconnect, FC is well-suited to extremely high performance applications. However, roughly 80 percent of the universe runs on Windows, which does not require FC speeds and feeds.

iSCSI, on the other hand, offers a rich, scalable, and inexpensive alternative to FC that is perfectly suited to Windows apps. Thus, we’ve seen a massive deployment of iSCSI solutions, far and away dwarfing the implementations of Fibre Channel, according to IDC analysts. I’m not sure I believe IDC’s numbers, but I do believe my own eyes, and iSCSI has made significant inroads into enterprise storage this past year—far beyond what IDC and Gartner were identifying as sweet spots for the technology last year (e.g., as low-cost, IP-based gateways between FC SANs).

iSCSI is good stuff, to be sure, but it is still a channel interconnect masquerading as a network—or, more precisely, iSCSI operates block storage as just another application across a network interconnect. It beats the price of FC any day of the week and has a significantly lower learning curve. However, it isn’t technically networked storage. This year, UDP-based storage built on the work of Zetera in Irvine, CA, emerged in a big way. Bell Micro and Netgear were the first to offer Zetera Storage-over-IP, which leverages UDP/IP, rather than TCP/IP, to enable IP multicasting. This is important because it places storage functionality such as RAID, copy-on-write, and even LUN virtualization directly into the network protocol, where it belongs. If these functions can be delivered as part of the network protocol, we move a lot closer to networked storage and array deconstruction.
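A toy calculation illustrates why multicast matters for a function like mirroring. Both functions below are my own simplification of unicast versus multicast delivery, with invented packet counts; they model the general idea, not Zetera’s actual wire protocol.

```python
# Toy comparison of host-side packet counts for a mirrored write.
# The figures are illustrative; this models the general unicast vs.
# multicast trade-off, not any vendor's actual protocol.

def unicast_packets(mirrors: int, write_packets: int) -> int:
    """TCP/unicast (iSCSI-style): the host transmits the write
    separately to each mirror target."""
    return mirrors * write_packets

def multicast_packets(mirrors: int, write_packets: int) -> int:
    """UDP multicast (Storage-over-IP-style): the host transmits once
    to a multicast group; the network delivers a copy to every member,
    so host traffic is independent of the mirror count."""
    return write_packets

print(unicast_packets(3, 100))    # 300: traffic grows with mirror count
print(multicast_packets(3, 100))  # 100: traffic stays flat
```

Pushing mirroring into the protocol this way keeps the host’s transmit load constant no matter how many drives join the multicast group, which is what makes functions like RAID plausible as network-level services.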

I’ll bet, as Zetera-enabled products gain market presence in 2007, that you will see other vendors such as LeftHand Networks and Adaptec, both of whom have stellar UDP-based storage protocols in their hip pockets, begin to offer these as alternative interconnects for their platforms. Time will tell whether the awesome success of Zetera-enabled technology, which has blown away the adoption records set by Fibre Channel SANs over the last 24 months, will turn the heads of the major players or their loyal customers.

Number 4: Proliferation in the Intra-Array World

Even as an increasing number of protocols are coming to the fore in the server-to-storage interconnect space, proliferation is occurring in the intra-array world as well. Leading the charge is Serial Attached SCSI, or SAS. I have to confess that I was originally very concerned about the future of this technology, which delivers the bulk of the value associated with Fibre Channel at a significantly lower cost.

Early on, there were problems with the testing of the standard. The SCSI Trade Association tried to bury the issues, but it seemed that every vendor and his kid sister were going their own way when implementing the SAS protocol on their devices. The result was a series of early incompatibilities between SAS controllers and hard disk drives from various manufacturers, which we saw firsthand in testing at my labs.

These issues now appear to have been worked out, and certain companies (in particular, a newly invigorated Adaptec and LSI Logic, and hard disk makers Seagate and Fujitsu) seem to be going after SAS in a big way. New SAS controllers are inexpensive and efficient, delivering speeds and feeds on par with their more expensive FC cousins. At the drive level, a SAS drive is the same hardware as an FC or parallel SCSI drive, with comparable speeds, feeds, and resiliency characteristics, but with a different electronics pack. Plus, SAS is designed to support not only dual-ported SAS drives but also single-ported SATA drives—natively.

I now expect SAS to make huge inroads into the storage array market in 2007. Moreover, SAS’s little secret, that it offers a worldwide naming scheme and other features comparable to Fibre Channel, might just see the protocol moving outboard from the array and eating into the profits on the Fibre Channel vendors’ tables. The SCSI Trade Association was always a little coy about why the discussion of SAS was confined to its use as an interconnect inside the array. FC both lives inside arrays and serves reasonably well as an externalized inter-array fabric protocol: so, why not SAS? Politically, the association’s members could not in good conscience say such a thing: most SAS manufacturers also manufacture FC components, which is where the real profits are made. But there was always that telling wink…

Number 5: Industry Consolidation Takes a Toll on SNIA

In 2006, the Storage Networking Industry Association began to feel some pressure to change from an aggregation of mostly hardware manufacturers into something else. SNIA is, after all, a microcosm of the industry as a whole, and the industry’s implosion in 2006, with large companies gobbling up small fry, has had a negative impact on membership opportunities within the SNIA club.

It is too soon to suggest that SNIA will go away in 2007, though little would make me happier. The organization, in spite of a fine collection of technical talent, has made a political mess of everything it has tried. Earlier this year, one of the founders contacted me to ask for input into the organization’s effort to reinvent itself. I suggested moving to a more open model, like the IETF’s, that would enable consumers to participate in the development of quasi-standards like the Storage Management Initiative Specification (SMI-S). I further suggested that they look at ASCDI, an association of resellers and integrators, for a model that describes standards and ethics and the means to enforce them within the industry.

Alas, little has happened since, though I hear rumblings that the organization is going to try to move up the protocol stack in 2007, growing its potential revenue base by inviting in the records management companies. AIIM and ARMA are already entrenched in this space, and I wonder what advantage records management vendors will see in paying even more dues to another organization.

Time will tell whether 2007 tolls the death knell for SNIA.

Number 6: Industry Consolidation Takes a Toll on Consumers

Consumers who were willing to go outside the ranks of vendors with three-letter acronyms for names to try the innovative ideas and products of new companies found their strategies compromised this year as big vendors absorbed many little ones. Mergers and acquisitions are inevitable in the contemporary market across all industries, and storage is no exception.

The good news is that some mergers may actually have strengthened the products of companies that were acquired. In the case of XOsoft, for example, which was acquired by CA this year, the company's products are being integrated into a stronger foundation of data protection functionality than was previously possible on its own.

Meanwhile, the jury is still out on the fortunes of Revivio, acquired recently by Symantec. By all accounts, Symantec will deliver the next evolution of the product to users at just about the time Revivio had promised its next release, in early 2007. The acquisition might also allow Symantec to offer Revivio wares at a lower price point than Revivio itself could afford to charge.

Someone smart once said that the only permanent thing in life is change. So it is with storage in 2007. The good news is that small innovators will continue to form companies to create and release new products. Many of these innovators are from the very companies that were absorbed by the big guys in 2006. Call them serial entrepreneurs, but they like start-up row and are bound to keep reappearing there.

Given this fact, it is probably a good idea to hedge all storage acquisitions going forward. Look for at least two sources of supply for any critical infrastructure component. That way, you will have alternatives if the need arises.

My other piece of advice is to look for small vendors who are actually enjoying what they are doing and who are not beholden to venture backers in a big way. Tek-Tools and DataCore Software are cases in point. Both vendors continue to innovate and develop cool new products. They are the closest thing in this industry to becoming “beloved brands” to their customers. When you encounter such companies, you must buy their stuff.

Industry consolidation will continue in 2007. But you have a better chance of surviving it than does SNIA.

Number 7: The Rise of Storage Service Providers…Finally

For all the hoopla around regulatory compliance and its growing mandate for data archiving and data protection, I suspect that organizations are even less able today than they were last year to field expensive new infrastructure and to hire the specialized personnel required to organize their storage junk drawers. This paves the way for service providers to begin building viable businesses around archive and backup.

Dynamite service providers, sporting world-class facilities for data hosting and skilled personnel for data management, have begun to appear, supported by an eager telco industry and an even more desperate cadre of storage hardware vendors. If you’d rather not sort your own junk drawer, you will probably want to consider one of these third-party services. Arsenal Digital, CenturyTel/The Mountain, and Data Islandia are three providers we have covered here in past columns, and I am sure that others will be knocking on your door shortly.

This time, the SSP model may actually work.

Well, those are my lucky seven trends. We wish you a great holiday season and look forward to covering the wacky world of storage in 2007. Send your holiday wishes to
