
What's In Store for Storage 2008

Disaster recovery tends to become a focus during any period of economic uncertainty -- and 2008 will deliver uncertainty in spades if market pundits are correct.

This year flew by with little reference in the trade press to the tenth anniversary of the Fibre Channel fabric, oxymoronically called a "SAN." There was also a noticeable absence of any comment in October about the tenth anniversary of the Storage Networking Industry Association (SNIA), whose formation paralleled the introduction of FC fabric products.

Ironically, Miss Manners associates a tenth anniversary with "tin." Maybe that is what 2007 was: the year of tin, as in storage array chassis encased in tin, or vendor promises echoing as hollowly as tin. I will mark it as the beginning of the end of Fibre Channel.

I know FC fabrics are still being sold and upgraded. Mostly they are "inside" an array cabinet—so-called SAN in a Box—but times are changing.

Protocol Wars

There was a time when an FC hard disk, switch, and host bus adapter constituted what the marketing people called the fastest game in town for storing data. However, with the advent of 10 Gbps Ethernet standards, and with a spec for 100 Gbps Ethernet already ripening, FC's speeds-and-feeds claims lost their luster.

It is interesting to watch the industry scramble to keep its products from being marginalized by announcing FCoE (Fibre Channel over Ethernet), a move best characterized as a desperation play. Already, in forums and conferences around the world, vendor spokespersons are tripping over their own words, referring to their FC wares as "legacy SANs," even as it provokes the wrath of consumers in the audience, who wonder aloud, "If it is a legacy SAN, how come you just sold it to me last week?"

Frankly, FC never delivered the goods. It didn't provide the any-to-any connectivity between servers and storage that those who conceived of a SAN back in the late 1990s had promised. The Enterprise Network Storage Architecture (ENSA) white paper disappeared shortly after Hewlett-Packard acquired Compaq (and with it Digital Equipment Corporation, whose brainiacs first articulated the idea of a SAN). HP knew that Fibre Channel couldn't deliver the universal storage pool described by ENSA, so it quietly buried the white paper and erased all traces of its existence.

HP, like many other vendors, was selling "FC SANs" that were SANs only because its marketers said they were. Fibre Channel itself, according to one unfortunate Cisco Systems representative at an event in Denver late last year, was a network protocol "because we [Cisco Systems, now in the FC switch business] say it is," an assertion that elicited a hearty "boo" from the audience.

Pounding another nail into the coffin of FC was a survey conducted last year, which showed that FC fabrics were the third leading cause of IT downtime in companies, just behind "Wrath of God" events (such as hurricanes) and WAN outages. Server failure came in fourth on the list. This was interesting because surveys in 1997 saw consumers pursuing SANs precisely because of vendor claims that they were more resilient than direct-attached storage modalities. With direct-attached configurations, losing a server meant losing access to data in the attached array. By contrast, argued the vendors, deploying storage in a fabric would ensure that server failures did not impair data availability. Not so, it seems.

More important to the future of FC than debates over SAN resiliency was the new economic reality coalescing in 2007. In an increasingly bearish economic climate, FC fabrics were showing up on total-cost-of-ownership analyses as a huge cost nail in need of a good hammer. FC fabrics were, and are, simply put, the most expensive way ever devised to host business data. Between the extraordinarily high hard (CAPEX) costs for equipment and the soft (OPEX) costs for SAN-educated personnel, software, downtime, administration, and energy, spiraling storage costs accounted for 35 to 75 cents of every dollar spent on IT hardware. With capital budget dollars in increasingly short supply (and the dollar's purchasing power declining significantly), something had to be done.

Enter iSCSI. The industry, including many of the FC fabric vendors, has begun to tout iSCSI as the next big thing, a claim I have been hearing for roughly five years. Certainly, there are some pluses in the iSCSI pitch.

Implementing an iSCSI "SAN" (technically, iSCSI is not a SAN either) is less expensive, in part because it leverages technology the company already owns (IP switches, NICs) or can obtain for free (iSCSI software initiators), and in part because it leverages skill sets the company already has on staff: a working knowledge of TCP/IP networks and Ethernet. Additionally, the components used to create iSCSI-attached storage are mostly plug-and-play. Unlike FC switches, which can fully comply with the letter of ANSI standards and still be non-interoperable with one another, iSCSI switches (or rather, IP switches) work and play well together.
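To make the plug-and-play claim concrete, here is a minimal sketch of bringing an iSCSI LUN online from a Linux host, driving the standard open-iscsi software initiator from Python. The portal address and target IQN are hypothetical placeholders; the sketch assumes the open-iscsi package (which supplies the iscsiadm command) is installed, precisely the sort of free initiator software mentioned above.

```python
# Minimal sketch: attaching iSCSI storage with tooling most shops already
# have. Assumes a Linux host with the open-iscsi package installed; the
# portal address and target IQN are hypothetical placeholders.
import subprocess

PORTAL = "192.168.1.50:3260"                # hypothetical array portal on the LAN
TARGET = "iqn.2007-12.com.example:array0"   # hypothetical target IQN

def run(cmd):
    """Echo a command, run it, and raise if it fails."""
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Discover the targets advertised by the portal (SendTargets discovery).
run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL])

# 2. Log in to the target; the LUN then appears to the operating system
#    as an ordinary SCSI block device (e.g., /dev/sdX).
run(["iscsiadm", "-m", "node", "-T", TARGET, "-p", PORTAL, "--login"])
```

Two commands over commodity Ethernet: no HBAs, no fabric zoning, no switch licenses. That is the economic argument in miniature.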

It is also true that major vendors are ensuring that their iSCSI storage arrays remain just as proprietary as their FC products. That is part of an ongoing business model, first articulated publicly by a former EMC CEO in 2001, that sustains profit margins by joining proprietary "value-add" software at the hip with proprietary controllers in order to differentiate otherwise commodity storage wares, lock in consumers, and lock out competitors. The proprietary nature of the gear is what impedes cross-platform resource management and virtualization, among other things. That does not change with the deployment of iSCSI.

Helping iSCSI along is the current fascination with server virtualization à la VMware. In an interesting twist of urban mythology, iSCSI storage is widely believed to be easier to connect to virtual machines than other forms of storage, including FC. In fact, a LUN is a LUN regardless of the interconnect, but that has not stopped iSCSI storage vendors from promoting the perception that server virtualization and iSCSI are twin sons of different mothers.

In the storage industry, where there is an overwhelming desire to pigeonhole any product or technology that sells more than a copy or two, dominant FC product vendors have categorized iSCSI variously as a niche interconnect aimed at small-business consumers with no performance requirements, or as a low-cost method of joining two or more geographically dispersed SANs. Now there is talk of iSCSI in the enterprise, particularly behind consolidated and virtualized servers. In a few cases, it continues to be described as a platform for retention storage holding data with a low probability of re-reference: strictly tier two. However, Dell's purchase of iSCSI array maker EqualLogic last month stood many of the old theories about the technology on their collective heads. EqualLogic's wares displace some very low-end, low-performance iSCSI targets from EMC that Dell used to hawk on behalf of its Hopkinton, MA supplier. Press releases from the companies, and related stories in the trade press, stated boldly that the interconnect was making its way into the data center, replacing new Fibre Channel purchases in many cases.

Still working to gain a foothold are technologies such as Zetera's UDP/IP-based block-storage protocol, Z-SAN, and other Ethernet- and PCI-bus-extension-based storage connectivity options. Z-SAN is worth careful consideration: it is as simple to deploy as iSCSI and, in many configurations, exceeds iSCSI's performance by leveraging UDP rather than top-heavy TCP as a transport. Also significant is the elimination of additional expenditures for virtualization and RAID gear, since these are provided by the Z-SAN protocol itself.

Spinning Rust

This year also saw the introduction of Perpendicular Magnetic Recording (PMR) technology in disk drive products, enabling leading disk makers to field obscenely capacious drives of 1 TB and beyond. In 2008, we will likely see the continued capacity growth that has been the trend since the mid-1980s.

One by-product of PMR-based 750 GB and 1 TB 3.5-inch disks has been the invalidation of RAID 5 as a meaningful protection method for data. Rebuilding a RAID 5 set following the loss of a 1 TB member disk is an extraordinarily protracted process, and throughout the rebuild window a second member failure, or a single unrecoverable read error on any surviving disk, destroys the entire set. The back-of-the-envelope arithmetic below illustrates the exposure. RAID 6 and RAID n products will probably proliferate in 2008.
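The sketch below uses illustrative assumptions only: a 1 TB member disk, an optimistic 50 MB/s effective rebuild rate, an eight-drive set, and the commonly quoted unrecoverable read error (URE) rate of one per 10^14 bits for desktop-class drives.

```python
# Back-of-the-envelope RAID 5 rebuild exposure. All inputs are
# illustrative assumptions, not measured vendor figures.
disk_bytes = 1e12        # 1 TB member disk
rebuild_rate = 50e6      # assumed effective rebuild throughput, bytes/sec
drives = 8               # assumed number of disks in the RAID 5 set
ure_per_bit = 1e-14      # assumed unrecoverable read error rate per bit

# Time to rebuild one failed member (ignores production I/O contention,
# so this is a best case; real rebuilds can take far longer).
hours = disk_bytes / rebuild_rate / 3600
print(f"Rebuild window: {hours:.1f} hours (best case)")

# Every bit on every surviving member must be read without error to
# reconstruct the failed disk.
bits_read = (drives - 1) * disk_bytes * 8
p_failure = 1 - (1 - ure_per_bit) ** bits_read
print(f"Chance of a rebuild-killing URE: {p_failure:.0%}")
```

With these inputs, the rebuild takes hours under ideal conditions (days, under production load) and stands roughly a 40 percent chance of tripping over a URE before it finishes. That, in a nutshell, is the case for RAID 6 and its dual-parity kin.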

Despite the improvements in disk capacity, more than 70 percent of the world's electronic data continues to reside on magnetic tape. That volume will likely grow in 2008, contrary to the marketing messages of purveyors of disk subsystems, de-duplication technology, and disk-to-disk backup. Yes, this group garnered a lot of ink in the trade press in 2007 touting the advantages of disk over tape, and nobody particularly loves tape, an old-school modality for data storage that predates even the disk drive. However, tape remains the most economical storage medium in the game and, as recent advertising from Sony testifies, there are only two types of disk drives: those that have failed and those that are about to fail. Better have a backup.

Software and Services

This year, storage professionals saw an uptick in interest in storage management and virtualization—the former being an expected by-product of concerns about storage allocation inefficiency and cost; the latter driven by the server virtualization craze. This trend toward software will continue into 2009 and beyond.

The rule of thumb is that distributed storage, especially "FC SAN" storage, is an underachiever when it comes to allocation efficiency. Recent studies at large corporations have repeatedly found allocation efficiency rates hovering between 10 and 30 percent. That means 70 to 90 percent of every disk in the SAN is being wasted on replicated data, contraband data, orphan data, and the like. Contrast that with mainframes, which provide tools at the operating system level for managing data and storage infrastructure: there, capacity allocation efficiency is consistently above 70 percent.
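To put those percentages into capacity terms, here is a worked example; the 100 TB deployment size is a hypothetical figure chosen only to make the arithmetic visible.

```python
# Worked example: what 10-30 percent allocation efficiency means in raw
# capacity. The 100 TB deployment size is a hypothetical placeholder.
deployed_tb = 100.0

for efficiency in (0.10, 0.30, 0.70):
    useful = deployed_tb * efficiency
    wasted = deployed_tb - useful
    label = "mainframe-class" if efficiency >= 0.70 else "distributed SAN"
    print(f"{efficiency:.0%} efficient ({label}): "
          f"{useful:.0f} TB doing useful work, {wasted:.0f} TB squandered")
```

At 10 percent efficiency, nine of every ten terabytes purchased are doing nothing useful. No CFO enjoys hearing that.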

Cash-strapped companies want to stop throwing capacity at every need and to leverage the investments they have already made. This is good news for storage resource management software vendors including Tek-Tools, CA, and Symantec. It is also good news, potentially, for a new crop of storage service providers who will offload archival data from their clients and host it in remote data centers.

Archive and disaster recovery are topping the 2008 to-do lists of corporate CIOs in just about every survey commissioned by the industry. New attention to archive is a no-brainer: by archiving the roughly 40 percent of data squatting on every spindle in your shop that has little chance of ever being re-referenced, you can forestall new hardware acquisitions, comply with regulatory requirements, breathe new life into your tape backup operations, and "green" your infrastructure in one fell swoop. Next year, you may even get a tax credit for moving your old data onto greener platforms such as tape and optical, if current legislation passes Congress.
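The hardware-deferral claim is simple arithmetic. The sketch below assumes, purely for illustration, that 40 percent of primary data can be archived and that primary data grows at 50 percent per year; neither figure comes from a specific survey.

```python
import math

# Back-of-the-envelope: how long does archiving defer the next disk
# purchase? Both inputs are illustrative placeholders.
archivable = 0.40    # assumed share of primary data that can be archived
growth = 0.50        # assumed annual growth rate of primary data

# If primary storage is full today, archiving frees `archivable` of it.
# Deferral is the time for the remainder to grow back to full capacity:
#   (1 - archivable) * (1 + growth) ** t = 1
years = math.log(1 / (1 - archivable)) / math.log(1 + growth)
print(f"Next capacity purchase deferred by roughly {years:.1f} years")
```

Even with an aggressive growth assumption, the freed capacity buys better than a year of breathing room, which is exactly why archive tops the to-do lists.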

Expect more noise around data classification tools, data migration tools, and "pre-ingestion" archival tools matched to specific data types: databases, files, video, e-mail, and enterprise content management systems. I have my eye on Trusted Edge, acquired by FileTek, for its ability to classify user files based on the user's job function. I am also looking for additional players to join BridgeHead Software in providing manager-of-managers functionality across multiple archive policy tools.

Disaster recovery, as noted at the outset, tends to become a focus whenever the economy wobbles. Technology for failing over infrastructure between and betwixt corporate offices, and across physical and virtual hardware platforms, has been improving steadily thanks to companies such as CA XOsoft and Neverfail Group, among others. New business-process-centric DR scenario monitoring software, such as RecoverGuard from newcomer Continuity Software, will likely see some market growth.

Conclusion

All in all, the collective storage market may actually grow from its current $30 to $34 billion business to nearly $40 billion by this time in 2008. If it does, I'm betting that vendors won't achieve their revenue goals by pushing tin, but by pushing software and services that enable businesses to do more with less.

Some will find this more challenging than others. EMC has spent a boatload of money adding functionality directly to its arrays and may need to rethink this strategy in order to push software value out into the infrastructure generally. Conversely, little shops (such as Crossroads Systems and DataCore Software) are ideally positioned to add value around infrastructure at a fraction of the price of buying new arrays. This could be the harbinger of a new kind of storage infrastructure, one in which already-embattled array technologies find their wares dumbed down in service of a new service-oriented storage model.

Final word: VMware will continue its expansion in 2008, but it won't be long before its product differentiation erodes. Microsoft's new server offering provides high-performance virtualization for those seeking to host multiple virtual machines in a single box. Because the majority of servers being virtualized today run the Microsoft OS, expect a one-stop shop from Redmond to win the day over third-party virtualization wares. The handwriting is already on the proverbial wall.

Those are my projections with respect to storage in 2008. Your feedback is welcome: [email protected]. See you next year and thanks for reading.
