
On to a New Decade

In 2010, vendor marketing campaigns will ramp up with the usual hype.

In nearly every publication you've read lately, pundits resemble the oracles of old, trying to predict what trends will emerge in the new decade. Many of these predictions lean on statistical models to give them the weight and appearance of science, even though they are often less "objectively empirical" than they seem. In this first column of the new decade, I will try to get out in front of the 2010 vendor marketing campaigns: to cut through the statistical blather and to better understand the real importance, if any, of what we're reading in vendor brochures, analyst reports, and the trade press.

One storage-related statistic that jumped out at me in late December came from the University of California, San Diego, the latest home of the ongoing research effort called "How Much Information," which has been at the heart of much of the clarity surrounding the digital revolution -- as well as of deliberate misinterpretation by marketing people in the storage industry. According to the most recent study, "U.S. households consumed approximately 3.6 zettabytes of information in 2008… corresponding to 100,500 words or 34 gigabytes for an average person on an average day." The entire report is downloadable from UC San Diego (http://hmi.ucsd.edu/pdf/HMI_2009_ConsumerReport_Dec9_2009.pdf). For the record, these numbers exclude data consumed by corporations. The researchers are talking about household data consumption: YouTube videos, Twitter tweets, Google look-ups, games (online and offline), digital media, and so on.
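
For readers who want to sanity-check the headline figure, here is a quick back-of-envelope calculation in Python that lands in the same neighborhood. It assumes a U.S. population of roughly 300 million in 2008 and decimal gigabytes; the study's own per-capita methodology is more nuanced.

# Rough check of the "How Much Information" figures; the population value is an assumption.
ZETTABYTE = 10**21   # bytes, decimal definition
GIGABYTE = 10**9

total_bytes = 3.6 * ZETTABYTE        # study's estimate of 2008 U.S. household consumption
us_population = 300_000_000          # rough 2008 figure (assumption)
per_person_per_day = total_bytes / us_population / 365

print(f"{per_person_per_day / GIGABYTE:.0f} GB per person per day")   # prints about 33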

With the findings thus clarified and contextualized, you may see these numbers crop up in "data growth projections" from your favorite industry analyst house or in vendor PowerPoint presentations claiming to describe the "data explosion" confronting businesses in 2010 and beyond; I give you full permission to call them on it. By any realistic estimate, actual data consumption in the corporate world is a small fraction of this number.

Still, the statistic is impressive -- if only as a harbinger of behavioral trends. It seems that we are encouraging a culture of profligate data consumption in our home lives -- and in the lives of our children -- that will likely travel with us (and them) into the workplace.

This is not necessarily a bad thing, of course. More timely and accurate data can facilitate better business decision-making. On the other hand, because most home-based data acquisition appears to have no "real world" consequences for our Internet-savvy children (the methods and infrastructure used to store the data and transport it to the consumer are "dark" or invisible to users, to use the words of the study), it also fails to cultivate any appreciation for what is actually required to store, protect, or even transport all of these bits. This, too, will likely bleed over into the business world … assuming it hasn't already.

Processing, transporting, and storing data have real associated costs, of course. These include the cost of cabling, tin-wrapped motherboards, feature cards, chips, displays, spindles and controllers, tape drives, optical drives, robots and autoloaders, power supplies, and edge and core network switches. These hardware costs are augmented by facility costs: physical data center space, electricity, UPS and power generation, heating and cooling, physical security systems, and so on.

In addition, if the hardware is doing anything purposeful, it must be protected and, in some cases, replicated for continuity purposes. That tacks on another 1.5 to 2 times the original hardware cost to the overall infrastructure investment.

Now, add to the hardware budget the cost of the software that makes the infrastructure relevant: operating system software, business application software, management software, security software, data protection software, utilities and tools, and integration middleware. Annual software licensing fees, which entitle you to patch and upgrade on a routine basis, rival the maintenance contract fees for the hardware itself.

To top it all off, you have to consider the "soft costs" -- labor, administration, planning, testing and quality assurance, and help desk. IT folks need to be paid, trained, insured, and provided the desks, chairs, phones, office supplies, and quality-of-life components enjoyed by other workers in the company.

Taken together, these costs are huge. Some analysts estimate that between 6 and 10 percent of corporate revenues go into supporting IT operations. A small percentage of firms divvy up this expense among business units in the form of chargeback systems that let department managers see what technology resources they are using and how much they cost. However, even in these challenging economic times, the majority of firms treat IT as an overhead expense, a cost of doing business, underwritten at the corporate level and used by departments typically at will and without a lot of concern for the price tag.
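
To make the arithmetic concrete, here is a minimal sketch of the kind of cost roll-up and usage-based chargeback described above. Every figure is a hypothetical placeholder rather than a benchmark; the point is only how the protection add-on, facility, software, and soft costs stack up, and how a chargeback scheme might apportion the total.

# Hypothetical annual IT cost roll-up and usage-based chargeback (illustration only).
hardware = 2_000_000            # servers, storage, switches, drives, etc.
protection_addon = 1.75         # protection/replication tacks on 1.5x-2x the hardware cost
facilities = 600_000            # floor space, power, cooling, UPS, physical security
software = 900_000              # OS, applications, management, protection, middleware licenses
soft_costs = 1_500_000          # labor, training, help desk, QA, administration

total_it_cost = hardware * (1 + protection_addon) + facilities + software + soft_costs
print(f"Total annual IT cost: ${total_it_cost:,.0f}")

# Chargeback: apportion the total to business units by their share of resource usage.
usage_share = {"retail": 0.45, "logistics": 0.30, "finance": 0.15, "hr": 0.10}
for unit, share in usage_share.items():
    print(f"  {unit:>10}: ${total_it_cost * share:,.0f}")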

In the decade just ended, we witnessed an ongoing debate over the viability of this IT strategy going forward. The 2004 book Does IT Matter? suggested that internal IT was going away in the coming decade, in large part because its expense couldn't be justified -- especially in a future where IT service providers would offer technology on a pay-per-use basis across the Web. Cloud computing folks have seized on this "insight," which echoes earlier claims by the application service providers of the 1990s and, before that, by outsourcing vendors, arguing that IT is not and should not need to be a "core competency" of, say, a department store chain, a grocer, or a bank.

Others have sought to attack the inefficiencies of technology infrastructure itself, such as the sub-par utilization of resources: server CPUs and memory and, of course, storage. Virtualization, of both servers and storage, is part of this trend.

Virtualization Sense

The server virtualization trend begun in the Aughties will likely continue into the Teens, as sweeping new initiatives within large and small firms continue to be heralded in vendor surveys. Symantec's Data Center 2010 study, released recently, stated that 82 percent of its survey respondents were planning to deploy virtualization technology this year.

Interestingly, however, a new sensibility about server virtualization may be filtering through the hype around the hypervisor. Last week, I interviewed a former senior-level VMware executive in Europe who told me that deal sizes had been falling off slightly. Instead of the $1M-plus hardware-and-software bundles seen early in the virtual server hype curve, most enterprises were making software-only purchases totaling only about a third of that amount.

The primary goal of IT planners in deploying virtualization, he said, had shifted from the lofty ("enterprise IT resource optimization") to the more pragmatic ("reducing data center power consumption and prolonging the life of existing hardware as more apps were being deployed"). That might help explain the demise several weeks ago of Verari Systems, a firm specializing in bundling high-density blade servers and storage with a hypervisor chaser -- an expensive and proprietary one-stop shop that had some impressive customers.

The ex-VMware boss noted something else that might not have been discussed were he still employed by his former firm. He said that most companies deploying hypervisors "were doing things all wrong." He pointed out that customers were deploying hypervisors on single-core, single-socket servers -- equipment they already owned. However, he noted, "dedicating one core per application produces ridiculously low densities of virtual servers -- 4:1 or 10:1. It's hardly worth the effort or the pain, especially the disruption that usually occurs in the storage infrastructure when you virtualize servers."
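
A simple consolidation-ratio calculation illustrates his point. The inputs below are assumptions chosen for the example, not figures from the interview: the number of virtual machines a physical host can carry is bounded by its core count and the oversubscription you are willing to tolerate, so reusing older single-core, single-socket boxes keeps the ratio in the single digits while a modern multi-socket, multi-core host can go much higher.

# Illustrative VM-density arithmetic; all inputs are assumptions, not data from the interview.
def consolidation_ratio(sockets, cores_per_socket, vms_per_core):
    """Virtual machines hosted per physical server under a given oversubscription policy."""
    return sockets * cores_per_socket * vms_per_core

# Reusing an older single-socket, single-core box with light oversubscription:
print(consolidation_ratio(sockets=1, cores_per_socket=1, vms_per_core=4))    # 4 VMs per host

# A two-socket, quad-core host under the same policy:
print(consolidation_ratio(sockets=2, cores_per_socket=4, vms_per_core=4))    # 32 VMs per host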

This observation is important on two levels. First, it portends that companies will not achieve the promised efficiencies of server virtualization any time in the near future. There may be incremental improvements, but little for vendors to brag about or to use to justify all of the hype. This might bring sanity back into the discussion of server virtualization as this decade unfolds.

The other issue -- that of the disruptive impact of server virtualization on server/storage relations -- is also important. The I/O "brain-deadness" of hypervisors has been discussed here in the past. Some claim that hypervisor deployment is killing network storage (whatever that means) and driving consumers back to direct-attached configurations. Other folks are claiming that it makes iSCSI-attached storage more important than ever.

The only certainty is that the I/O issues created by server virtualization are opening opportunities in the Fibre Channel fabric world for I/O monitoring and management products such as NetWisdom and the new "VirtualWisdom" offering from Virtual Instruments. That company is logging good revenue growth in an otherwise down tech market by helping companies address their inefficient use of FC storage, an inefficiency exacerbated by server virtualization.

The same can be said of Tek-Tools, whose Profiler storage resource management software is becoming a "must-have for IT admins who need a clear view of storage from the perspective of virtual machines," according to CEO Ken Barth. In a recent interview, Barth claimed that his sales curve is up and to the right and pointed directly at server virtualization for an explanation.

Interestingly, server virtualization has breathed new life into storage virtualization -- resurrecting the technology from the rubble of the late-1990s tech industry meltdown and driving it into the mainstream. Server virtualization claims to optimize CPU and memory resources in commodity servers by separating the application software (and, in some cases, the operating system) from the underlying hardware platform, then inserting a hypervisor to allocate hardware resources to applications with some intelligence. Storage virtualization takes a similar tack: it abstracts the "value-add" functionality out of storage array controllers and places it in a software layer that sits above the hardware.

The fundamental merit of this approach, which has crystallized in products from DataCore, FalconStor Software, ExaGrid, and a few others, is that it enables value-add software functions to be applied across all connected storage hardware, regardless of the brand name on each box. That helps to eliminate consumer lock-in to proprietary storage vendor rigs and to save considerably on current on-array software licensing fees.
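
To illustrate the architectural idea, rather than any particular vendor's implementation, the sketch below pools capacity from dissimilar arrays behind a software layer and applies a value-add function (here, a snapshot) uniformly, no matter which hardware backs the volume. All class and method names are invented for the example.

# Minimal sketch of the storage virtualization idea: value-add functions live in a
# software layer above heterogeneous arrays instead of in each array's controller.
# All names here are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Array:
    vendor: str
    capacity_gb: int                 # raw capacity the array contributes to the pool

@dataclass
class VirtualVolume:
    name: str
    size_gb: int
    snapshots: list = field(default_factory=list)

class VirtualizationLayer:
    def __init__(self, arrays):
        self.arrays = arrays                              # any mix of brands
        self.free_gb = sum(a.capacity_gb for a in arrays)
        self.volumes = {}

    def create_volume(self, name, size_gb):
        if size_gb > self.free_gb:
            raise ValueError("insufficient pooled capacity")
        self.free_gb -= size_gb
        self.volumes[name] = VirtualVolume(name, size_gb)
        return self.volumes[name]

    def snapshot(self, name, label):
        # Value-add function applied in the layer, regardless of which arrays back the volume.
        self.volumes[name].snapshots.append(label)

pool = VirtualizationLayer([Array("VendorA", 10_000), Array("VendorB", 24_000)])
pool.create_volume("erp-data", 8_000)
pool.snapshot("erp-data", "nightly-2010-01-15")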

In the latest developments, DataCore Software has set new industry records by creating stable virtual volumes of a petabyte or more from commodity disk. DataCore's Advanced Site Recovery, and the comparable Continuous Data Protection functionality offered in FalconStor's IPStor, are taking data protection to a new level without the cost of deploying identical name-brand equipment in both production and disaster recovery environments.

Storage virtualization makes enormous sense from a resource-efficiency perspective as well as a cost perspective. With value-add functionality established in a software layer that scales independently of the hardware, hardware vendors can focus on improving the speeds and feeds of their rigs without having to maintain a complex set of software functions that tend to break more often than the hardware does.

Will The Empire Strike Back?

Of course, not all vendors are content to have their wares reduced to the status of commodity building blocks. Seagate just announced positive quarterly results for its disk drive business -- shipping nearly 50 million drives in three months (96 million in the last six months) -- but simply adding all of those drives to undifferentiated RAID arrays doesn't translate into market leadership for the likes of EMC, HDS, IBM, et al. Storage vendors have been using on-array, value-add software both to distinguish their products from one another and to hike prices year over year, in a pattern that ignores the consistently falling cost of the drives they use.

Most recently, HP announced a joint effort with Microsoft to create pre-built server, network, and storage units optimized for hosting Microsoft applications. NetApp, Cisco Systems, and VMware then announced a similar offering. The vendors couched their announcements in "cloudspeak," saying that they were going to deliver building-block platforms for cloud services, echoing the architectural models described by analysts who watched Oracle absorb Sun Microsystems, STK, and Virtual Iron last year. For his part, Oracle CEO Larry Ellison has gone on record stating that he hates clouds, though he is interested in creating an Oracle software-plus-hardware stack.

The key point is that all of these efforts can be read as attempts by vendors of increasingly commoditized servers and storage rigs to attack the commonly cited cost problems of distributed computing not by addressing the obscene sticker prices they tack onto their gear, but by addressing the inefficient utilization of those resources.

In the short term, these products may have appeal. IT staff has been cut to the bone in many firms, and the idea of a pre-integrated hosting platform for an application, with top-down management, will certainly find takers. It may even work out to cost a bit less than maintaining a qualified IT staff, providing a hedge against the low-cost-leadership claims of some of the cloud people. However, as different all-in-one hosting platforms are rolled out, two issues will inevitably arise: efficient scaling and common management of "mini-mainframes" from different vendors. That is likely to push the need for qualified IT workers to a new level.

Welcome to 2010. Your comments, as always, are welcome: jtoigo@toigopartners.com.
