In-Depth

Hybrid Drives: Good Architecture or All Flash?

Is Microsoft's Vista responsible for a push to hybrid drives?

As if another architectural question in storage were needed, the industry has set up a new group, the Hybrid Storage Alliance, to develop flash-memory-assisted hard disk drives. We reported on this development two columns ago and promised to revisit the topic.

The time has come. The topic of the day is what role, if any, hybrid drives play in enterprise computing.

Let’s begin at the beginning. What is a hybrid drive? Based on interviews I have conducted with disk drive mavens, hybrid refers to the addition of flash memory to hard disks to realize several benefits, including (in the words of the January 4 press release from the industry group) “longer battery life, faster response, [and] greater system durability.” In addition, the release suggests that this technology is aimed rather narrowly at disk drives installed in notebooks; more specifically, at notebooks loaded with Microsoft’s new Vista operating system.

The reasons suggested for the necessity of hybrids are seemingly straightforward. If the boot image is stored in flash memory, you can boot a laptop (or resume operation from a hibernating state) more quickly than if it needs to be loaded from a disk drive. I’ll buy that. Everyone I know (except for Mac users) dreads the initial boot-up on their laptop: it’s like watching paint dry. Presumably, the problem is exacerbated under Vista, which should come out close to the day this column is published. So, expedited boot-up is a plus.

The second rationale for hybrids is a bit of marketspeak if you ask me. Officially, the group says that hybrid drives will save laptop battery power. In their words, “Hybrid drives curtail platter spin time, which reduces power draw. This, in turn, extends battery life, especially important in notebook PCs and other mobile applications.” This is a rather nice way of saying that Microsoft Vista has the laptop-unfriendly tendency to write temporary files semi-frequently to disk while a system is active. Writing data, even small log files, to a typical disk drive consumes precious battery power, driving you crazy when you find yourself with no power on a long flight across the country or overseas.

Hybrid drives offer lower-power flash memory chips to serve as the target for Vista’s bit wash so that it doesn’t need to land on a spinning platter. That saves your precious battery power, just in case you are ever lost in a forest and need your battery to light a fire later. It strikes me as interesting that the whole industry is twisting itself into contortions to address an issue that no one seems to want to point out to Redmond. Perhaps they believe they would have as much success changing the way Vista writes files as they would, say, cajoling airlines into building power into their seats and adding space between rows so you could effectively use a laptop in an airplane anywhere but in an exit row or in first class.
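To make the power argument concrete, here is a minimal Python sketch of the idea, with invented names and numbers: small writes land in a low-power flash buffer, and the platters spin up only when the buffer fills and must be destaged. Real hybrid-drive firmware is far more elaborate; this only illustrates why batching writes in flash saves spin time.

    class HybridWriteBuffer:
        """Hypothetical model: absorb small writes in flash and spin up
        the platters only when the flash buffer must be emptied."""

        def __init__(self, flash_capacity_bytes=256 * 1024 * 1024):
            self.flash_capacity = flash_capacity_bytes
            self.flash_used = 0
            self.pending = []      # (offset, data) writes parked in flash
            self.spin_ups = 0      # each spin-up is a burst of battery drain

        def write(self, offset, data):
            if self.flash_used + len(data) > self.flash_capacity:
                self.flush_to_platter()
            self.pending.append((offset, data))
            self.flash_used += len(data)

        def flush_to_platter(self):
            self.spin_ups += 1     # one platter spin-up services many writes
            self.pending.clear()   # model the destage to rotating media
            self.flash_used = 0

Under this model, thousands of small temporary-file writes collapse into a handful of spin-ups, which is the entire battery-life pitch in miniature.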

The last two stated benefits are “gimmes”: (1) less time accessing the hard disk increases its life, and (2) using non-volatile RAM (NV-RAM) to store data gives you greater protection against data loss when you drop your laptop on the ground or the TSA bangs it up on the x-ray belt at the airport. Assuming that is true, it raises the question of why memory needs to be added right on the drive. Why not use a thumb drive or put an extra stick of RAM on the motherboard and configure it to perform this task? That question doesn’t seem to have been answered anywhere.

Hybrid Options for the Enterprise

Bottom line: hybrid-drive development seems fairly focused (for now, at least) on the laptop drive. However, outside the realm of the disk makers, discussions have already begun to surface in blogs and press interviews regarding the promise of the technology in an enterprise storage setting. To some degree, the debate is shaping up along the lines suggested above: disk caching is a good idea, but where is the right place to implement it? This is the architectural decision point I alluded to at the outset.

Option 1: Cache on the disk itself using a layer of NV-RAM. As hybrid drives enter the market, one alternative will be to cache directly on the disk drive. This strategy follows a pattern we have seen before with respect to data encryption. Some drive makers have begun to offer “secure drives” that encrypt data as it crosses the drive interface electronics and is written to the platter. It remains to be seen whether self-encrypting drives will remain a novelty or go mainstream, but the trend reflects the desire of disk makers to add value to commodity wares as a means of differentiation. Hybrid drives, from the enterprise perspective, may fall into the same category.

One potential plus of this strategy might be the on-drive caching of metadata: data about the data. Imagine a multi-terabyte disk drive: finding specific files among the morass of bits stored on the platter might become a problem over time, affecting both read and write operations. Placing a bigger memory cache on the drive that contains, say, a readily accessible drive map or index, and combining this function with the caching of the last files accessed from the drive, might help surmount the latency problems that could develop.
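A sketch of what that on-drive cache might look like, in Python, with hypothetical names throughout: an index that resolves file names to platter locations without a mechanical seek, plus a least-recently-used (LRU) store of recently read blocks.

    from collections import OrderedDict

    class DriveCache:
        """Hypothetical on-drive cache: a name-to-location index plus an
        LRU-evicted store of the most recently read blocks."""

        def __init__(self, max_blocks=1024):
            self.index = {}              # file name -> platter address
            self.blocks = OrderedDict()  # block id -> cached data
            self.max_blocks = max_blocks

        def locate(self, filename):
            # Resolve a name from the cached index; no platter seek needed.
            return self.index.get(filename)

        def read_block(self, block_id, read_from_platter):
            if block_id in self.blocks:
                self.blocks.move_to_end(block_id)  # hit: served from memory
                return self.blocks[block_id]
            data = read_from_platter(block_id)     # miss: pay the seek cost
            self.blocks[block_id] = data
            if len(self.blocks) > self.max_blocks:
                self.blocks.popitem(last=False)    # evict least recently used
            return data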

Solid state disk (SSD) takes this idea to the extreme, replacing the spinning components of the disk altogether with a virtual disk living on a collection of chips. To date, the SSD market has yet to evolve much beyond niche opportunities for I/O acceleration behind high-performance applications.

Option 2: Cache on the array (and sometimes on the server, too). This technique is probably the mainstream approach in enterprise storage today. On-array caching is used for a variety of purposes and contributes dramatically to the price of enterprise array products.

Some vendors use NV-RAM caching to spoof servers and applications that are writing data to the array. They place a cache buffer on the array to receive incoming data and to acknowledge receipt so that the sending application or server can go about its business without being delayed by the time required to actually write bits onto disk or to stripe them across multiple disks. Since spoofing sounds bad, vendors used to talk about their “elegant caching scheme for surmounting the write penalty imposed by RAID.”
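The mechanism is easy to sketch. Here is a minimal, purely illustrative Python model of write-back caching: the host gets its acknowledgment the moment data lands in NV-RAM, and a background worker destages to the RAID set later. Names and structure are invented for illustration.

    import queue
    import threading

    class ArrayWriteCache:
        """Hypothetical write-back cache: acknowledge the host as soon as
        data lands in NV-RAM; destage to striped disk in the background."""

        def __init__(self, destage_to_disk):
            self.nvram = queue.Queue()  # stands in for battery-backed cache
            self.destage_to_disk = destage_to_disk
            threading.Thread(target=self._destage_loop, daemon=True).start()

        def write(self, lun, offset, data):
            self.nvram.put((lun, offset, data))  # lands in cache instantly
            return "ACK"                         # host resumes immediately

        def _destage_loop(self):
            while True:
                lun, offset, data = self.nvram.get()
                # The RAID write penalty (parity reads and writes) is paid
                # here, out of the host's critical path.
                self.destage_to_disk(lun, offset, data)

The “spoof” is the early ACK: the application believes its data is on disk when it is really sitting in battery-backed memory awaiting destage.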

Conversely, there is a burgeoning need, in the view of some vendors, to expedite disk reads through caching. Typically, such a need arises where storage hosts primarily reference data that is read frequently but seldom updated. Some vendors, such as Gear6, see significant growth in this requirement. According to Gary Orenstein, vice president of marketing for Gear6, “There has been a shift toward an increasing percentage of reads over writes with the advent of the Internet. Existing storage doesn’t do a lot of simultaneous reads well, so there is a need for new methods of read caching.”

Option 3: Create a network-based “global” cache. What Orenstein is implying is the value proposition of his company’s forthcoming products. Gear6 wants to provide read caching as a function of the network to which storage is attached. He says that his caching appliance and software front-end the storage arrays (what he terms the “persistent storage layer”) and eliminate the work that storage architects must do to tune the caching components on their servers and arrays.

“Memory capacity has always represented a constraint on storage efficiency. We seek to make cache tuning evaporate … as a task for the storage architect,” Orenstein says. Moreover, he sees his approach to “centralizing caching in the network” as a way to offload burden from caches already in place on disk arrays: “If you think about it, by removing the need to manage the caching of read requests on arrays, we can provide passive write acceleration—enabling on-array caches to work on write requests, increasing throughput and reducing latency.”
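Gear6 has not published implementation details, so take the following as a generic read-through cache sketch in Python, not a description of its product: hot reads are served from an in-network memory pool, misses and all writes pass through to the arrays, and writes invalidate any stale cached copy.

    class NetworkReadCache:
        """Generic in-network read cache (illustrative; not Gear6's design):
        serve hot reads from memory, pass everything else to the arrays."""

        def __init__(self, backing_array):
            self.backing_array = backing_array  # the "persistent storage layer"
            self.memory = {}                    # path -> cached object

        def read(self, path):
            if path in self.memory:
                return self.memory[path]          # hit: array never sees it
            data = self.backing_array.read(path)  # miss: fetch and remember
            self.memory[path] = data
            return data

        def write(self, path, data):
            self.memory.pop(path, None)           # invalidate the stale copy
            self.backing_array.write(path, data)  # write-through to the array

Absorbing reads in the network is what frees the on-array cache to spend itself on writes; that is the “passive write acceleration” Orenstein describes.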

He says that on-drive caching is fine for notebooks, which he terms a “one drive world,” “but an enterprise cache should be scalable independent[ly] of heads (array controllers) and independent[ly] of individual disks.”

Gear6’s memory-cache appliance is in beta/early adopter shipments today, and Orenstein says that general availability will be announced in the next several months. When the product is available, he has promised me one for testing and we will report what we find here.

From where I’m sitting, some sort of in-network caching scheme makes a lot of sense. For one thing, it is a further deconstruction of monolithic storage that bodes well for creative and cost-efficient storage building. That said, it remains to be seen whether a centralized appliance model is more sensible than, say, a federated architecture that leverages cache memory wherever it happens to be available.

If hybrid disks do become the flavor of the month, perhaps the industry could cooperate on a scheme for federating a bunch of drives together so that their collective cache could be shared efficiently via a network service; a sketch of the idea follows. This scenario is far more likely than the alternative of getting array manufacturers to agree on a scheme for sharing the cache on their proprietary controllers.
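No such federation scheme exists today, so this is pure speculation rendered in Python: hash each block to one member drive’s cache so that the drives’ collective flash behaves like a single shared pool. Everything here is hypothetical.

    import hashlib

    class FederatedCache:
        """Speculative federation: route each block to one member drive's
        cache so the collective flash acts as a single shared pool."""

        def __init__(self, drive_caches):
            self.drives = drive_caches  # per-drive cache objects on the network

        def _owner(self, block_id):
            # Hash the block id to pick a consistent owning drive.
            digest = hashlib.md5(str(block_id).encode()).digest()
            return self.drives[digest[0] % len(self.drives)]

        def get(self, block_id):
            return self._owner(block_id).get(block_id)

        def put(self, block_id, data):
            self._owner(block_id).put(block_id, data)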

Much of this is speculation, of course. The only certainty is that, going forward, deciding what to do about caching will become a choice among a growing number of options, not a simpler one. Stay tuned for developments, and send me your thoughts at [email protected].
