
Enterprise Storage: Hitting the Wall at 150 Gb/in2

For the first time in recent memory, all of the leading manufacturers in the highly competitive field of magnetic disk drives agree on something. Unfortunately, it isn’t good news. With current technology and materials, we will soon witness the end of 120 percent per year improvements in disk drive areal density. Areal density refers to the number of bits that can be stored on a square inch of disk media. It appears we will hit the wall at 150 gigabits per square inch (150 Gb/in2). At current disk density growth rates, that will happen within the next three to five years.
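The arithmetic behind that three-to-five-year window is simple compound growth. The sketch below is mine, not the vendors’: the starting density is an assumed, illustrative figure, and the answer shifts with whatever starting point you plug in.

```python
# Rough projection of when areal density reaches the superparamagnetic limit.
# The starting density is an assumed, illustrative value; only the growth
# rate and the 150 Gb/in2 wall come from the vendors' statements.
import math

start_density = 15.0      # Gb/in^2 -- assumption, not a vendor figure
growth_multiplier = 2.2   # 120 percent improvement per year (x2.2 annually)
limit = 150.0             # Gb/in^2, the superparamagnetic wall

# Years n such that start_density * growth_multiplier**n reaches the limit.
years = math.log(limit / start_density) / math.log(growth_multiplier)
print(f"Roughly {years:.1f} years to reach {limit:.0f} Gb/in2")
```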

Individual vendors have made claims about density limits in the past, of course, only to have their clocks cleaned by a competitor’s announcement that a tweak to read/write heads, actuator positioning circuits, or platter coatings had brought about an areal density breakthrough. The fact is, for a while, many vendors were reluctant to make any statements regarding density limits at all.

But, this time, all of the vendors are singing the same tune.

The Impact of Superparamagnetism

With current drive technology, at around 150 Gb/in2, the magnetic energy holding bits in their recorded state will become equal to the ambient thermal energy within the disk drive itself. It’s called superparamagnetism, and it will cause "bit flipping."
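The mechanism is worth a line of explanation. A recorded bit stays put only as long as the magnetic anisotropy energy of each media grain (the anisotropy constant Ku times the grain volume V) is many times the thermal energy kBT; a ratio of roughly 40 to 60 is the commonly cited floor for ten-year retention. The numbers in the sketch below are textbook-scale illustrations of my own, not vendor figures, but they show how quickly the margin collapses as grains shrink to pack in more bits; once that margin is gone, bits start flipping on their own.

```python
# Thermal stability ratio KuV / kBT for a single media grain.
# A ratio of roughly 40-60 is the commonly cited threshold for keeping a bit
# stable for ~10 years; below that, thermal energy flips bits spontaneously.
# All numeric values here are illustrative, not from any vendor.
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
T = 350.0            # assumed drive operating temperature, K
K_U = 2.0e5          # anisotropy energy density, J/m^3 (typical magnitude)

def stability_ratio(grain_diameter_nm: float, grain_height_nm: float) -> float:
    """KuV / kBT for a cylindrical grain of the given dimensions."""
    radius = grain_diameter_nm * 1e-9 / 2
    volume = math.pi * radius**2 * (grain_height_nm * 1e-9)   # m^3
    return (K_U * volume) / (K_B * T)

print(stability_ratio(10, 15))   # ~49: comfortably stable
print(stability_ratio(6, 10))    # ~12: thermally unstable -- bits flip
```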

Bit flipping is bad. It means that recorded data is no longer held in a reliable state. Bit flipping can do nasty things like adding extra decimal places to the "amount due" line of your electronic tax returns, changing the shipping address to mom’s house for that lingerie you ordered your girlfriend on the Web, or re-tasking a weather satellite so that it is placed on a collision course with that top secret, space-based, nuke platform that nobody is supposed to know about. In short, bit flipping can spoil your whole day.

Many take consolation in the caveat offered by the vendors, "with current technology and materials." The implication is that there are a number of other technologies or materials breakthroughs in the offing that will keep pushing disk drive capacities higher. And, there are.

Unfortunately, new drive designs and read/write techniques -- including thermally-assisted drives, perpendicular recording, and an assortment of near- and far-field technologies -- are about a decade off. A lot of research shows promise and some technologies have been demonstrated under laboratory conditions, but most insiders agree that it will take many years to bring products to market based on these new designs.

The bottom line is that the vendors are telling us to batten down the hatches, and prepare ourselves for the end of the dynamic that has fueled much of the development of IT infrastructure for the past three decades -- increasing disk storage capacity and decreasing price. The big question is: What will happen once the trend lines flatten out?

One thing that will happen is that traditional approaches for scaling storage capacity -- adding more disk drives to servers, or filling the trays of a disk array cabinet -- will no longer be as cost effective as they may have been in the past. Instead, storage consumers will need to turn to externalized storage architectures to scale their capacity.

Networked storage, whether in the form of storage area network (SAN) or network-attached storage (NAS) topologies, will likely receive a boost of interest -- possibly pressing adoption of these technologies well beyond the 66 percent to 67 percent annual growth rate anticipated by IDC. IDC already foresees a -3 (that’s negative three) percent growth rate in internal, server-captive storage sales annually through 2003. This could mean that companies will look to virtual disk for a comparable cost-efficiency benefit.

Disk Virtualization

At a recent conference in Santa Barbara, Calif., sponsored by Credit Suisse First Boston (CSFB), one of the hot topics was storage virtualization. Virtualization of disk has been around for a while. Large-scale storage array manufacturers use virtualization to aggregate multiple physical hard disks into virtual volumes that can be presented as logical disks to the operating systems of attached servers. The operating system thinks it is looking at a single, very large, hard disk drive. In fact, it is seeing an aggregation of many physical drives whose read/write accesses are managed by a sophisticated array controller.
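The mechanics are easier to see in miniature. The sketch below is a toy model of my own devising, not any vendor’s controller code: it concatenates several physical disks into one logical block address space, which is all the attached host ever sees.

```python
# Toy model of disk virtualization: several physical disks presented to the
# host as one logical block address (LBA) space. Names and structure are
# illustrative only.

class PhysicalDisk:
    def __init__(self, name: str, blocks: int):
        self.name = name
        self.blocks = blocks

class VirtualVolume:
    """Concatenates physical disks into a single logical disk."""
    def __init__(self, disks):
        self.disks = list(disks)

    @property
    def capacity(self) -> int:
        return sum(d.blocks for d in self.disks)

    def map_lba(self, lba: int):
        """Translate a logical block address to (disk, physical block)."""
        if not 0 <= lba < self.capacity:
            raise ValueError("LBA out of range")
        for disk in self.disks:
            if lba < disk.blocks:
                return disk.name, lba
            lba -= disk.blocks

vol = VirtualVolume([PhysicalDisk("disk0", 1000), PhysicalDisk("disk1", 1000)])
print(vol.capacity)        # 2000 blocks -- the OS sees one big drive
print(vol.map_lba(1500))   # ('disk1', 500) -- the controller's secret
```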

SANs are supposed to be the ultimate expression of disk virtualization. When SANs are fully baked as a technology, they are expected to provide volumes that can grow to meet application and end user needs. Simply add more physical disk devices to the pool of SAN-attached storage from which virtual volumes are made and, voila, you have a bigger virtual volume.
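Continuing the toy model above (same caveat: an illustration of the idea, not a real SAN operating system), growing the volume amounts to nothing more than dropping another disk into the pool:

```python
# Growing the virtual volume: add a physical disk to the pool and the
# logical disk the host sees simply gets larger.
vol.disks.append(PhysicalDisk("disk2", 1000))
print(vol.capacity)        # 3000 blocks
print(vol.map_lba(2500))   # ('disk2', 500)
```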

Storage technology consumers will be looking to realize the same value from virtual drives that they realized from physical disk drives prior to hitting the superparamagnetic barrier. They will want virtual drives that can scale in terms of capacity by 100 percent or more annually, and that will fall in price by 50 percent or more annually. The first demand may be realized fairly readily. The second is much more problematic.
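Taken together, those two expectations are brutal: capacity doubling while price halves means cost per gigabyte must fall by 75 percent a year. A quick projection makes the point (the year-one figures below are placeholders of my own, not market data):

```python
# What 100% annual capacity growth plus 50% annual price decline implies for
# cost per gigabyte. Year-one figures are placeholders for illustration.
capacity_gb = 100.0      # year-one virtual volume size (illustrative)
price = 10_000.0         # year-one price in dollars (illustrative)

for year in range(1, 5):
    print(f"year {year}: {capacity_gb:>7.0f} GB at ${price:>8,.0f} "
          f"-> ${price / capacity_gb:.2f}/GB")
    capacity_gb *= 2.0   # capacity doubles each year
    price *= 0.5         # price halves each year
```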

To realize declining cost in virtualized disk, the first thing that will have to go is the expensive array controller. The controllers on high-end storage array platforms from EMC Corporation, Hitachi Data Systems, Sun Microsystems, IBM and others are a major contributor to the costs of these products. They are, in fact, the "value add" that the vendors provide on products that might otherwise be classified as trays of commodity disk drives. For example, without its complex and proprietary controller and software, the $450,000 EMC Symmetrix array would be Just a Bunch of Disks (JBOD) costing no more than $50,000.

But, to realize the increasing capacity/decreasing cost value proposition that companies will be seeking from networked storage, the complex array controller will have to go. Open SANs are actually made more costly and technically difficult to deploy by the presence of a diversity of proprietary storage arrays with proprietary controllers. Thus, those working on SAN operating systems are really endeavoring to "virtualize" the array controller.

When this is done, the cost for scaling capacity will be limited to the cost of acquiring the necessary disk drives and implementing them in the SAN storage pool. Not exactly a 50 percent annual reduction in drive cost, but a much more manageable cost than adding an additional proprietary storage array whenever storage capacity growth is required.
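Run the numbers with the Symmetrix figures cited above and the gap is plain. The per-cabinet capacity below is an assumption I am supplying for illustration; the price points are the ones quoted earlier.

```python
# Cost of adding capacity: commodity JBOD in a SAN pool vs. a proprietary
# array. The $450,000 and $50,000 figures are cited in the text; the usable
# capacity per cabinet is an assumed, illustrative value.
capacity_gb = 3_000          # assumed usable capacity per cabinet, GB
proprietary_array = 450_000  # dollars
commodity_jbod = 50_000      # dollars

print(f"proprietary array: ${proprietary_array / capacity_gb:,.0f}/GB")
print(f"SAN-pooled JBOD:   ${commodity_jbod / capacity_gb:,.0f}/GB")
# Same drives, roughly a 9x difference in cost per gigabyte -- the delta is
# the array controller and its software.
```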

Depending on whom you talk to at EMC, or at the other large-scale array vendor shops, there is a lot of interest in spinning off controller software into its own business unit, because the bottom will eventually fall out of the hardware side of the storage array business. As A. J. Casamento, a "solutioneer" with Brocade Communications Systems, once remarked to me, once an open SAN appears on the market, Joe’s JBODs will perform just as well as an EMC Symmetrix array costing much, much more. "People who buy EMC will realize that they have been paying way too much for storage."

What Casamento and others are suggesting is that a virtualized storage controller architecture, as an integral part of a SAN operating system, will eliminate the need for expensive, proprietary, array-specific controllers. The result will be that large-scale array manufacturers, if they are not eliminated, will be relegated to a niche -- probably outside of the storage network.

The alternative for large-scale array makers is to create their own "proprietary SANs" that leverage their own hardware-based controllers, and that will not work readily with the products of other storage vendors.

Jon William Toigo is an independent consultant and author of The Holy Grail of Storage Management. He can be reached via e-mail at jtoigo@intnet.net.

