Uncovering the Dark Secrets of Dark Storage
Dark storage is disk space that is unmapped, unclaimed, or unassigned.
A new term has been added to the lexicon of storage technology: dark storage. It was coined by MonoSphere in its recent announcement of Release 3.7 of the company’s Storage Horizons software product. Dark storage is real and may represent 15 to 40 percent of your current storage capacity: disk space that is unmapped, unclaimed, or unassigned.
How does dark storage happen? Simple. It's the disconnect between the way storage is seen and handled by application and server administrators on the one hand and storage administrators on the other.
According to MonoSphere, when storage administrators turn over raw storage capacity to server and application administrators (in the form of LUNs), file systems are not always mapped to every LUN. Unmapped LUNs go undetected by traditional capacity management tools, which may consist of nothing more than multi-page spreadsheets built by storage administrators who find fault with the hardware vendor's on-array capacity management tools. The storage admins know that capacity has been shared out, while the server or application administrators' file system management tools report high file system utilization -- when, in fact, the underlying storage infrastructure is dramatically underutilized.
Spotting this discrepancy is a technically demanding undertaking, requiring an astute comparison of raw allocated capacity and used allocated capacity, followed by an investigation of any mismatches found. The effort can be made even more challenging, MonoSphere warned last year, by technologies such as thin provisioning on disk arrays.
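The core comparison is simple in principle: tally the LUNs the array says it has allocated, subtract the ones that server admins have actually mapped to file systems, and whatever remains is dark. A minimal sketch in Python (the LUN names and sizes here are hypothetical, not drawn from any real product output):

```python
# Hypothetical inventory: LUN sizes in GB as reported by the array,
# and the subset of LUNs actually mapped to file systems by server admins.
allocated_luns = {"lun01": 500, "lun02": 750, "lun03": 1000, "lun04": 250}
mapped_luns = {"lun01": 500, "lun03": 1000}

def dark_storage_gb(allocated, mapped):
    """Capacity allocated by storage admins but never mapped to a file system."""
    return sum(size for lun, size in allocated.items() if lun not in mapped)

total = sum(allocated_luns.values())
dark = dark_storage_gb(allocated_luns, mapped_luns)
print(f"{dark} GB dark ({dark / total:.0%} of {total} GB allocated)")
```

In this toy inventory, two of four LUNs were handed out but never overlaid with a file system, so 40 percent of the allocated capacity is dark -- right at the top of the 15-to-40-percent range cited above. The hard part in practice is gathering the two inventories, not the arithmetic.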
Thin provisioning attempts to improve capacity-allocation efficiency by enabling storage that is reserved to an application (but not yet used) to be reallocated "behind the scenes" by the on-array thin provisioning engine. Thin provisioning techniques are, collectively speaking, a high-tech capacity shell game that falls apart if an application ever makes a "margin call" for the storage it thinks it owns. If that storage has been thinly provisioned elsewhere and there is no more capacity to be had, the result of a margin call can be catastrophic: application failure, or, in the worst case, server failure.
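The "margin call" failure mode can be illustrated with a toy model of an overcommitted pool (all figures are hypothetical, for illustration only):

```python
# Hypothetical thin-provisioned pool: 10 TB of physical capacity backing
# three applications that each believe they own 5 TB (a 150% overcommit).
physical_tb = 10
reserved_tb = {"app_a": 5, "app_b": 5, "app_c": 5}
used_tb = {"app_a": 3, "app_b": 3, "app_c": 3}

def margin_call(app, demand_tb):
    """Can the pool honor an app writing into its reserved-but-unused space?"""
    free = physical_tb - sum(used_tb.values())        # physical space left
    headroom = reserved_tb[app] - used_tb[app]        # space the app believes it has
    if demand_tb > headroom:
        raise ValueError(f"{app} exceeds its own reservation")
    return demand_tb <= free  # False -> the "margin call" fails

# app_a thinks it has 2 TB of headroom, but the pool has only 1 TB free.
print(margin_call("app_a", 2))
```

The write is entirely legitimate from the application's point of view -- it is within its reservation -- yet the pool cannot honor it. That is the moment thin provisioning's shell game collapses.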
Server virtualization only makes spotting and correcting dark storage more difficult, though the press releases last week about MonoSphere's interoperability with VMware were too politically correct to say so directly. Virtualization itself adds an abstraction layer to resources such as storage and further obscures the relationships between array LUNs, the ESX server, VMware file systems (VMFS), VMware virtual disks (VMDK), guest OSs, and guest OS file systems/raw devices.
Capacity management problems have always existed in storage infrastructure, beginning with deliberate vendor obfuscation of raw capacity. Some vendors hold a percentage of capacity "in reserve" for their own software that the customer has either purchased with the array or that the vendor hopes to sell the customer in the future. When you hear reference to "T" bits (the technical formatted capacity of an array) and "B" bits (how much of the formatted capacity that the vendor "lets you use"), you are already dealing with putative dark storage. The problem increases with technology "value-adds" such as thin provisioning and perhaps even de-duplication and compression, which unpredictably change capacity usage forecasting models.
In its limited definition of the term "dark storage," MonoSphere focuses only on capacity mismanagement at the file system level. The company claims to have the solution: an easy-to-interpret reporting facility and dashboard that displays what storage is being used and what subset of that capacity is actually committed to active file systems, databases, and so forth. The product doesn't work on all storage arrays. That's predictable, given the close control many vendors wish to exert over what the customer can readily see about capacity allocation, but it's a start. Provided the customer's gear is on the MonoSphere support list, the product can deliver real value.
In the demonstration I attended, discrepancies in allocated capacity and file system overlays that amounted to significant capacity waste were quickly spotted. Remarkably, this works well in virtualized server environments, too. Spotting and correcting the dark storage problem can mean deferring additional CAPEX investments and management burden for new storage deployments -- a value proposition on its own in this economy.
MonoSphere also enables users to include details about storage platform costs, which it uses to establish a cost per GB. Done diligently, this costing data can be a rich trove of information, usable not only to see the dollar value of wasted dark storage and the return on investment from using MonoSphere, but also to set the stage for improved data management. MonoSphere positions the feature as a "chargeback system" enabler. In fact, it can provide ready insight into the cost of hosting data on one platform versus another -- extremely important in efforts to correctly size storage and construct "purpose-built" storage infrastructure going forward.
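The arithmetic behind putting a dollar value on dark storage is straightforward once per-platform costs are entered. A sketch, with invented platform names, costs, and dark-capacity figures:

```python
# Hypothetical per-platform costs ($/GB) and dark-capacity figures (GB),
# combined to show the dollar value tied up in unmapped, unused space.
cost_per_gb = {"tier1_array": 4.50, "tier2_array": 1.75}
dark_gb = {"tier1_array": 2000, "tier2_array": 8000}

wasted = {p: dark_gb[p] * cost_per_gb[p] for p in cost_per_gb}
for platform, dollars in sorted(wasted.items()):
    print(f"{platform}: ${dollars:,.0f} tied up in dark storage")
```

The same per-GB figures also answer the platform-comparison question raised above: hosting a given data set on the pricier tier versus the cheaper one is just a multiplication away, which is what makes the costing feature useful well beyond chargeback.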
To be truly effective, MonoSphere's product needs a graphical mapping facility that will appeal to the visually oriented user, whether a technical person or a business manager. The company says this has been suggested by many of its customers and will likely find its way into a future release. Even without this feature, MonoSphere's Storage Horizons is worth a look.
Your opinions are welcomed, especially if you have used this product. Send them to me at firstname.lastname@example.org.