In-Depth

Readers Sound Off on T-Bits and B-Bits

Our column on actual vs. advertised disk space touched quite a nerve

For many readers, our recent column on “actual” versus “advertised” disk space in array purchasing hit the nail on the head. The consensus was that certain vendors seemed to be going out of their way to confuse us about how much usable capacity we are actually buying.

Many recalled, as did SOFTEK CTO Nick Tabellion, a similar issue with mainframe DASD nearly 25 years ago, but most agreed that IBM had done a yeoman’s job of clarifying the issue of usable capacity with DFSMS and DFHSM in the late 1970s. Unfortunately, like so many other aspects of mainframe computing, straightforward capacity analysis did not carry over into the distributed computing world.

Wrote one reader who works for a large financial institution, “Your article was interesting and humorous (unless you're the one caught up in the mess and then it can be frustrating). I, too, am an ex-mainframer and can relate to the many dramas surrounding storage capacities. I can remember one more issue which still appears to be popular among storage vendors: Whether to use powers of 2 (i.e., 1024) vs. powers of 10 (1000) for KILO, MEGA, GIGA, etc. Either way is fine so long as everyone agrees, but when you buy a 100 GB drive (as reported under powers of 10) and then format it only to see space reported under powers of 2 (93.1 GB), [you feel] cheated.”

Formatted capacity was also on the mind of a reader from a telecommunications company. Citing documentation from Seagate’s Web site, he noted, “Capacity is the amount of data that the drive can store, after formatting. Most disc drive companies, including Seagate, calculate disc capacity based on the assumption that 1 megabyte = 1000 kilobytes and 1 gigabyte = 1000 megabytes.”

He took the math a bit further and offered, “Based on this math, 1 GB = 1000 x 1000 x 1000 = 1,000,000,000 bytes (one can only assume they still use 8 bits in a byte!). But in the real world, 1 GB = 1024 x 1024 x 1024 = 1,073,741,824 bytes. That is a 72,013.5 KB or 70.32 MB difference!” “So,” concluded the reader, “for every GB the disk drive manufacturer tells you that you have … you lose about 70 MB!!! 73 x 70.3 = 5,131.9 MB or 5 GB!! So, a 73 GB drive only gives you about 67 GB of actual space [while] 146's ‘lose’ about 10 GB.”
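His figures hold up, give or take a rounding error. For readers who want to run the numbers themselves, the short Python sketch below (our own back-of-the-envelope illustration, not something supplied by any reader) reproduces the arithmetic: a “100 GB” drive reports as roughly 93.1 GB, each advertised gigabyte comes up about 70.3 MB short, and 73 GB and 146 GB drives surrender roughly 5 GB and 10 GB, respectively.

```python
# Decimal vs. binary capacity: reproducing the readers' figures.
DECIMAL_GB = 1000 ** 3   # how drive makers count a gigabyte (1,000,000,000 bytes)
BINARY_GB = 1024 ** 3    # how most operating systems report one (1,073,741,824 bytes)

def reported_capacity_gb(advertised_gb: float) -> float:
    """Convert an advertised (decimal) capacity to the binary GB an OS displays."""
    return advertised_gb * DECIMAL_GB / BINARY_GB

# A "100 GB" drive formats out to roughly 93.1 GB under powers of 2.
print(f"100 GB drive reports as {reported_capacity_gb(100):.1f} GB")

# Each advertised gigabyte comes up about 70.3 (binary) MB short.
shortfall_mb = (BINARY_GB - DECIMAL_GB) / 1024 ** 2
print(f"Shortfall per advertised GB: {shortfall_mb:.1f} MB")

# A 73 GB drive yields roughly 68 GB; a 146 GB drive 'loses' about 10 GB.
for size in (73, 146):
    usable = reported_capacity_gb(size)
    print(f"{size} GB drive reports as {usable:.1f} GB (about {size - usable:.1f} GB 'lost')")
```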

A representative of Hitachi Data Systems contributed a comparable observation about “the difference between a 'binary' megabyte (1024 squared) and a 'decimal' megabyte (1000 squared).”

Said the HDS rep, “As far as we can determine, all vendor salesmen and their supporting documentation refer to capacities using the decimal megabyte while all [systems administrators], and their systems, allocate storage using the binary version. Obviously, as the capacities of SANs and their included disk storage systems spiral toward the ‘yottabit’ realm, the delta between any single capacity multiplied by the two 'megabytes' becomes more and more vast. We include a description of this anomaly with each proposal to prevent post-installation heartburn on the customer's part and unnecessary anguish on ours.”
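The rep's point about the widening delta is easy to quantify. Each step up the prefix ladder compounds the 2.4 percent gap between 1,024 and 1,000, so that by the yotta scale the binary unit is nearly 21 percent larger than its decimal namesake. A quick sketch, again our own illustration:

```python
# The binary/decimal gap compounds at every prefix step: about 2.4% at kilo,
# 4.9% at mega, 7.4% at giga, 10% at tera, and nearly 21% by yotta.
prefixes = ["kilo", "mega", "giga", "tera", "peta", "exa", "zetta", "yotta"]

for power, name in enumerate(prefixes, start=1):
    gap_pct = (1024 ** power / 1000 ** power - 1) * 100
    print(f"{name:>5}: binary unit is {gap_pct:4.1f}% larger than its decimal namesake")
```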

Such disclosures, were they universal in the industry, would clearly allay some of the concern about this issue. However, in the words of another reader, “Some vendors feel the need to make things as complex as possible for us consumers.”

He compared purchases he made from two different array vendors: IBM and EMC. “When dealing with IBM,” offered the reader, “we get accurate quotes for 'usable' space with no reference to 'raw' space or any other metric. However, when working with EMC, we have to deal not only with ‘raw’ versus ‘usable,’ but now we have to deal with ‘engineering usable’ versus ‘RAID usable.’ ‘RAID usable’ [means] what is available after the storage is carved into RAID ranks. ‘Engineering usable’ [means] what is available after the ‘OS overhead.’”
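To make that vocabulary concrete, the sketch below models how the deductions might stack up. The overhead percentages, and the order in which they are applied, are our own placeholder assumptions for the sake of illustration; they are not EMC's (or anyone else's) actual figures.

```python
# A rough model of the reader's vocabulary: raw, engineering usable, RAID usable.
# The overhead percentages (and the order they are applied in) are hypothetical
# placeholders for illustration only; real figures vary by vendor and RAID scheme.

def usable_capacity(raw_gb: float,
                    os_overhead_pct: float = 5.0,      # assumed "OS overhead"
                    raid_overhead_pct: float = 25.0):  # assumed parity/mirroring loss
    """Return (engineering usable, RAID usable) capacities from a raw figure."""
    engineering_usable = raw_gb * (1 - os_overhead_pct / 100)
    raid_usable = engineering_usable * (1 - raid_overhead_pct / 100)
    return engineering_usable, raid_usable

eng, raid = usable_capacity(1000)  # a nominal 1,000 GB of raw capacity
print(f"Raw: 1000 GB | Engineering usable: {eng:.0f} GB | RAID usable: {raid:.0f} GB")
```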

Yet another end user echoed the remarks above, but disagreed that even mainframe DASD storage capacity issues had been worked out completely. He provided, by way of explanation, a detailed, four-part spreadsheet that he said he uses to keep track of DASD array capacity.

Referring to the spreadsheet, he explained, “Physically, there's X amount of storage. From that, you subtract a certain quantity for spares, etc. Then, you lose some more due to the RAID architecture. Then, some is set aside for future growth, or for emergencies. Then, some is left unused for ‘engineering level’ purposes—because we all know you can't run disk 100 percent allocated. Then, some is used for the operating system and software products. You finally get down to space that is allocated by applications, only a portion of which is really used.”

Summarizing, he noted, “What I found interesting is that, in this recent example, even with aggressive use of IBM's SMS and HSM, we are still only able to use about 30% of the space for actual application data.”
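His spreadsheet was too detailed to reproduce here, but the simplified sketch below captures its shape. Every percentage is an illustrative assumption of ours rather than a number from his worksheet; the point is simply that a chain of individually reasonable deductions can leave only about 30 percent of physical capacity for application data.

```python
# A simplified version of the reader's capacity "waterfall." Every percentage
# below is an illustrative assumption, not a figure from his spreadsheet; the
# point is that successive deductions can leave only ~30% for application data.
deductions = [
    ("Spares",                      0.05),
    ("RAID architecture",           0.25),
    ("Reserved for future growth",  0.15),
    ("Engineering-level headroom",  0.15),
    ("OS and software products",    0.10),
    ("Allocated but unused",        0.35),
]

remaining = 1.0  # start with 100% of physical capacity
for reason, fraction in deductions:
    lost = remaining * fraction
    remaining -= lost
    print(f"{reason:<28} -{lost:5.1%}  leaves {remaining:6.1%}")

print(f"Actual application data: roughly {remaining:.0%} of physical capacity")
```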

Emory Epperson, at storage management software vendor SOFTEK, offered a terse (and somewhat tongue-in-cheek) response, “Good column on BS-bits and others. Maybe we can work on future releases of Storage Manager to help IT administrators distinguish between B-bits and T-bits.”

This observation seemed to be anticipated by another reader, who added that part of the disgruntlement of the customer cited in the earlier column reflected a broader problem: a lack of skills training for storage management personnel in the distributed computing realm, and an apparent belief that vendors will solve our problems for us.

Wrote the reader, “Yes, it was hard in the ‘old’ IBM days to have efficient storage. It was the job of a learned and experienced technician to do it correctly or fix those things that those operating more expediently fouled up. Today, there seems to be the thought that knowledge and experience are no longer necessary to evaluate, purchase, configure, and manage storage devices. Even with ‘virtualization,’ someone better know what the data looks like (t-bits). My experience is that a RAID-capable device will be advertised with its JBOD capacity listed and some reference to possible RAID capacities. The technician must know the T-bit needs and lay that over the RAID configuration that he plans. My guess is that your client had stardust in his eyes when he looked at the cost per Mbyte or Gbyte or whatever and didn't consider RAID losses … oops.”

In the final analysis, we agreed with this observation. Most of those who are tasked with storage administration and management receive no formal recognition and no specialized training for doing what they do. The complexities of capacity management in an increasingly distributed architectural model are daunting enough. Getting to utilization efficiency (that is, putting more than 30% of aggregate capacity to work for actual application data) is going to require an even greater level of skill and knowledge. These capabilities can only come from a combination of formalized training and field experience, and from community-based sharing of techniques and best practices among practitioners.

You can hope that vendors will help clear up the confusion. But, from where we sit, hope is not a strategy. As always, your comments are welcome: jtoigo@intnet.net

About the Author

Jon William Toigo is chairman of The Data Management Institute, the CEO of data management consulting and research firm Toigo Partners International, as well as a contributing editor to Enterprise Systems and its Storage Strategies columnist. Mr. Toigo is the author of 14 books, including Disaster Recovery Planning, 3rd Edition, and The Holy Grail of Network Storage Management, both from Prentice Hall.
