
T-Bits vs. B-Bits: What You Need to Know

Think you're getting the space your storage vendor says you're getting? Think again.

With the political season in full swing, a recent conundrum reported to me by a client reminds me of the old Nixon sound bite: “I know you think you know what you thought I said, but what you thought I said isn’t what I really meant.”

The issue is one of B-bits and T-bits. Basically, my client reported that he went to allocate a portion of his expensive disk array platform to a database whose caretaker requested more elbow room. To his dismay, he discovered that the additional space was not available.

Doing the math, he calculated that the current shares of capacity allocated to other applications and to internal array functions, such as point-in-time mirror splits, should have left him with more than enough space to give the DBA what he wanted. He suspected that he had not been provided what he had ordered from the array maker and quickly put in a call to customer support.

He was told that he was confusing B-bits with T-bits: "brochure bits" (the raw capacity of the disk drives in the array) versus "technical bits" (those actually available after disk normalization, RAIDing, array software overhead, etc.). When he started to protest, the customer service rep told him to back off: "Every vendor does it. You ought to know the difference between B-bits and T-bits—everybody else does."

The last remark effectively cowed the man, who feared that his own ignorance had led him down an erroneous path. He spoke with me about it as a casual aside, watching my face to see whether my reaction would confirm the customer service droid's remark. He seemed to heave a sigh of relief when I told him that I had never heard of such a thing.

In fact, I had a dim recollection from my mainframe days that related to the distinction. I checked with ex-IBMer and current SOFTEK CTO Nick Tabellion, who knew more than I did about mainframe direct access storage devices (DASD). He responded in two e-mails.

First, he noted that a differentiation used to be made between raw and usable capacity in DASD to reflect "the difference between the physical track capacity if a continuous string of bits is written to it and the reality of 'blocks' of data written on a track separated by the dreaded 'inter-record-gap'."

"Back in the old days—say, 1990," quipped Tabellion, "MVS actually wrote records in a contiguous stream as specified by the 'Blocksize' parameter. DFSMS had a feature called 'System Determined BS' that optimized the size for space utilization and performance, taking that responsibility away from the JCL creator." He said that he vaguely recalled that an IBM 3390 had the following specs (rounded):

1. Track size = 57,000 bytes

2. Block size = 80 would yield about 5K of data and 52K of IRGs per track (and BS=80 was in a lot of JCL)

3. DFSMS would usually block these to half-track and get utilization up to over 50K of data.
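Tabellion's rounded figures can be reproduced with a little arithmetic. The sketch below uses a flat per-block overhead of about 810 bytes, an illustrative assumption chosen to match the numbers in the text rather than the 3390's exact capacity formula:

```python
# How fixed inter-record gaps eat a track when blocks are small.
# TRACK_BYTES is the rounded 3390 track size from the text; GAP_BYTES
# is an assumed per-block overhead, not the device's real formula.

TRACK_BYTES = 57_000
GAP_BYTES = 810

def track_utilization(block_size: int) -> tuple[int, int]:
    """Return (data bytes, overhead bytes) stored on one track."""
    blocks = TRACK_BYTES // (block_size + GAP_BYTES)
    data = blocks * block_size
    return data, TRACK_BYTES - data

for bs in (80, 27_000):  # BS=80 from old JCL vs. half-track blocking
    data, waste = track_utilization(bs)
    print(f"BS={bs}: {data} bytes of data, {waste} bytes of gaps")
```

With BS=80 the track holds roughly 5K of data and 52K of gaps; blocking to half-track pushes usable data above 50K, which is exactly why System Determined Blocksize mattered.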

The distinction has become largely irrelevant in mainframe shops today, Tabellion noted, because "track size is virtualized and all blocks are written in a common format (I think 512 bytes, but this may be obsolete info and it really doesn't matter because of the virtualization)."

Apparently, the issue continued to nag at the CTO, because he followed up a few hours later with an additional reply, “My [first] answer back to you on this was mostly historical—there are some other things that are pertinent now. I have seen some companies advertise the 'raw capacity' of their devices—full track capacity, no RAID configuration, etc. When the customer actually configures an array for his purposes, the actual usable capacity can vary dramatically. RAID 1, for example, cuts it in half. So, RAID configuration and vendor implementation can cause significant differences in usable capacity.”
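Tabellion's point about configuration is easy to quantify. A minimal sketch, with illustrative overhead fractions rather than any vendor's published figures:

```python
# Why advertised (raw) and usable capacity diverge once an array is
# configured. The RAID factors are standard; the software-overhead
# fraction is an assumption for illustration.

def usable_gb(raw_gb: float, raid: str, sw_overhead: float = 0.0) -> float:
    """Usable capacity after RAID redundancy and array-software overhead."""
    raid_factor = {
        "raid0": 1.0,      # striping only, no redundancy
        "raid1": 0.5,      # mirroring cuts capacity in half
        "raid5-8": 7 / 8,  # 7 data + 1 parity drive in an 8-drive group
    }[raid]
    return raw_gb * raid_factor * (1.0 - sw_overhead)

print(usable_gb(1000, "raid1"))          # 1 TB raw mirrored -> 500 GB
print(usable_gb(1000, "raid5-8", 0.05))  # RAID 5 plus 5% software overhead
```

A terabyte of "brochure" capacity can thus shrink by anywhere from an eighth to a half before the first byte of user data lands on it.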

When I followed up with Hu Yoshida, CTO at Hitachi Data Systems, he was quick to point out that his company advertises both raw and usable capacities on its array products, so there is no customer confusion about capacity. Integrators with whom I spoke said the same thing. Completing the circle, I spoke with a friend and Vice President at Seagate Technology, who told me that adjusted capacity due to "disk normalization" is hogwash and that whatever vendor gave such an explanation to a consumer is "putting one over on him."

So, the advice I gave to my client (and that I now offer to all readers of this column) is to get in his vendor’s face about the discrepancy between advertised and available disk array space. Deceiving customers about usable disk capacity is NOT the standard practice of every vendor in the industry, and the distinction between T-bits and B-bits sounds more like BS-bits to me.

In the vendor's defense (a company I won't name, since I did not hear the conversation at issue), the customer service representative may have been incorrect, or my client's interpretation of the response may have been flawed: this particular customer was European and not a native English speaker. But I'll bet he received more double-talk than straight answers as he sought to address his storage dilemma. That seems to happen a lot these days; it makes you wonder whether we are listening to too many political speeches on CNN.

You heard it here. I said what I meant, and if you would like to share your views, drop me a line at jtoigo@intnet.net.

About the Author

Jon William Toigo is chairman of The Data Management Institute, the CEO of data management consulting and research firm Toigo Partners International, as well as a contributing editor to Enterprise Systems and its Storage Strategies columnist. Mr. Toigo is the author of 14 books, including Disaster Recovery Planning, 3rd Edition, and The Holy Grail of Network Storage Management, both from Prentice Hall.
