
Storage Report Card: Tape Gets an "A"

Tape continues to deliver reliability, flexibility, and sound support of a multilayer data protection strategy.

In 2008 and early 2009, the attack on tape technology continued, with many vendors mouthing the "tape is dead" mantra that dates back to 2001. First, it was the Virtual Tape Library (VTL) folks advancing the idea that driving backups to disk not only alleviated the tape backup window problem but established a disk-to-disk-across-the-WAN paradigm that would see the demise of the backup stalwart.

Anti-tape VTL marketing was quickly echoed by the resurgent storage virtualization product vendors ("why use tape if you can simply designate a remote disk set in a virtual disk and copy data to two locations at once?") and by de-duplication advocates such as Data Domain and Sepaton. Data Domain handed out bumper stickers at storage events exclaiming "Tape sucks!"; Sepaton advanced the view that tape had no runway from its founding: its corporate moniker is "No Tapes" spelled backward.

The arguments were not without some merit, of course. In the trenches, tape was the technology that IT loved to hate, and the idea of automating the data protection process (the primary use of tape in contemporary distributed systems computing) with disk-based replication held considerable appeal. This certainly encouraged the anti-tape crowd.

Second, large enterprise tape had been a flat-line market since the late 1990s, with vendors simply replacing older solutions but not gaining any new market ground. Distributed systems tape solutions, while constantly improving, were losing market share as disk-based solutions proliferated. It didn't help that one leading analyst had published a report stating that 1 in 10 tapes fails on restore -- a bogus claim unsubstantiated by any empirical evidence but nonetheless cited in the PowerPoint decks and marketing materials of every vendor seeking to advance a disk-based solution.

Third, disk drives continued to outpace tape in terms of capacity, and a combination of de-duplication and compression enabled even more backup bits to be squeezed onto an ever-expanding disk target. These facts were repeatedly used by disk competitors to advance a cost-competitiveness argument vis-à-vis tape technology. Rarely, however, did such arguments dwell on the media/system cost advantages clearly favoring tape at about 44 cents per GB versus SATA's $44 per GB. Instead, disk advocates hammered the amorphous labor costs of tape administration.
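To see what those per-GB figures imply at scale, here is a quick back-of-the-envelope calculation in Python. The 50 TB workload is my invented example; the per-GB prices are the ones cited above, and real-world pricing will obviously vary:

```python
# Back-of-the-envelope media/system cost comparison using the per-GB
# figures cited in this article (illustrative only; real pricing varies).
TAPE_COST_PER_GB = 0.44   # dollars per GB, tape
SATA_COST_PER_GB = 44.00  # dollars per GB, SATA disk

capacity_gb = 50 * 1024   # hypothetical 50 TB backup repository

print(f"Tape: ${capacity_gb * TAPE_COST_PER_GB:,.0f}")  # Tape: $22,528
print(f"SATA: ${capacity_gb * SATA_COST_PER_GB:,.0f}")  # SATA: $2,252,800
```

At two orders of magnitude, the media-cost gap is precisely the point that the labor-cost argument was meant to obscure.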

By the end of 2009, disk vendors said, tape would be yesterday's news. The death blow would most certainly be delivered by storage cloud services, which offered the small-to-midsize consumer, at least, a much more cost-effective, online alternative to a do-it-yourself tape backup solution.

It didn't help that tape technology was treated as a red-headed stepchild at Sun Microsystems after its acquisition of STK, a leading tape vendor in the upscale enterprise. Worse yet was the impact of Oracle's announced acquisition of Sun/STK, a deal that, if and when it closed, did not bode well for tape solutions. The uncertain future of STK tape products under new ownership led many consumers to put a hold on budgeted expenditures for tape subsystems from Sun.

Is Tape Dead?

Despite all of the anti-tape rhetoric, I am still not buying it. Here's why.

To begin with, although IT loves to hate tape, the simple fact is that over 70 percent of the world's data resides on the medium. In fact, 2009 found the key issues with tape successfully addressed by an ecosystem of mostly quiet vendors who still support the technology.

Problems with backups failing to complete on time or within operational windows in the IT schedule rarely had anything to do with tape itself; rather, they stemmed from its inefficient use. To some extent, backup software vendors bore the brunt of the blame for this issue, and they have begun to fix their own mess.

In distributed environments, where backup jobs are derived from multiple backup targets (i.e., servers instrumented with backup agents), most backup software simply tallied the total volume of data to be copied to tape devices and divided that total by the nominal recording speed of the tape media to produce a job-duration estimate. This was silly on its face given the differences, sometimes profound, in the burdening of servers (hence the cycles available for backup processing), in their connections to LANs and to storage devices, and in the amount of data to be backed up. These differences accounted for much of the pain associated with estimating the actual time required to complete a backup process, and server virtualization only added to that pain.
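To make the flaw concrete, here is a minimal Python sketch contrasting the naive estimate with one that honors each target's effective throughput (the slowest link among its CPU headroom, LAN connection, and storage path). Server names, data volumes, and throughput figures are all invented for illustration:

```python
# Hypothetical backup targets: (name, data in GB, effective throughput in MB/s).
# Effective throughput is each server's slowest link: CPU headroom,
# LAN connection, or storage path. All figures are invented.
targets = [
    ("db-server",   900, 30),    # heavily burdened box on a congested segment
    ("file-server", 400, 80),
    ("web-server",  100, 110),
]

TAPE_SPEED_MBPS = 120  # nominal drive speed, roughly LTO-4 class

# Naive estimate: total volume divided by nominal tape speed.
total_gb = sum(gb for _, gb, _ in targets)
naive_hours = total_gb * 1024 / TAPE_SPEED_MBPS / 3600

# Better estimate: each target streams no faster than its own bottleneck
# (assuming targets feed a single drive sequentially).
real_hours = sum(
    gb * 1024 / min(rate, TAPE_SPEED_MBPS) for _, gb, rate in targets
) / 3600

print(f"Naive estimate:            {naive_hours:.1f} hours")  # ~3.3
print(f"Bottleneck-aware estimate: {real_hours:.1f} hours")   # ~10.2
```

The drive was never the problem; the naive divisor simply pretended the servers could feed it at full speed.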

Enter CA with ARCserve r12.5. In 2009, CA sought to rebuild market share lost to Symantec and others over the past few years by adding a genuine innovation to its product: storage resource management (SRM) functionality borrowed from its old BrightStor product. Eric Pitcher and company sought to enable the backup administrator to collect relevant information about backup targets, enabling, among other things, a better grouping of targets into sets with reliable completion times. This was the best thing to happen to backup software in a decade and was quickly emulated by Symantec and other leading backup software providers.
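The grouping idea is easy to sketch. Assuming the SRM data yields per-target duration estimates like the ones above, a scheduler can pack targets into backup sets that each fit the available window. The first-fit-decreasing heuristic below is my own illustration; CA has not published ARCserve's internals at this level of detail:

```python
# Pack backup targets into sets that each fit a backup window.
# durations: per-target hour estimates (e.g., derived from SRM data).
# First-fit-decreasing heuristic; illustrative, not CA's actual algorithm.
def group_into_sets(durations: dict[str, float],
                    window_hours: float) -> list[list[str]]:
    sets: list[tuple[float, list[str]]] = []  # (hours used, member targets)
    # Place the biggest jobs first so they don't strand capacity at the end.
    for name, hours in sorted(durations.items(), key=lambda kv: -kv[1]):
        for i, (used, members) in enumerate(sets):
            if used + hours <= window_hours:
                sets[i] = (used + hours, members + [name])
                break
        else:
            sets.append((hours, [name]))  # no existing set fits; start a new one
    return [members for _, members in sets]

estimates = {"db-server": 8.5, "mail-server": 3.0,
             "file-server": 1.4, "web-server": 0.3}
print(group_into_sets(estimates, window_hours=9.0))
# [['db-server', 'web-server'], ['mail-server', 'file-server']]
```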

Making it easier for the backup administrator to predict the resource and time requirements for successful backups was a significant improvement that will ultimately breathe new life into tape technology. It builds on gains already made in leveraging capabilities such as Microsoft's Volume Shadow Copy Service (VSS), which enables hot backups of Redmond file systems, and on other timesaving innovations such as backup to disk.

Backup to disk? This one-time heresy in the tape world has been embraced in 2009 by most leading tape-backup purveyors. In a sideways interpretation of the meaning of VTL, backup software vendors now enable backup data to stream to a disk target -- not so much to save time (streaming tape writes are actually faster than disk writes), but to facilitate other goals in data protection and data management. Writing data to an interstitial layer of disk, often termed a "disk buffer," establishes a location where additional services can be applied to the data itself.

CA, CommVault, Symantec, IBM Tivoli, and a host of others have co-opted VTL and de-duplication into their traditional backup wares to facilitate a "tiered" data protection strategy in which data is copied first to disk (providing a location for about 30 days of local data storage optimized by de-dupe), then to tape. The result is a local repository for data restoration that addresses the 90-plus percent of "disasters" requiring restoration of a single file accidentally deleted or corrupted by human or machine error, while a tape backup provides an additional guarantee for recovering from a major disaster.
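In code terms, the restore path of such a tiered scheme is simple: serve the common case from the disk buffer and fall back to tape only for older or evicted data. A minimal sketch, with invented in-memory stand-ins for the two tiers:

```python
import datetime

DISK_RETENTION_DAYS = 30  # disk buffer holds roughly a month of backups

# Invented stand-ins for the two tiers: path -> (backup date, payload).
disk_tier: dict[str, tuple[datetime.date, bytes]] = {}
tape_tier: dict[str, tuple[datetime.date, bytes]] = {}

def restore(path: str) -> bytes:
    """Serve the 90-plus percent case (a recent accidental deletion)
    from the fast local disk tier; fall back to the tape copy."""
    if path in disk_tier:
        return disk_tier[path][1]   # fast local restore, no tape mount
    if path in tape_tier:
        return tape_tier[path][1]   # slower: locate cartridge, mount, seek
    raise FileNotFoundError(path)

def evict_old(today: datetime.date) -> None:
    """Age data out of the disk buffer past the retention window;
    the tape copy remains as the long-term guarantee."""
    cutoff = today - datetime.timedelta(days=DISK_RETENTION_DAYS)
    for path, (backed_up, _) in list(disk_tier.items()):
        if backed_up < cutoff:
            del disk_tier[path]
```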

In addition, CA and others have added fairly robust replication management and continuous data protection (CDP) technologies to their own products, enabling a disk-to-disk replication solution for mission-critical, "always-on" applications in a geographically dispersed cluster. The collective impact has been to steal some of the wind from the sails of marketing efforts at EMC/Data Domain and elsewhere that conceive of each of these services as a standalone product.

Tape Automation Improves

Backup software isn't the only area of improvement. Hardware players have been quietly improving their value proposition and capabilities.

Starting with media, Linear Tape Open (LTO) has established its tape media format as a de facto standard in the distributed world. Despite occasional overstatements about the technology from proponents such as HP (which earlier in the year claimed, incorrectly, full compliance for LTO with Federal Information Processing Standards on media security), and despite continuing differences in media insertion and other compatibility issues between manufacturers of LTO drives, the square cartridge has achieved ubiquity.

Although tape formats at the extreme end of the spectrum (from IBM and Sun/STK) boast greater capacity, the latest LTO has a lot going for it in terms of seek speeds and overall performance. The bigger advantage is that a standardized format means that tape is, well, tape: there are no proprietary formats of the sort that often force disk customers to spend a boatload of money on a commodity piece of media. That's a real asset.

When generic media is used with innovative automation, such as Spectra Logic's just-announced T-Finity library, things get very interesting. The new library, according to the vendor, "offers multiple, redundant robots, scaling to more than 45 petabytes in a single library and to more than 180 petabytes in a single, unified library complex. [It] is targeted for use in data-intensive environments such as large enterprise IT, federal, high-performance computing (HPC) and media/entertainment."

A couple of columns back, I surveyed users in media and entertainment about their selection of Spectra Logic products as storage for their massive data sets and received a remarkable amount of positive feedback. According to company spokespersons, Spectra is seeing year-over-year growth in vertical markets where density is highly prized and dependability and integrity are "must-haves" in data storage.

The latest libraries are innovative -- arguably much more so than disk products -- and address head-on the C-4 issues: cost containment, compliance, continuity, and carbon footprint reduction. T-Finity offers the features that all vendors aspire to provide: high density in a small footprint (72 terabytes per square foot); scalability (30,000 slots in a single library and 120,000 slots in a single library complex); availability (99.99 percent uptime through dual robotics, redundant control and communication paths, and multiple redundant components); power efficiency (no more than half the power per unit of data stored of competing offerings); and data integrity via unified media lifecycle management and integrated data encryption and key management.
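Those headline numbers are easy to put into concrete terms. The arithmetic below is my own back-of-the-envelope check, not vendor-published math:

```python
# My back-of-the-envelope translation of the vendor's headline claims.

# 99.99 percent uptime -> allowable downtime per year:
downtime_minutes = (1 - 0.9999) * 365 * 24 * 60
print(f"{downtime_minutes:.0f} minutes of downtime per year")  # ~53

# 45 PB in 30,000 slots -> implied capacity per cartridge slot:
tb_per_slot = 45_000 / 30_000
print(f"{tb_per_slot:.1f} TB per slot")  # 1.5 TB, in line with the
# compressed capacities of LTO-4-class cartridges
```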

In addition to the one-stop shop of Spectra Logic libraries, users can also secure new features on older gear from companies such as Crossroads Systems. The company's recently revamped Read/Verify Appliance (RVA) provides background verification of media condition and data integrity on tapes in libraries you already own, as well as migration services to transfer tape contents to newer media.

Bottom Line

Tape gets an "A" in this report card. It continues to deliver reliability, flexibility, and sound support of a multilayer data protection strategy. Don't get me started on its effective use as an archive medium. Today, it can be safely said that rumors of the death of tape have been greatly exaggerated.

Your opinions are welcome: jtoigo@toigopartners.com.
