What's Brewing In Storage?
Fibre Channel-based Storage Area Networks (SANs) are the newest fad among storage vendors. Early Fibre Channel adopters argue that networks built on Fast Ethernet and Gigabit Ethernet are not fast enough. Fibre Channel proponents also argue that SCSI, a short-distance technology, just can't cut it in the datacenter. And LANs? Forget about it. They are too high-latency. Jeff DiCorpo, HP's Storage R&D Project Manager, takes up the Fibre Channel cause in part one of our feature article this month.
But don't throw out the SCSI with the bath water just yet. In fact, you should be dusting off those SCSI drives, because the days of SCSI-based storage are far from numbered, according to Jerry Namery, Chief Technology Officer at Winchester Systems Inc. SCSI handles SAN duty without multi-initiator headaches and is interoperable across all vendors. And unlike Fibre Channel, SCSI is backward and forward compatible with itself.
Every new Intel PC server and workstation made today (regardless of price) comes standard with one or two 80MB/sec Ultra2SCSI ports, usually included on the motherboard. By adding a single PCI card with dual 80MB/sec Ultra2SCSI ports (a card that by itself supports 30 disk drives), you get an additional 160MB/sec peak throughput, or about 142MB/sec sustained: speeds considerably faster than what Fibre Channel offers.
Also consider that all high-performance disk drives from every vendor come in 80MB/sec Ultra2SCSI configurations, at nearly the same price as the slower SCSI models. And that's not all. The industry is rushing to complete yet another doubling of SCSI speed, this time to 160MB/sec per port. Called Ultra160/m, the new standard doubles data transfer rates without changing cables or connectors: the bus keeps the same 40MHz clock but transfers data on both clock edges. This means that both current 80MB/sec Ultra2SCSI devices and new 160MB/sec Ultra160/m devices will operate at full rated speed on the same bus. By late 1999, nearly all SCSI disks and Intel-based servers will come standard with an Ultra160/m port. With that in mind, let's review the myth-conceptions about Fibre Channel.
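The arithmetic behind these peak rates is simple enough to sketch. The Python snippet below is an illustration only (the figures are the nominal rates quoted in this article, not measured throughput); it derives the quoted numbers from the bus clock, the bus width, and the double-clocking trick Ultra160/m adds:

```python
def scsi_peak_mb_s(clock_mhz, bus_width_bytes, transfers_per_clock=1):
    """Peak bus bandwidth: clock rate x bus width x transfers per clock cycle."""
    return clock_mhz * bus_width_bytes * transfers_per_clock

# Ultra2SCSI: 40MHz clock, 16-bit (2-byte) wide bus, one transfer per cycle.
ultra2 = scsi_peak_mb_s(40, 2)                             # 80 MB/sec per port

# Ultra160/m: same 40MHz clock and cabling, but data moves on both clock edges.
ultra160 = scsi_peak_mb_s(40, 2, transfers_per_clock=2)    # 160 MB/sec per port

# A dual-port Ultra2SCSI PCI adapter therefore peaks at:
dual_ultra2 = 2 * ultra2                                   # 160 MB/sec aggregate

print(ultra2, ultra160, dual_ultra2)
```

Note that double-clocking is why existing cables and connectors carry over: the electrical clock never changes, only how many data transfers ride on each cycle.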
Fibre Channel will solve all your I/O bandwidth problems.
Some Fibre Channel vendors claim that 100MB/sec of port bandwidth is so fast that users will never wait for I/O again. That may have been true in 1996, when the fastest host adapter was UltraSCSI at 40MB/sec. But today's standard SCSI offering provides at least 60% more bandwidth per PCI host adapter at a lower cost, and most users select the higher bandwidth.
Fibre Channel lets you put up to 126 hosts and storage devices on one loop.
Users find that to maintain reliability they should limit the number of drives per loop to a few dozen; otherwise they may experience hung loops or even lost data. The problem may stem from Fibre Channel Arbitrated Loop (FC-AL) arbitration, or from the fact that every drive must pass loop traffic along to the next. So the advantage of Fibre Channel over SCSI may be lost: one new PCI adapter with dual Ultra2SCSI ports supports 30 drives, about the same as a Fibre Channel loop in practice.
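A rough way to see why the drive-count parity matters: bandwidth on an arbitrated loop is shared among every device on it, while a dual-bus SCSI adapter splits its drives across two independent buses. The sketch below is a back-of-the-envelope illustration that assumes all drives contend equally and ignores protocol overhead; the rates are the nominal figures quoted in this article:

```python
def per_drive_share(total_mb_s, drives):
    """Bandwidth each drive gets if every drive contends equally under full load."""
    return total_mb_s / drives

# One 100MB/sec FC-AL loop shared by 30 drives:
fc_share = per_drive_share(100, 30)     # about 3.3 MB/sec per drive

# A dual-port Ultra2SCSI adapter: two independent 80MB/sec buses,
# 15 drives on each (30 drives total):
scsi_share = per_drive_share(80, 15)    # about 5.3 MB/sec per drive

print(round(fc_share, 1), round(scsi_share, 1))
```

The point is not the exact numbers but the structure: splitting drives across independent buses keeps the contention domain small, while a single loop pools all devices into one.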
Fibre Channel is faster for all applications.
Fibre Channel storage is not well suited to most business applications because FC-AL arrays usually require the host to perform RAID 5. Host-based RAID 5 is unacceptable to most users of high-performance servers because reliability and performance are dramatically reduced compared with hardware RAID 5. Fibre Channel storage has generally been well received by users of video and imaging applications, where host-based RAID 0, or striping, is acceptable. In these applications, users like Fibre Channel's ability to support longer distances and run over fiber-optic cables.
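The reason host-based RAID 5 hurts is that every write forces the server's own CPU to compute parity, work a hardware RAID controller offloads to a dedicated processor. Here is a minimal sketch of that parity math (the XOR at the heart of RAID 5, not any vendor's implementation):

```python
def raid5_parity(blocks):
    """Compute the parity block: the byte-wise XOR of all data blocks.
    With host-based RAID 5, the server CPU does this work on every write."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

def rebuild_lost_block(surviving_blocks, parity):
    """Recover any single lost block by XORing the survivors with the parity."""
    return raid5_parity(list(surviving_blocks) + [parity])

stripe = [b"\x01\x02", b"\x04\x08", b"\x10\x20"]   # three data blocks in a stripe
parity = raid5_parity(stripe)
print(rebuild_lost_block(stripe[1:], parity) == stripe[0])   # True
```

RAID 0 (striping), by contrast, involves no parity computation at all, which is why host-based striping is tolerable for video and imaging workloads where RAID 5 is not.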
Fibre Channel is more reliable than other storage options.
Fibre Channel drives run up to 50% hotter than identical SCSI drives. In addition, Fibre Channel is a network protocol that relies on Class 3 data transfers, which means coping with lost data packets. Because Class 3 service relies on the application to recover from a lost packet, the result for users' data can be catastrophic. Today's applications were written to assume reliable SCSI data transfers. SCSI is a parallel bus technology, similar to the busses that make up all computer systems, and every transfer is guaranteed and acknowledged with proper handshakes. Fibre Channel is a serial data channel designed to deliver a gigabit of information over long distances, without that hardware-level error recovery.
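The difference in delivery guarantees can be caricatured in a few lines. This is a toy model, not the actual SCSI or Fibre Channel protocol machinery: the first transfer style retries until each packet is acknowledged, while the second fires and forgets, as Class 3 service does:

```python
import itertools

def acknowledged_transfer(send, packets):
    """SCSI-style: every transfer is handshaken, so the sender retries
    until the receiver acknowledges and delivery is guaranteed."""
    delivered = []
    for p in packets:
        while not send(p):    # retry until acknowledged
            pass
        delivered.append(p)
    return delivered

def datagram_transfer(send, packets):
    """Class 3-style: fire and forget. A dropped packet is simply gone
    unless the application above notices and recovers."""
    return [p for p in packets if send(p)]

def flaky_link():
    """A deterministic link that drops every third transmission attempt."""
    pattern = itertools.cycle([True, True, False])
    return lambda packet: next(pattern)

packets = list(range(6))
print(acknowledged_transfer(flaky_link(), packets))   # all six packets arrive
print(datagram_transfer(flaky_link(), packets))       # some packets are lost
```

Applications written against the first model never see a loss; move them to the second model and recovery becomes their problem.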
Fibre Channel is finally mature and interoperable.
Interoperability is a problem even between different equipment from the same vendor. And in most cases, changing a single parameter, such as the host OS, the application or the cable length, can cause problems.
Several large vendors, notably Quantum and Adaptec, have recently exited the Fibre Channel market due to interoperability problems. Both vendors announced their intention to focus all their energies on Ultra2SCSI and the upcoming Ultra160/m standard.
Fibre Channel storage is very similar to SCSI-based RAID arrays.
Most Fibre Channel arrays sold are actually "Just a Bunch Of Disks," or JBOD, and most users run them as JBOD or with host-based RAID 0. This is because vendors have found it extremely challenging and expensive to make Fibre Channel hardware RAID work. SCSI-based hardware RAID arrays, by contrast, have been the staple of the storage industry for the past five years in nearly every server requiring more than a few drives. SCSI technology is stable and well understood.
Fibre Channel is the open standard for the future.
This may be the case, but users' investments in current Fibre Channel may not be protected. The first Fibre Channel storage arrays were introduced by Sun in 1994, based on quarter-gigabit technology, and are supported only on Sun S-Bus servers. That technology is now obsolete, and today's 1-gigabit Fibre Channel arrays are not interoperable with the old standard.
Vendors are now considering a 2Gb/sec or faster standard for Fibre Channel, but current Fibre Channel drives, hubs, switches and possibly even cables will not work with the new standard. With today's Ultra2SCSI, disk arrays can be plug-and-play backward compatible all the way to the SCSI-1 of 1984 and forward compatible to Ultra160/m and possibly beyond, which ensures investment protection well into the next five years.
Fibre Channel easily supports multi-hosts to form a SAN.
Without very expensive Fibre Channel switches, adding multiple hosts to one Fibre Channel loop is tricky because the host software doesn't support multiple initiators. Often each host will attempt to reset the channel to gain complete control, which can result in even more lost packets and a hung bus.
In comparison, several vendors make SCSI hardware RAID arrays with multiple independent SCSI host ports. This allows attaching several servers to one array, each on its own private SCSI bus, which completely eliminates the multi-initiator problem. Although it's not designed for long distances, SCSI goes the distance most users need in computer rooms: 25 meters point-to-point (one host adapter to one storage array). Using SCSI hardware RAID arrays, users face no limit on how much SCSI-attached storage they can add to any server.
Jerry Namery is Chief Technology Officer at Winchester Systems Inc.
Storage Area Networks (SANs) borrow an idea from mainframe-based data centers, where a network storage interface called Enterprise Systems Connection (ESCON) has been used for years to connect mainframe computers to multiple storage systems. On distributed networks, the same concept has been applied to isolate backup and recovery functions from the main backbone network, saving scarce bandwidth on the production system and allowing a far greater degree of control and manageability on the backup and recovery network.
Realizing the benefits of SANs, and recognizing the need for a standardized approach to distributed storage area networks, more than 50 storage and networking vendors formed an industry consortium called the Storage Networking Industry Association, or SNIA, in late 1997. The group's charter is to develop specifications for a set of SAN and network-attached storage (NAS) standards. In the short term, the SNIA is focusing on a set of standardized interfaces suitable for LANs, enterprise networks and WANs. The most promising technology appears to be Fibre Channel, an enhancement to the SCSI bus attachment favored both by the industry consortium and by early adopters in the user community.
A Fibre Channel SAN consists of a number of servers and storage subsystems connected through a high speed hub or switch. Fibre Channel offers several important benefits: Running at up to a gigabit per second (Gb/s), it breaks the SCSI barriers in both speed and number of simultaneous users. Fibre Channel, designed from the outset for low-latency storage area networks, offers superior speed and performance. It scales well by allowing users to add storage capacity without needing to reconfigure servers. And it's also manageable as a separate element within the overall network fabric, promoting quicker fault recognition and error correction.
Fibre Channel connections can span up to 10,000 meters, or more than six miles, allowing servers and storage devices in today's widely dispersed campus environments to participate in the SAN without the need to build a wide area network. And, perhaps most important, Fibre Channel is based on a set of open ANSI standards, simplifying connection and expansion. Key to enabling companies to evolve effective backup and recovery strategies, Fibre Channel SANs also resolve the bandwidth crunch problem. By running servers and storage devices on an essentially closed system, network managers are able to use the SAN as the backup network, even as users continue to tap into network-based information over the conventional backbone. This capability meets full-availability requirements while allowing administrators to back up vital data as often as needed to ensure a quick restart in the event of a catastrophic failure.
While Fibre Channel seems to be the choice for storage networks among early adopters, alternatives still exist, primary among them SCSI and traditional LANs. Both of those choices suffer in comparison to Fibre Channel, however, particularly when used to incorporate data backup and recovery as a core SAN element.
SCSI. Running at 40MB/sec, SCSI is attractive for several reasons: it's low cost, it can carry a large amount of data and it's a low-latency medium. It's also mature and well understood by systems architects and network administrators around the world. Studies show SCSI is used to connect nearly 90 percent of organizational storage today.
But SCSI has distinct limitations. It's a short-distance technology: SCSI buses span at most 25 meters. It's not suitable for large networks: a SCSI bus can support only 16 devices in total. And SCSI is inherently unsuitable for shared implementations: unlike Fibre Channel, SCSI is optimized for attaching multiple storage devices to a single server.
LANs. LANs also have several attributes that make them attractive to IT managers. They're capable of spanning long distances and connecting thousands of devices. They're already in place virtually everywhere, so the capital outlay necessary to add a SAN - rather than build a separate Fibre Channel network - is lower. LAN-based storage doesn't require that IS staff learn the intricacies of a new interconnection method. Ethernet, FDDI and Token-Ring have been around for decades and are well-understood by technicians the world over.
Designed for general-purpose networking rather than as high-I/O systems, LANs simply can't operate at the speeds SANs require. Even next-generation high-speed LAN technologies suffer from the overhead created by the TCP/IP protocol stack. In the case of Gigabit Ethernet, the protocol processing begins to eat up the added bandwidth, making it unsuitable for use as a dedicated backup and restoration storage network.
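To be fair, the raw header overhead of TCP/IP over Ethernet is modest; the heavier cost is the host CPU time spent running the protocol stack for every packet. A quick, illustrative calculation with standard frame sizes makes the distinction concrete:

```python
# Illustrative TCP/IP-over-Ethernet overhead arithmetic, using standard sizes.
ETH_OVERHEAD = 18     # Ethernet header + frame check sequence, bytes
IP_HEADER = 20        # IPv4 header without options
TCP_HEADER = 20       # TCP header without options
MTU = 1500            # standard Ethernet payload size

payload = MTU - IP_HEADER - TCP_HEADER      # 1460 data bytes per frame
on_wire = MTU + ETH_OVERHEAD                # 1518 bytes actually transmitted

efficiency = payload / on_wire              # roughly 96% wire efficiency
usable_mb_s = (1000 / 8) * efficiency       # of Gigabit Ethernet's 125 MB/sec raw rate

print(payload, on_wire, round(usable_mb_s, 1))
```

Even at roughly 96% wire efficiency, the per-packet interrupt and protocol-processing load on the host (not captured in this arithmetic) is what makes general-purpose LANs struggle as dedicated storage networks.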
In the absence of a dedicated backup loop, network administrators have two options when backing up servers and storage subsystems: use the network or attach a separate backup system to each device. As noted earlier, bandwidth on even the fastest production networks is insufficient to support normal operations and system-wide backup simultaneously.
Attaching a backup system to each storage device is both expensive and difficult to manage. The hardware is costly, as are the salaries of sufficient staff to load and unload tapes, manage each backup device and handle hardware or software faults. In contrast, centralizing backup to an automated tape library system on a Fibre Channel SAN, even assuming additional costs for building the infrastructure, should result in substantial hardware cost savings. According to some estimates, centralized backup should be less than half as expensive as the directly attached backup model. And the benefits of automated centralized backup - realized immediately in lower staffing requirements, improved reliability and streamlined administration and management - will more than offset infrastructure costs.
Server clustering has become popular as a means to achieve high availability without the expense of deploying fault-tolerant redundant systems. Clustering, however, complicates the process of backup and recovery. Clusters are inherently dynamic; responsibility for managing data and applications moves from server to server as needed to keep the system running.
At present, the most secure way to back up clustered data lies in the direct-attached SCSI model. With the anticipated arrival of cluster-aware backup software and intelligent use of tape backup systems, the industry is moving toward a model in which multiple clustered servers can back up to a single group of Fibre Channel-attached tape drives.
Larger multi-server organizations favor centralized backup for data consistency and easier restoration. Ideally, data storage devices, such as disk and tape subsystems, would take advantage of a separate storage loop that connects widely dispersed disk space with central backup systems. Some networked storage architectures rely on WAN links for backup and restoration of archived data. Several issues affect this approach, however. Among these is reliable access to WAN links, especially those based on public networks, and data-transfer rates over WAN links as compared with local connections.
Trends toward solutions such as data warehousing and "server farms" beg for data storage consolidation, which can provide greater central manageability and faster repair times. As a result, many users are combining centralized placement of application, storage and back-up servers with WAN-based client access, rather than linking storage or backup servers over those same links.
To realize fully functional Fibre Channel-based wide area storage networks, some issues need to be resolved. To move from localized arbitrated loops to corporate-wide storage networks requires Fibre Channel switches and routers to create a "fabric" analogous to high-speed WAN topologies.
Also, an infrastructure for managing storage on the SAN, independent of the servers accessing it, needs to evolve. Software will provide a framework for a number of storage management capabilities: redundancy for high availability, replication for performance, remote vaulting and backup for data protection, heterogeneous access, and migration of data to and from near-line and off-line repositories.
SANs based on Fibre Channel technology offer IT managers and network administrators a fast, efficient alternative to traditional SCSI- or LAN-based storage and backup models. By the end of the century, Fibre Channel SANs will take their place alongside traditional LANs and WANs as indispensable components of enterprise information systems.
Jeff DiCorpo is the Storage R&D Project Manager for Hewlett-Packard.
Back Up To Fibre Channel
To give customers a complete Fibre Channel-based backup and recovery solution that allows them to share a tape library among several servers and attached storage devices, HP is now offering a Fibre Channel-based backup solution. A primary component of the solution is the HP SureStore Fibre Channel SCSI Bridge 2100, which provides connectivity between selected HP automated tape library systems and NT servers using HP's Fibre Channel host bus adapters. HP's Fibre Channel products offer the first steps toward building a dedicated Fibre Channel-based backup loop. In addition to the servers, SCSI bridge, tape drives and host bus adapters, users will need a Fibre Channel hub to handle connectivity for the entire SAN.
Along with the hardware components of the Fibre Channel tape solution, HP will provide Fibre Channel-aware software from leading backup application vendors, which will control all backup activity, including automated scheduling, remote device administration and fault management. The primary requirement for "Fibre Channel-aware" software is managing shared access to the tape library system among all the servers on the loop.
At this time, HP's Fibre Channel solutions are available for use on servers running Microsoft Windows NT 4.0 only. HP plans to support additional server operating systems as its backup software partners make applications available. As Fibre Channel technology evolves, future tape library systems will include the technology necessary to attach directly to a Fibre Channel SAN. Currently, external bridges are the best available solutions but, in the near future, vendors such as HP will bring automated tape library systems to market with internal Fibre Channel interfaces. - J.D.