Two Sides to the Story: Building a Prepress SAN

Can you build a high-performance prepress SAN with tried-and-true SCSI? Yes. Can the same SCSI-based SAN take advantage of Fibre Channel while reducing some of its limitations? Yes. Here’s how to build a SAN that combines available technologies.

Creating an Ultra 160 SAN

Ultra 160 SCSI, the current version of this interconnect, is backward compatible with earlier versions of SCSI and provides a 68-pin bus carrying 160 Mbytes per second per port between the computer and the storage. Single PCI cards come with two of these ports; combined, they deliver up to 320 Mbytes per second. To achieve this speed, you need a computer with a 64-bit PCI bus running at 66 MHz. Some of the newest Macs and Windows NT servers offer this bus, and some come with a 100-MHz bus. Sun servers already come with multiple 64-bit slots.
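
As a rough sanity check, the arithmetic below (a sketch using nominal peak rates, not measured throughput) shows why the 64-bit, 66-MHz slot matters for a dual-port card:

```python
# Nominal peak rates only; real-world throughput will be lower.
PCI_WIDTH_BYTES = 64 // 8     # 64-bit PCI bus moves 8 bytes per clock
PCI_CLOCK_HZ = 66_000_000     # 66 MHz
PORT_MBYTES_PER_SEC = 160     # Ultra 160 SCSI, per port
PORTS_PER_CARD = 2            # dual-port PCI adapter

pci_peak = PCI_WIDTH_BYTES * PCI_CLOCK_HZ / 1_000_000  # 528 Mbytes/s
card_demand = PORTS_PER_CARD * PORT_MBYTES_PER_SEC     # 320 Mbytes/s
print(f"PCI peak {pci_peak:.0f} vs. card demand {card_demand} Mbytes/s")
```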

Ultra 160 SCSI provides very reliable, point-to-point connectivity, which, in turn, allows data to get on and off disks very quickly. It does have a distance limitation of 25 meters from the host to the storage system.

To build a SAN with Ultra 160 SCSI, use a star topology with the storage at the center of the star. A cable runs from the storage to each one of the servers. For example, Winchester Systems’ FlashDisk currently supports four to six servers sharing the same storage, not 126 nodes like Fibre Channel. Four to six servers can provide a powerful backbone to the network. The benefit is massively parallel connectivity: not one SCSI cable to your storage, but four or six. You can scale this storage to eight or more RAID arrays. Eight Ultra 160 SCSI ports, served by only four dual-port PCI adapter cards, provide more than one Gbyte per second of aggregate bandwidth. This type of SAN is also quick and easy to install and test.
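
The aggregate-bandwidth claim is simple arithmetic, sketched here with the port counts from the example above:

```python
PORT_MBYTES_PER_SEC = 160   # one Ultra 160 SCSI port
ports = 8                   # eight ports on the storage side
cards = ports // 2          # dual-port adapters -> four PCI cards

aggregate = ports * PORT_MBYTES_PER_SEC
print(f"{cards} cards, {ports} ports: {aggregate} Mbytes/s aggregate")
# 1,280 Mbytes/s -- the "more than one Gbyte per second" cited above
```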

The layout (High Availability, High Performance Ultra 160 SCSI SAN Storage) shows four servers (with different operating systems) and two RAID systems. You can have multiple RAID systems, each with up to 500 Gbytes. (This solution can scale to 4 Tbytes.) Each server has a single Ultra 160 SCSI bus adapter with two ports on it – bus 1 and bus 2 – one going to each RAID system. (This bus adapter is backward compatible with 80-Mbyte-per-second low-voltage differential SCSI.) In turn, each server connects to the LAN via either Fast Ethernet or Gigabit Ethernet.

With middleware software, such as Tivoli’s SANergy version 2.0, any server in the system can fail and the entire system will keep running. SANergy gives the other servers in the network direct access to the storage, handling the file locking and managing those files. The handshake transaction that determines who has a file travels over TCP/IP Ethernet, while the data itself moves over Ultra 160 SCSI at three to 20 times the bandwidth TCP/IP usually provides. This speed comes from the servers’ direct access to the file area, without having to transfer data over the network.
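
To make the split between the control path and the data path concrete, here is a minimal sketch of the pattern in Python. The class, method names, and wire protocol are hypothetical stand-ins, not Tivoli’s actual SANergy API; they only illustrate a lock handshake over TCP/IP with the bulk data read from SCSI-attached storage.

```python
import socket

class SharedFileClient:
    """Hypothetical sketch of the SAN-middleware pattern, not Tivoli's API."""

    def __init__(self, metadata_host, metadata_port, scsi_mount):
        self.meta_addr = (metadata_host, metadata_port)
        self.scsi_mount = scsi_mount   # volume mounted via direct SCSI

    def _ask_owner(self, filename):
        # Tiny "who has the file" handshake travels over TCP/IP Ethernet.
        with socket.create_connection(self.meta_addr) as s:
            s.sendall(f"LOCK {filename}\n".encode())
            return s.recv(64) == b"GRANTED\n"

    def read_file(self, filename):
        if not self._ask_owner(filename):
            raise IOError(f"{filename} is locked by another server")
        # Bulk transfer never touches the LAN: read straight from the
        # SCSI-attached volume at Ultra 160 speeds.
        with open(f"{self.scsi_mount}/{filename}", "rb") as f:
            return f.read()
```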

A Mixed SCSI-Fibre Channel SAN

Fibre Channel, an emerging technology, does two things: It runs the SCSI protocol at 100 Mbytes per second per port over optical cables, and it runs a specialized storage protocol at 1.06 Gbits per second in packets (Fibre Channel does not currently run IP). It’s really SCSI over a different transport. As a network topology, Fibre Channel uses a hub or a switch as a concentrator; the switch runs faster than the hub. Fibre Channel supports runs of up to 500 meters, suitable for most applications, and you can spend more money on special cables and drivers to go up to 10 km.
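
The two rates describe the same link. Fibre Channel signals at 1.0625 Gbits per second and uses 8b/10b line coding (10 bits on the wire carry 8 bits of data), which works out to roughly 100 Mbytes per second of payload:

```python
SIGNAL_BITS_PER_SEC = 1.0625e9   # Fibre Channel signaling rate
ENCODING_EFFICIENCY = 8 / 10     # 8b/10b line coding

payload = SIGNAL_BITS_PER_SEC * ENCODING_EFFICIENCY / 8 / 1e6
print(f"~{payload:.0f} Mbytes/s of payload")   # ~106 Mbytes/s
```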

Current Fibre Channel Arbitrated Loop (FC-AL) has one downside: It runs Class 3 service. Three classes of quality of service exist for transmission, and Class 3 neither guarantees delivery nor acknowledges it. If a Fibre Channel device drops a packet and the software fails to catch it, the result is a hang (or a timeout) that freezes the system for a second while the loop initialization process (LIP) resets the entire bus.
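
A toy simulation makes the failure mode easier to see: with unacknowledged Class 3 delivery, a dropped frame is only discovered when a timeout expires, and the recovery resets the whole loop. This models the behavior described above; it is not real FC-AL driver code.

```python
import time

TIMEOUT_SECONDS = 1.0   # the momentary freeze the article describes

def send_class3(frame, link_drops_frame):
    # Class 3: no delivery guarantee, no acknowledgment back to the sender.
    return None if link_drops_frame else "delivered"

def transfer(frame, link_drops_frame):
    result = send_class3(frame, link_drops_frame)
    if result is None:
        time.sleep(TIMEOUT_SECONDS)   # system appears to hang
        print("timeout -> LIP resets the entire loop")
    return result

transfer("frame-1", link_drops_frame=True)
```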

SCSI, since it’s point-to-point, doesn’t suffer the drawbacks of running in a loop. On the other hand, you pay more for the benefits of Fibre Channel over Ultra 160 SCSI: Fibre Channel converts SCSI onto optical cable and a very specialized packet format, and it can span a greater distance than SCSI.

Building a SAN with Fibre Channel differs from using SCSI. You still have a star topology, like a network, with a hub or switch at the center. The bandwidth of the hub is 100 Mbytes per second; the bandwidth of the switch is higher. Although you can have 126 nodes on this arbitrated loop, you may have difficulty managing and debugging that many nodes, so you may want to limit the count to 12 or 14.

The layout shows the primary servers running directly to the RAID system, plus a low-cost Fibre Channel-to-SCSI bridge that converts traditional SCSI to Fibre Channel. You can then attach a hub or a switch to some workstations and other servers. Why do this? SCSI doesn’t require a concentrator, a hub, or Class 3 service, so you can dedicate the primary servers to mission-critical functions. Your Mac and NT power users or remote users may need fast access to data without clogging the network; they can connect via Fibre Channel to a hub or switch, which in turn connects through an FC-AL-to-SCSI bridge directly to the storage.

Again, SANergy makes this work by providing the servers with simultaneous access to the storage on the shared RAID system. Installing SANergy on the Macs and NTs gives them transparent access to the data. If anything happens to the optical transmission or to the switch, SANergy automatically reroutes the traffic back over the LAN. So if anything hangs up or if a particular connection fails, the critical SCSI-attached servers will continue running.
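
In pseudocode terms, the failover logic amounts to “try the SAN path, fall back to the LAN.” The sketch below is illustrative only; the function names are invented, and the LAN fallback is left as a stub.

```python
def read_over_san(path):
    # Fast path: read straight from the bridge-attached volume.
    with open(path, "rb") as f:
        return f.read()

def read_over_lan(path):
    # Slow path: stand-in for fetching the file from a server over Ethernet.
    raise NotImplementedError("fetch via the file server over the LAN")

def resilient_read(san_path, lan_path):
    try:
        return read_over_san(san_path)
    except OSError:
        # Optical link or switch failed: reroute over the LAN, keep working.
        return read_over_lan(lan_path)
```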

SCSI provides the simple, reliable, fast connection right to the storage in the data center, while Fibre Channel provides connectivity for a large number of users and over distance. In the case of a failure, SANergy lets employees keep working, if not at optimal speed, until you have a chance to debug the problem.

SAN Solves Traffic Problem at R.R. Donnelley & Sons’ Plant

Some prepress applications require the transfer of multi-Mbyte files, which can take hours to cross a LAN. A storage area network, or SAN, offers a more effective way to move prepress files. A SAN comprises multiple host servers attached to a central pool of RAID storage through a network of high-speed interconnects, such as SCSI or Fibre Channel. Unlike a LAN, a SAN gives applications with heavy file-transfer demands direct access to a shared storage repository at rates considerably faster than a LAN provides.

A SAN has helped the R.R. Donnelley & Sons’ Glasgow Division, Glasgow, Kentucky, increase the speed of a color swap application. This division does prepress work and printing for magazines and catalogs, producing titles such as Yahoo Magazine and Harvard Business Review.

For the Color Central application, the plant creates low-resolution versions of the original high-resolution files. Users make their changes to the low-resolution image files, which don’t take up much memory. When all the files are ready for final printing, the Open Prepress Interface (OPI) software swaps the modified low-resolution image files for their matching high-resolution image files.
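
In outline, the swap step rewrites proxy references so they point at the originals. The sketch below invents a _lowres filename convention purely for illustration; production OPI systems key the swap off comments embedded in the page files instead.

```python
import re

def swap_for_print(layout_text):
    # Replace each low-res proxy reference with its high-res original,
    # e.g. "cover_lowres.tif" -> "cover.tif".
    return re.sub(r"(\w+)_lowres\.(tif|eps)", r"\1.\2", layout_text)

page = "Place: cover_lowres.tif and ad42_lowres.eps"
print(swap_for_print(page))   # Place: cover.tif and ad42.eps
```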

Tony Wallace, the prepress systems analyst who set up the SAN, says, "Swapping 50-Mbyte to 100-Mbyte files over a Gbit Ethernet network was taking too long and consuming too much bandwidth." To speed things up, Wallace created a SAN by moving the two Intergraph InterServ 8400s that run the application off the Gbit Ethernet network, attaching them to the SCSI-based Winchester Systems FlashDisk RAID storage system, and adding Tivoli’s SANergy middleware software.

The FlashDisk provides the speedy, central terabyte storage repository with multiple 36-Gbyte disk drives. Each FlashDisk can connect directly to anywhere from four to 36 servers, and each server can run a different operating system. SANergy, which runs on Windows NT, Sun, or SGI servers, extends the file system so multiple servers – running Windows NT, MacOS, Silicon Graphics IRIX, or Sun Solaris – can share the same files directly to and from the FlashDisk. To this end, SANergy lets files be swapped over a point-to-point SCSI connection from the FlashDisk to the server and from the server to workstations. Running transparently on a server, SANergy takes the data transfer off the LAN and moves it across the SAN, so data transfers can take place at the speed of the FlashDisk.

This technique bypasses the network, eliminating a lot of congestion among the other devices on it. Wallace says the combination of the FlashDisk and SANergy provides 10 times the performance of the previous network swapout. He adds, "The FlashDisk has the high I/O performance capabilities needed to send the files to their destination."
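
For a feel of the scale, here is an illustrative transfer-time comparison. The effective throughput figures below are assumptions chosen for the example, not measurements from the Donnelley installation:

```python
FILE_MBYTES = 100         # one high-res image file
LAN_MBYTES_PER_SEC = 15   # assumed effective rate on a busy shared LAN
SAN_MBYTES_PER_SEC = 150  # assumed effective rate over Ultra 160 SCSI

print(f"LAN: {FILE_MBYTES / LAN_MBYTES_PER_SEC:.1f} s")   # ~6.7 s
print(f"SAN: {FILE_MBYTES / SAN_MBYTES_PER_SEC:.1f} s")   # ~0.7 s, ~10x
```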
