
SCSI vs. Fibre Channel: Picking the Right Protocol

A reader asks for guidance in choosing between the two—and we offer a third option

I received a great question from a reader who is confused (and who isn't?) about which disk drives to use, from a cost/value perspective, in a storage platform for his application. Here is the original query:

“I work for a software manufacturer…[that develops and supports] large-scale demographic software under the heading of Geographic Information Systems, or GIS. I work in a very small group that supports the customers with regard to hardware questions. My title is Systems Design Consultant. [My] question is [one of] value versus cost. Large-scale storage vendors are going to internal Fibre Channel storage devices over SCSI Ultra Wide 3, at [a] fairly hefty cost increase. I am unable to get a good reason for this change and therefore the cost increase. I have gotten the question often enough in the past few months to do some research without being able to reach a satisfactory conclusion. Can you help?”

Thanks for writing, LR, and let me start by saying that a definitive answer requires knowing the input/output characteristics of your application, its capacity requirements, the size of the individual files you create, and, of course, the prices you are seeing for equipment. Having said that, here is some guidance you may be able to use.

Ultra Wide SCSI-3 is a solid enterprise-class drive interface and protocol. Drives built to it are very capable from the standpoint of capacity, endurance, and performance, and they are generally cheaper than Fibre Channel drives, at least for now. Steve Sicola, VP of Advanced Storage Architecture for Seagate Technology in Colorado Springs, Colo., told me that you could do a lot worse than to base your application platform on this reliable technology.

He added, however, that Fibre Channel drives may be a better fit under certain circumstances. If scaling is important to your application (that is, if the quantity of data associated with your application will quickly grow beyond the capacity of, say, 20 drives), you will realize more value from an investment in FC drives: they can be integrated readily into a switched architecture, FC lets you attach more drives per bus than SCSI does, and the drives are typically dual-ported, which facilitates various redundancy schemes.
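To put rough numbers on the scaling point: a wide parallel SCSI bus has 16 IDs (15 drives once the host adapter takes one), while a Fibre Channel arbitrated loop can address 126 devices. The back-of-the-envelope Python sketch below is purely illustrative; the per-bus figures are protocol ceilings, not real-world configuration advice.

    import math

    # Protocol addressing ceilings (theoretical maximums, not vendor guarantees):
    SCSI_DEVICES_PER_BUS = 15      # 16 SCSI IDs on a wide bus, minus one for the HBA
    FC_AL_DEVICES_PER_LOOP = 126   # NL_Ports addressable on one arbitrated loop

    def buses_needed(drive_count, devices_per_bus):
        """How many buses or loops it takes to attach drive_count drives."""
        return math.ceil(drive_count / devices_per_bus)

    for drives in (20, 60, 120):
        print(f"{drives:>4} drives: {buses_needed(drives, SCSI_DEVICES_PER_BUS)} "
              f"SCSI buses vs. {buses_needed(drives, FC_AL_DEVICES_PER_LOOP)} FC loop(s)")

At 120 drives, the SCSI configuration needs eight buses where a single FC loop would do, which is where the cabling and controller savings of FC begin to offset its per-drive premium.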

I would be hesitant to make a specific recommendation without more information about your application's characteristics, but I commend you for doing your best, first, to identify application-based hardware requirements and, second, to seek the best cost/value configuration to recommend to your customer. My thinking is that, if you need enterprise-class drives, you might be wise to go with the cheapest solution you can find for now. Drive prices, including those of the FC variant, are falling, and more drives will be equipped with FC interfaces in the future, which should drive down cost through simple economies of scale.

The story might end there, but I made another inquiry of my friends at 3Ware in Sunnyvale, Calif. 3Ware suggests another approach you might want to consider: arrays based on Serial ATA (SATA) drives.

Like Sicola, Patrick Kevill, Product Manager for Advanced Architecture at 3Ware, agrees that either SCSI or FC drives would probably be a good choice if your data changes frequently. He suggests, however, that you consider a few additional points.

“Frequency of data change determines what you will need in terms of I/O performance. The less fixed the content, the more you need to consider the high performance end of the drive spectrum,” says Kevill.

While he agrees with Sicola that enterprise-class SCSI and FC drives are the better choice for I/O-intensive applications, which tend to beat up ATA and SATA drives rather severely, he offers a caveat: if your application is anything like those used in the nuclear testing labs that are among 3Ware's current customers, you may want to think again.

If your data is fairly static, consisting of large files (200 MB and up) that need to be read at high speed (500 MB per second), then lower-cost, multi-controller SATA arrays may be just the thing. He notes that many current supercomputing installations have switched to SATA arrays to address their growing need for storage capacity. Lower-cost, high-capacity SATA works well if your environment consists primarily of large files and emphasizes reads over writes.
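As a sanity check on that 500 MB-per-second figure, the sketch below estimates how wide a striped SATA array would have to be. The per-drive rate and overhead factor are my assumptions for illustration, not measured numbers from 3Ware.

    import math

    TARGET_MB_PER_SEC = 500   # aggregate read rate cited above
    DRIVE_MB_PER_SEC = 50     # assumed sustained read rate of one SATA drive
    EFFICIENCY = 0.8          # assumed striping/controller overhead factor

    # Drives that must be striped together for reads to meet the target.
    stripe_width = math.ceil(TARGET_MB_PER_SEC / (DRIVE_MB_PER_SEC * EFFICIENCY))
    print(f"Roughly {stripe_width} drives striped across controllers")

Under those assumptions, a stripe of about 13 drives hits the target, which is why such deployments span multiple controllers rather than a single bus.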

He also suggests that you consider a hybrid architecture rather than a one-size-fits-all approach. If your data is initially write-intensive but becomes more static over time, why not combine expensive enterprise-class storage with cheaper SATA storage to get the best price/performance?
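A minimal sketch of what such a hybrid policy might look like, assuming a simple age-and-activity rule; the thresholds and tier names here are hypothetical, chosen only to illustrate the idea.

    from datetime import datetime, timedelta

    # Hypothetical demotion thresholds; tune these to your own workload.
    AGE_THRESHOLD = timedelta(days=30)   # untouched this long counts as "static"
    WRITES_PER_DAY_MAX = 5               # busier files stay on enterprise drives

    def choose_tier(last_modified, writes_per_day, now=None):
        """Pick a storage tier for a file based on age and write activity."""
        now = now or datetime.now()
        if now - last_modified > AGE_THRESHOLD and writes_per_day < WRITES_PER_DAY_MAX:
            return "sata-array"      # static, read-mostly data on cheap capacity
        return "enterprise-fc"       # fresh, write-intensive data on fast drives

    # Example: a GIS base map untouched for 90 days lands on the SATA tier.
    print(choose_tier(datetime.now() - timedelta(days=90), writes_per_day=0))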

I hope this helps.

About the Author

Jon William Toigo is chairman of The Data Management Institute, the CEO of data management consulting and research firm Toigo Partners International, as well as a contributing editor to Enterprise Systems and its Storage Strategies columnist. Mr. Toigo is the author of 14 books, including Disaster Recovery Planning, 3rd Edition, and The Holy Grail of Network Storage Management, both from Prentice Hall.
