Storage at the Center

Not long ago, many enterprise IT managers saw the server as the center of the universe. That mainframe perspective -- high-performance computing with storage at the periphery -- was extraordinarily useful in the past. Times have changed. Today those same managers know the focus is on storage and not the server.

For many of us, servers and the applications they support are built around data. In fact, more system purchase decisions are now driven by storage requirements than by server requirements. The analyst firm GartnerGroup suggests that storage already accounts for half the price of a typical systems purchase and may reach 80 percent within a couple of years.

With multiple operating systems and multivendor servers in place, more enterprise IT architects are looking for open storage systems that are independent of the server. E-commerce and Internet-aware applications are good examples of requirements that challenge traditional approaches to providing storage. This has many implications, but one of the most serious is that independent storage must support every platform in use in the enterprise.

What IT managers would ideally like to see is an independent cluster of storage devices that allows multiple clients to connect and meet their storage needs. One network-based solution, the SAN, uses a high-speed network to provide storage to clients instead of internal devices. Unfortunately, SANs today provide only device-level sharing: data stored on a SAN is typically available to only a single set of clients. Providing file sharing across a SAN to multiple clients is nearly impossible.

Supporting SANs for multiple clients requires interoperability. It may seem counterintuitive, but multi-client support has long been available in those self-contained, storage-in-a-box devices that attach to local networks. Those devices, commonly called network attached storage (NAS) units, support access from multiple platforms through standards such as the Common Internet File System (CIFS) or Unix's Network File System (NFS).

What’s been missing is a strategy to bridge the gap between easy-to-configure local network storage and high-performance SAN strategies. It appears that bridge is being built.

Work is under way to combine the advantages of NAS standards with SAN performance and availability. Soon we'll start to see hybrid NAS/SAN solutions that use NAS as the mechanism for controlling multiuser access to the disk. The NAS layer will effectively serve as a traffic manager for disk reads and writes, as well as enforce security and cache rules. Once the NAS/SAN server has established access to the files, the SAN can provide a high-speed pipeline for data transfer.
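One way to picture this division of labor is a sketch in which a NAS-style controller grants or denies access to a file, then hands the client a descriptor for the high-speed SAN data path. This is purely illustrative; the class names (`NasController`, `SanPipeline`) and behavior are my assumptions, not any vendor's actual API.

```python
class SanPipeline:
    """Stands in for the high-speed disk-to-client data path."""
    def __init__(self, filename):
        self.filename = filename
        self.blocks = []

    def write(self, data):
        self.blocks.append(data)

    def read(self):
        return b"".join(self.blocks)


class NasController:
    """Acts as the traffic manager: grants or denies file access."""
    def __init__(self):
        self.locks = {}  # filename -> client holding the write lock

    def open_file(self, client, filename, mode):
        if mode == "w":
            holder = self.locks.get(filename)
            if holder is not None and holder != client:
                return None  # contention: another client holds the lock
            self.locks[filename] = client
        # Access granted: return a direct SAN path for bulk transfer
        return SanPipeline(filename)

    def close_file(self, client, filename):
        if self.locks.get(filename) == client:
            del self.locks[filename]


nas = NasController()
pipe = nas.open_file("client-a", "/vol/db01", "w")      # granted
blocked = nas.open_file("client-b", "/vol/db01", "w")   # denied: lock held
pipe.write(b"payload")                                  # bulk data bypasses NAS
nas.close_file("client-a", "/vol/db01")
```

The key design point is that the NAS component touches only the control path (opens, locks, permissions); once access is granted, data moves over the SAN without NAS involvement.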

One objection to this strategy is that it introduces the overhead of NAS protocols on top of SAN's native efficiency. I'm convinced that the overhead is justified by the ability to extend the efficiency of SANs to any client in a network. If a server or client is a traditional SAN client, it will retain all the advantages of native, high-performance disk access. If there is contention between multiple SAN clients, the NAS component comes to the rescue by negotiating cache consistency and lock management. If the client is simply a LAN system, NAS protocols act as a gatekeeper for the high-speed storage.
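The three cases above amount to a simple routing decision. A minimal sketch, with hypothetical names and return values of my own invention:

```python
def route_request(client_type, contention):
    """Pick the data path for a storage request.

    client_type: "san" for a native SAN client, "lan" for a LAN-only system.
    contention:  True when multiple SAN clients want the same file.
    """
    if client_type == "san" and not contention:
        return "direct SAN access"        # full native performance retained
    if client_type == "san" and contention:
        return "NAS-mediated SAN access"  # NAS negotiates caches and locks
    return "NAS gatekeeper"               # LAN client reaches storage via NAS
```

Only the contended and LAN cases pay the NAS protocol overhead; the common SAN case is untouched, which is the heart of the argument that the overhead is justified.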

This strategy is not without its challenges. Serving up a SAN over fiber using TCP/IP tends to gum up the works. TCP works well for LAN-based file sharing, but performance suffers when TCP is pushed into high-speed fiber environments. A related drawback is TCP's session orientation: the very features that make TCP reliable become unwieldy overhead when files must move as a steady stream of information from disk to client.

Still, these are hurdles -- not barriers. The IETF, for example, has a group looking into TCP extensions that would better support disk-to-client transfer over standard Internet protocols. Another storage industry association is crafting a standards-based approach to building NAS/SAN hybrids.

That hybrid solution may become a common model for IT managers looking to make storage-centric networks available to all clients in a network. It is an important decision. According to Hewlett-Packard, 60 percent of high-end data center storage sales and almost all of its enterprise Windows 2000 storage sales are based on storage-centric decisions.

Providing a common, integrated storage architecture can no longer be an afterthought when architecting network solutions. After all, storage is central.

--Mark McFadden is a consultant and communications director for the Commercial Internet eXchange (Washington).