In-Depth
Tiered Access: iSCSI for the Enterprise, Part 2
When is iSCSI a more appropriate solution for your data center than fibre channel?
- By George Crump
- 10/26/2004
Last week we discussed the Internet Small Computer System Interface (iSCSI) in the tiered access storage environment, along with host bus adapters (HBAs). This week I'll explore iSCSI as it relates to infrastructure and how to know when iSCSI is a more appropriate solution than fibre channel (FC).
Infrastructure
In an environment where an FC SAN is already in place (or the need for one is evident), an iSCSI gateway or bridge is needed to let the FC SAN storage see the iSCSI-attached hosts. This router is either a unit external to the FC switch or a blade in the director-class (core) switch. The external products typically offer a combination of 1 gigabit Ethernet (GbE) ports and fibre channel ports, connecting to the 1GbE infrastructure and into the SAN FC fabric, respectively. The router acts as an iSCSI target to the servers and as an FC initiator to the FC storage, ensuring compatibility between the two.
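To make the gateway's dual role concrete, here is a toy Python model (the framing functions and field names are invented for illustration, not any product's API): the SCSI command passes through unchanged while the transport wrapper is swapped.

```python
# A toy model (hypothetical, for illustration only) of the gateway's
# dual role: the SCSI command itself is untouched; only the transport
# wrapper changes between the Ethernet side and the FC side.

def iscsi_unwrap(pdu):
    """Strip the iSCSI/TCP framing, leaving the raw SCSI command."""
    return pdu["scsi_cdb"]

def fc_wrap(cdb):
    """Re-frame the same SCSI command for the fibre channel fabric."""
    return {"transport": "FC", "scsi_cdb": cdb}

def gateway_forward(iscsi_pdu):
    """Act as an iSCSI target to the host, an FC initiator to the array."""
    cdb = iscsi_unwrap(iscsi_pdu)   # received as a target
    return fc_wrap(cdb)             # forwarded as an initiator

# Example: a READ command arrives from a 1GbE-attached server.
pdu = {"transport": "iSCSI", "scsi_cdb": "READ(10) lba=2048 len=64"}
print(gateway_forward(pdu))
# {'transport': 'FC', 'scsi_cdb': 'READ(10) lba=2048 len=64'}
```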
The iSCSI network is typically a separate virtual LAN (VLAN), or even a physically separate network from the enterprise's standard network. These secondary 1GbE VLANs are often already in place, typically used for server interconnects or nightly backups. Such "utility networks" can handle iSCSI access as well, at least initially; in our experience, utilization of a utility network prior to iSCSI deployment is often less than 5 percent.
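A quick back-of-the-envelope calculation, using the 5 percent figure above (the numbers are illustrative), shows how much headroom such a utility network leaves for iSCSI traffic:

```python
# Back-of-the-envelope check (illustrative numbers): how much headroom
# a lightly used 1GbE utility network leaves for iSCSI traffic.

link_mbps = 1000          # 1 gigabit Ethernet
utilization = 0.05        # ~5 percent observed before iSCSI

free_mbps = link_mbps * (1 - utilization)
free_mbytes_per_sec = free_mbps / 8
print(f"Unused capacity: {free_mbps:.0f} Mb/s "
      f"(~{free_mbytes_per_sec:.0f} MB/s available for iSCSI)")
# Unused capacity: 950 Mb/s (~119 MB/s available for iSCSI)
```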
Tiered storage access is not just a server access methodology; it also applies to the storage infrastructure itself. Providers of switches and director-class storage infrastructure recognize this, and we are beginning to see director-class storage concentrators with three types of interfaces. The first is a high-performance, non-blocking interface (each port on the blade is provided with a full 2Gb/sec of bandwidth), designed specifically for FC arrays. Since many different servers access these arrays at the same time, giving the arrays maximum bandwidth to the storage infrastructure is the optimal configuration.
FC servers typically do not require a full 2Gb/sec of bandwidth, nor can they sustain it. Consequently, director-class suppliers have developed the second type of interface: a higher-port-count fibre channel blade. While there is some blocking with this interface, the performance is more than adequate for most environments.
Finally, there are iSCSI interfaces, typically providing eight iSCSI ports that connect to the Ethernet core.
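The trade-off between the non-blocking array blade and the higher-port-count server blade comes down to oversubscription. A small Python sketch (the port counts and backplane figure are hypothetical) makes the arithmetic explicit:

```python
# Rough oversubscription math (hypothetical port counts) contrasting the
# non-blocking array blade with the higher-port-count server blade.

def oversubscription(ports, port_speed_gbps, backplane_gbps):
    """Ratio of total port bandwidth to what the blade can actually carry."""
    return (ports * port_speed_gbps) / backplane_gbps

# Non-blocking, array-facing blade: backplane matches the port total.
print(oversubscription(ports=16, port_speed_gbps=2, backplane_gbps=32))  # 1.0

# Server-facing blade: twice the ports over the same backplane, so some
# blocking -- acceptable because servers rarely sustain a full 2Gb/sec.
print(oversubscription(ports=32, port_speed_gbps=2, backplane_gbps=32))  # 2.0
```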
Enterprise iSCSI vs. Fibre Channel: Not an "Either/Or" Choice
In the medium-to-large data center, iSCSI has been positioned as a competitor to FC SANs, but used as part of a tiered access strategy, iSCSI is a complement to fibre channel.
The iSCSI vs. FC debate really started with those who had the most to lose from the adoption of iSCSI: the suppliers of "fibre channel only" technology. The truth is that they are not really losing much to the growth of iSCSI. Most of the servers that iSCSI allows to participate in consolidated storage would never have been connected at all were it not for the way iSCSI lowers the cost of attaching a server to the SAN.
When implementing iSCSI, the customer is the real winner: broadening the use of SAN storage reduces the headaches associated with direct-attached storage management and data protection.
iSCSI Starting Points—How to Make the Leap
One of the challenges iSCSI has had to face is customer resistance to yet another new technology. Is it safe, reliable, and fast enough? A commitment to iSCSI can be seen as risky, so one of the best migration strategies is to leverage the existing locally attached storage while gradually moving to consolidated FC or iSCSI storage. Implementing a continuous data-protection technique accomplishes this.
Continuous data protection consists of three components: a data-splitter agent (or OS mirroring), an iSCSI driver installed on each server to be protected, and an iSCSI-attached storage cluster. Each server's data, whether attached directly or via SAN, is then mirrored from the host to the storage cluster.
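A minimal Python sketch of the splitter idea follows (the class and device interfaces are invented for illustration; real splitter agents work at the device-driver level): every host write is duplicated to the iSCSI mirror.

```python
# A minimal sketch (hypothetical interfaces) of the data-splitter idea:
# every write lands on the local disk and is mirrored to iSCSI storage.

class WriteSplitter:
    """Duplicates each host write to a primary and a mirror device."""

    def __init__(self, primary, mirror):
        self.primary = primary    # direct- or SAN-attached volume
        self.mirror = mirror      # iSCSI-attached storage cluster

    def write(self, offset, data):
        self.primary[offset] = data   # normal write path
        self.mirror[offset] = data    # continuous-protection copy

primary, mirror = {}, {}
splitter = WriteSplitter(primary, mirror)
splitter.write(4096, b"payroll record")
assert primary == mirror              # mirror tracks primary in step
```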
Once the data is on the iSCSI storage, snapshots of it can be captured frequently. Initially, these snapshots require virtually no additional storage; as a snapshot ages, its storage requirement grows, but very slowly. Where iSCSI really shines is recovery. With tape-based or even disk-based backups, restoring data to the downed server can take hours, if not days; restores typically take 10 times longer than backups. Additionally, the data is typically written to some type of RAID 5 system that must generate parity, further slowing recovery. Disk-based systems eliminate the tape location and mounting issues, but still suffer from the time required to physically copy the data from the backup disk to the primary disk.
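The reason a fresh snapshot costs almost nothing is copy-on-write: old blocks are preserved only when they are later overwritten. A simplified Python model (the block map is illustrative, not any vendor's format):

```python
# Copy-on-write snapshot sketch (simplified): a new snapshot costs almost
# nothing; space is consumed only as live blocks are later overwritten.

volume = {0: b"A", 1: b"B", 2: b"C"}    # live iSCSI volume (block -> data)
snapshot = {}                           # starts empty: near-zero space

def overwrite(block, new_data):
    if block not in snapshot:            # first change since the snapshot?
        snapshot[block] = volume[block]  # preserve the old block once
    volume[block] = new_data

overwrite(1, b"B2")
print(len(snapshot))   # 1 -- the snapshot grows only with changed blocks
print(snapshot)        # {1: b'B'} -- original data, still recoverable
```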
Recovery with continuous data protection is nearly instant: the volume is immediately available via an iSCSI mount. Once the mount is complete, applications can be brought back online and users can resume work. When the problem with the primary storage is corrected, data can be restored to the server in the background. This has the additional benefit of testing the primary system under load, confirming that it will actually work when it is brought back online.
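The recovery sequence can be summarized in a short sketch (the helper names are hypothetical): serve users from the iSCSI mirror first, then run the copy back to the repaired primary as a background task.

```python
# Recovery flow sketch (hypothetical helper names): serve users from the
# iSCSI mirror immediately, then copy data back to the repaired primary
# storage in the background.

import threading

def mount_iscsi_volume(name):
    print(f"{name}: iSCSI volume mounted -- applications back online")

def background_restore(src, dst):
    print(f"copying {src} -> {dst} while users keep working")

mount_iscsi_volume("payroll_mirror")          # near-instant availability
t = threading.Thread(target=background_restore,
                     args=("payroll_mirror", "repaired_primary"))
t.start()                                     # restore runs in background
t.join()
```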
As your needs expand, you can grow this into a full-scale, IP-based SAN. That SAN can provide a complete consolidated storage solution for the host servers in the environment as well as file services, eliminating the need for future file-server purchases.
About the Author
George Crump is the vice president of technology solutions at SANZ Inc. gcrump@sanz.com