
The Virtualization of I/O (Part 2 of 2)

In the second part of our discussion, we take a closer look at two I/O solutions: InfiniBand and 10 GbE.

by Jerome M. Wendt

As companies virtualize physical servers and provision processor and memory resources to specific virtual machines (VMs), delivering I/O to these VMs presents the next set of challenges. Last week (see http://esj.com/enterprise/article.aspx?EditorialsID=3040) we looked at the options with traditional I/O and examined virtual I/O requirements. In the second part of our discussion, we take a closer look at the topologies of two I/O solutions: InfiniBand and 10 GbE.

A Partial Solution Today: 10 GbE

Compared with 1 Gb Ethernet, next-generation 10 Gb Ethernet solutions offer more performance and new management capabilities. Vendors now offer “intelligent NICs” that allow a single card to spawn multiple virtual NICs, making these solutions better suited to virtualized I/O. They can also consolidate storage and network traffic onto a single link, provided the storage is iSCSI (there is currently no bridge to Fibre Channel-attached storage).

Many consider 10 Gb Ethernet the logical choice for consolidating today's 1 Gb Ethernet data networks and 4 Gb Fibre Channel storage networks onto a common network. Ethernet is pervasive, inexpensive, and well understood by businesses; coupled with the additional bandwidth that 10 Gb Ethernet provides, the interface seems a natural fit.

Next-generation Ethernet cards are sold by vendors such as Intel, Neterion, and NetXen and are available in two configurations: standard and intelligent. An intelligent NIC can create multiple virtual NICs (vNICs) and assign each VM its own vNIC with a unique virtual MAC and TCP/IP address. With its own identity on the network, each VM can take advantage of features such as quality of service (QoS) that some TCP/IP networks provide. If companies use a standard 10 Gb NIC, however, all of the VMs on the physical server must share the MAC and TCP/IP address of that single NIC. The additional intelligence comes at a price: an intelligent 10 Gb Ethernet NIC costs at least $1,000 versus roughly $400 for a standard 10 Gb NIC.
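
To make the vNIC concept concrete, here is a rough software analogue, assuming a Linux host: macvlan interfaces created with the standard ip utility give each guest its own MAC address on top of one shared physical port. This is an illustration only, not how the intelligent NICs described above work internally (they carve out vNICs in hardware), and the interface names, VM names, and MAC addresses are invented.

# Software sketch of per-VM virtual NICs using Linux macvlan (illustration only).
import subprocess

PARENT_IF = "eth0"  # assumed name of the physical 10 Gb Ethernet interface

def create_vnic(name: str, mac: str) -> None:
    """Create a macvlan interface with its own MAC address on top of the parent NIC."""
    subprocess.run(
        ["ip", "link", "add", "link", PARENT_IF, "name", name,
         "address", mac, "type", "macvlan", "mode", "bridge"],
        check=True,
    )
    subprocess.run(["ip", "link", "set", name, "up"], check=True)

# One virtual NIC per VM, each with a locally administered MAC address.
for i, vm in enumerate(["vm-web", "vm-db", "vm-app"], start=1):
    create_vnic(f"vnic{i}", f"02:00:00:00:00:{i:02x}")
    print(f"{vm} -> vnic{i}")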

While 10 Gb Ethernet provides a strong follow-on to 1 Gb Ethernet, as an I/O virtualization approach it has limitations that may restrict its applicability in enterprise data centers. For one thing, sharing a common connection for all application data is not always acceptable in these environments: IT managers often require physically distinct networks to maintain the security and integrity of application data and to guarantee that agreed-upon application service level agreements (SLAs) are met.

Storage presents another challenge. Fibre Channel remains the most common storage transport in enterprise data centers. Fibre Channel over Ethernet (FCoE) standards are under development and expected in the near future, but even after they are adopted it will take time before FCoE-based storage systems are available and deployed in customer environments, assuming that happens at all. Companies can connect their storage systems over Ethernet networks using iSCSI today, but using TCP/IP for storage networking becomes problematic in high-performance environments.

When too many NICs try to access the same storage resources on the same network port, Ethernet links drop packets and force retransmissions, slowing high-performance applications. On switched Ethernet networks, the queues on Ethernet ports can fill up, at which point the switch starts to drop packets and the servers must retransmit them. In storage networks where a physical server hosts multiple VMs, the odds of this occurring increase, because read- and write-intensive applications transmit and receive more data on a storage network than they do on a data network. The higher throughput found in storage networks further compounds the problems of using TCP/IP.
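
To see why even modest loss rates matter at these speeds, the back-of-the-envelope sketch below applies the widely cited Mathis approximation for a single TCP flow (throughput ≈ MSS / (RTT × √loss) × 1.22). The segment size and round-trip time are assumptions chosen to resemble a data-center network, not measurements from any specific environment.

# Rough upper bound on single-flow TCP throughput under packet loss
# (Mathis approximation); all constants below are illustrative assumptions.
from math import sqrt

MSS_BYTES = 1460        # typical Ethernet TCP segment size
RTT_SECONDS = 0.0005    # assumed 0.5 ms round trip inside a data center

def tcp_ceiling_gbps(loss_rate: float) -> float:
    """Approximate the per-flow throughput ceiling in Gb/s for a given loss rate."""
    rate_bytes_per_sec = (MSS_BYTES / (RTT_SECONDS * sqrt(loss_rate))) * 1.22
    return rate_bytes_per_sec * 8 / 1e9

for loss in (1e-6, 1e-5, 1e-4, 1e-3):
    print(f"loss {loss:.0e}: ~{tcp_ceiling_gbps(loss):.2f} Gb/s per flow")

With these assumed values, a drop rate of just one packet in ten thousand already pulls a single flow's ceiling well below the 10 Gb line rate, which is the effect described above.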

The InfiniBand Option

InfiniBand is a competing topology to Ethernet that has largely been deployed in high-performance computing (HPC) environments; it is now emerging as a viable option for enterprise business computing. Unlike Ethernet, InfiniBand was designed from the outset as a comprehensive system area network providing high-speed I/O, which makes it well suited to the high-performance communication needed by the Linux server clusters used in HPC. Until now, most businesses did not demand the bandwidth and high-speed communication InfiniBand provides, so InfiniBand support was not widely pursued outside the HPC community.

New options are emerging, however, that employ the high-speed transport capabilities of InfiniBand while retaining Ethernet and Fibre Channel as the network and storage interconnects. This hybrid approach places InfiniBand host channel adapters (HCAs) in the servers as the hardware interface but presents virtual NICs and host bus adapters (HBAs) to the operating system and applications. An external top-of-rack switch connects to all servers via InfiniBand links and connects to the networks and storage via conventional Ethernet and Fibre Channel, as sketched below.
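
One way to picture this hybrid layout is as a mapping from the virtual devices a server's operating system sees to physical breakout ports on the top-of-rack gateway. The short Python model below is purely conceptual; the class names, hostnames, and port labels are invented and do not correspond to any vendor's management interface.

# Conceptual model of the hybrid topology: the OS sees virtual NICs and HBAs,
# all carried over one InfiniBand uplink to a top-of-rack gateway that breaks
# traffic out to conventional Ethernet and Fibre Channel ports.
from dataclasses import dataclass, field
from typing import List

@dataclass
class VirtualDevice:
    name: str            # what the OS sees, e.g. "vnic0" or "vhba0"
    kind: str            # "ethernet" or "fibre_channel"
    gateway_port: str    # physical Ethernet or FC port on the top-of-rack gateway

@dataclass
class Server:
    hostname: str
    ib_uplink: str                               # single InfiniBand link to the gateway
    devices: List[VirtualDevice] = field(default_factory=list)

esx01 = Server(
    hostname="esx01",
    ib_uplink="ib0",
    devices=[
        VirtualDevice("vnic0", "ethernet", "eth-1/1"),      # data LAN breakout
        VirtualDevice("vhba0", "fibre_channel", "fc-2/1"),  # SAN fabric breakout
    ],
)
print(f"{esx01.hostname}: {len(esx01.devices)} virtual devices over {esx01.ib_uplink}")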

The approach capitalizes on one of the distinctive benefits of InfiniBand's design: the interface was originally intended to virtualize nearly every type of I/O found in data centers, including both Fibre Channel and Ethernet, a capability that 10 Gb Ethernet does not yet offer. Because vendors are already shipping 10 Gb and 20 Gb InfiniBand HCAs, and the InfiniBand roadmap calls for 40 Gb and 80 Gb throughput, the interface has ample bandwidth today with more on the way.

Adding to InfiniBand's appeal, it is a highly reliable, switched-fabric topology in which every transmission begins and ends at a channel adapter. This allows InfiniBand to deliver much higher effective throughput than 10 Gb Ethernet, whose effective throughput is generally viewed as topping out at roughly 30 percent of its rated bandwidth.

InfiniBand offers QoS features similar to 10 Gb Ethernet in that it can prioritize certain types of traffic at the switch level, but administrators can also reserve specific amounts of bandwidth for virtualized Ethernet and FC I/O traffic on the HCA. Bandwidth can then be assigned to specific VMs based on their vNIC or vHBA, so administrators can increase or throttle back the amount of bandwidth a VM receives based on its application's throughput requirements.
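
The sketch below models that reservation logic, assuming a hypothetical 20 Gb HCA shared by a few VMs. It is not any vendor's actual management API; the capacity figure, VM names, and per-device shares are made up, and the point is only that vNIC and vHBA shares can be raised or throttled without oversubscribing the shared link.

# Conceptual per-VM bandwidth reservation on a shared HCA (hypothetical values).
HCA_CAPACITY_GBPS = 20.0

# Reserved bandwidth per virtual device (vNIC for network, vHBA for storage).
reservations = {
    ("vm-web", "vnic"): 2.0,
    ("vm-web", "vhba"): 1.0,
    ("vm-db",  "vnic"): 1.0,
    ("vm-db",  "vhba"): 6.0,   # storage-heavy VM gets a larger vHBA share
}

def set_reservation(vm: str, device: str, gbps: float) -> None:
    """Raise or throttle one VM's share, refusing to oversubscribe the HCA."""
    proposed = sum(v for k, v in reservations.items() if k != (vm, device)) + gbps
    if proposed > HCA_CAPACITY_GBPS:
        raise ValueError(f"request would oversubscribe the {HCA_CAPACITY_GBPS} Gb HCA")
    reservations[(vm, device)] = gbps

set_reservation("vm-db", "vhba", 8.0)   # give the database VM more storage bandwidth
print(reservations)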

Adoption in HPC environments has also driven down the cost of InfiniBand HCAs: 10 Gb and 20 Gb InfiniBand HCAs are priced as low as $600 and are available from several vendors, including Cisco Systems, Mellanox Technologies, and QLogic Corporation. One new component enterprise companies may need in order to connect their existing Ethernet data networks and FC storage networks to server-based InfiniBand networks is an I/O gateway such as Xsigo Systems' VP780 I/O Director. These solutions provide virtual I/O while consolidating Ethernet and Fibre Channel traffic onto a single InfiniBand connection to the server. Because traffic to the respective networks is brought out through separate ports, the approach keeps the network connections logically and physically distinct.

Ten Gb Ethernet and InfiniBand present the best options for companies that need to improve network I/O management for the growing number of VMware servers in their environments. Both topologies minimize the number of network cards needed to virtualize I/O while providing sufficient bandwidth to support the VMs on the physical server. For data centers that must guarantee application performance, require segregated networks, and need the highest levels of application and network uptime, however, InfiniBand is emerging as the logical topology for virtualizing I/O.

- - -

Jerome M. Wendt is lead analyst and president of DCIG Inc., a company that provides hardware and software analysis for the storage industry. You can reach the author at jerome.wendt@dciginc.com or visit his company’s blog at http://www.dciginc.com.
