In-Depth

The Virtualization of I/O (Part 1 of 2)

As companies virtualize physical servers and provision processor and memory resources to specific virtual machines (VMs), delivering I/O to these VMs presents the next set of challenges. We examine these and look at two I/O solutions: InfiniBand and 10 GbE.

by Jerome M. Wendt

Network I/O is emerging as the next obstacle to delivering on the promise of server virtualization. Virtualized servers demand more network bandwidth and need connections to more networks and more storage to accommodate the multiple applications they host. Furthermore, when applications operate in an environment where all resources are shared, a new question arises: How do you ensure the performance and security of a specific application? For these reasons, IT managers are looking for new answers to the I/O question.


Why Virtualization Is Different

Traditional servers do not encounter the same I/O issues as VMs. The objective of server virtualization is to increase server hardware utilization by creating a flexible pool of compute resources that can be deployed as needed. Ideally, any VM should be able to run any application. This has two implications for server I/O:

  • Increased demand for connectivity: Because any VM may be placed on any host, each virtualized server needs physical connectivity to every network its VMs may use, including Fibre Channel (FC) SANs, secured corporate Ethernet networks, and unsecured Ethernet networks exposed to the Internet. Each of these connections must remain isolated and secure.

  • Increased demand for bandwidth: In the traditional data center, a server may use only 10 percent of a processor’s capacity. Loading more applications onto the same server can push utilization beyond 50 percent, and I/O utilization rises in step, revealing new bottlenecks in the I/O path (a quick sketch of the arithmetic follows this list).
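
As a back-of-the-envelope illustration, the following Python sketch uses hypothetical figures: five lightly loaded servers, each driving 10 percent of a 1 Gb/s link, consolidated onto one physical host.

    # Back-of-the-envelope consolidation math (hypothetical figures).
    workloads = 5        # servers consolidated onto one physical host
    link_gbps = 1.0      # each originally had its own 1 Gb/s link
    utilization = 0.10   # each used roughly 10% of that link

    per_server_io = link_gbps * utilization      # 0.1 Gb/s per workload
    consolidated_io = workloads * per_server_io  # 0.5 Gb/s on one host

    print(f"Aggregate I/O demand: {consolidated_io:.1f} Gb/s "
          f"({consolidated_io / link_gbps:.0%} of a single 1 Gb/s link)")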


Options with Traditional I/O

Traditional I/O leaves administrators two options to accommodate virtualization demands. The first is to share the pre-existing I/O among the virtual machines. For several reasons, this is not likely to work. Applications may cause congestion during periods of peak demand, such as when VM backups occur. Performance problems are difficult to remedy because diagnostic tools for VMware are still relatively immature and not widely available. Even if an administrator identifies a bottleneck’s source, corrective action may require purchasing more network cards or rebalancing application workloads across the VMs.

Another issue with pre-existing I/O is the sheer number of connections virtualization requires. Beyond the numerous connections to data networks, virtualized servers also require dedicated interconnects for management and virtual machine migration. Servers are also likely to require connectivity to external storage; if your shop uses an FC SAN, that means FC cards in every server.

For these reasons, most IT managers end up needing more connectivity, which raises the second option: add multiple network and storage I/O cards to each server. While this is a viable option, it is not always an attractive one. Virtualization users find they need anywhere from six to 16 I/O connections per server, which adds significant cost and cabling. More important, it drives the use of 4U-high servers simply to accommodate the needed I/O cards. The added cards and larger servers increase cost, space, and power requirements to the point where I/O may cost more than the server itself.

Server blades present a different challenge: accommodating high levels of connectivity may prove to be either costly (sometimes requiring a double-wide blade) or impossible (depending on the requirements).


What’s in Store for Virtual I/O

Running multiple VMs on a single physical server requires a technology that addresses the following concerns:

  • Avoids the need to run a slew of network cables into each physical server

  • Maintains the isolation of multiple, physically distinct networks

  • Provides sufficient bandwidth to alleviate performance bottlenecks

Emerging I/O virtualization technologies now address these concerns. Just as multiple VMs run on a single physical server, virtual I/O allows a single physical I/O card to create multiple virtual NICs (vNICs) and virtual FC host bus adapters (vHBAs) that behave exactly as physical Ethernet and FC cards do in a physical server. These vNICs and vHBAs share a common physical I/O card, yet the network and storage connections they create remain logically distinct on a single cable.
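
One way to picture this model is as a card object that hands out virtual devices, each with its own identity and network binding, all multiplexed over one physical link. The Python sketch below is purely illustrative; the class and attribute names are hypothetical and do not correspond to any vendor’s API.

    # Minimal model of a virtual I/O card (all names are hypothetical).
    from dataclasses import dataclass, field

    @dataclass
    class VirtualDevice:
        name: str     # e.g., "vnic0" or "vhba0"
        kind: str     # "vNIC" (Ethernet) or "vHBA" (Fibre Channel)
        network: str  # the logically distinct network it attaches to
        address: str  # MAC address for a vNIC, WWPN for a vHBA

    @dataclass
    class VirtualIOCard:
        physical_link: str                      # the one shared cable
        devices: list = field(default_factory=list)

        def add_device(self, dev: VirtualDevice) -> None:
            # The VM sees an ordinary NIC or HBA; traffic from every
            # virtual device is multiplexed over the same physical link.
            self.devices.append(dev)

    card = VirtualIOCard(physical_link="slot1/port0")
    card.add_device(VirtualDevice("vnic0", "vNIC", "corp-lan", "02:00:00:00:00:01"))
    card.add_device(VirtualDevice("vnic1", "vNIC", "dmz", "02:00:00:00:00:02"))
    card.add_device(VirtualDevice("vhba0", "vHBA", "fc-san-a", "50:00:00:00:00:00:00:01"))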


Virtual I/O Requirements

For virtual I/O to work in conjunction with VMs, the physical I/O card and its device driver software must deliver several specific capabilities, including:

  • Virtual resources (vNICs and vHBAs) that can be deployed without a server reboot

  • Single high-bandwidth transport for storage and network I/O

  • Quality of service management

  • Support for multiple operating systems

  • Cost effectiveness

While these attributes serve both traditional and virtualized server deployments, the demands of virtualization make them especially important. Virtual NICs and HBAs, for example, eliminate the need to deploy numerous cards and cables to each virtualized server. Unlike physical NICs and FC HBAs, vNICs and vHBAs can be created dynamically and presented to VMs without rebooting the underlying physical server.

Similarly, a single transport can reduce the number of physical I/O cards and cables a server needs to as few as two. A primary objective of virtual I/O is to provide a single very-high-speed link to a server (or two links for redundancy) and to dynamically allocate that bandwidth as applications require it. Combining storage and network traffic increases the utilization of that link, which ultimately reduces costs and simplifies the infrastructure. Because each physical I/O link can carry all the traffic the server can theoretically generate, multiple cables are no longer needed.
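
Simple arithmetic shows why this consolidation is plausible. In the Python sketch below (the traffic figures are hypothetical), the sum of five formerly dedicated connections fits comfortably within a single 10 Gb/s link, leaving headroom for peaks.

    # Several dedicated connections folded onto one shared link
    # (traffic figures in Gb/s are hypothetical).
    dedicated = {
        "corp-lan":     0.6,
        "dmz":          0.3,
        "vm-migration": 1.0,
        "management":   0.1,
        "fc-san":       2.0,
    }

    shared_link_gbps = 10.0
    demand = sum(dedicated.values())

    print(f"{len(dedicated)} dedicated links -> 1 shared link")
    print(f"Peak demand of {demand:.1f} Gb/s uses "
          f"{demand / shared_link_gbps:.0%} of a 10 Gb/s link")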

Quality of service (QoS) goes hand in hand with a single transport, because sharing a resource implies rules that govern how the resource is allocated. With virtualization, bandwidth controls become particularly useful: used in conjunction with virtual I/O, they can manage the performance of specific virtual machines. In production deployments, critical applications can receive the same guaranteed I/O they would get from dedicated connections, without the dedicated cabling.
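
One common form such rules can take is a guaranteed minimum per virtual device, with any leftover bandwidth divided by weight. The single-pass Python sketch below illustrates the idea; the policy names and values are illustrative and not drawn from any specific product.

    # Illustrative QoS policy: each device gets a guaranteed minimum,
    # then leftover bandwidth is shared in proportion to weight.
    def allocate(link_gbps, policies, demands):
        """policies: name -> (min_gbps, weight); demands: name -> Gb/s."""
        # Satisfy each guarantee first (capped at actual demand).
        alloc = {n: min(demands[n], p[0]) for n, p in policies.items()}
        leftover = link_gbps - sum(alloc.values())
        total_weight = sum(p[1] for p in policies.values())
        # Single-pass share of the remainder; a real scheduler would
        # also redistribute any share a device cannot use.
        for name, (_, weight) in policies.items():
            extra = leftover * weight / total_weight
            alloc[name] = min(demands[name], alloc[name] + extra)
        return alloc

    policies = {"oltp-db": (2.0, 3), "web": (1.0, 2), "backup": (0.5, 1)}
    demands  = {"oltp-db": 4.0, "web": 3.0, "backup": 8.0}
    print(allocate(10.0, policies, demands))
    # oltp-db and web get their full demand; backup is throttled.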

Finally, the compatibility and cost requirements are straightforward. For I/O to be consolidated, it must function across servers running different operating systems. It must also be cost effective: companies will generally pursue an alternative only if it is financially feasible and at least as cost effective as their current approach.


Virtual I/O Choices

With these requirements in mind, companies now have a choice of two transport technologies to meet their virtual I/O needs: 10 Gb Ethernet and InfiniBand.

We will explore these options next week in the second part of this article.

- - -

Jerome M. Wendt is lead analyst and president of DCIG Inc., a company that provides hardware and software analysis for the storage industry. You can reach the author at jerome.wendt@dciginc.com or visit his company’s blog at http://www.dciginc.com.
