
Mainframe Lessons Guide Virtualization, Partitioning in an "On Demand" World

Partitioning and virtualization bring the mainframe's flexibility to distributed systems and serve as the building blocks for utility-based (capacity-on-demand) solutions.

"On demand" or utility computing is being touted by system vendors as a more flexible and cost-effective approach to server acquisitioning and operations. Many organizations are intrigued by the possibilities and are anxious to set the "on demand" vision in motion. In line with this trend, recent technology innovations are bringing mainframe-like system flexibility to the distributed systems world. These technologies, specifically partitioning and virtualization, will be the building blocks for the newer "utility"-based solutions. Today, these technologies can provide practical benefits through server consolidation, application isolation and fault tolerance, and more effective utilization of server resources than typical in a distributed systems environment.

Fear not: good performance and capacity-planning practices for resource management will not be eliminated even when utility-computing visions come to fruition. Rather, these practices will enable organizations to optimize the IT environment and address business-service objectives. In fact, some might say that companies that don’t understand their business performance or resource requirements will be at the mercy of their hardware vendors or service providers to make provisioning decisions for them.

The Allure of "On Demand" and Mainframe Leadership Reflections

Traditionally, new distributed-systems application services are delivered to the business as a package that includes dedicated IT infrastructure components. The only shared components are typically the network infrastructure and perhaps a few system peripherals, such as printers and storage devices. This results in minimal optimization, resilience, and sharing of IT infrastructure, an approach at odds with many organizations' focus on efficiency and resource maximization.

Part of the solution lies in the strategies developed on the workhorse of the enterprise: the mainframe. Where early implementations of online transaction processing (OLTP) applications favored a single CICS region for each application, systems evolved quickly to a share-everything parallel-sysplex environment. Business units didn’t "own" a mainframe, nor did they wish to own disk farms or even partitions. They demanded the system resources they needed, when they needed them, without constantly re-evaluating their requirements. A parallel-sysplex implementation allowed mainframe system managers to provide dynamic-capacity provisioning and higher availability than would be possible with a share-nothing, application-centric IT infrastructure.

Distributed systems evolved in part because business users, put off by high chargeback costs, pulled out of the mainframe pool and created their own mini data centers, provisioning every component themselves. While this approach addressed the burgeoning application requirements of the client-server era, the costs of maintaining multi-server infrastructures and reliable business service levels have become burdensome. The mainframe lesson is beginning to be heard again in the emergence of adaptive, on-demand, and real-time infrastructure strategies.

Approaches Toward “On Demand”

If "on demand computing" has an essential goal of providing a virtual pool of resources as needed, what mechanisms help you get there? The following initiatives all aim to provide policy-based, dynamic, cost-efficient provisioning, resource optimization, and predictable service availability.

  • Provisioning provides a way to hide and minimize the impact of IT change on the business user. It allows IT to focus on business alignment with lower risk of failure, based on a standard set of policies for servers, software, and peripheral acquisition. In a sense, this is the use case for classic capacity planning.
  • Virtualization enables a group of resources (servers, applications, databases, networks) to behave as a single resource from which all users can draw. The goal is typically high availability, load balancing, increased utilization, improved scalability, flexible capacity-on-demand, and simplified systems management; a brief sketch of this pooling idea appears after this list.
  • Grid computing pulls resources together, dynamically utilizing the IT infrastructure from the mainframe to the desktop, treating all distributed management components as shared resource entities. In a sense, grid computing is an example of virtualization “out,” creating a large server image from a number of interacting components.
  • Autonomic computing is a self-healing, self-managing, policy-based infrastructure. It requires definition of a process to dynamically make reactive tuning changes in line with business demands. Some key components required include common system administration, autonomic monitoring, policy administration, transaction measurement, and problem determination/resolution.
  • On Demand/Utility computing, a variant of autonomic computing, requires that the application layer, business processes, and end users be considered. Applications must be segmented into services where the server is a virtualized resource and available "just in time."
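
To make the virtualization item above concrete, here is a minimal Python sketch of the shared-pool idea: server images draw CPU and memory from a common pool instead of owning dedicated hardware. The ResourcePool class, its methods, and the capacity figures are illustrative assumptions, not any vendor's API.

    # Hypothetical illustration: virtual server images drawing from one shared pool.
    class ResourcePool:
        def __init__(self, cpus, memory_gb):
            self.cpus = cpus            # total physical CPUs in the pool
            self.memory_gb = memory_gb  # total physical memory in the pool
            self.images = {}            # image name -> (cpus, memory_gb)

        def provision(self, name, cpus, memory_gb):
            """Carve a virtual server image out of the remaining pool capacity."""
            used_cpus = sum(c for c, _ in self.images.values())
            used_mem = sum(m for _, m in self.images.values())
            if used_cpus + cpus > self.cpus or used_mem + memory_gb > self.memory_gb:
                raise RuntimeError("pool exhausted; cannot provision " + name)
            self.images[name] = (cpus, memory_gb)

        def release(self, name):
            """Return an image's resources to the pool (e.g., a retired test server)."""
            self.images.pop(name, None)

    pool = ResourcePool(cpus=8, memory_gb=64)       # one eight-way host
    pool.provision("mail", cpus=2, memory_gb=8)
    pool.provision("web", cpus=2, memory_gb=16)
    pool.provision("qa-test", cpus=1, memory_gb=8)  # short-lived test environment
    pool.release("qa-test")                         # capacity returns to the pool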

On-demand is a long-term business management strategy that will evolve as these initiatives and supporting technologies mature.

Benefits of Virtualization and Partitioning

A concept common to “on demand” initiatives is the containerization of individual application elements that provide a standard business service. By profiling common services (mail servers, database servers, Web servers) and creating standard software profiles for each type of application server, the individual elements can be merged and treated as a whole.

Containerization of individual elements, in conjunction with partitioning or virtualization of the "server hardware," provides greater flexibility and increased utilization of hardware resources. It also delivers solid benefits in reduced hardware, operational, and facilities costs. The application server essentially "sees" a generic set of system resources whose physical details are hidden even from the operating system, thereby increasing the options for hardware deployment.
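
As one way to picture such standard software profiles, the short Python sketch below builds container specifications from per-type profiles. The profile fields, package names, and the container_spec helper are hypothetical, not a particular tool's schema.

    # Hypothetical standard software profiles, keyed by server type.
    STANDARD_PROFILES = {
        "web":      {"os": "linux", "packages": ["httpd"],      "cpus": 2, "memory_gb": 4},
        "database": {"os": "linux", "packages": ["postgresql"], "cpus": 4, "memory_gb": 16},
        "mail":     {"os": "linux", "packages": ["postfix"],    "cpus": 1, "memory_gb": 2},
    }

    def container_spec(name, server_type, overrides=None):
        """Build a container/partition spec from the standard profile for its type."""
        spec = dict(STANDARD_PROFILES[server_type])
        spec.update(overrides or {})
        spec["name"] = name
        return spec

    # Elements built from common profiles can then be merged and managed as a whole.
    fleet = [container_spec("web01", "web"),
             container_spec("db01", "database", {"memory_gb": 32})]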

Virtualization software creates a server image that can draw upon resources shared with other server images, thus improving overall utilization and eliminating redundant expenses such as network interface hardware and Fibre Channel bus adapters. In some cases, consolidation of servers from single-CPU hardware to an eight-way architecture can reduce hardware costs by up to 50 percent over a three-year period. Reduced power consumption and space requirements are additional benefits of hardware consolidation.
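
The actual savings depend entirely on local pricing, so the three-year comparison below uses assumed, illustrative cost figures purely to show how such a calculation might be framed.

    # Hypothetical three-year hardware-cost comparison; all figures are assumed.
    standalone_servers = 8
    cost_per_single_cpu_server = 6_000   # purchase plus three-year maintenance (assumed)
    cost_of_eight_way_host = 30_000      # purchase plus three-year maintenance (assumed)

    before = standalone_servers * cost_per_single_cpu_server   # 48,000
    after = cost_of_eight_way_host                             # 30,000
    savings_pct = 100 * (before - after) / before
    print("Hardware savings over three years: %.0f%%" % savings_pct)   # ~38% in this example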

In addition, the ability to manage more servers from a central point can reduce administrative overhead for system maintenance activities and allow scarce IT resources to be redeployed to higher value business services.

Aside from cost reduction through server consolidation, virtualization provides the ability to rapidly install and deploy server resources to meet business requirements. Short-term system requirements for development, testing, and quality assurance, as well as short-lived business initiatives, can be met by drawing on underutilized servers without adding clutter to the IT shop. The virtual server provides fault tolerance and isolation, so individual users may be unaware that they are running on shared resources.

Hardware partitioning provides benefits similar to those of virtualization by allowing the containerization of application resources out of large, highly scalable, multi-CPU (32, 64, and up) system frames such as the HP SuperDome, Sun Sunfire, and IBM pSeries Regatta architectures. While each vendor’s architecture has specific implementation requirements, in many cases the ability to isolate and support multiple business services on a single host can be cost-justified by the reduced overhead costs of a centralized support structure.

In either case, the effective use of server containers requires a methodology that lets the IT resource provider (i.e., the capacity planner) understand current application requirements, make an initial assessment of which applications are suitable for containerization (and size their system requirements), and measure and manage application responsiveness to ensure the satisfaction of the business user.

A Process for Virtualization

The first step in the process is to identify the application assets and performance characteristics of current servers that may be candidates for containerization. Systems-management software can be used to identify underutilized resources and, more importantly, to determine which work has business value. The process of workload characterization relates system processes, users, and systems to business processes, and can uncover unnecessary activity that may be eliminated in the conversion to virtualized servers. One legacy of the client-server era is that a number of rogue applications with little or no business value often run daily on departmental servers.
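
A minimal sketch of this first step might look like the Python fragment below. The measurement records, the business-process mapping, and the 10 percent utilization threshold are all assumed values that systems-management data and site policy would supply in practice.

    # Hypothetical measurement data; in practice this comes from systems-management tools.
    servers = [
        {"name": "hr-app01", "avg_cpu_pct": 6,  "business_process": "payroll"},
        {"name": "dept-x01", "avg_cpu_pct": 3,  "business_process": None},        # rogue work
        {"name": "web01",    "avg_cpu_pct": 55, "business_process": "storefront"},
    ]

    UNDERUTILIZED_PCT = 10  # assumed site policy for "underutilized"

    candidates = [s["name"] for s in servers
                  if s["business_process"] and s["avg_cpu_pct"] < UNDERUTILIZED_PCT]
    no_business_value = [s["name"] for s in servers if s["business_process"] is None]

    print("Containerization candidates:", candidates)
    print("Review for elimination:", no_business_value)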

The second step of the process is to size the new applications and test deployment scenarios through the "what-if" planning capabilities of capacity planning and management software. This allows the planner to explore deployment options and measure the impact and overhead of mixing virtual servers under one hosted environment.
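
A simple "what-if" check of this kind could be sketched as follows. The per-workload CPU demands, the 10 percent virtualization overhead, and the 70 percent utilization ceiling are assumptions a planner would refine with real capacity-planning tools.

    # "What-if" check: do these candidate workloads fit on one virtualized host?
    peak_cpu_demand = [0.6, 0.9, 1.4, 0.8]  # CPUs used by each candidate at peak (assumed)
    virtualization_overhead = 1.10          # assume roughly 10 percent hypervisor overhead
    host_cpus = 8
    utilization_ceiling = 0.70              # keep planned utilization under 70 percent

    planned_cpus = sum(peak_cpu_demand) * virtualization_overhead
    utilization = planned_cpus / host_cpus
    verdict = "fits" if utilization <= utilization_ceiling else "re-plan: exceeds ceiling"
    print("Planned utilization: %.0f%% (%s)" % (utilization * 100, verdict))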

Finally, in the deployment stage, best-practice methodology requires the ability to measure and manage resources in response to changes in business cycles. Managing system utilization is no different in a virtualized world. However, the ability to ensure application responsiveness to the business user is even more important to a successful virtualization strategy. Regular reports to the application and business users will assuage fears about the loss of an independent infrastructure. The ability to identify performance bottlenecks before they affect service levels is more critical than ever.
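
A rudimentary sketch of such a responsiveness check appears below. The workload names, response-time figures, service-level objectives, and 80 percent warning threshold are illustrative assumptions.

    # Flag virtual servers drifting toward their response-time objectives before
    # the business user feels it.
    workloads = {
        "orders-web": {"response_ms": 420, "objective_ms": 500},
        "hr-reports": {"response_ms": 180, "objective_ms": 1000},
    }
    WARN_FRACTION = 0.8  # warn at 80 percent of the objective

    for name, w in workloads.items():
        if w["response_ms"] >= WARN_FRACTION * w["objective_ms"]:
            print("WARNING: %s at %d ms is approaching its %d ms objective"
                  % (name, w["response_ms"], w["objective_ms"]))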

Summary

Virtualization and partitioning technologies are the building blocks for future provisioning strategies, including utility or capacity-on-demand computing. At the same time, these technologies can be leveraged today to provide isolation and fault tolerance in mixed-workload systems and to enhance utilization of new and existing system resources. They also allow for temporary provisioning of development and test environments and for consolidation of underutilized application servers, for maximum IT return on investment.
