Clustering Windows NT: A Plethora of Good Options
Test Track
The growth of Windows NT Server as a platform for mission-critical applications is driving interest in boosting system and application availability. One approach gaining popularity at Windows NT sites is clustering technology.
ENT and Client/Server Labs tested five products from four vendors, each addressing a different mix of clustering requirements. We quickly found that clustering products are like opinions: Everybody has one, and they are often very different.
For testing purposes, we chose a scenario that typifies a business application that needs a high level of availability in today's 24x7 world: a Web serving environment. We installed and operated each product on an isolated test network. The configuration generally consisted of at least two servers to support the application and workstations to act as clients using the Web site.
For most of the products tested, we used a pair of IBM Netfinity 5500 servers running Windows NT Server 4.0 with Service Pack 4 installed. One exception was the Microsoft Cluster Server test, for which Microsoft Corp. shipped a preconfigured Aviion "cluster-in-a-box" system from Data General Corp. The cluster-in-a-box configuration consisted of a large chassis loaded with six dual-processor Pentium II Xeon systems, a Clariion disk subsystem, power protection and a flat screen display console. The Data General machines were preconfigured with Windows NT Server 4.0, Enterprise Edition.
We focused our efforts on determining how difficult it was to configure each clustering environment, what level of flexibility the systems afforded and how they behaved in failure situations. Failures were simulated by events such as disconnecting network adapters, halting services or shutting down servers. We did not simulate catastrophic failures, such as power loss or system damage.
Microsoft Cluster Server
Microsoft chose to deliver a fully configured system, so we did not go through the setup and configuration phase. But based on prior experience building a Microsoft Cluster Server environment, we found this bundled approach to be convenient. It would not be a necessity, however, for a moderately literate NT shop.
Microsoft Cluster Server maintains application availability by allowing an application to run on either of two servers. The two member servers are connected to a central disk storage system -- in this configuration, a Clariion disk array. Each server owns some of the storage space on the connected array. While providing for rapid data access between members, this strategy also leaves the disk subsystem as a single point of failure.
Each member server requires at least two network interfaces. One is connected to the corporate network, and the other is used as a private communications link, called a heartbeat, between the two halves of the cluster.
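To illustrate the concept -- not the product's actual implementation -- a minimal heartbeat sketch in Python follows: each node sends periodic UDP packets over the private link and declares its partner down after several consecutive misses. The addresses, port and timing values are assumptions made for the example.

    import socket
    import threading
    import time

    PARTNER = ("10.0.0.2", 9000)   # assumed private-link address of the other node
    PORT = 9000                    # heartbeat port on the private link
    INTERVAL = 1.0                 # seconds between heartbeats
    MISSED_LIMIT = 5               # consecutive misses before declaring the partner down

    def send_heartbeats():
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        while True:
            sock.sendto(b"alive", PARTNER)
            time.sleep(INTERVAL)

    def watch_partner(on_failure):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("", PORT))          # listen on the private-link port
        sock.settimeout(INTERVAL)
        missed = 0
        while missed < MISSED_LIMIT:
            try:
                sock.recvfrom(16)      # any packet from the partner counts as a beat
                missed = 0
            except socket.timeout:
                missed += 1
        on_failure()                   # too many missed beats; begin taking over resources

    threading.Thread(target=send_heartbeats, daemon=True).start()
    watch_partner(lambda: print("partner down -- begin failover"))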
A snap-in for the Microsoft Management Console manages the cluster. Setting up an application for service by the cluster is a multistep process. First, the administrator defines individual resources, such as data storage devices, applications, NT services and IP addresses. Then the resources for an application are gathered into a resource group. Finally, the resource group is assigned to one server or the other.
For our test, we created a resource group that included an IP address, a disk storage device and the WWW Publishing service that is part of NT's Internet Information Server. Once the group was set up, we activated it and connected test clients to our Web site through the group's IP address.
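Conceptually, such a group is an ordered set of dependent resources: the disk and the IP address must be online before the Web service can start, and they come offline in the reverse order. The rough Python sketch below models that structure; the class and resource names are ours, not Microsoft's API.

    # A resource group modeled as a list of resources kept in dependency order.
    # Bringing the group online walks the list forward; taking it offline walks
    # it in reverse, so a service never runs without its disk and address.
    class Resource:
        def __init__(self, name, depends_on=()):
            self.name = name
            self.depends_on = list(depends_on)

        def online(self):
            print("bringing online:", self.name)

        def offline(self):
            print("taking offline:", self.name)

    disk = Resource("shared disk")
    address = Resource("cluster IP address", depends_on=[disk])
    web = Resource("WWW Publishing service", depends_on=[disk, address])
    group = [disk, address, web]               # dependency order

    def bring_online(group):
        for resource in group:
            resource.online()

    def take_offline(group):
        for resource in reversed(group):
            resource.offline()

    bring_online(group)        # what the surviving node does after a failover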
We simulated a failure by shutting down the server that owned the resource group. After a few seconds, the other server noticed that the first was no longer operating, took control of the resources for our Web site, and less than a minute later was processing client requests. Beyond the brief outage, the process was invisible to the clients. With Microsoft Cluster Server, any data written to disk becomes available to the backup machine when it takes over processing for the failed machine. Data that was not yet written is lost.
In general, we found the cluster relatively easy to configure and operate. There were two main shortcomings. First, it is difficult to understand the dependencies among resources in a group. For our simple, three-resource group this was not a problem, but with larger groups and larger numbers of groups, things would quickly become more complicated. Second, unexpected conditions can cause a group to be passed repeatedly back and forth between servers. For example, a configuration problem that caused the Web service to terminate would make it fail no matter which machine it ran on, so the group would bounce between servers. Depending on the nature of the service, it may be impossible to stop it manually long enough to fix the problem. Likewise, if an administrator shuts down a service by hand, the cluster may move it to the other machine.
FullTime Cluster
FullTime Software's product got off to a good start with its Concepts Guide. Logically organized, the guide walks the reader through well-crafted explanations of most of the concepts involved in setting up and operating a cluster.
From an operational standpoint, FullTime Cluster is primarily directed at managing applications running in a distributed network of up to 100 server nodes. Data access can be partially managed by manipulating standard Windows NT shares. Also, FullTime Software markets a separate product, FullTime Data, for data clustering, but that product was not tested for this review.
In a cluster of up to 100 servers, each server runs an agent application. Agents are designated as either primary or secondary after installation: Secondary agents carry out reporting and monitoring functions only, while primary agents are responsible for taking action, such as starting and stopping services or deciding how to recover a failed application.
The primary agents communicate with each other across the network to trade information about the state of the cluster and to maintain a shared database of rules. Because the primary agents back one another up, for most networks it is unnecessary to designate many of them. In fact, FullTime recommends no more than two to five primary agents, even in large networks.
For our test network, we installed primary agents on each of our two Netfinity servers, as recommended in the Concepts Guide. The installation was easy, but unforgiving. After the installation we realized we needed to change the IP address on one of our servers. We followed the instructions in the readme file for changing an IP address after installation, but without success: The agent services would not start at the new address. We uninstalled and reinstalled, also without success. Like many applications, the uninstall left files and, presumably, registry entries behind. In the interest of time, we corrected the problem with the extreme step of reinstalling NT on that server, something that would not be acceptable on a production system.
Next, we defined the resources for our Web site, including asking the system to monitor the state of the Web Publishing service, identifying the nodes where the service could run, and defining an IP address to be associated with the resource group. The screens for defining these attributes were clear and the process of creating the rules was easy.
Finally, we configured the Web service on each of the cluster nodes to start manually and to know where to draw data when a request arrived at our cluster IP address, then brought the resource group online on one of the two servers. We chose to set up duplicate copies of the Web data on each machine, but we could just as easily have pointed each system at an NT share and added that share as a cluster-managed resource.
With all of this in place, we began sending client requests to the cluster address. We then shut down the Web service process on the primary server and waited for the failover to occur. To our surprise, the service did not move to the other server, yet client traffic continued unabated. We discovered that we had inadvertently defined our rule to restart the failed service rather than push the resource group to our other server. FullTime had done exactly what we asked and kept the service up.
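Our mistake came down to the difference between a restart rule and a failover rule. The Python sketch below is our own shorthand for the two policies, not FullTime's rule syntax; the Node class and service names are illustrative only.

    # Two ways a monitoring rule can react when a watched service dies.
    # Our first rule behaved like restart_in_place(); what we had intended
    # was fail_over_to_alternate().
    class Node:
        def __init__(self, name):
            self.name = name

        def start(self, service):
            print(f"starting {service} on {self.name}")
            return True                 # stand-in for an actual service start

    MAX_RESTARTS = 3

    def restart_in_place(node, service):
        for _ in range(MAX_RESTARTS):
            if node.start(service):
                return True             # recovered on the same node
        return False

    def fail_over_to_alternate(nodes, failed_node, service):
        for node in nodes:
            if node is not failed_node and node.start(service):
                return True             # the group now lives on another node
        return False

    nodes = [Node("web1"), Node("web2")]
    fail_over_to_alternate(nodes, nodes[0], "Web Publishing service")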
We corrected the rule to move the service and repeated the test. This time the service moved properly to the alternate server, and client traffic resumed in a matter of seconds. We noted with some dismay that the "Live Monitor" function of the administration tool showed the resource group shutting down and restarting but not that the group had changed servers; the display was updated only when we exited and re-entered the screen. The systems themselves behaved correctly throughout.
Overall, we found FullTime Cluster easy to set up and easy to work with. The power of the rules engine is impressive, even extending to some load balancing in the form of moving applications from one server to another based on events and conditions.
NSI Double-Take
Where some clustering approaches center on moving application processing from place to place, NSI Software's Double-Take concentrates on keeping data available, as well as providing failover services at the system level. Clusters in a Double-Take environment are set up as either one-to-one or many-to-one relationships: Selected data from one or more source systems is replicated onto a single target system, which assumes the identity of a source system should that source fail.
Installation of Double-Take onto our two test servers was quick and simple. We were pleased that the installation prominently explained Double-Take’s use of NT user groups for security.
Once installed, we used Double-Take’s Management Console to define the relationship between our two servers. One server was designated the source and the other the target. We set up our Web server on the source machine, complete with the required data directory, and set Double-Take to create a copy of that directory on the target system. Initially, Double-Take makes a complete copy of the original source data on the target; once that is done, it passes along updates as the data is modified.
Double-Take performs these tasks at the file-system level. This approach has advantages and disadvantages. In environments with a high volume of data changes, this technique may leave the data at the target system lagging behind the original, especially if slower links are used. But it also allows data transfers to occur over longer or slower links than might otherwise be possible, bringing a significant measure of protection to environments where the volume is lower.
The operator can tell Double-Take the speed of the link between the systems and can define the maximum percentage of that bandwidth to take for data replication. Time-based processing also can be enabled, which allows replications to use more bandwidth during off-peak hours.
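As a rough illustration of the throttling idea -- our own sketch, not NSI's code -- a replicator can pace its sends so the average rate never exceeds a configured share of the link; time-based processing amounts to raising that share during off-peak hours. The link speed and percentage below are assumed values.

    import time

    LINK_BPS = 1_500_000      # assumed link speed, bits per second
    MAX_SHARE = 0.30          # use at most 30 percent of the link for replication

    def send_throttled(chunks, send):
        """Send byte chunks, sleeping as needed to stay under the bandwidth cap."""
        budget_bps = LINK_BPS * MAX_SHARE
        for chunk in chunks:
            start = time.monotonic()
            send(chunk)
            min_duration = (len(chunk) * 8) / budget_bps   # time this chunk should take
            elapsed = time.monotonic() - start
            if elapsed < min_duration:
                time.sleep(min_duration - elapsed)

    # example: replicate 1 MB of changes in 32 KB chunks over a stand-in "link"
    changes = [b"x" * 32768 for _ in range(32)]
    send_throttled(changes, send=lambda chunk: None)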
Having set up our data replication rules, we set up the Web service on each of the machines to use the appropriate directory when answering requests directed at the IP address of the source machine. We then used Double-Take’s Failover Control Center to have the target server monitor the source server. Once that was accomplished, we began client transactions and initiated a failure of the source server by shutting it down.
Unlike some of the other solutions we tested, Double-Take takes an all-or-nothing approach to failover. It is the server itself that is monitored, rather than a particular service or group of resources. Thus, when our target server detected that the source server was no longer active, it assumed the identity and IP address of the source in addition to its own. The target server is expected to already be running the services required to support users under its newly added identity.
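The monitoring half of that arrangement can be pictured as a simple loop on the target: watch the source's address, and if it stops answering, bind the source's identity locally. The Python sketch below is purely conceptual; assume_identity() is a hypothetical stand-in for the product's actual takeover steps, and the address and intervals are assumed.

    import subprocess
    import time

    SOURCE_IP = "192.168.1.10"    # assumed address of the source server
    CHECK_INTERVAL = 5            # seconds between liveness checks
    MISSED_LIMIT = 3              # consecutive failures before taking over

    def source_is_alive():
        # a single ping as a crude liveness test ("-n 1" is the Windows form)
        result = subprocess.run(["ping", "-n", "1", SOURCE_IP], capture_output=True)
        return result.returncode == 0

    def assume_identity():
        # hypothetical stand-in: add the source's IP address and name to this
        # server's adapter so clients aimed at the source reach the target
        print("adding", SOURCE_IP, "to the local adapter and answering for the source")

    missed = 0
    while missed < MISSED_LIMIT:
        missed = 0 if source_is_alive() else missed + 1
        time.sleep(CHECK_INTERVAL)
    assume_identity()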
Double-Take did what it was asked to do: The target quickly assumed the source identity, and Web client traffic resumed. We hit a snag, however, in restarting the failed server. Double-Take requires that the failed server be corrected before it is reattached to the network. In some networks that may be difficult or impossible, especially if the original failure was network related and cannot be corrected while the machine is detached. It took us three attempts to execute the failback sequence correctly and resume normal operations.
Overall, Double-Take presents some interesting features that may make it attractive, especially for configurations with services that, though low volume, are still business critical. It also has an interface that is easy to understand.
Vinca Co-Standby Server
Compared with the other products we looked at, Co-Standby Server from Vinca Corp. had a fairly steep learning curve and more stringent system requirements, especially if it is being added to an existing system.
Co-Standby Server provides data mirroring between a pair of servers at a low level -- though not at the hardware level. The product applies logic to allow either server to assume the identity of its partner in the event of a failure.
For a typical installation, Vinca recommends that the servers in the cluster each have two network interface cards and either a minimum of three separate hard disk drives or three logical drives in a hardware RAID array. What was not clear on our first reading of the installation guide was that at least one drive on each server must be left unpartitioned. This confusion led to configuration problems later, when we tried to establish mirroring.
After going through the installation guide and preparing what we thought was the correct configuration, we proceeded to install the product. The installation process was easy to follow, and we liked the warning screens, which clearly explained the implications of several of the TCP/IP configuration selections we were asked to make. As part of the installation, the server names are changed: New names are assigned to the physical servers, and the previous names are reassigned to a logical construct called a Failover Group.
The newly assigned names, however, did not match the pattern in the documentation, making examples difficult to follow and the actual concept somewhat confusing on the first pass.
We found the installation guide did an excellent job of walking us through the operational aspects of Co-Standby Server. After reviewing this material, we set about configuring our test cluster using Co-Standby Server's Management Console. As alluded to earlier, when we tried to set up the mirroring for our test Web site's data files, we hit a snag. No potential target locations could be selected, and we did not find help messages to suggest possible causes.
It turned out we had not met the setup requirements: We had created three logical drives on our RAID array, but had done so within Windows NT rather than at the hardware level. After that problem was resolved, things went more smoothly.
With Co-Standby Server, mirroring takes place across a dedicated TCP/IP network connection between the server pair -- the reason for the second network card in each machine -- so distance is limited only by WAN constraints. Our small sample data set was mirrored between the servers in a matter of moments, even over the low-bandwidth 10 Mbps connection we provided; Vinca recommends a minimum of 100 Mbps for this link.
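The mirroring model itself is straightforward: every write applied to the primary volume is also shipped, in order, across the dedicated link so the partner can apply the same change. The Python sketch below is our own simplification -- the partner address, port and file are assumptions, and the real product works at a much lower level than a file object.

    import socket
    import struct

    MIRROR_LINK = ("192.168.100.2", 7000)   # assumed address of the partner's mirror listener

    def mirror_write(sock, offset, data):
        """Ship one write (offset plus data) across the dedicated link."""
        header = struct.pack("!QI", offset, len(data))   # 8-byte offset, 4-byte length
        sock.sendall(header + data)

    def apply_and_mirror(volume, sock, offset, data):
        volume.seek(offset)
        volume.write(data)                   # local write
        mirror_write(sock, offset, data)     # keep the partner's copy in step

    # usage sketch: mirror one write into a stand-in "volume" file
    sock = socket.create_connection(MIRROR_LINK)
    with open("volume.img", "r+b") as volume:
        apply_and_mirror(volume, sock, offset=4096, data=b"new block contents")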
Having mirrored our data, we set up the remaining portions of the cluster, including defining the appropriate IP addresses and the application to be failed over. We then started clients accessing the Web page on one of the two servers and simulated a failure by shutting that server down. Failover occurred in a few seconds. Because we were using a standard Web client, all we needed to do was reissue the request that had been pending at the time of the failure. When we restarted the failed server, the service returned to the original server just as quickly.
Of concern, however, are the instructions detailing the steps to recover from a catastrophic failure, such as a failed system disk. Though we did not simulate this type of event, it appears that much, and possibly all, of the cluster installation and configuration must be recreated from the ground up on both machines. It was unclear whether other forms of backup could be used to recreate any of this material, so an administrator would be well advised to keep extremely good records of every element of the configuration.
Overall, we found Co-Standby Server to be a good solution for environments where data replication is important, especially when that data may be changing rapidly and there is little margin for allowing the mirror set to be even slightly out of synch. It was not as easy to set up and configure as it could be, and the process of recovery from a major failure appears to be a significant task.
Windows Load Balancing Service
Microsoft’s new Windows Load Balancing Service (WLBS) is an add-on to Windows NT Server, Enterprise Edition. Microsoft obtained the technology when it acquired Valence Research Inc., which marketed it as Convoy Cluster. Available as a no-charge download from Microsoft’s Web site, the WLBS driver distributes network traffic among as many as 32 servers, based on a set of rules defined by the administrator.
We installed WLBS on three of the Aviion servers in the cluster-in-a-box system Microsoft provided. These servers had not been participants in the Microsoft Cluster Server test performed on that system. WLBS is installed into the NT network configuration on each server as if it were a driver for a network adapter. A shared IP address is defined for all members of the cluster, along with rules for how IP traffic at various TCP and UDP ports is to be spread among the cluster members.
The only installation instructions were in a help file that becomes available after the driver is installed -- an irritating oversight and a major hurdle for an inexperienced installer. We also had to work through confusing settings. It seemed, for example, that the priority number we assigned to each server was really just an ID number that had to be unique; only when a problem arose with another setting did it become clear that the number really is used to determine priority.
We defined a simple rule for our three-node cluster that said, in effect, "take all IP traffic bound for the phantom IP address of our cluster and spread it equally among the three nodes."
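Conceptually, every member sees the traffic sent to the shared address and applies the same filter, so exactly one member accepts each connection without the nodes having to consult one another. The Python sketch below shows one way such a filter could work -- hashing the client address -- and is our own illustration, not the actual WLBS algorithm.

    import hashlib

    NODE_COUNT = 3     # members sharing the cluster IP address
    MY_NODE_ID = 0     # unique per member: 0, 1 or 2

    def owner_of(client_ip, client_port):
        """Every member computes the same answer, so exactly one accepts the packet."""
        key = f"{client_ip}:{client_port}".encode()
        digest = hashlib.md5(key).digest()
        return int.from_bytes(digest[:4], "big") % NODE_COUNT

    def should_accept(client_ip, client_port):
        return owner_of(client_ip, client_port) == MY_NODE_ID

    # example: decide whether this member should answer a particular client
    print(should_accept("10.1.2.3", 51712))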
We then installed our sample Web site on each of the three servers. The three sites were identical except that we included the name of the server in the home page so we could see which server responded. The client behavior surprised us: A Web browser would seem to pull data from different servers, then stop doing so, settling on a single server and staying there. Rules can be configured to create this sort of affinity between a client and a server, but ours had not been.
Server-based performance statistics showed that traffic was indeed being spread fairly evenly across all three servers -- exactly what you would expect from a system designed to balance load across several identical servers. A little investigation revealed that the apparent problem was on the client side: The Web browser was polling the server, then caching and redisplaying whichever page had the most recent time stamp.
Roundup
Each of the products reviewed offers strengths that make it suitable for different application requirements. The demands of your environment, available bandwidth for cluster management and replication traffic, and the types of servers installed should all factor into deciding which product may work best for you.
Products and Vendors Tested
Double-Take
NSI Software
Hoboken, N.J.
(888) 230-2674
www.nsisw.com
Price: $1,895
FullTime Cluster
FullTime Software
San Mateo, Calif.
(650) 572-0200
www.fulltimesoftware.com
Price: $3,000 per server, plus 15 percent for support and maintenance
Co-Standby Server
Vinca Corp.
Orem, Utah
(800) 934-9530
www.vinca.com
Price: $5,499, which includes one year of support and maintenance
Microsoft Cluster Server
Windows Load Balancing Service
Microsoft Corp.
Redmond, Wash.
(425) 882-8080
www.microsoft.com
Price: Included as components of Windows NT Server 4.0, Enterprise Edition ($3,999); not priced individually
What Kind of Cluster?
At a high level, there are three broad categories of clusters. Two of the forms, high availability and load balancing, are more common in Windows NT configurations.
High Availability clusters are designed to ensure that resources remain available even under adverse conditions. They may be geared toward ensuring the availability of data, of processing or of both, and how a given product achieves these goals varies widely. Today, many clustering solutions support only two member machines.
Load Balancing clusters are used to spread processing tasks, such as Web client requests, across multiple systems.
Cooperative Processing clusters take a particular task, such as a highly complex calculation, and coordinate the work across several systems. None of the solutions we tested were of the Cooperative type.