
High Availability for Windows NT

In the high availability and clustering market for Windows NT, many companies have emerged with solutions. Products range from fault tolerance to early clustering, and all promise some form of high availability, failover and even clustering. Though the solutions differ, they fall into three main architectures. Understanding these architectures can help you see through the hype and find the solution or solutions that best fit your company’s needs. The three architectures that categorize today’s NT solutions are shared disk, mirroring and file replication.

Shared Disk

Shared disk is an architecture that relies on some type of shared storage device between machines. In this configuration, two machines share a common disk set, which allows the surviving server to take over the disk resources and data in the event of a failure. These systems also allow failover of applications and services through a management console that defines the resources needed to fail over to the surviving server.
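At its core, the failover mechanism is a heartbeat watch followed by a takeover sequence. The Python sketch below is purely illustrative – the timeout value, resource names and method names are hypothetical, not any vendor’s actual interface – but it shows the shape of the logic a management console runs on the surviving server.

import time

HEARTBEAT_TIMEOUT = 2.0   # assumed seconds of silence before failover

class FailoverMonitor:
    def __init__(self, resources):
        self.resources = resources           # resources defined to fail over
        self.last_heartbeat = time.time()

    def record_heartbeat(self):
        # Called each time the partner server's heartbeat arrives.
        self.last_heartbeat = time.time()

    def check(self):
        # Polled periodically; take over if the partner has gone silent.
        if time.time() - self.last_heartbeat > HEARTBEAT_TIMEOUT:
            self.take_over()

    def take_over(self):
        # Claim the shared disk set, then bring each resource up locally.
        print("Reserving shared disk set...")
        for resource in self.resources:
            print("Starting resource:", resource)

monitor = FailoverMonitor(["data volume E:", "file shares", "mail service"])
monitor.check()                       # heartbeat is fresh; nothing happens
time.sleep(HEARTBEAT_TIMEOUT + 0.1)   # simulate the partner going silent
monitor.check()                       # timeout elapsed; takeover sequence runs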

The strengths of shared disk are low data-protection overhead and disk-level data protection (most commonly RAID). Large amounts of data can be used in these environments. Because there is only one copy of the data, it does not need to be moved anywhere; however, that is also the architecture’s greatest weakness. Shared disk puts all the responsibility for data fault tolerance on the shared device and, more importantly, in one drive cabinet. This single data copy can be called many things, but plainly and simply, it is a single point of failure. A high level of fault tolerance is built into these boxes, but all of it resides in the same cabinet and the same server room.

Shared-disk architecture lends itself to quick and solid failover because the data and the state of the server are the same when the failover server takes over the disk. This known state allows transaction tracking and other rollback features of the OS and applications to work as designed. On the other hand, shared disk introduces new complexity and limits the hardware available to your company for this solution. If your server is not designed for external disk sharing, odds are you will be buying new equipment, not adding to existing hardware.

Mirroring

Mirroring, more properly termed "remote mirroring" (as opposed to internal mirroring/duplexing), is similar to the shared disk architecture, with a few key differences that make it unique. Mirroring is a technology that was first used on Intel servers in the mid-’80s. The concept is to split write requests so that each drive is written with the same data at the same time. Mirroring writes identical blocks to two drives, providing identical failover storage devices on all servers. The drives are always in a known state because a write is not reported complete to the operating system until it commits on both disks.
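That both-disks-or-nothing commit rule is the heart of the architecture. The following sketch illustrates it in Python with ordinary files standing in for the two drives – an assumption made for readability, since a real mirroring driver splits writes at the device level, below the file system.

import os

def mirrored_write(primary_path, mirror_path, offset, block):
    # Write the identical block to both devices; the write is not
    # acknowledged until it has committed to stable storage on both.
    for path in (primary_path, mirror_path):
        with open(path, "r+b") as disk:
            disk.seek(offset)
            disk.write(block)
            disk.flush()
            os.fsync(disk.fileno())   # force the block to stable storage
    return True   # the OS sees success only after both copies commit

# Demo: create two small "drives" and mirror one block into both.
for path in ("primary.img", "mirror.img"):
    with open(path, "wb") as f:
        f.write(b"\x00" * 4096)
mirrored_write("primary.img", "mirror.img", offset=512, block=b"DATA")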

This architecture has been advanced to provide high availability to NT servers by putting the second drive in a second NT server on the network. Further advances allow mirroring to occur bi-directionally between two servers: Server A’s data set has a drive on Server A and one on Server B, and vice versa. Server A controls both drives in its group (one drive in each machine), and Server B controls both drives in its group (one drive in each machine).

Mirroring presents a great alternative to the shared disk architecture because there is no single point of failure. Mirroring can be done over the local area network, most commonly over a dedicated link, giving distance capability to your high availability or clustering solution. With mirroring, the machines can be placed in the same server room or in different buildings in a campus environment.

In fact, with proper bandwidth and failover initiation precautions, these servers and data can be separated by many miles to give you protection from disasters that could destroy a single location.

Mirroring does not come without costs. The very technology that provides a mirrored high-availability solution adds a new factor that must be weighed in the equation: mirroring time. Mirroring time is the time it takes to initially synchronize the drives and, depending on the type of failure, to resynchronize them after a failure. Depending on many factors, mirroring time can become prohibitive on large data stores; shared disk architectures should be considered for volume sizes greater than 100 GB.
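A back-of-the-envelope calculation shows why. The figures below – a 100 GB volume over a dedicated 100 Mbps mirror link at an assumed 80 percent utilization – are illustrative, not vendor specifications:

volume_gb = 100                       # size of the mirrored volume
link_mbps = 100                       # dedicated Fast Ethernet mirror link
usable = 0.8                          # assume ~80% of raw bandwidth is usable

bits = volume_gb * 8 * 10**9          # volume size in bits
seconds = bits / (link_mbps * 10**6 * usable)
print("Full resync of %d GB: about %.1f hours" % (volume_gb, seconds / 3600))
# Prints roughly 2.8 hours -- time during which failover is unavailable.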

When remirroring, both servers are up and running with users logged in, but failover is turned off until the drives are resynchronized. Another benefit of this architecture is the mismatched hardware that can be used: Virtually any servers can be paired together to create a mirrored set. This means you can use existing hardware and add one new server to create a clustered pair.

Most of these solutions require only enough hard disk space to support the mirror and a dedicated LAN segment made up of any NT-supported NICs. Others use proprietary links to provide the mirror.

File Replication

File replication entails copying files from the file system to another server. Some vendors refer to this as mirroring, but it is not real-time mirroring accomplished at the block level. Mirroring is a block-based procedure that works below the file level, where there is no such thing as an open file, a locked record or a file system.

File replication works at the file level and therefore must deal with open files, locked records and the file system. The benefit of this solution is that a user may choose certain files and/or directories to protect or back up. This can be very useful for small amounts of data.

As we have seen with the previous architectures, strengths in one category become weaknesses in another. File replication has two weaknesses. Unlike the previous architectures, file replication has to contend with open files and the file system for the solution to work, and managing the file system adds complexity to the system. The other weakness is that replicated file changes have not necessarily been written to the disk: file replication agents that attempt to send all file changes to other servers bypass the file system, which means transaction tracking and database backout mechanisms may not work.

An advantage of file replication is the ability to use low-speed links to get data to offsite locations. This ability separates the architecture from the previous solutions, since shared disk cannot separate the storage from the shared box and mirroring needs a high-speed, high-cost link to keep the data mirrored. Latency is handled through built-in queuing of changes, using whatever line bandwidth is available (often low) to replicate the changes over time.
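The queuing behavior can be pictured with a simple model. In this Python sketch the link speed, file names and change sizes are invented for illustration; the point is that changes accumulate in a queue at write time and drain as line bandwidth permits.

from collections import deque

LINK_BYTES_PER_SEC = 7_000            # assume roughly a 56 Kbps WAN link

queue = deque()

def on_file_change(path, nbytes):
    # Capture the change; it is replicated later, not at write time.
    queue.append((path, nbytes))

def drain(seconds):
    # Send queued changes for `seconds` of available line time.
    budget = LINK_BYTES_PER_SEC * seconds
    while queue and queue[0][1] <= budget:
        path, nbytes = queue.popleft()
        budget -= nbytes
        print("replicated %s (%d bytes)" % (path, nbytes))

on_file_change(r"D:\data\orders.dbf", 14_000)
on_file_change(r"D:\data\report.doc", 3_500)
drain(seconds=2)   # 14,000-byte budget: only the first change fits
drain(seconds=1)   # the second change goes out on the next pass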

This leads to a conclusion that is often missed: File replication solutions are disaster recovery/data vaulting solutions, not high availability solutions. Suffice it to say, this architecture excels at tolerating latency and moving files over low-speed links, and does poorly where low latency and identical states of the data are required.

The Choice Is Yours

With a better understanding of these architectures, a clear summation begins to unfold. While all of these solutions protect NT servers and, to different degrees, applications, they truly fall into two categories based on the strengths and weaknesses of the architecture. Shared disk and mirroring excel at preserving the state of the data – shared disk has the same data, and mirroring keeps two drives in lock step. These solutions are better suited for high availability. Both begin to have difficulty when distance is introduced: shared storage has the distance limitation of Fibre Channel (2 kilometers), while mirroring relies on high-speed connections between servers, which can become cost prohibitive over great distances. File replication, on the other hand, falls into the data vaulting or disaster recovery category. File replication allows file-by-file replication over low-speed links that can be used to get data offsite or to a remote location. The overhead of working at the file system level, coupled with queues for low-speed links, makes it a poor architecture for the demands of high availability.

What becomes clear is that the very strength of an architecture in one category becomes its weakness or undoing in another. Shared disk uses a single device for quick access to the data, yet all the data sits within inches of itself in the array. Mirroring uses block-based dual writes to keep the disks in lock step with each other, yet the link between servers can become cost prohibitive over great distances. File replication uses the file system and queuing of updates for offsite replication of files and data, yet the overhead introduces latency that is unacceptable for high availability.

A factor that has been intentionally left out until this point is application high availability. Without getting into all the details, let’s focus on one application that is common to most Windows NT users – Microsoft Exchange.

High Availability in Exchange

Exchange has many of the key pieces of a critical application. We will focus on the data environment, but many other aspects are needed for application failover. Exchange consists of several databases (priv.edb, pub.edb, ds.edb), checkpoint files (edb.chk) and transaction logs. These files are all interrelated through updates, pointers, etc., and it is critical that they stay in sync with one another. If they get out of sync, the result is corruption and/or rebuilds that cost time and lose data. This is a non-issue with shared disk and mirroring, since the data is always in a known and matched state: with shared disk it is the same data, and with mirroring the writes and block commits happen at the time of the I/O request.

File replication does not write immediately. In fact, some implementations must wait for the file to close before updating the remote server. This can take hours and may never happen with certain database files. Open file management can help get these open files copied out to the remote server, but the cost to the system is overhead and latency. Latency contributes to these files getting out of sync, causing costly rebuilds and data loss.

High availability or clustering and disaster recovery or data vaulting are two mutually exclusive categories, and no single architecture fits all purposes. Some architectures are suited for high availability, while others are suited for disaster recovery. The challenge is to find out what you need and then find the architecture that best fits those needs. Picking an architecture that doesn’t fit the need is like trying to put a square peg into a round hole, and it will surely result in a less than desirable outcome.

About the Author: Jeff Adcock is Product Manager for NT products at Vinca (Orem, Utah). He can be reached at (801) 223-3100, or via e-mail at jadcock@vinca.com.
