
Hosting E-Mail in a Storage Cluster

The need to build resilient, low-cost infrastructure for hosting e-mail archives is growing fast.

Hardly any business person today would disagree that e-mail has become a key component—though perhaps a dreaded one—of their workaday toolkit. It has been estimated that over 6 trillion non-SPAM messages flowed across the Internet (and its backwaters in the cell phone networks) in 2006. That’s 25 billion per day, or about 600 per business person per week.

E-mail volume shows no signs of abating. By 2010, according to Ferris Research, the number of business e-mail users should increase from about 674 million to 935 million. That works out to roughly 561 billion messages per week if the ratio of 600 messages per business person per week holds.

That’s a lot of data to store for most organizations—typically critical data, too. For starters, e-mail is how work gets done for many companies. Ask the National Football League: in the hours following the 2005 hurricanes that played havoc with the headquarters of two pro football franchises, the number one recovery priority was the re-establishment of e-mail communications. During the high season of gridiron competition, the NFL lives on e-mail—to coordinate the many details of putting on mobile events in multiple cities simultaneously. The New Orleans Saints went on to fight another day because of the rapid response of NFL New York in re-supplying them with the systems, software, and storage needed to re-host their e-mail operations. The data itself was recovered from tape.

The Saints' experience, recounted at last year’s Disaster Recovery and Data Protection Summit in Tampa, FL, underscores the primacy of e-mail and the need to protect it from external threats such as natural disasters and "bot networks"—zombie systems that generate phony e-mail and SPAM and collectively account for the non-business percentage of e-mail traffic today, estimated at 70 to 80 percent of overall network load.

The current regulatory climate is another factor contributing to the importance of e-mail. Everyone knows that federal regulators are very concerned about e-mail, both from a financial record-keeping standpoint and from a privacy standpoint. Chances are good that the e-mail system in most companies has attracted the attention of corporate legal eagles. Protecting e-mail data from inadvertent disclosure and making it searchable for e-discovery are de rigueur in most firms I visit today.

The third factor contributing to the importance of e-mail is, of course, its business value. E-mail is used to send product info to sales folks and customer prospects, to negotiate contracts and supply-chain schedules, to facilitate and confirm purchases, and to coordinate logistics on product delivery. You don’t have to be the NFL to be e-mail-dependent; you just have to be in business.

Okay, we’re agreed: e-mail is mission-critical for a lot of companies. A significant percentage of companies, perhaps the majority depending on which analysts you read, are using Microsoft Exchange Server as their e-mail system of choice.

The Exchange Connection

Microsoft Exchange Server 4.0 (a follow-on to Microsoft Mail 3.5) was introduced in 1996 and has gone through multiple versions since. The latest, until Exchange 2007 hits the streets, is version 6.5 (Exchange Server 2003), which is closely tied to Windows Server 2003.

Aware of the growing importance of e-mail to company operations, Microsoft has made improvements along the way to reinforce the availability of the system and to insulate it from failures. The company has introduced, among other things, SPAM filtering capabilities and active-active and active-passive cluster configurations for mail servers themselves.

Redmond’s flirtation with active-active clusters in Exchange 2003—designed to keep two instances of the server synchronized at all times—fell on hard times when measurements revealed the performance hit experienced by both production servers. A year and a half ago, as the marketing build-up for the next generation of Exchange Server (Exchange 2007) began, mention of active-active was notably absent. Active-passive has become the configuration that most companies seeking a resilient Exchange environment opt to deploy.

An active-passive configuration is a failover cluster. Two servers are set up with identical software, and a heartbeat function is established between them so that if server 1 fails, server 2 comes online to handle the load. Server 2 cannot be used while server 1 is active—it is a failover system exclusively.
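To picture the mechanics, here is a minimal Python sketch of the heartbeat logic behind such a pair. The class name, heartbeat interval, and missed-beat threshold are illustrative assumptions, not the actual implementation in Exchange or Windows clustering.

```python
# Hypothetical sketch of passive-node failover logic; names and timings
# are illustrative assumptions only.
import time

HEARTBEAT_INTERVAL = 2.0   # seconds between expected heartbeats
MISSED_LIMIT = 3           # missed beats tolerated before failover


class PassiveNode:
    def __init__(self):
        self.last_heartbeat = time.monotonic()
        self.active = False  # stays passive until failover is declared

    def receive_heartbeat(self):
        # Called whenever a heartbeat arrives from the active server.
        self.last_heartbeat = time.monotonic()

    def check(self):
        # Promote this node only if the active server has gone silent
        # for longer than the configured tolerance.
        silent_for = time.monotonic() - self.last_heartbeat
        if not self.active and silent_for > HEARTBEAT_INTERVAL * MISSED_LIMIT:
            self.active = True
            print("Active node silent for %.1fs; taking over mail services" % silent_for)


if __name__ == "__main__":
    node = PassiveNode()
    node.receive_heartbeat()      # beat arrives from server 1
    node.check()                  # within tolerance: node stays passive
    node.last_heartbeat -= 10     # simulate a prolonged silence
    node.check()                  # passive node promotes itself
```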

While Microsoft refers to this as a "shared nothing" cluster, the truth is that only the Exchange software itself is not shared—the servers in the failover configuration share what is arguably their most important component: the e-mail data itself. If you lose the e-mail data, server and software failover become irrelevant.

In Exchange 2007, Microsoft will introduce a refinement of active-passive clustering called Cluster Continuous Replication (CCR), which builds on the failover clustering features of its latest operating system. At first blush this sounds like active-active clustering, but that perception quickly changes once you dig into the details. What CCR brings to the party is an "active-passive" hosting solution in which logs of Exchange transactions are shipped asynchronously between the active and passive nodes (servers) of the cluster—a sort of continuous data protection (CDP) solution. The logs are "replayed" on the passive node and applied to a separate instance of the database located there. We will need to wait and see how this approach performs (a beta copy of Exchange 2007 is available on Microsoft’s Web site as a free download for the ambitious) and what it entails in terms of e-mail storage replication.
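To make the log-shipping idea concrete, here is a minimal Python sketch of asynchronous log shipping and replay against a simple in-memory store. The data structures and function names are assumptions for illustration and bear no relation to Exchange's actual storage engine.

```python
# Hypothetical sketch of asynchronous log shipping and replay; the
# in-memory "databases" and the queue standing in for shipped log files
# are illustrative assumptions only.
import queue
import threading

log_stream = queue.Queue()   # stands in for the shipped log files
active_db = {}               # active node's copy of the mail store
passive_db = {}              # passive node's separate database instance


def commit(mailbox, message):
    # Active node: apply the change locally, then ship the log record.
    # The commit does not wait for the passive node (asynchronous).
    active_db.setdefault(mailbox, []).append(message)
    log_stream.put((mailbox, message))


def replay_worker():
    # Passive node: replay shipped log records into its own database copy.
    while True:
        record = log_stream.get()
        if record is None:      # shutdown sentinel
            break
        mailbox, message = record
        passive_db.setdefault(mailbox, []).append(message)


replayer = threading.Thread(target=replay_worker)
replayer.start()

commit("mailbox-1", "Schedule for Sunday's game")
commit("mailbox-1", "Revised travel logistics")

log_stream.put(None)            # drain the queue and stop the replayer
replayer.join()
print(passive_db)               # the passive copy now mirrors the active one
```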

For now, there are alternatives to explore that can replicate servers and data over distance, creating another layer of failover capability and fault tolerance. One that we are currently testing in our labs is LeftHand Networks’ high-availability clustering.

An Alternative

As previously discussed in this column, LeftHand Networks has one-upped the competition by bringing to market one of the most efficient storage clustering strategies you can buy. Based entirely on software called SAN/iQ, the solution enables you to cluster virtually anyone’s iSCSI storage targets into a resilient and scalable storage infrastructure—one particularly well-suited to Exchange e-mail.

With LeftHand’s wares, you establish the data layout for the cluster as a function of Microsoft’s own Multi-Path I/O (MPIO) software stack. What this means is that lookups for a specific piece of data do not require an additional hop to and from a data layout table stored on a master cluster node (a common technique in many storage clusters), nor the delay that this "conversation" represents: LeftHand’s lookup process is in the I/O path, courtesy of the vendor’s MPIO plug-in.
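To illustrate the general idea of an in-path layout lookup (not LeftHand's actual placement scheme), here is a hypothetical Python sketch in which each initiator computes the owning node for a block deterministically, so no round trip to a master layout table is needed. The node names and the hash-based placement are assumptions for illustration only.

```python
# Hypothetical sketch of an in-path layout lookup: the initiator resolves
# which cluster node owns a block locally, with no hop to a master layout
# node. Illustrative only; not LeftHand's actual placement algorithm.
import hashlib

CLUSTER_NODES = ["node-a", "node-b", "node-c"]   # iSCSI storage targets


def owning_node(volume: str, block: int) -> str:
    # A deterministic hash of (volume, block) picks the target node, so
    # every initiator computes the same answer in the I/O path.
    key = f"{volume}:{block}".encode()
    digest = int.from_bytes(hashlib.sha1(key).digest()[:8], "big")
    return CLUSTER_NODES[digest % len(CLUSTER_NODES)]


if __name__ == "__main__":
    for blk in range(5):
        print(blk, "->", owning_node("exchange-store", blk))
```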

From where we are sitting, the LeftHand approach delivers a lot of value. You can scale your e-mail storage on the fly, leveraging low-cost storage if you prefer. Moreover, storage clustering adds resiliency at the data level, completing the high availability story that Microsoft initiated at the server level.

The only question is distance: how far apart can the Exchange nodes be placed? Certainly, LeftHand facilitates the replication of Exchange data in a campus environment—that is, with servers and storage on different floors of a building or in different buildings on a campus interconnected by a common backbone network. Over greater distances, the company provides remote data replication as part of its software: SAN/iQ Remote Copy offers asynchronous data replication between clusters over distance, which looks promising for Exchange hosting environments as well as other Windows-based applications.
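For illustration only, here is a hypothetical Python sketch of batched asynchronous replication between two sites: writes complete locally at once, and changed blocks are shipped to the remote cluster in periodic cycles. The block-level model and cycle-based batching are assumptions, not a description of SAN/iQ Remote Copy's actual mechanism.

```python
# Hypothetical sketch of asynchronous remote copy between two clusters;
# the block-level model and batching are illustrative assumptions only.
local_cluster = {}     # block address -> data at the primary site
remote_cluster = {}    # block address -> data at the recovery site
dirty_blocks = set()   # blocks changed since the last replication cycle


def write_block(address: int, data: bytes):
    # Writes complete locally right away; replication happens later,
    # so the application never waits on the wide-area link.
    local_cluster[address] = data
    dirty_blocks.add(address)


def replicate_once():
    # Ship only the blocks that changed since the previous cycle.
    for address in sorted(dirty_blocks):
        remote_cluster[address] = local_cluster[address]
    dirty_blocks.clear()


if __name__ == "__main__":
    write_block(0, b"mailbox database page")
    write_block(1, b"transaction log page")
    replicate_once()                      # one asynchronous cycle
    print(sorted(remote_cluster))         # remote copy now holds blocks 0 and 1
```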

I focus on LeftHand Networks in this column in part because, as I write this, I am preparing for a Webcast the week of 26 February, courtesy of the publisher of ESJ.com, on high-availability e-mail clustering (a replay should be available by the time this column is published). That said, I recognize that there are many ways to approach Exchange resiliency, including the use of geo-clustering solutions from vendors ranging from CA to Neverfail Group to Symantec. A major benefit of the LeftHand approach is that it makes sense both from a resiliency and availability standpoint and from the perspective of scaling the storage behind e-mail systems.

Going forward, in addition to hosting live mail, companies will need to build resilient, low-cost infrastructure for hosting archival repositories of e-mail, as required by numerous regulatory mandates. That promises to increase the data load exponentially, driving the requirement for a scalable, cost-effective storage clustering solution.

That’s my view. I look forward to hearing yours: jtoigo@toigopartners.com.
