Optimizing Microsoft Exchange 2010

The changes to Exchange 2010 are significant, and migration to this version will be felt across the organization. Here's how to take advantage of that natural disruption to reduce overall costs of running Exchange and improve its performance.

By Lori MacVittie

The disruption within IT caused by upgrading business-critical application services varies based on the application and the version being upgraded. In the case of Microsoft Exchange 2010, core architectural changes to the application will almost certainly be highly disruptive to the application, network, and network application infrastructure regardless of which previous version is currently deployed. This is particularly true because of a lack of backward compatibility and the specific operating system requirements of Exchange 2010: Windows Server 2008 Service Pack 2 (SP2) or later, or Windows Server 2008 R2, is required. An in-place upgrade from previous versions of Exchange is not possible, which means existing hardware supporting Exchange will be difficult to leverage for a new deployment.

Additionally, significant architectural changes introduced with Exchange 2010 will impact both capital and operational budgets. It may be necessary to invest in more modern hardware or additional solutions to successfully deploy the latest version of Exchange. Although the changed requirements for deploying Exchange 2010 will certainly disrupt the network and application infrastructure, and budgets along with them, they also afford organizations an opportunity to optimize their critical communications infrastructure.

The Changes

The core change in the architecture of Exchange 2010 will be felt by server and Exchange administrators as well as by network and application delivery network administrators. With Exchange 2010, users no longer connect directly to Mailbox servers even when using Outlook in native MAPI mode. Instead, all user access to e-mail, regardless of protocol, is achieved via Client Access Servers (CAS). As a result:

  • Outlook data connections go to the RPC Client Access Service on CAS instead of connecting directly to Mailbox servers
  • Address Book Service on CAS replaces the DSProxy interface and handles all Outlook Directory connections
  • Public folder connections no longer go directly to the Mailbox server but instead pass through the RPC Client Access Service running on the back end

These changes normalize the Exchange 2010 architecture nicely. All clients and protocols now use the same mechanisms to access mail, though this may require network changes to re-route internal clients to CAS servers. Along with these changes, Exchange 2010 requires that internal connections to CAS be load-balanced, a practice with which organizations may or may not be familiar depending on the size of their current Exchange deployment.
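The load-balancing requirement can be pictured with a minimal round-robin sketch; the CAS host names below are hypothetical, and a production deployment would perform this distribution in a hardware load balancer rather than in application code:

```python
from itertools import cycle

# Hypothetical CAS array members -- illustrative names only.
cas_array = ["cas01.example.com", "cas02.example.com", "cas03.example.com"]

class RoundRobinBalancer:
    """Distribute incoming client connections evenly across a CAS array."""
    def __init__(self, nodes):
        self._nodes = cycle(nodes)

    def next_node(self):
        # Each new client connection is handed the next CAS in rotation.
        return next(self._nodes)

balancer = RoundRobinBalancer(cas_array)
assignments = [balancer.next_node() for _ in range(6)]
print(assignments)  # each CAS receives exactly two of the six connections
```

Real load balancers layer health checks and session persistence on top of this basic rotation, but the even spreading of connections is the core idea.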

Other changes that will have an impact are focused on implementation recommendations rather than changes to Exchange itself. For example, Microsoft specifically called out at TechEd Europe that "single Exchange role" servers were not designed for high scalability and that, architecturally, servers should perform more than one role for maximum efficiency. At the same time, it was noted that if a server is assigned more than one role, a hardware load balancer will be required.

Combined with the new hardware requirements for CAS (Microsoft now recommends a 3:4 CAS:Mailbox processor core ratio), a migration strategy may require an investment in new hardware.
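The recommended 3:4 CAS:Mailbox core ratio makes capacity planning simple arithmetic; a small illustrative calculation (the core counts are hypothetical):

```python
import math

# Microsoft's recommended CAS:Mailbox processor core ratio for Exchange 2010.
CAS_TO_MAILBOX_RATIO = 3 / 4

def cas_cores_needed(mailbox_cores: int) -> int:
    """Return the CAS cores required for a given Mailbox core count,
    rounding up to whole cores."""
    return math.ceil(mailbox_cores * CAS_TO_MAILBOX_RATIO)

# Example: a 24-core Mailbox tier calls for 18 CAS cores.
print(cas_cores_needed(24))  # 18
```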

Migrating a system as critical as e-mail infrastructure is often disruptive. The new recommendations and requirements introduce potentially new solutions into the mix (hardware load balancers) as well as new architectures to support the changes made internal to Exchange.

Optimizing through the Disruption

Along with the disruption, however, comes opportunity. The new requirements regarding CAS and load balancing provide the opportunity for the organization to reexamine its supporting infrastructure and architecture and make improvements where possible. These adjustments can yield higher efficiency in the network and application infrastructure, as well as the deployment of technologies that improve the performance of Exchange and Web-based applications simultaneously. The introduction of the recommended hardware load balancer also affords options for optimization that were likely not previously available.

Connection Management Optimizations

With Exchange 2007, a single client connection was equivalent to a single session across the Exchange infrastructure. Exchange 2010, with its introduction of CAS as the middleware layer, eliminates the client-mailbox session relationship. Exchange 2010 employs a TCP multiplexing-like connection management scheme between the CAS array and mailbox servers, thus optimizing that layer of the infrastructure.

Similarly, because it is recommended that the CAS array be load balanced by a hardware load balancer, the connection management between clients and the CAS array can also be optimized. Using TCP multiplexing techniques at this layer of the architecture can significantly reduce resource consumption on the CAS array and ultimately reduce the number of servers necessary to support both internal and external client access.
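A toy model illustrates what TCP multiplexing buys at this layer: many client requests are served over a small, reused set of server-side connections. This sequential sketch is only a stand-in for a real load balancer, which multiplexes concurrently:

```python
from collections import deque

class ConnectionPool:
    """Toy model of TCP multiplexing: many client requests share a
    small set of persistent server-side connections instead of each
    client opening its own."""
    def __init__(self, size: int):
        self.connections = deque(range(size))  # pre-opened connection IDs
        self.opened = size                     # total server-side connections

    def handle_request(self) -> int:
        conn = self.connections.popleft()   # borrow an idle connection
        # ... request would be proxied to the CAS tier here ...
        self.connections.append(conn)       # return it for reuse
        return conn

pool = ConnectionPool(size=4)
# 100 client requests are served over only 4 server-side connections.
for _ in range(100):
    pool.handle_request()
print(pool.opened)  # 4
```

The server never sees the hundred individual clients, only the four long-lived connections, which is where the resource savings come from.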

Acceleration

Most modern load balancers (aka application delivery controllers) can offer acceleration services in addition to their core load-balancing functionality. Because there are multiple scenarios in which a hardware load balancer is necessary for the deployment of an Exchange 2010 implementation, it would be advantageous to leverage the acceleration capabilities of the hardware load balancer to further optimize the client access layer of the Exchange architecture.

Asymmetric and symmetric acceleration can be applied to requests and responses from both internal and external users to improve performance and optimize resource consumption. Optimally, some acceleration functions, such as compression, should be intelligently applied based on the unique access characteristics of the user. Compression may actually degrade performance when applied to internal users because the act of compressing the data takes more time than simply transferring the data over the LAN. Conversely, compressing data that will traverse WAN or Internet routes improves performance by reducing the total amount of data that must be transferred.
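That compress-only-for-WAN-clients logic might be sketched as follows; the LAN network ranges are illustrative assumptions, and a real device would also weigh content type and measured link latency:

```python
import gzip
import ipaddress

# Assumption: internal (LAN) clients sit on these private networks.
LAN_NETWORKS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def maybe_compress(payload: bytes, client_ip: str) -> bytes:
    """Compress responses only for WAN/Internet clients. LAN clients get
    the payload unmodified, since compression time can exceed the time
    needed to simply transfer the data over the LAN."""
    addr = ipaddress.ip_address(client_ip)
    if any(addr in net for net in LAN_NETWORKS):
        return payload               # fast LAN path: skip compression
    return gzip.compress(payload)    # WAN path: shrink the transfer

body = b"x" * 10_000
lan_response = maybe_compress(body, "10.1.2.3")      # passed through
wan_response = maybe_compress(body, "203.0.113.7")   # compressed
print(len(lan_response), len(wan_response))
```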

Eliminating Unnecessary Mail

Another means of optimizing Exchange 2010 and reducing the investment necessary to support the new architecture is to eliminate spam before it must be processed by the e-mail infrastructure. Recent research from ENISA (European Network and Information Security Agency) indicates that 95 percent of all e-mail traffic is spam. Processing and potentially storing so much "junk mail" incurs a high cost in the overall infrastructure and leaves Exchange spending most of its resources accepting and subsequently disposing of spam.

Leveraging message security solutions that are reputation-based, for example, can significantly reduce the volume of requests processed by Exchange 2010 and thus drastically reduce the hardware requirements of the servers necessary to support the Exchange infrastructure.
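One common reputation mechanism is a DNS blocklist (DNSBL) lookup, in which the sender's IP octets are reversed and queried against a blocklist zone; an answer means the sender is listed. A sketch of that query construction, with a local set standing in for the actual DNS resolution (the zone name is only an example):

```python
def dnsbl_query_name(client_ip: str, zone: str = "zen.spamhaus.org") -> str:
    """Build the reversed-octet DNS name used for a DNSBL reputation
    lookup. A real mail front end would resolve this name; a positive
    answer means the sending IP is listed as a spam source."""
    reversed_octets = ".".join(reversed(client_ip.split(".")))
    return f"{reversed_octets}.{zone}"

def accept_connection(client_ip: str, listed_ips: set) -> bool:
    """Stand-in for the reputation check: refuse SMTP connections from
    listed senders before Exchange ever has to process the message."""
    return client_ip not in listed_ips

print(dnsbl_query_name("192.0.2.99"))  # 99.2.0.192.zen.spamhaus.org
blocklist = {"192.0.2.99"}
print(accept_connection("192.0.2.99", blocklist))    # False: rejected
print(accept_connection("198.51.100.5", blocklist))  # True: accepted
```

Rejecting a listed sender at connection time costs a single DNS lookup, versus accepting, scanning, and storing the message, which is where the hardware savings come from.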

Improving Virtual Machine Density

A large number of organizations are adopting virtualization as a means to optimize the use of their hardware, and an architecture such as Exchange 2010, which requires multiple tiers and servers to implement, is a target-rich environment in which to apply virtualization technology.

Microsoft has noted that Exchange 2010 is not virtualization-aware. Although this is not a recommendation against virtualization, organizations should heed Microsoft's considerations regarding virtualization and its associated overhead (approximately 12 percent in Microsoft's testing) when sizing Exchange 2010 implementations.

Using TCP connection and protocol-level optimizations available in load balancing infrastructure can offset that overhead and effectively improve the density of virtual machines on physical hardware. By increasing virtual machine density without negatively impacting performance it is possible to reduce the total number of physical servers necessary to deploy Exchange 2010.
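As a rough illustration, the approximately 12 percent virtualization overhead and a hypothetical offload gain from the load balancer can be plugged into a simple sizing estimate. All figures here are assumptions for illustration, not measured values:

```python
import math

def physical_hosts_needed(vm_count: int, vms_per_host: int,
                          virt_overhead: float = 0.12,
                          offload_gain: float = 0.0) -> int:
    """Estimate physical hosts for an Exchange 2010 VM farm.
    virt_overhead: capacity lost to the hypervisor (~12% per Microsoft
    testing). offload_gain: hypothetical fraction of per-VM load removed
    by connection/protocol offload at the load balancer."""
    effective_per_host = vms_per_host * (1 - virt_overhead) * (1 + offload_gain)
    return math.ceil(vm_count / effective_per_host)

# 60 VMs at a nominal 8 VMs/host, without and with a hypothetical
# 15% offload gain from the load balancer.
print(physical_hosts_needed(60, 8))                    # 9 hosts
print(physical_hosts_needed(60, 8, offload_gain=0.15)) # 8 hosts
```

Even a modest density improvement per host can remove a physical server from the bill of materials once the farm is large enough.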

Strategy First, Deployment Later

Changes in the network and application infrastructure required to support Exchange 2010 mean that organizations should develop a migration strategy before leaping into an implementation. If that strategy takes into consideration the capabilities of modern load balancers (application delivery controllers), the new Exchange 2010 architecture can be optimized in a way that reduces the overall hardware requirements necessary to support the organization's critical communication needs. By optimizing the Exchange architecture, users will see improved performance whether they access Exchange from the office or on the road.

The changes are significant, and the impact of migrating to Exchange 2010 will be felt across the organization. By taking advantage of the natural disruption caused by these changes it is possible to reduce the overall costs of running Exchange while simultaneously improving performance for users both local and remote. If you're going to have to rip apart the data center to upgrade, it's a good time to optimize as much as possible when you put it back together.

Lori MacVittie is technical marketing manager at F5 Networks. You can contact her at l.macvittie@f5.com.