How to Optimize Scalability for ASP.NET Web Applications

How to keep your Web apps running smoothly when workloads rise.

By Iqbal Khan

Web applications based on Microsoft ASP.NET technology are increasingly being used in high-transaction environments with thousands of users. As a result, these applications are being hosted in Web farms consisting of multiple Web servers with a load balancer.

With this increased load, many applications are hitting major performance and scalability bottlenecks. Enterprise managers are either already grappling with these issues or are unknowingly headed straight for them. Scalability means maintaining the same performance during peak usage times and as the overall number of users grows.

The usual scalability bottleneck appears when an application takes on more user load and makes correspondingly more expensive trips to the database or another data store; that data store then becomes a choke point, and application performance drops drastically.

Two types of data in ASP.NET applications cause these bottlenecks. One is user session data; the other is application data. User session data is stored in the ASP.NET State Server or in SQL Server, the storage options Microsoft provides. Application data is stored in a relational database (Oracle, SQL Server, DB2, etc.).

Using an in-memory distributed cache removes bottlenecks for both types of data. For user sessions, a distributed cache can be integrated without any programming effort: a software plug-in module replaces the existing session storage options.

For application data, however, a distributed cache does not replace the relational database; it only augments it by reducing expensive trips to it. Incorporating a distributed cache here requires a small amount of programming to call the cache API.

What Is a Distributed Cache?

A distributed cache is a fast, scalable in-memory data store. It spans multiple servers and synchronizes cache updates across all of them, which allows it to scale easily. The main strength of a distributed cache lies in its ability to scale as an ASP.NET application scales, something that neither a relational database nor the ASP.NET Session State Server can do.

For ASP.NET session data, a distributed cache replaces the storage options the application already has. Most distributed caches provide a Session State Provider (SSP) plug-in so they can be integrated without any programming. All that is needed is to modify the application’s web.config file and register this SSP in place of the one Microsoft provides; the application then automatically starts saving user sessions in the distributed cache.
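Registering such a provider typically looks like the fragment below. The `mode="Custom"` and `customProvider` attributes are standard ASP.NET session-state configuration; the provider name, type, assembly, and `cacheName` attribute shown here are hypothetical placeholders that would come from the caching vendor’s documentation.

```xml
<!-- Inside <system.web> in web.config -->
<sessionState mode="Custom"
              customProvider="DistributedCacheProvider"
              timeout="20">
  <providers>
    <!-- Type and assembly names below are illustrative, not a real product's -->
    <add name="DistributedCacheProvider"
         type="Vendor.Web.SessionState.DistributedSessionProvider, Vendor.Web.SessionState"
         cacheName="mySessionCache" />
  </providers>
</sessionState>
```

No application code changes: ASP.NET loads the registered provider and routes all session reads and writes through it.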

For application data, a distributed cache keeps frequently used data, both read-only and transactional. Transactional data may change as often as every few minutes or even every 15-20 seconds, yet in a busy application it may be fetched hundreds of times during that short period. Without distributed caching, the ASP.NET application would go to the database each time, and database trips are expensive in terms of performance and create a major scalability bottleneck.

A distributed cache stores frequently used data close to the application, as objects. When the application fetches these objects, it finds them already in object form and does not have to recreate them, as it would after a database trip. Retrieving ready-made objects from nearby memory is much faster than going to a distant database and reconstructing those objects every time.
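The read path described above is the classic cache-aside pattern: check the cache first and go to the database only on a miss. Here is a language-neutral sketch in Python (in an ASP.NET application this would be C# calls against the vendor’s cache API); the class name, TTL, and loader function are illustrative, not any product’s actual interface.

```python
import time

class CacheAside:
    """Minimal cache-aside: check the cache first, fall back to the
    database only on a miss, then populate the cache with a TTL."""

    def __init__(self, load_from_db, ttl_seconds=60):
        self._load_from_db = load_from_db   # expensive call, e.g. a SQL query
        self._ttl = ttl_seconds
        self._store = {}                    # key -> (value, expires_at)
        self.db_trips = 0                   # counts actual database round trips

    def get(self, key):
        entry = self._store.get(key)
        if entry and entry[1] > time.time():
            return entry[0]                 # cache hit: no database trip
        self.db_trips += 1                  # cache miss: go to the database
        value = self._load_from_db(key)
        self._store[key] = (value, time.time() + self._ttl)
        return value

# Usage: 100 reads of the same key within the TTL cost one database trip.
cache = CacheAside(load_from_db=lambda k: f"row-for-{k}", ttl_seconds=15)
for _ in range(100):
    cache.get("customer:42")
assert cache.db_trips == 1
```

The TTL is what makes a 15-20 second expiration for transactional data practical: the application tolerates briefly stale reads in exchange for eliminating the vast majority of database trips.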

The scalability boost, on the other hand, comes from the fact that a distributed cache can live on multiple cache servers and lets IT managers add servers to the cache cluster as the ASP.NET Web farm grows. Different caching topologies handle different activity levels and data loads. An effective distributed cache provides mirrored, replicated, partitioned, partition-replica, and client cache topologies. These topologies attack the scalability issue from different angles, rather than offering one size that fits all.

Caching Topologies

A mirrored cache is a two-server, active/passive topology. All clients connect to the active server; updates are applied there and copied to the passive server asynchronously (that is, without making the application wait for them to complete). If the active server ever goes down, the passive server automatically becomes active and all clients switch to it seamlessly and transparently. A mirrored cache is good for Web farms of 5-10 Web servers.

A replicated topology is intended for read-intensive cache usage and copies the entire cache on all cache servers in the cluster. It is an active-active topology in which each cache server has its own set of clients.

If operations are not read-intensive, the best option is a partitioned or partition-replica topology. These are the most scalable topologies and handle a large number of servers without problems. Here, a 5:1 ratio of Web servers to cache servers is a good rule of thumb.

A partitioned cache distributes the cache evenly across all cache servers and is best suited for transactional environments. It does not copy the entire cache to every server; instead it breaks the cache into partitions and places each partition on a different server. For example, with four servers in a cluster, each server holds one-fourth of the cache.
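The key-to-server mapping can be sketched with a simple hash, shown here in Python as an illustration. Real products use consistent hashing or distribution maps so that adding a server relocates only a fraction of the keys, but modulo arithmetic shows the basic idea; the function name is hypothetical.

```python
import zlib

def partition_for(key: str, num_servers: int) -> int:
    """Deterministically map a cache key to one of num_servers partitions.
    CRC32 gives a stable hash across processes, unlike Python's built-in
    hash() for strings."""
    return zlib.crc32(key.encode("utf-8")) % num_servers

# With four servers, keys spread across partitions 0..3, and any given
# key always lands on the same server.
keys = [f"customer:{i}" for i in range(1000)]
placements = [partition_for(k, 4) for k in keys]
assert set(placements) == {0, 1, 2, 3}
```

Because every key has exactly one home server, both the data and the update traffic divide evenly across the cluster, which is why this topology suits write-heavy transactional workloads.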

Partition-replica adds a large measure of reliability. Every partition has a copy of itself on a different server, so if any server goes down, its backup, or replica, immediately becomes active. It is like a mirrored topology, but with more than two nodes: each node holds a partition, and every partition has a mirror.

Last, there is the client cache, targeted at read-intensive applications where the distributed cache itself resides on a separate caching tier. The client cache sits on the client machines, either in-process or out-of-process, and caches data close by; it is best described as a cache on top of a cache. By maintaining, right next to the application, a working set of the data needed at a given point in time, it minimizes trips even to the distributed cache, making the application blazingly fast and scalable.
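The cache-on-top-of-a-cache lookup can be sketched as a two-level read path, again in illustrative Python. The remote dictionary here stands in for the distributed cache tier; the class and counter names are hypothetical.

```python
class TwoLevelCache:
    """Client (near) cache in front of the distributed cache: consult the
    local working set first, then the remote tier, counting remote trips."""

    def __init__(self, remote):
        self._local = {}        # in-process working set on the Web server
        self._remote = remote   # stands in for the distributed cache tier
        self.remote_trips = 0   # counts network hops to the remote tier

    def get(self, key):
        if key in self._local:
            return self._local[key]        # served without a network hop
        self.remote_trips += 1
        value = self._remote.get(key)
        if value is not None:
            self._local[key] = value       # pull into the near cache
        return value

remote = {"product:7": "widget"}
near = TwoLevelCache(remote)
for _ in range(10):
    near.get("product:7")
assert near.remote_trips == 1   # nine of the ten reads never left the process
```

A real client cache also has to be invalidated or refreshed when the distributed cache changes (typically via event notifications from the cache cluster), which this sketch omits.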

Getting Started

Enterprise managers should understand that if they need to scale an ASP.NET application from a few hundred users to thousands or tens of thousands (or more), they must plan for the scalability bottlenecks they are sure to encounter. They may already be experiencing scalability problems when performance drops at peak hours. In either case, they should look for a distributed caching solution for their application.

The first place to start is user-session data: make sure its storage is scalable. For applications that make heavy use of session data, this alone removes half of the scalability issues. It requires no programming and can therefore be applied even to third-party applications already running in production. All enterprise managers need to do is modify the ASP.NET application’s web.config to specify a custom SSP.

The second place to look is application data. Here the first question is whether the application is from a third party or developed in house. For in-house applications, distributed caching can be incorporated through a small programming effort, but it is difficult for third-party applications unless the enterprise managers have access to the developers of that third-party application. If they do, they can pressure the third-party vendor to incorporate distributed caching into their application.

On either path, the best route is to identify a distributed caching product that meets three critical requirements: performance, scalability, and high availability (100 percent uptime). Some products promise performance but fail to scale. Others scale but do not provide 100 percent uptime. A combination of data replication and the dynamic nature of a cache cluster, however, can provide that uptime.

Iqbal Khan is president and technology evangelist for Alachisoft. You can contact the author at iqbal@alachisoft.com
