
The Evolution of the Application Front End

As load balancers evolved into application front ends, the underlying architecture changed—as did the services they could provide.

The networking industry is keen on creating new terms every few years. What used to be a load balancer became an intelligent traffic manager (ITM), which in turn became an application front end (AFE). Other terms get thrown into the mix: ADC (application delivery controller), AADC (advanced application delivery controller), WFE (Web front end), and even the simple “next generation load balancer” are all used to describe a networking appliance that sits in front of a Web application and provides availability, scalability, and acceleration/optimization services for that application. To minimize the confusion, we’ll use AFE here, if only because it best describes where the device physically sits in the network.

There’s more to this technology than just a name. As load balancers evolved over the years to become AFEs, the underlying architecture changed and, as a result, the list of services these products can bring to an application has increased significantly. AFEs now present a networking solution that consolidates many functions into a single platform, helping the application run better and faster while keeping it more available and secure.

Traditional load balancers were essentially smart routers and switches that incorporated intelligent tables and TCP tracking technologies. These mechanisms made them connection brokers that could determine which servers were available, distribute incoming connections across a set of identical servers, and keep each connection going back to the same server. As demand for Web and HTTP applications grew, these products were forced to look deeper and deeper into the packet flows to understand the higher-level protocols. To do this, they handled TCP connections creatively, using mechanisms such as delayed binding or TCP splicing to parse HTTP headers without breaking the TCP connection flows. Fundamentally, however, they still operated packet by packet, which imposed a functional ceiling on the new services that Web applications demanded.

At this point the industry began to see an architectural shift in the way these products operate. The switch/router model gave way to a new, proxy-like architecture that no longer works at a packet-by-packet level. The new architecture terminates client TCP connections and deals with incoming requests on a request-by-request basis. On the server side, the platform opens connections to the servers as needed. This is the fundamental difference between the architecture of a load balancer and what we now refer to as an AFE (at least for the time being!).
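
To make the proxy model concrete, here is a minimal sketch in Go, using only its standard library, of a front end that terminates each client connection and forwards requests over its own server-side connection. The backend address is a placeholder invented for illustration, not taken from any particular product.

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// Hypothetical backend server sitting behind the front end.
	backend, err := url.Parse("http://10.0.0.10:8080")
	if err != nil {
		log.Fatal(err)
	}

	// The reverse proxy terminates the client's TCP connection, reads each
	// HTTP request, and forwards it over a separate connection to the
	// backend; client-side and server-side connections are independent.
	proxy := httputil.NewSingleHostReverseProxy(backend)

	log.Fatal(http.ListenAndServe(":80", proxy))
}
```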

What makes AFEs more capable than their predecessors is that the new architecture lets them operate at the transaction level rather than the packet level. This opens the door to a host of services and features that were simply impossible to provide with the traditional packet-based approach. Since the AFE is the TCP endpoint of all client and server connections, it can provide TCP services as well as features applied to the individual transactions flowing over TCP, enabling HTTP-specific functionality far more advanced than anything the traditional model could offer.

TCP offload is one of the most basic services AFEs can provide for the servers they’re front-ending. Server TCP stacks handle large volumes of WAN-based TCP connections poorly; each connection consumes memory and processing time that could otherwise go to the application. An AFE can terminate the thousands of incoming TCP connections from users and, in turn, establish a small number of long-lasting TCP connections between itself and the servers. Using persistent HTTP connections, requests from all the incoming connections can be sent to the server over the small number of server-side connections. This mechanism (often referred to as TCP multiplexing or TCP pooling) relieves the servers of the TCP pain points, letting them devote their processing power to the application itself rather than its overhead.
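
As a rough sketch of the idea, again in Go and with arbitrary pool sizes chosen for illustration, the front end's outbound transport can be configured to keep a small pool of persistent, keep-alive connections to each backend, so that requests from thousands of client connections are funneled over a handful of server-side ones.

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"time"
)

// newPooledProxy returns a reverse proxy whose server-side transport keeps
// a small number of long-lived, keep-alive connections to the backend and
// reuses them for requests arriving on many client connections.
func newPooledProxy(backend *url.URL) *httputil.ReverseProxy {
	proxy := httputil.NewSingleHostReverseProxy(backend)
	proxy.Transport = &http.Transport{
		MaxIdleConns:        64,               // total idle connections kept open
		MaxIdleConnsPerHost: 32,               // persistent connections per backend
		IdleConnTimeout:     90 * time.Second, // how long an idle connection is kept
	}
	return proxy
}

func main() {
	backend, err := url.Parse("http://10.0.0.10:8080") // placeholder backend
	if err != nil {
		log.Fatal(err)
	}
	log.Fatal(http.ListenAndServe(":80", newPooledProxy(backend)))
}
```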

Load Balancing

Load balancing now becomes a natural extension of the AFE’s inherent functionality. Since the device already has connections to all the servers, all it needs to do is make an intelligent decision to pick the best server for each request. And since the AFE operates at the transaction level, parsing the HTTP headers is straightforward, allowing complex load-balancing tasks (such as URL switching or cookie persistence) to be performed with ease.
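
A simplified sketch of those two techniques might look like the following, with pool addresses and a cookie name invented purely for illustration: requests for static content are routed by URL prefix (URL switching), and a cookie records which application server a client was sent to so that later requests stick to it (cookie persistence).

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"strings"
)

var (
	imagePool = mustParse("http://10.0.1.10:8080") // dedicated static-content server
	appPool   = []*url.URL{
		mustParse("http://10.0.2.10:8080"),
		mustParse("http://10.0.2.11:8080"),
	}
	next int // naive round-robin index (not goroutine-safe; illustration only)
)

func mustParse(raw string) *url.URL {
	u, err := url.Parse(raw)
	if err != nil {
		log.Fatal(err)
	}
	return u
}

// pickBackend performs URL switching and cookie persistence.
func pickBackend(w http.ResponseWriter, r *http.Request) *url.URL {
	// URL switching: send image requests to the static-content pool.
	if strings.HasPrefix(r.URL.Path, "/images/") {
		return imagePool
	}
	// Cookie persistence: reuse the server recorded in the client's cookie.
	if c, err := r.Cookie("srv"); err == nil {
		for _, u := range appPool {
			if u.Host == c.Value {
				return u
			}
		}
	}
	// Otherwise round-robin across the pool and set the persistence cookie.
	u := appPool[next%len(appPool)]
	next++
	http.SetCookie(w, &http.Cookie{Name: "srv", Value: u.Host, Path: "/"})
	return u
}

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		httputil.NewSingleHostReverseProxy(pickBackend(w, r)).ServeHTTP(w, r)
	})
	log.Fatal(http.ListenAndServe(":80", nil))
}
```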

SSL offload is another significant benefit an AFE can provide an application. It’s well known that the cryptographic algorithms associated with SSL significantly hamper the performance of a server. By using dedicated SSL hardware to run the security algorithms, the AFE can terminate SSL sessions at scale and then send the requests to the servers over non-encrypted HTTP, removing the SSL overhead from the servers. Even if end-to-end security policies mandate that all requests reach the servers securely, the AFE can use lighter encryption keys and longer-lasting SSL sessions between itself and the server to significantly minimize the impact of secure session processing on the server.
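
A bare-bones view of TLS termination might look like this, with the certificate paths and backend address as placeholders: the front end performs the handshake and decryption, then forwards the decrypted request to the server over plain HTTP.

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	backend, err := url.Parse("http://10.0.0.10:8080") // plain-HTTP backend
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(backend)

	// TLS is terminated here; the cryptographic work never reaches the server.
	log.Fatal(http.ListenAndServeTLS(":443", "cert.pem", "key.pem", proxy))
}
```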

Content compression is yet another major service an AFE can offer an application. Because it deals with whole transactions, at the request and response level, the AFE can compress content traveling from servers to clients. Since all popular browsers can now handle compressed content, this feature can be integrated into the network seamlessly, without changing the application itself. Compression has huge benefits for Web applications: it can reduce client response time and minimize the amount of outbound bandwidth used, along with the associated costs.
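
A minimal compression sketch, again in Go with invented names: if the browser advertises gzip support in its Accept-Encoding header, the response body is compressed on the way out. A production device would also check content type and size and strip any Content-Length header set upstream; both are omitted here.

```go
package main

import (
	"compress/gzip"
	"io"
	"log"
	"net/http"
	"strings"
)

// gzipWriter wraps a ResponseWriter so that anything the inner handler
// writes is compressed before it reaches the client.
type gzipWriter struct {
	http.ResponseWriter
	zw io.Writer
}

func (g gzipWriter) Write(b []byte) (int, error) { return g.zw.Write(b) }

// Gzip compresses responses for clients that accept gzip encoding.
func Gzip(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if !strings.Contains(r.Header.Get("Accept-Encoding"), "gzip") {
			next.ServeHTTP(w, r) // client cannot handle compressed content
			return
		}
		w.Header().Set("Content-Encoding", "gzip")
		zw := gzip.NewWriter(w)
		defer zw.Close()
		next.ServeHTTP(gzipWriter{ResponseWriter: w, zw: zw}, r)
	})
}

func main() {
	// Stand-in for the application response being proxied.
	hello := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		io.WriteString(w, strings.Repeat("highly compressible text ", 200))
	})
	log.Fatal(http.ListenAndServe(":80", Gzip(hello)))
}
```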

The ability to offer these services is a direct result of the underlying architecture of an AFE. This isn’t where it ends, either. Depending on the vendor, AFEs can provide features such as caching, DDoS protection, SSL VPN, content rewrite, traditional and application-layer firewalling, filtering, XML acceleration, and bandwidth management, among other services. These services can be offered only in the transaction-based architecture of an AFE. An additional benefit is that the features are consolidated into a single platform rather than spread across individual point products. If implemented properly, the services can be integrated to work together seamlessly. For example, requests arriving over SSL sessions can be load balanced and the responses compressed before being re-encrypted. This level of integration is possible only because the features are offered on a single platform.
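
As a compact sketch of that kind of chaining, with placeholder addresses and certificate paths and the error handling a real product needs left out: TLS is terminated at the front end, the request is balanced across two hypothetical backends, and the response is compressed before it is re-encrypted on the client's TLS session.

```go
package main

import (
	"compress/gzip"
	"io"
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"strings"
	"sync/atomic"
)

var backends = []*url.URL{
	{Scheme: "http", Host: "10.0.2.10:8080"},
	{Scheme: "http", Host: "10.0.2.11:8080"},
}
var counter uint64 // round-robin counter shared across requests

// gzipWriter compresses whatever the proxied backend writes.
type gzipWriter struct {
	http.ResponseWriter
	zw io.Writer
}

func (g gzipWriter) Write(b []byte) (int, error) { return g.zw.Write(b) }

func (g gzipWriter) WriteHeader(code int) {
	g.Header().Del("Content-Length") // length changes once the body is compressed
	g.ResponseWriter.WriteHeader(code)
}

func main() {
	// Load balancing: pick the next backend and proxy the request to it.
	balance := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		target := backends[atomic.AddUint64(&counter, 1)%uint64(len(backends))]
		httputil.NewSingleHostReverseProxy(target).ServeHTTP(w, r)
	})

	// Compression: wrap the writer so the balanced response is gzipped.
	compress := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if strings.Contains(r.Header.Get("Accept-Encoding"), "gzip") {
			w.Header().Set("Content-Encoding", "gzip")
			zw := gzip.NewWriter(w)
			defer zw.Close()
			w = gzipWriter{ResponseWriter: w, zw: zw}
		}
		balance.ServeHTTP(w, r)
	})

	// TLS termination: decrypt on the way in, re-encrypt the compressed
	// response on the way out to the client.
	log.Fatal(http.ListenAndServeTLS(":443", "cert.pem", "key.pem", compress))
}
```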

There is no doubt that AFEs matter to application networking, offering features within the network infrastructure that directly benefit the application itself. The architectural shift was necessary to create a new breed of networking appliance that is more intelligent about the applications it delivers. This evolution has produced a consolidated platform that can offer many features together and enabled the network infrastructure to help the application itself through offload, optimization, and acceleration.
