In-Depth
The Benefits of Going Virtual
Business continuity is critical, so operations must be up and running 24x7. Data center virtualization could be the key IT needs.
We know that if business continuity is an organization’s key objective, operations must be up and running 24x7. Best practices suggest using geographic redundancy to establish multiple data centers or sites located in different geographic regions, each with replicated applications and data. Do you need to replicate everything? Not necessarily—just those things that are deemed mission critical. Some organizations will feel that the bulk of their applications and data are mission critical, whereas others will have a smaller subset.
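To make that decision concrete, consider a simple replication policy that records each application’s tier and where it should be replicated. The sketch below uses hypothetical application names, tiers, and sites; a real policy would come out of your own business-impact analysis.

    # Hypothetical replication policy: replicate only what is deemed
    # mission critical; everything else stays at the primary site.
    REPLICATION_POLICY = {
        "order-processing": {"tier": "mission-critical", "replicate_to": ["site-b"]},
        "customer-db":      {"tier": "mission-critical", "replicate_to": ["site-b"]},
        "intranet-wiki":    {"tier": "best-effort",      "replicate_to": []},
    }

    def replication_targets(app: str) -> list[str]:
        """Return the sites a given application should be replicated to."""
        return REPLICATION_POLICY.get(app, {}).get("replicate_to", [])

    for app in REPLICATION_POLICY:
        print(app, "->", replication_targets(app) or "not replicated")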
You can implement geographic redundancy in a number of ways. You can deploy multiple sites and use a product such as Veritas’ Smart Location or EMC’s Replication Storage to duplicate applications and data, which is a significant investment. Today, most IT professionals still build a redundant site as a backup, manually managing data replication and failing over to the secondary site when needed. That standby site is an insurance policy, but it is also a nonperforming asset. By virtualizing data center resources at both sites, you can turn that standby site into an ongoing, active asset: except during a disaster, both sites serve requests in a distributed fashion, achieving maximum reliability and performance regardless of location.
For example, in an active-active data center configuration (a design that provides backup, disaster recovery, and continuity of operations), you can perform data replication, upgrades, and maintenance on a more frequent basis, improving your overall uptime and shortening time-to-market for services. There are other benefits to virtualization when you look at the data center itself. If you need maximum availability and high performance for your applications and data, you can deploy a very reliable midrange server with RAID and redundant power supplies that costs half a million dollars, yet you will still have a single point of failure because you have a single system. Achieving your business objectives by throwing very expensive hardware at the problem amounts to trusting that all the components will keep running.
A better practice is to virtualize your server and application resources, which is much more cost effective and a better overall architecture. Instead of deploying that very expensive midrange system, virtualize multiple low-cost, high-performance servers with applications and data, so that when one server fails you are not impacted. This gives you the opportunity to achieve high availability and performance without breaking the bank.
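As a minimal sketch of that pooling idea, assume a pool of interchangeable low-cost servers behind a simple round-robin selector: a failed server is skipped rather than taking the service down. The server names and the failure set are hypothetical, and a real deployment would put a load balancer or application delivery controller in front of the pool.

    import itertools

    # Hypothetical pool of low-cost servers standing in for one
    # expensive midrange system.
    POOL = ["app-01", "app-02", "app-03", "app-04"]
    FAILED = {"app-03"}  # pretend this server has just died
    _rotation = itertools.cycle(POOL)

    def next_server() -> str:
        """Round-robin over the pool, skipping servers that fail health checks."""
        for _ in range(len(POOL)):
            server = next(_rotation)
            if server not in FAILED:
                return server
        raise RuntimeError("no healthy servers left in the pool")

    for _ in range(6):
        print("request routed to", next_server())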
Considerations for Virtualizing Your Data Center
It starts with the application. Can this application be deployed in a manner that can be virtualized? Does it support clustering natively, or are there tools that add clustering support so that each application instance recognizes state? If so, that application is a great candidate for virtualization within the broader context of the application-delivery network framework.
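One common way an application can recognize state across instances is to keep session state in a shared store instead of in instance memory. The sketch below assumes that pattern; the dict stands in for a real shared cache or database, and the class and names are hypothetical.

    # Externalized session state: any instance can serve any user
    # because state lives in a shared store, not in instance memory.
    SHARED_SESSIONS: dict[str, dict] = {}  # stand-in for a shared cache or DB

    class AppInstance:
        def __init__(self, name: str):
            self.name = name

        def handle(self, session_id: str, item: str) -> None:
            session = SHARED_SESSIONS.setdefault(session_id, {"cart": []})
            session["cart"].append(item)
            print(f"{self.name} served {session_id}, cart={session['cart']}")

    # Two clustered instances; either can continue the same user session.
    a, b = AppInstance("instance-a"), AppInstance("instance-b")
    a.handle("user-42", "keyboard")
    b.handle("user-42", "mouse")  # sees the state instance-a wrote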
Can the underlying applications be replicated in real time between redundant sites so they can resolve requests at any site at any time, ensuring that the data is current? If you can’t replicate the data in real time, there might still be an opportunity to virtualize redundant sites if the data being served doesn’t require up-to-the-minute freshness. There are many scenarios where that does make sense. Ask which of your data is acceptable a day old.
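A freshness requirement like that can be expressed as a per-data-set staleness budget: a replica site serves a data set only while its replication lag is within the budget. The data-set names and numbers below are illustrative assumptions.

    # Staleness budgets in hours; values are illustrative.
    MAX_STALENESS_HOURS = {
        "product-catalog": 24.0,  # day-old catalog data is acceptable
        "inventory":       0.25,  # stock levels must be near real time
    }

    def replica_can_serve(dataset: str, lag_hours: float) -> bool:
        """A replica may serve a data set only within its staleness budget."""
        return lag_hours <= MAX_STALENESS_HOURS.get(dataset, 0.0)

    print(replica_can_serve("product-catalog", 6.0))  # True: within a day
    print(replica_can_serve("inventory", 6.0))        # False: too stale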
Ultimately, you have to look at the underlying application infrastructure to determine what you can virtualize. The same is true for virtualizing connectivity and links. You also must consider the amount of data and the performance of the replication process. In this case, the primary challenge is not the bandwidth or link capacity; the challenge is how much of that data can be transferred concurrently, keeping the pipe full while minimizing protocol communication overhead. We’ve seen customers with OC-3 connectivity between data centers whose replication processes use only a fraction of that pipe. They have a great deal of data to transfer, but it just trickles into the pipe, so replication literally takes days to complete. It’s just not efficient.
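The concurrency point is easy to demonstrate. The toy simulation below models a latency-bound transfer: sending chunks one at a time leaves the pipe idle between round trips, while keeping several chunks in flight finishes in a fraction of the wall-clock time. The chunk counts and delays are made up, and threads stand in for parallel transfer streams.

    import concurrent.futures
    import time

    CHUNKS = 8
    SECONDS_PER_CHUNK = 0.2  # stand-in for per-chunk protocol round trips

    def send_chunk(i: int) -> int:
        time.sleep(SECONDS_PER_CHUNK)  # simulate a latency-bound transfer
        return i

    # One chunk at a time: latency dominates and the pipe sits mostly idle.
    start = time.time()
    for i in range(CHUNKS):
        send_chunk(i)
    print(f"serial:   {time.time() - start:.2f}s")

    # Several chunks in flight at once: the same data fills the pipe.
    start = time.time()
    with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
        list(pool.map(send_chunk, range(CHUNKS)))
    print(f"parallel: {time.time() - start:.2f}s")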
Fortunately, there are solutions that use symmetrical WAN acceleration to mitigate this situation, so replication processes that took days to finish now get completed in hours. That’s a better model and a better use of the underlying infrastructure, which includes available bandwidth.
The Benefits of Data-Center Virtualization
From an architecture standpoint, there are many benefits to virtualizing the resources that deliver your applications. The gains are substantial: better use of infrastructure, 99.999 percent availability, and simplified management. It boils down to better operational efficiency.
With virtualization, there’s efficiency in the underlying hardware requirements. In essence, you need less hardware or less-expensive hardware to do the same work. You can get five times the performance for a third of the cost when you compare a midrange system to a modest server farm.
If I can put 10 low-cost servers in a virtualized resource pool, I have five to 10 times the power of the most powerful midrange system at a third of the cost. By virtualizing my servers, I realize a tremendous cost savings and have a much better architecture for availability and ongoing maintenance. If I need to bring one server down, I don’t impact the others, and I can gracefully add and remove systems to support my underlying architecture.
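The back-of-the-envelope arithmetic behind that claim looks like the sketch below; every price and throughput figure here is an illustrative assumption, not a quote.

    # Illustrative numbers only: one midrange system versus a pool of ten.
    midrange_cost, midrange_throughput = 500_000, 1.0
    server_cost, server_throughput, servers = 15_000, 0.7, 10

    pool_cost = server_cost * servers               # 150,000
    pool_throughput = server_throughput * servers   # 7x the midrange box

    print(f"cost ratio:       {pool_cost / midrange_cost:.2f}")          # 0.30
    print(f"throughput ratio: {pool_throughput / midrange_throughput:.1f}x")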
For tasks such as ongoing maintenance and management, you can realize significant efficiencies. With redundant active-active data centers managed by an intelligent DNS system, you can easily bring down one data center for maintenance without affecting the other data centers or impacting users.
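A minimal sketch of that intelligent-DNS behavior: resolve queries only to data centers that are healthy and not drained for maintenance. The site names are hypothetical and the addresses come from documentation ranges.

    # Answer DNS queries only with sites that can take new users.
    SITES = {
        "dc-east": {"address": "203.0.113.10", "healthy": True, "draining": False},
        "dc-west": {"address": "198.51.100.7", "healthy": True, "draining": True},
    }

    def resolve(hostname: str) -> list[str]:
        """Return the addresses of sites eligible to receive new users."""
        return [s["address"] for s in SITES.values()
                if s["healthy"] and not s["draining"]]

    # dc-west is draining for maintenance, so new users land on dc-east.
    print(resolve("app.example.com"))  # ['203.0.113.10']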
The benefits of virtualization run the gamut: ongoing maintenance and management, reduction of hardware acquisition costs, and better architecture for availability, security, and performance. This is why virtualization is becoming the standard for designing IT resources for the future.
Virtualization really isn’t a new concept, though. What is new is thinking about all the points in the WAN and LAN infrastructure where you can realize virtualization benefits, regardless of where you started. Consider your need for worldwide employees to securely access your network and applications at any time, from any device, and from any location. Sometimes sites go down because of maintenance, connectivity problems, or disasters. If you provide worldwide access that is available only 95 percent of the time and is underperforming 98 percent of the time, you are not achieving your goal of round-the-clock worldwide access.
Here, virtualization integrated with access technologies such as SSL VPN comes into play. Virtualizing the distributed access devices that host your SSL VPN access control, so that users are routed to the best possible site, provides access to applications and network resources without any interruption of service.
Routing users to the best available site is completely transparent and does not require updating or reconfiguring client software, a process that is fraught with problems. Again, virtualization is a better model. Think about virtualization from a holistic architectural approach to fully realize its benefits.
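Extending the intelligent-DNS idea to access, a best-site selector might pick the healthy access site with the lowest measured latency to the user. The site names and latency figures below are hypothetical.

    # Among healthy access sites, pick the closest one to the user.
    ACCESS_SITES = {
        "vpn-emea": {"healthy": True,  "latency_ms": 35},
        "vpn-amer": {"healthy": True,  "latency_ms": 120},
        "vpn-apac": {"healthy": False, "latency_ms": 15},  # down for maintenance
    }

    def best_site() -> str:
        healthy = {n: s for n, s in ACCESS_SITES.items() if s["healthy"]}
        if not healthy:
            raise RuntimeError("no access sites available")
        return min(healthy, key=lambda n: healthy[n]["latency_ms"])

    print("user routed to", best_site())  # vpn-emea: closest healthy site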
Conclusion
When you consider virtualizing your IT resources, you must consider all critical junctures of your network topology. What is your current environment? Do you have multiple data centers? Do you currently multi-home, provisioning multiple ISP links from different providers? Do you have applications that you want to virtualize, or that can be virtualized? Where are your users coming from: the branch office, overseas, or remotely from the road? Are those users private employees, public users, contractors, suppliers, or customers? Finally, what are your business goals, objectives, and SLAs?
All are key questions to ask first if you wish to experience the benefits of virtualizing your resources later.