Q&A: IT and the Virtual Desktop

Desktop virtualization brings the computing power from the local workstation back to the data center, changing the entire computing paradigm.

Although server virtualization has received considerable attention from IT over the years, desktop virtualization is just now attracting attention. To explore what's behind the interest, the challenges IT faces in deploying virtual desktops, and the benefits and ROI of the technology, we turned to Jack Hembrough, executive vice president of sales and marketing at Leostream. The company's Connection Broker helps enterprises manage hosted desktops, integrating clients and back ends and supporting a range of viewer, authentication, and security protocols.

Enterprise Strategies: What is a virtual desktop infrastructure (VDI)? How is this technology related to server virtualization?

Jack Hembrough: The technology is similar in that both put a hypervisor on a bare metal server to let that piece of hardware act like many pieces of hardware, but the similarities stop there.

Where server virtualization consolidates many virtual data center machines onto one physical data center machine, desktop virtualization brings the computing power from the edge (the local workstation) back to the data center. Desktop virtualization changes the entire computing paradigm.

In addition, there are significant differences in how servers and desktops are used. For example, if a server that processes e-commerce transactions or stock orders goes down even for just a few minutes, the consequences to the underlying business can be severe. Individuals, however, don’t need 24/7 desktop access -- a ten-minute period of downtime in the middle of the night will, at most, be an annoyance to a small number of people. Frankly, users of physical desktops are already accustomed to dealing with desktop crashes and application failures throughout their workday.

One consequence of these differences can be seen in storage. Virtualized servers require expensive storage arrays that fail over without missing a beat when there’s a technical glitch. Desktops, by contrast, do need backup, cleanup, maybe even duplication, but inexpensive RAID storage can meet those requirements. That last, very expensive nine in “five nines” availability is unnecessary for desktop storage: four nines may be sufficient.
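To put those availability figures in perspective, here is a quick back-of-the-envelope calculation (a sketch; the "nines" values come from the discussion above):

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a non-leap year

def downtime_per_year(nines: int) -> float:
    """Expected annual downtime, in minutes, for N-nines availability."""
    unavailability = 10 ** (-nines)  # e.g. four nines -> 0.0001
    return MINUTES_PER_YEAR * unavailability

print(f"four nines: {downtime_per_year(4):.1f} min/year")  # ~52.6 minutes
print(f"five nines: {downtime_per_year(5):.1f} min/year")  # ~5.3 minutes
```

In other words, dropping that last nine trades roughly 47 extra minutes of potential annual downtime for a much cheaper storage tier -- an acceptable trade for desktops, but usually not for transaction-processing servers.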

IT organizations have been virtualizing their servers for a number of years, yet desktop virtualization doesn't seem to have taken off until recently. Why is there suddenly so much interest?

First, high ROI is easily demonstrated. When one factors in the equipment needed for home use and/or a disaster recovery site for physical desktops, CAPEX is often lower for the hosted solution. When the tangible OPEX savings from virtualized desktops are factored in, the ROI overwhelmingly favors a VDI.

Second, security and compliance concerns are causing managers to drive data from the edge and into the data center where it can be better managed. A physical desktop infrastructure (PDI) can have 20 GB of sensitive data sitting on a myriad of remote workstations and laptops that can be lost or stolen. A VDI has that data de-duplicated, managed, and secured in the data center.

Third, desktops need to be refreshed on a regular basis. The failure of Vista and our economic woes have led to an out-of-date stock of physical desktops. Enterprises can no longer wait for the issues to be rectified, and since they’re going to buy new desktops anyway, IT is giving VDI a good, hard look -- and they like what they see.

What are some of the challenges that enterprises face in deploying a VDI?

A VDI is technically complex, and there aren’t that many people who have the expertise to build one. The VDI touches the data center, the network -- both local and wide area -- and the endpoints, and all of these resources need to work together.

That leads to the second major challenge: organizational ambiguity. Do the data center folks own the project, or is it a desktop initiative? For 25 years, "data center" and "desktop" have been two separate groups. For a VDI to succeed, they must operate as one.

Perhaps the biggest challenge is gaining user acceptance. Users value a faster CPU, a lighter laptop, or multiple display screens. A well-designed VDI can give the user a superior computing experience. Just ask the doctors who are able to sign into their desktop just by walking into an exam room, are automatically logged off when they leave, and then access the same desktop in the next exam room, all thanks to a proximity card around their neck. Users must have a superior experience if they are to buy into giving up their physical desktops.

Several analysts have recently cited a lack of interoperability among major VDI systems as a major challenge for enterprises deploying VDI in the future. Do you agree? What challenges does it pose? How should an enterprise overcome this problem?

As analysts have pointed out, no single vendor can currently provide a complete VDI solution. Although it’s true that currently a single vendor dominates the virtualization market, every significant IT system becomes a multi-vendor market once it matures. Organizations that lock into a single vendor miss out on deploying innovative solutions from other vendors in the future, and they weaken their ability to connect their VDI to security, network management, and other critical IT systems -- and they don't make maximum use of existing resources.

The key to an interoperable VDI is the connection broker. This lies at the heart of any VDI deployment because it connects a wide variety of systems to create a coherent, seamless desktop experience for the end user. Ironically, in the scheme of virtualization deployments, the connection broker accounts for as little as 5 to 10 percent of the budget. Yet many of the brokers bundled into large vendor “deals” are less than robust: they fail to interoperate with existing resources and create IT headaches, including unfamiliar user requirements. These “free” connection brokers can also break down when the number of end users exceeds a few hundred, and they restrict the technology options available for future decisions.

If the connection broker cannot interoperate with a large number of other systems and vendor offerings, neither will the VDI. This is what makes an enterprise-class, vendor-independent connection broker especially vital to large-scale deployments.

What’s the advantage to reusing the physical desktop infrastructure that an enterprise already owns? Why not just convert wholesale to virtualized solutions?

Nearly all PDIs have been purpose built with best-of-breed elements. In one corner you have the leading-edge organizations: a company that has specialized in virtual private networks (VPNs) since the category emerged; a company that won the battle to become the dominant provider of load balancers; and a company that must innovate to maintain its lead in the storage solutions marketplace. In the other corner you have a software company that built a virtualization hypervisor and now wants to offer a VPN, a load balancer, a storage solution, and so on.

In technology generation after technology generation, the corner marked “best of breed” has won. Why would desktop virtualization be any different?

Enterprises have also instrumented that PDI with management tools to ensure the infrastructure’s efficient operation. Likely there is already a desktop provisioning procedure. Groups of users get certain desktop capabilities. Whether those desktops are physical or virtual, the users have the same needs and same rights. The same tools work fine.

In addition, the IT staff knows how to run that physical infrastructure well. Implementing a VDI already asks the staff to absorb significant change. It only makes sense to use and build upon the systems-management expertise those folks have already developed rather than waste it.

Selling new IT projects to management can be tough in this economy. Often the most successful projects come with a strong ROI, but that can be difficult to calculate if some of the benefits don't have a bottom-line impact. What benefits can an enterprise expect by moving to desktop virtualization? What factors go into calculating a VDI project's ROI? What's the average payback time for such projects?

A VDI provides lower CAPEX and substantially lower OPEX. As for how long it will take before these savings pay for the solution, that’s harder to say, as large installations have only been in production for a short time.

On the CAPEX side, the savings are straightforward, especially if the enterprise replaces fat desktops with thin clients. For OPEX, IT managers should look at how much they spend on help-desk services, plus energy and routine maintenance of endpoints -- a VDI will reduce all of these costs substantially.
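The payback logic described above can be sketched in a few lines. All figures below are hypothetical placeholders for illustration, not vendor pricing or numbers from the interview:

```python
def payback_months(net_capex: float, monthly_opex_savings: float) -> float:
    """Months until cumulative OPEX savings cover the net up-front CAPEX.

    net_capex: additional up-front cost of the VDI (servers, storage, thin
               clients) minus the avoided cost of a physical desktop refresh;
               can be zero or negative if thin clients come out cheaper.
    monthly_opex_savings: combined help-desk, energy, and endpoint-maintenance
               savings per month, per the factors the interview lists.
    """
    if net_capex <= 0:
        return 0.0  # VDI is cheaper up front, so payback is immediate
    return net_capex / monthly_opex_savings

# Hypothetical 500-seat deployment:
net_capex = 120_000        # placeholder net up-front cost
savings = 8_000            # placeholder monthly OPEX savings
print(f"payback: {payback_months(net_capex, savings):.0f} months")  # 15
```

This simple model ignores discounting and risk adjustment; a real business case would apply both, as the Forrester study cited below does.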

People can easily find a wide variety of ROI calculators for VDI on the Web which they can use to get a rough estimate of their own potential savings. Speaking generally, Forrester put out a study last year showing that the risk-adjusted four-year ROI was 255 percent and payback occurred in 17 months (see http://www.vmware.com/files/pdf/Total-Economic-Impact-VDI.pdf). Even without considering ROI, given that a well-designed VDI provides better security, more consistent backup, tighter adherence to policies, and a superior end-user experience, there are plenty of other good reasons to make the leap to virtual desktops.

What are some of the common mistakes enterprises make when they deploy VDI?

As I mentioned earlier, the biggest mistake is failing to clearly define the roles of the desktop people and the data center organization during the deployment. If these roles are left amorphous, people end up stepping over one another, turf battles erupt, and the project descends into chaos.

The second biggest mistake is to accept the connection broker that large vendors often throw in to sweeten a deal. The truth is, these “free” brokers typically don’t interoperate with other systems, make it impossible to reuse most of the physical infrastructure, and are not robust enough to provide good performance in large deployments. Even with a “free” connection broker, the organization ends up paying a cost in the end while never realizing the full benefits of VDI.

What best practices can IT follow to avoid these mistakes?

The organization needs to ensure that both groups -- desktop and data center -- have clearly defined roles so that they can work together on what must be a joint operation.

As for the connection broker issue, it’s important to conduct serious due diligence. This element typically accounts for just 5 to 10 percent of the overall cost of the deployment, and so often gets overlooked, yet it’s arguably the most critical piece of the puzzle.

What products or services does Leostream provide, and how are these different from your competitors' offerings?

Leostream makes the Leostream Connection Broker, a leading vendor-neutral connection broker that provides end users with a virtual desktop experience equal or superior to that of traditional desktops. With Leostream, end users can access their virtual desktop from any machine, regardless of location, because the technology directs each user to his or her own virtual desktop image. Furthermore, IT managers can make maximum use of existing resources by easily integrating an array of clients, back-end systems, and viewers. The product is delivered as a virtual appliance and natively supports VMware ESX, Microsoft Hyper-V, and Citrix XenServer.
