Q&A: Unified Computing

Improving IT efficiency is a prime driver in these times of tight budgets. Unified Computing can help. We explore what the technology is, what benefits it offers, and where it’s headed.

Unified computing is being touted as a technology that can make your data center more efficient. Anything that helps IT do more with less is welcome, of course, but what, exactly, does unified computing do, and what benefits can IT truly expect?

For answers, we approached Vik Desai, the CEO of Liquid Computing, a Connecticut-based firm whose LiquidIQ is a complete "data center in a chassis" system that eases IT infrastructure management through unified computing.

Enterprise Strategies: Why all the buzz about unified computing? What does unified computing mean?

Vik Desai: The buzz is coming from customers and vendors alike who see unified computing as the next big opportunity to drive real data-center efficiencies while improving service to end users.

Enterprises and service providers are increasingly moving towards public or private cloud services to lower costs and improve service by tapping into resources “just in time.” However, in the transition they’re discovering that the underlying IT infrastructure wasn’t designed for this new dynamic reality. Today, provisioning new hardware takes weeks or months, driving up administrative costs as operators follow time-consuming, inefficient, manual procedures to configure and reconfigure infrastructure.

Unified computing solves this problem, enabling data centers to drive down the time and cost of managing IT infrastructure through complete software-based control of all resources including servers, networking, and storage.
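
[Editor's note: To make "software-based control" concrete, here is a minimal sketch in Python of what a unified provisioning workflow might look like. The UnifiedController and ServiceSpec names are invented purely for illustration; they are not LiquidIQ's (or any vendor's) actual interfaces, and real systems expose this control through their own management software.]

# Minimal, hypothetical sketch of unified, software-based infrastructure control.
# The UnifiedController API below is invented for illustration only; it does not
# represent LiquidIQ's (or any vendor's) actual interface.

from dataclasses import dataclass
from typing import List

@dataclass
class ServiceSpec:
    """Everything an application environment needs, declared in one place."""
    name: str
    servers: int         # compute nodes to allocate from the shared pool
    memory_gb: int       # memory per node
    vlan: int            # network segment to attach the nodes to
    storage_gb: int      # shared storage to carve out for the service

class UnifiedController:
    """Single software control point over pooled servers, networking, and storage."""

    def provision(self, spec: ServiceSpec) -> List[str]:
        # In a siloed data center each of these steps is a separate ticket handled
        # by a separate team; here they run as one automated workflow.
        nodes = self._allocate_servers(spec.servers, spec.memory_gb)
        self._configure_network(nodes, spec.vlan)
        self._attach_storage(nodes, spec.storage_gb)
        return nodes

    def _allocate_servers(self, count: int, memory_gb: int) -> List[str]:
        return [f"node-{i}" for i in range(count)]   # stand-in for real allocation

    def _configure_network(self, nodes: List[str], vlan: int) -> None:
        pass                                         # stand-in for fabric/switch setup

    def _attach_storage(self, nodes: List[str], size_gb: int) -> None:
        pass                                         # stand-in for volume mapping

if __name__ == "__main__":
    controller = UnifiedController()
    nodes = controller.provision(
        ServiceSpec(name="web-tier", servers=4, memory_gb=32, vlan=120, storage_gb=500))
    print(f"web-tier provisioned on {len(nodes)} nodes")

[The point is not the specific calls but that a single declarative request replaces separate, manual configuration of each silo.]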

How is a unified computing system (UCS) different from existing IT infrastructure or from a blade system?

Today, IT infrastructure consists of static silos of servers, networks, and storage, each requiring manual and separate configuration by specialized IT staff. Unified computing breaks down these silos of systems, people, and processes, creating a pool of virtual resources that can be completely managed and controlled in software. Unified computing streamlines the entire workflow, cutting provisioning time from weeks or months down to, in some cases, minutes.

A blade system typically refers to a specific physical configuration of computing resources, and it solves a completely different problem. The aim of a blade system is to reduce physical footprint, thereby lowering space requirements and, in some cases, power consumption. Blade systems don't address total infrastructure management and control.

What is the main reason companies want to move to UCS?

It comes back to the time and cost of delivering cloud services on traditional infrastructure. What we hear most often from companies looking at UCS is that they’re struggling to meet simultaneous demands to support service levels and lower costs. By automating manual processes and shortening provisioning times, UCS can reduce administrative costs by up to 80 percent while providing the agility and responsiveness required to deliver on-demand services.

How does IT go about moving to UCS, and what role does virtualization play?

If the UCS solution supports industry standards, the transition is seamless. By industry standards, I mean, for example, that IT can run unmodified applications, operating systems, or hypervisors on the system without the need for proprietary drivers, and can connect directly to standards-based data and storage networks without the need to install proprietary or vendor-specific gear.

Notice that I included operating systems as well as hypervisors. Although the adoption of virtualization is accelerating, today only 20 to 25 percent of applications are virtualized, which means that in evaluating a UCS solution, IT should ensure that it delivers the same benefits in both bare-metal and virtualized deployments. That said, virtualization and UCS together are a powerful combination: they give operators real-time operational agility and control that extends from the application all the way down to the underlying physical infrastructure. Virtualization alone can't do this, because it operates only within individual silos and within the bounds of the physical resources beneath it.

What common mistakes do enterprises make trying to meet these challenges, and what best practices can you recommend so IT can avoid such mistakes?

When assessing the total cost of ownership (TCO), we often find that companies are focused on costs related to hardware acquisition, software acquisition, power, and space consumption. The reality is that spending on administration is equally significant and growing faster than any other cost within the data center. Unified computing can provide an order of magnitude savings in administration, but to see this, organizations need to look holistically at what their infrastructure really costs to support.
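
[Editor's note: As a purely illustrative back-of-the-envelope check, the sketch below shows why administration belongs in the TCO calculation. All dollar figures are placeholders invented for this example -- they are not customer or vendor data -- and the 80 percent reduction is simply the upper-bound figure cited earlier in the interview.]

# Illustrative TCO arithmetic. All dollar figures are made-up placeholders for
# this sketch -- they are not vendor benchmarks or customer data.

def annual_tco(hardware, software, power, space, administration):
    """TCO includes administration, not just acquisition, power, and space."""
    return hardware + software + power + space + administration

# Hypothetical baseline where administration rivals the "visible" line items.
baseline = annual_tco(hardware=400_000, software=250_000,
                      power=120_000, space=80_000, administration=600_000)

# Same environment if automation cut administrative effort by 80 percent.
automated = annual_tco(hardware=400_000, software=250_000,
                       power=120_000, space=80_000, administration=120_000)

print(f"baseline TCO:  ${baseline:,}")      # $1,450,000
print(f"automated TCO: ${automated:,}")     # $970,000
print(f"overall savings: {100 * (baseline - automated) / baseline:.0f}%")  # 33%

[A TCO model that leaves administration out entirely would show no savings at all from automation, which is exactly the blind spot described above.]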

The second issue is the obvious resistance to change that comes with any new technology, particularly one that automates manual processes. To address this, we've found it works best when customers start with initial deployments for specific applications or initiatives where the need is greatest -- often greenfield projects where resources are limited and there is no “status quo” way of doing things. Common examples include cloud services, virtualization, IT modernization, and infrastructure consolidation. After seeing the benefits, they roll out UCS into new areas.

Where is UCS headed? What are the future challenges of UCS?

UCS plays a clear role in the evolution towards more dynamic data center operations, but it's obviously part of a larger ecosystem. As standards evolve, we expect to see tighter integration among all of the pieces -- applications, hypervisors, operating systems, and the management tools that automate and orchestrate all of these resources in real time in response to demand.

The challenge will be for the industry and vendors entering this market to promote and embrace open standards rather than pushing proprietary solutions, as that’s the path to market growth and delivery of real customer value.

What products or services does Liquid Computing provide, and how do you differentiate your company from your competitors?

Liquid Computing’s core product, the LiquidIQ unified computing system, is a complete “data center in a chassis” that drives down the time and costs of managing IT infrastructure through unified software-based control of servers, storage, and networking resources. As for differentiation, it’s simple -- while competitors are talking about capabilities they will bring to market in the future, LiquidIQ is a mature product that’s been on the market since 2006, and is the only standards-based unified computing system that’s available and in production with customers today. There’s more information at www.liquidcomputing.com.
