Best Practices for Implementing a Private Cloud (Part 2 of 2)
By Rob Clyde
Don’t Lose Your Job
No one sets out to lose their job when they start a private cloud project. However, that can easily happen if you go about your project in the wrong way. In the first part of this series, we discussed the importance of usage accounting in private-cloud implementations. In this article, we’ll discuss why policy-based optimization is critical to private cloud success.
Standing on the Shoulders of Giants
IT has a long history of automation. In the 1880s, Herman Hollerith’s mechanical tabulator, arguably the first IT system, was designed to tally census results, automating a tedious manual process. Since then, IT systems have automated more of our lives, improving productivity while saving time and money.
Private clouds are the latest generation of this continuing evolution. Private clouds automate the process of provisioning servers, putting the power in the hands of users who can simply request a new service with a few mouse clicks via a self-service portal.
When a user requests a service, the private cloud instantiates one or more virtual machines to provide that service. These virtual machines are created from a pool of available hypervisors running on physical servers.
The World is Not Static
A private cloud management system must decide where to allocate the requested virtual machines. Most systems allocate each virtual machine to the first available hypervisor and leave it at that. This appears to work well initially, but breaks down over time, as we will see.
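To make the problem concrete, here is a minimal sketch of that naive "first available hypervisor" strategy. All names and the single-resource (CPU) model are illustrative assumptions, not any particular product's scheduler:

```python
def first_fit(hypervisors, vm_cpus):
    """Place a VM on the first hypervisor with enough free CPUs."""
    for hv in hypervisors:
        if hv["free_cpus"] >= vm_cpus:
            hv["free_cpus"] -= vm_cpus  # one-time decision, never revisited
            return hv["name"]
    return None  # no capacity left anywhere

hypervisors = [
    {"name": "hv1", "free_cpus": 8},
    {"name": "hv2", "free_cpus": 8},
]
print(first_fit(hypervisors, 4))  # hv1
print(first_fit(hypervisors, 6))  # hv2 (hv1 has only 4 CPUs left)
```

The decision is made once, at request time. Nothing ever reconsiders it as VMs come and go, which is exactly the weakness the next section describes.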
No IT system is static, and private clouds are no exception. Users are constantly starting and stopping services. The services themselves go through periods of high and low loads, and the hardware environment is always changing: machines fail while new machines are added and others are decommissioned. A simple one-time allocation simply cannot cope with this real-world dynamic environment.
Policy-Based Optimization to the Rescue
A policy-based optimization layer reviews the private cloud environment on an ongoing basis and looks for ways to improve it. One example: over time, VM sprawl occurs, leaving virtual machines spread across many lightly utilized hypervisors. The optimizer notices this and uses live migration to consolidate the VMs onto fewer hypervisors, so that much less hardware is required to serve the same workload. The optimizer can also detect conditions such as failing hardware and steer workloads away from them, reducing downtime and improving service reliability.
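The consolidation pass is essentially a bin-packing problem. The sketch below uses a simple first-fit-decreasing heuristic over CPU demand; each move in a real system would correspond to a live migration, and the single-resource model is a simplifying assumption:

```python
def consolidate(vm_demands, capacity):
    """Repack VM CPU demands onto as few hosts as possible.

    Returns a list of hosts, each a list of the VM demands placed on it.
    Uses first-fit decreasing: biggest VMs placed first.
    """
    hosts = []
    for cpus in sorted(vm_demands, reverse=True):
        for host in hosts:
            if sum(host) + cpus <= capacity:
                host.append(cpus)
                break
        else:
            hosts.append([cpus])  # no existing host fits; start a new one
    return hosts

# Six 2-CPU VMs that sprawl left scattered across six hosts
# fit comfortably on two 8-CPU hosts after consolidation:
packing = consolidate([2, 2, 2, 2, 2, 2], capacity=8)
print(len(packing))  # 2
```

Real optimizers must also weigh migration cost and avoid thrashing, but the core idea is the same: periodically recompute a denser packing and migrate toward it.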
The policy part of policy-based optimization means that you can define policies describing how your private cloud should operate. The optimizer then applies these policies, taking into account several factors, including:
- Available compute, storage, and network resources
- Workload demands
- Server health status
- Security constraints
- Maintenance reservations
- Special hardware or software requirements
- Licensing needs
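A common way to combine factors like these is to treat some as hard constraints that filter out infeasible hosts and the rest as weighted terms in a score. The field names, weights, and two-stage structure below are illustrative assumptions, not the policy language of any real product:

```python
def pick_host(vm, hosts):
    """Choose a host for a VM: filter on hard constraints, then score."""

    def feasible(h):
        # Hard constraints: capacity, health, maintenance windows,
        # security zones, and special hardware requirements.
        return (h["free_cpus"] >= vm["cpus"]
                and h["healthy"]
                and not h["in_maintenance"]
                and vm["security_zone"] == h["security_zone"]
                and (not vm.get("needs_gpu", False) or h["has_gpu"]))

    def score(h):
        # Soft preferences: pack onto already-busy hosts (consolidation),
        # lightly penalize hosts with scarce spare licenses.
        return 1.0 * h["used_cpus"] - 0.5 * h["spare_licenses"]

    candidates = [h for h in hosts if feasible(h)]
    return max(candidates, key=score, default=None)

hosts = [
    {"name": "a", "free_cpus": 4, "used_cpus": 12, "healthy": True,
     "in_maintenance": False, "security_zone": "dmz", "has_gpu": False,
     "spare_licenses": 2},
    {"name": "b", "free_cpus": 8, "used_cpus": 2, "healthy": True,
     "in_maintenance": False, "security_zone": "dmz", "has_gpu": False,
     "spare_licenses": 2},
]
vm = {"cpus": 2, "security_zone": "dmz"}
print(pick_host(vm, hosts)["name"])  # a (the busier host wins, packing load)
```

The point of the two stages is that a human can read and tune the policy, while the optimizer applies it consistently to every placement and migration decision.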
Even the best IT person in the world can’t keep all these parameters in his or her head at once, and IT people need to sleep. Policy-based optimizers automatically enforce your policies, even when you’re sleeping.
Some policy-based optimizers, such as the one in our Moab Cloud Suite, can utilize thin provisioning and overcommit resources while maintaining service-level agreements. This can reduce the total amount of hardware required by another factor of two or more.
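A simplified illustration of the overcommit idea: with a 2:1 CPU ratio, a 16-core host can schedule 32 vCPUs on the assumption that VMs rarely peak simultaneously. The ratio and the capacity check below are hypothetical and are not drawn from Moab Cloud Suite:

```python
PHYSICAL_CORES = 16
OVERCOMMIT_RATIO = 2.0  # assumed policy value; SLA monitoring would tune this

def can_schedule(scheduled_vcpus, vm_vcpus):
    """Admit a VM if total vCPUs stay within the overcommitted capacity."""
    return scheduled_vcpus + vm_vcpus <= PHYSICAL_CORES * OVERCOMMIT_RATIO

print(can_schedule(28, 4))  # True  (32 vCPUs on 16 cores, at the 2:1 cap)
print(can_schedule(30, 4))  # False (would exceed the cap)
```

Doubling effective capacity this way is where the "factor of two or more" hardware reduction comes from; the optimizer's job is to keep actual utilization within SLA limits while doing it.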
Don’t Waste Time and Money
Without policy-based optimization, a private cloud must rely on error-prone manual intervention to keep running smoothly. Policy-based optimization continues in the proud tradition of IT evolution: automating tedious, repetitive processes. Implementing a private cloud without a policy-based optimizer is costly, error-prone, and wastes resources.
I hope this two-part series has helped you better plan your private cloud deployment and avoid the firing squad along the way. I am interested in your feedback. What best practices have you seen in private cloud?
Rob Clyde is the CEO of Adaptive Computing and has 25 years of experience as an enterprise software executive at startups as well as large enterprises, including Symantec and Axent Technologies. You can contact the author at firstname.lastname@example.org.