In-Depth

Internal Clouds: All Muscle and No Brain?

When it comes to IT infrastructure, having a lot of muscle is good, but making intelligent use of all that muscle is even better. When it comes to internal clouds, a brain is key.

By Andrew Hillier, Co-founder and CTO, CiRBA

Constructing an internal cloud typically involves combining scalable infrastructure with flexible virtualization software and a catalog-based request mechanism. The goal of all this is to manage large pools of resources in a more efficient and agile way, providing capacity to users when they need it and enabling resource sharing for higher efficiency.

The challenge is that the combined system, although powerful, may not end up looking much different from a big, clumsy virtual environment. Each component brings high value to the table, but in combination they often fail to become greater than the sum of their parts and, in some cases, may not even work as well as "previous generation" technologies.

The reason is complexity. Any time already-complex technologies are combined into a single system, the resulting "higher order" complexity must be managed. A good example is a modern fighter aircraft, which would be completely unflyable without the sophisticated control systems developed to coordinate all of its moving parts. A cloud is similar, and having a "brain" to link the components together in an intelligent way is critical. Without one, an internal cloud is just a collection of technologies doing their own things: the servers provide resources, the hypervisor enables sharing, and the request portal lets users ask for capacity. None of these elements is responsible for managing operational risk, optimizing efficiency, or thinking ahead.

Like all good brains, the control system for an internal cloud must be capable of acquiring and interpreting data from the "eyes and ears" of the environment, which in IT terms are the monitoring systems, performance management tools and user request portals. This gives the brain "situational awareness," which is very valuable because the scale of modern virtual and cloud environments makes it difficult to keep a handle on what is going on.

Brains should also be able to send instructions to the "arms and legs" of the environment, which in clouds are the provisioning and orchestration frameworks, the workload managers, and even the service desks. This allows actions to be taken and managed in a manner consistent with existing service management rules (which don't go away just because you call your environment a cloud).

This analogy is useful in that it lays bare the problem with not employing a proper control system. If a cloud's eyes and ears are "hard wired" into its arms and legs, then it tends to operate at a relatively basic level, with little intelligence powering its actions. This is much like having a body with no brain: if you tap its knee it might kick, but it is only capable of basic, reactive behavior. With the right control system, however, the cloud as a whole becomes significantly more intelligent and proactive in its ability to manage the environment.

Being intelligent and proactive requires more than a brain; it requires that the brain understands the problem it is trying to solve and that it has some notion of what is about to happen. In IT terms, the cloud control system must consider the policies used to manage IT environments and the "bookings" (that is, the upcoming capacity requests).

Cloud management policies are the rules and constraints that govern the relationship between capacity and the workloads that run on it. These policies are critical to cloud management and affect everything from utilization thresholds to resource over-commitment, technical compatibilities, and business rules. Interestingly, most of these considerations have existed in IT environments for some time, but until now there was little need to formalize them into an active policy. This is mainly because physical environments are not flexible enough to warrant it, and early virtual environments targeted the "low-hanging fruit," typically at the departmental level, which tends to be comparatively simple to manage.

However, a cloud with no policy is not viable, as it by definition hosts disparate applications from different business groups, each having different performance, availability, and security requirements. Throwing these together randomly is a recipe for failure and can lead to many performance, compliance, and efficiency issues.
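To make the idea concrete, a policy can be thought of as a structured set of constraints that the control system evaluates before placing or moving a workload. The sketch below is purely illustrative; the field names and values are hypothetical and do not correspond to any particular product:

    # A minimal, hypothetical sketch of a cloud management policy expressed as data.
    from dataclasses import dataclass

    @dataclass
    class CloudPolicy:
        max_cpu_utilization: float = 0.70        # host-level utilization threshold
        max_memory_overcommit: float = 1.5       # allowed RAM over-commitment ratio
        required_hypervisor: str = "any"         # technical compatibility constraint
        allowed_tiers: tuple = ("dev", "test")   # business rule: which workloads may share capacity
        keep_apart: tuple = ()                   # workload pairs that must not share a host

    # A production policy would typically be stricter than a dev/test policy.
    production_policy = CloudPolicy(
        max_cpu_utilization=0.55,
        max_memory_overcommit=1.0,
        allowed_tiers=("production",),
    )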

The other key ingredient is a representation of the capacity bookings for workloads coming into an environment. This is the piece of the puzzle that allows the brain to attain a level of predictive analysis, which is essential to proactive management. Similar to hotel reservations, these bookings represent the upcoming demands related to new application deployment, migration, and consolidation activity, as well as other requirements.

Although most IT professionals think of clouds as a way to get capacity "right now," in many organizations the goal is more strategic than tactical, and booking capacity with some advance notice is far more consistent with the operational model. This provides tremendous advantages in agility, removing the procurement cycle from the critical path while retaining a level of control over IT assets. Immediate requests can still be accommodated, of course, but may incur a premium due to the volatility they introduce.
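Continuing the reservation analogy, a booking can be modeled as a simple record of who needs how much capacity and when. The sketch below is a hypothetical illustration (the workload names, dates, and sizes are invented) of how booked demand could be tallied against a planning horizon:

    # A hypothetical sketch of capacity "bookings," analogous to hotel reservations.
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class CapacityBooking:
        workload: str            # e.g., a new deployment, migration, or consolidation
        needed_by: date          # when the capacity must be available
        cpu_cores: int
        memory_gb: int
        immediate: bool = False  # "right now" requests may carry a premium

    bookings = [
        CapacityBooking("crm-upgrade", date(2012, 3, 1), cpu_cores=16, memory_gb=64),
        CapacityBooking("batch-consolidation", date(2012, 4, 15), cpu_cores=8, memory_gb=32),
    ]

    # Total demand arriving before a given planning horizon.
    horizon = date(2012, 5, 1)
    booked_cores = sum(b.cpu_cores for b in bookings if b.needed_by <= horizon)
    print(booked_cores)  # 24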

With the notion of policies and bookings added to the mix, the brain of the cloud is now capable of determining what needs to be done. Applying analytics to interpret operational data in the context of the prevailing policy for a cloud (or for specific workloads within it) enables risks to be identified and efficiency to be quantified, thus providing judgment as to whether action is needed to improve the environment. Factoring in bookings provides accurate views of future capacity requirements, where policies can be used to drive predictive analysis of potential risks and inefficiencies.
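A minimal sketch of such a predictive check, again with hypothetical names and thresholds, might project a host's utilization once booked workloads arrive and compare it against the policy limit:

    # Hypothetical predictive check: will booked demand push a host past its policy threshold?
    def projected_utilization(used_cores, booked_cores, host_cores):
        """CPU utilization expected once booked workloads arrive."""
        return (used_cores + booked_cores) / host_cores

    def needs_action(used_cores, booked_cores, host_cores, max_utilization=0.70):
        """True if the host would breach the policy threshold, meaning action is needed."""
        return projected_utilization(used_cores, booked_cores, host_cores) > max_utilization

    # Example: a 32-core host using 14 cores today, with bookings for 12 more.
    print(needs_action(14, 12, 32))  # 26/32 = 0.81 > 0.70 -> True: rebalance or add capacity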

Just as the brain directs the body, these current and future actions can then be communicated to the "arms and legs" of the environment (e.g., management, provisioning, and orchestration tools) to achieve the desired outcome in a far more proactive way than previously possible.

The benefits of this approach to cloud management can be staggering and can allow a cloud to become much greater than the sum of its parts. Isolated and reactive tools, however well integrated, are simply not capable of managing IT to a high degree of efficiency. However, the combination of these tools with a proactive, policy-based control system provides significantly higher efficiency and lower operational risk. When it comes to IT infrastructure, having a lot of muscle is good, but making intelligent use of all that muscle is even better.

Andrew Hillier is the co-founder and CTO of CiRBA, a provider of data center intelligence software.
