Healthy Fear of the Cloud is Good, But Absolute Terror is Shortsighted
By Mike Thompson
Many public cloud doomsayers have felt validated in the last couple of months when Amazon Web Services' failures have taken down widely used applications such as Foursquare, Instagram, and Pinterest. They've been telling everyone how right they were to be afraid of the cloud.
To some extent, they were right: the public cloud is still the Wild, Wild West. The technology hasn't been around long enough, and the users aren't yet experienced enough to know how to design their architecture for maximum uptime in the cloud. The safest route is to shun the cloud until things become a bit more stable and predictable. The question is whether the safe play is the smart play.
We have to balance our fear of the cloud with a respect for the power that judicious use of the technology can bring to our business by enabling us to do more with less -- or for less. Anyone who isn't considering how the cloud fits into their business will soon be left behind, just like those who dismissed VMware's virtualization message about 10 years ago. The key is to avoid the mindset of simply "throwing" a workload or an application into the cloud. Although this may be acceptable for less-critical workloads, it doesn't move an organization toward becoming more cloud-savvy.
When we deploy new applications in our own data centers, we carefully consider the architecture to ensure that it can meet service-level agreements (SLAs) and that when -- not if -- a failure occurs, we can have things back in order within an acceptable amount of time (Recovery Time Objective or RTO) with an acceptable amount of data loss (Recovery Point Objective or RPO). RTO and RPO will vary for almost every workload, and this must be taken into account when designing the physical environment.
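To make the RTO/RPO relationship concrete, here is a minimal sketch (not from the article; the workload targets and helper names are hypothetical). The key observation is that worst-case data loss equals the full interval between backups, so a backup schedule meets an RPO only if the interval fits within it, and a recovery plan meets an RTO only if the estimated restore time does:

```python
from datetime import timedelta

def meets_rpo(backup_interval: timedelta, rpo: timedelta) -> bool:
    """Worst-case data loss is the full interval between backups,
    so the schedule satisfies the RPO only if the interval fits within it."""
    return backup_interval <= rpo

def meets_rto(estimated_recovery: timedelta, rto: timedelta) -> bool:
    """The recovery plan satisfies the RTO only if the estimated
    time to restore service fits within the target."""
    return estimated_recovery <= rto

# Hypothetical targets for one workload (illustrative numbers only)
rpo = timedelta(hours=1)
rto = timedelta(hours=4)

print(meets_rpo(timedelta(minutes=15), rpo))  # 15-minute backups fit a 1-hour RPO: True
print(meets_rto(timedelta(hours=6), rto))     # a 6-hour rebuild misses a 4-hour RTO: False
```

Because these targets vary per workload, a check like this would run once for each workload's own RPO/RTO pair rather than for a single data-center-wide number.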
By the same token, we should give cloud-based workloads -- or workloads utilizing public cloud resources -- the same due diligence. To put it another way, IT organizations must pay as much attention to architecture and infrastructure when moving workloads to the cloud as they pay to their physical environments.
Business continuance and disaster recovery (BC/DR) in the cloud is a great example of this. We wouldn't dream of deploying a workload in our internal data center without at least understanding the BC/DR implications. Unfortunately, many people think that deploying in the cloud is their BC/DR plan.
We have to accept that cloud computing at some point will be the future of IT. Let's look at a few use cases for the cloud and how organizations can judiciously apply this powerful technology for the optimal benefit.
Public cloud is true, utility-based computing. You get the benefit of instant resources and unlimited scalability that you can turn off when you don't need them. The drawbacks are in security and control of the environment. Today, public cloud is best for "spiky" workloads that are not security-sensitive, lower-value applications, or applications that have been designed to run stateless (i.e., they don't require persistent local state and can be restarted from a fresh image). It allows organizations to eliminate the cost of dedicated hardware that would sit underutilized while still having all of the computing power they need when they need it.
Private cloud isn't just virtualization; it's much more. The benefit of a private cloud is that it helps organizations optimize their resources while also allowing them to report or even charge specific cost centers for the use of resources. In this case, there is still the sunk cost of physical hardware, but it gives organizations the flexibility that helps them eliminate waste. Private clouds can be deployed in an internal data center or hosted externally with a managed hosting company. They are good for workloads that must meet specific business requirements or that require a greater level of security than can be offered by the public cloud.
Hybrid cloud mixes public cloud resources with either private cloud or dedicated hardware resources to keep portions -- perhaps more sensitive parts -- of a workload on more secure hardware while serving less sensitive or less predictable parts of a workload via the more elastic and less expensive public cloud. It can also be effective at providing on-demand capacity for offloading workload during peaks or providing cold or warm standby capacity for business continuity purposes.
Each of these scenarios presents its own set of BC/DR benefits and challenges. Here are a few examples of how this plays out:
- In a public cloud environment, it's almost always necessary to at least back up your data to another data center -- preferably in another region -- regardless of RTO/RPO. If the multiple Amazon storage meltdowns of the last couple of years have taught us anything, it's that public cloud architecture can sometimes lead to a domino effect of failures, often resulting in significant, if not catastrophic, data loss.
- Replicating across public cloud "availability zones" can be an effective way to achieve a low RTO.
- In private cloud environments, low RTOs often require powered-on standby resources in remote data centers.
- "Linked Clones" give you the ability to achieve an RTO of minutes in the event of a private cloud failure, so you don't necessarily need a full "hot site" running the workload synchronously in order to achieve a very short RTO.
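The trade-off running through the examples above is that faster recovery costs more. As a hedged sketch of that reasoning (all strategy names, RTO estimates, and relative costs below are hypothetical, not from the article), one could pick the cheapest standby strategy whose estimated RTO fits a workload's target:

```python
# Hypothetical standby strategies with rough RTO estimates (in hours)
# and a relative cost index; figures are illustrative only.
STRATEGIES = [
    # (name, approx_rto_hours, relative_cost)
    ("cold standby (restore from backup)", 24.0, 1),
    ("warm standby (powered-on replicas / linked clones)", 0.5, 5),
    ("hot site (synchronous, active-active)", 0.05, 20),
]

def cheapest_strategy(rto_hours: float):
    """Return the lowest-cost strategy whose estimated RTO meets the target,
    or None if even the hot site can't achieve it."""
    candidates = [s for s in STRATEGIES if s[1] <= rto_hours]
    if not candidates:
        return None
    return min(candidates, key=lambda s: s[2])

print(cheapest_strategy(48.0)[0])  # a 48-hour RTO is met by cold standby
print(cheapest_strategy(1.0)[0])   # a 1-hour RTO requires warm standby
```

This is the same per-workload discipline argued for earlier: rather than one blanket answer, each workload's RTO determines how much standby infrastructure it actually needs to pay for.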
Editor's Note: The author has provided a cheat sheet explaining how you can mitigate those challenges in public and private cloud environments (hybrid cloud recommendations would depend on your configuration). These rules are based on the RPO and RTO of different workloads, so you may have to apply several of them in your environment.
You can view the document here.
Mike Thompson is director of business strategy for virtualization and storage management software at SolarWinds. He can be reached at email@example.com