The Year Ahead in Cloud Computing

These three trends will play a role in deciding which applications are the best candidates for the cloud.

By Dave Laurello

This year, IT departments will continue to look for ways to ease operational pressure, reduce costs, and streamline business processes. Many who look to the cloud for answers should realize that it’s not a panacea. For all the attention it gets, cloud computing is still an emerging technology. It’s at a point of maturity similar to where virtualization software was around 2006: great promise with many improvements yet to come.

Here are three trends I expect to see in cloud computing over the next 12 months that should play a role in deciding which applications are best suited for the cloud.

Trend #1: Business-critical applications should steer clear of the cloud

According to IDC’s CloudTrack survey, the cloud software industry is poised to grow five times faster than the software market as a whole through 2016, reaching $67 billion on a 24 percent compound annual growth rate (CAGR). IDC also reported that 80 percent of Global 2000 companies will still have 75 percent of their IT resources running on site by 2016. Like many other companies, the G2000 believe that although many business applications may be suitable for the cloud, mission-critical applications should stay out of it: the risk of extended downtime is too great.

Organizations of every size and in every industry have applications that simply must stay up and running, no excuses. More than half of my company’s applications live in the cloud, and the rest are in our data center. My CIO knows where they are, what platform they run on, when and how they are maintained, where their data is stored, and the current revision levels of everything that touches them.
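
To make that visibility concrete, here is a minimal sketch of the kind of inventory record that answers those questions. It is illustrative only; the field names and values are hypothetical, not the schema of any real tool.

# A hypothetical application-inventory record capturing the facts an
# IT organization should be able to produce on demand for each application.
from dataclasses import dataclass

@dataclass
class AppRecord:
    name: str
    location: str            # "cloud" or "on-premises data center"
    platform: str            # operating system / runtime
    maintenance_window: str  # when and how it is maintained
    data_store: str          # where its data lives
    revision: str            # current software revision level

# Example entry (all values invented for illustration):
crm = AppRecord(
    name="customer-crm",
    location="cloud",
    platform="Linux / Java",
    maintenance_window="Sundays 02:00-04:00 UTC",
    data_store="replicated SQL database, primary data center",
    revision="4.2.1",
)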

More important, we know what is being done and by whom if the server goes down (which hasn’t happened in years). If there is a failure, I can be reasonably sure when we’ll be back up and running. Until cloud technologies can offer this level of comfort and reach the stage where downtime isn’t so common, critical applications are better kept under direct control.

Trend #2: Unplanned outages will continue to plague cloud computing

Over the last 12 months, we’ve witnessed hundreds of examples of downtime that plagued a variety of industries. From retail stores to social media to emergency dispatch operations, it seemed that no one was safe. One outage is often enough to do serious damage to a company’s revenue and reputation. Unlike prior years, however, 2012 was the year of repeat offenders. For example, Go Daddy experienced an outage lasting several hours in September that resulted in significant backlash from customers using the hosting site. Not long after agreeing to give affected customers a one-month credit, Go Daddy went down again in October for over three hours. Go Daddy wasn’t alone -- companies and services such as Amazon, Gmail, iCloud, and United Airlines all felt the effects of multiple crashes.

Unplanned outages and unpredictable recovery times will continue to plague cloud computing into the New Year. Cloud service providers today focus on the lowest cost per compute cycle and on capturing market share, as savvy competitors should. With so much low-hanging fruit, the financial exposure and customer dissatisfaction that downtime can cause are acceptable costs of doing business. That’s little comfort to the thousands of businesses shut down while their cloud provider figures out its problem and resumes service, but the situation isn’t going to improve much anytime soon.

Trend #3: More companies will realize the negative impact of downtime and plan accordingly

When it comes to preventing unplanned downtime in the cloud, the first step is to know your cost of downtime. This requires factoring in considerations such as lost revenue, lost productivity, the financial impact of customer dissatisfaction, and IT recovery costs. Top research firms such as Gartner, IDC, and Aberdeen estimate that an hour of downtime can cost upwards of $100,000. The reality is that many companies either don’t know how to calculate this number or don’t want to take the time to determine it.
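
As a rough illustration of that arithmetic, the sketch below sums the major cost components for one hour of downtime. Every dollar figure, and the function itself, is a hypothetical placeholder to be replaced with your own numbers, not an industry benchmark.

# A minimal sketch of a per-hour downtime-cost estimate.
# All inputs are hypothetical placeholders.

def downtime_cost_per_hour(lost_revenue, employees_idled, avg_hourly_cost,
                           recovery_cost, customer_churn_cost):
    """Sum the major components of one hour of unplanned downtime."""
    lost_productivity = employees_idled * avg_hourly_cost
    return lost_revenue + lost_productivity + recovery_cost + customer_churn_cost

# Illustrative numbers for a mid-sized online business:
cost = downtime_cost_per_hour(
    lost_revenue=60000,         # sales lost while systems are dark
    employees_idled=200,        # staff who cannot work
    avg_hourly_cost=40,         # fully loaded cost per employee-hour
    recovery_cost=15000,        # IT labor and emergency support fees
    customer_churn_cost=20000,  # estimated lifetime value of lost customers
)
print(f"Estimated cost per hour of downtime: ${cost:,}")  # $103,000

Even with conservative inputs, the total lands in the six-figure range the research firms cite, which is why knowing your own number matters.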

Fortunately, we’re starting to see more emphasis placed on prevention. A recent Rackspace Hosting survey found that 77 percent of online retailers are taking steps to reduce or eliminate Web site downtime this holiday season. Companies are starting to realize that not all applications are equal: some can tolerate downtime, but business-critical applications cannot and therefore require more protection. The survey is a good sign that more companies understand downtime and the measures needed to protect against it. We can expect similar surveys from other industry groups to show improvement in 2013.

Final Thoughts

As the cloud continues to grow into adolescence, the issue of downtime will become a higher priority among service providers as a point of competitive differentiation. Customers will go where they are treated best.

Dave Laurello is the president and CEO of Stratus Technologies, a company that specializes in fault-tolerant servers and high-availability software. He rejoined Stratus in January 2000, coming from Lucent Technologies, where he was vice president and general manager of the CNS business unit. At Lucent, Dave was responsible for engineering, product and business management, and marketing. You can contact the author at [email protected].