Amazon Explains Cloud Outage

"Routine network upgrade" to a hub cited as the cause.

Amazon Web Services today issued an explanation of what caused last week's cloud service outage, which crippled several of its customers' sites.

The company also apologized for the event, which left some customer sites offline for several days and resulted in the permanent loss of some data. Amazon also promised to issue credits to those affected.

Since the outage, the company had been quiet about the problem except to point to its Service Health Dashboard.

"We want to apologize," the company said in its postmortem. "We know how critical our services are to our customers' businesses and we will do everything we can to learn from this event and use it to drive improvement across our services. As with any significant operational issue, we will spend many hours over the coming days and weeks improving our understanding of the details of the various parts of this event and determining how to make changes to improve our services and processes."

The problem began on Thursday, April 21, when the company was performing a routine network upgrade to an "Availability Zone," or hub, at its Northern Virginia data center in an attempt to increase capacity. The upgrade was executed incorrectly.

"During the change, one of the standard steps is to shift traffic off of one of the redundant routers in the primary EBS [Elastic Block Storage] network to allow the upgrade to happen," the company explained. "The traffic shift was executed incorrectly and rather than routing the traffic to the other router on the primary network, the traffic was routed onto the lower-capacity redundant EBS network.

"For a portion of the EBS cluster in the affected Availability Zone, this meant that they did not have a functioning primary or secondary network because traffic was purposely shifted away from the primary network and the secondary network couldn't handle the traffic level it was receiving. As a result, many EBS nodes in the affected Availability Zone were completely isolated from other EBS nodes in their cluster. Unlike a normal network interruption, this change disconnected both the primary and secondary network simultaneously, leaving the affected nodes completely isolated from one another."
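Amazon's description boils down to a capacity mismatch: traffic that belonged on the high-capacity primary network was shifted onto the lower-capacity redundant network, which then saturated, so the affected nodes effectively lost both networks at once. The following sketch illustrates that failure mode; the node names, capacities, and traffic figures are invented for illustration and do not come from Amazon's postmortem:

```python
# Hypothetical illustration of the failure mode Amazon described: the faulty
# change drains the primary network and sends all traffic to the secondary,
# which cannot carry it. All numbers here are invented, not Amazon's.

SECONDARY_CAPACITY = 20  # the redundant EBS network is provisioned for far less
node_traffic = {"node-a": 30, "node-b": 40, "node-c": 25}  # per-node load

def route_all_to_secondary(traffic):
    """Mimic the incorrect traffic shift: primary drained, secondary takes all."""
    total = sum(traffic.values())
    primary_ok = False                       # traffic purposely shifted away
    secondary_ok = total <= SECONDARY_CAPACITY  # saturates: 95 > 20
    # A node stays reachable only if at least one network still functions.
    return {node: (primary_ok or secondary_ok) for node in traffic}

reachable = route_all_to_secondary(node_traffic)
isolated = [name for name, ok in reachable.items() if not ok]
print(isolated)  # every node loses both networks simultaneously
```

In a normal single-network interruption, `primary_ok` or `secondary_ok` would remain true and the nodes would stay reachable; here the change forces both false at the same time, which is why the affected nodes were isolated from one another rather than merely degraded.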

The company said it is taking steps to make sure such an event doesn't recur. "We will audit our change process and increase the automation to prevent this mistake from happening in the future. However, we focus on building software and services to survive failures. Much of the work that will come out of this event will be to further protect the EBS service in the face of a similar failure in the future."

Customers who were affected by the outage will automatically receive 10-day credits equal to 100 percent of their usage of EBS volumes, Elastic Compute Cloud (EC2) instances, and Relational Database Service (RDS) database instances that were running in the affected Availability Zone, Amazon said. Although affected customers will welcome the credits, in some cases they may not offset the business lost during the outage.

Amazon also promised to improve its communications in the future: "We would like our communications to be more frequent and contain more information. We understand that during an outage, customers want to know as many details as possible about what's going on, how long it will take to fix, and what we are doing so that it doesn't happen again."

About the Author

Jeffrey Schwartz is editor of Redmond magazine and also covers cloud computing for Virtualization Review's Cloud Report. In addition, he writes the Channeling the Cloud column for Redmond Channel Partner. Follow him on Twitter @JeffreySchwartz.
