Avoiding Cloud Lock-in Relies on Migration Options, Speed
When it comes to IT concerns, the cloud is no different from any other application. Just as with selecting a payroll or network monitoring application, IT must worry about “lock-in.” What if the service doesn’t keep up with technology or the law? What if it gets too expensive? What are our options? How do we avoid vendor lock-in?
With cloud computing, selecting the right provider can be tricky. It’s a nascent industry, after all; some providers may be underfunded or their tech support or performance may be substandard. This may not come to light until you actually sign up for the service, load your data, and then run into glitches. It’s no wonder IT wants to keep its options open.
In its report Bulk Data Migration in the Cloud, enterprise storage company Nasuni used cloud compute resources to test cloud-to-cloud migration among three of the most popular storage providers identified in its December State of Cloud Storage Providers report -- Amazon S3, Microsoft Windows Azure, and Rackspace. The results are eye-opening.
The company transferred 5 percent of a 12 TB test bed containing 22 million files of varying sizes (averaging 550 KB). Files were encrypted and compressed so that moving them posed no security threat. From this sample, Nasuni estimated the minimum migration time for the full 12 TB volume. Overall results varied “significantly depending on the time of day and the number of compute machines used to transfer the data,” as you’d expect, but neither of these was the critical factor.
When Amazon S3 was the cloud destination, times were shortest. An S3-to-S3 transfer took four hours, as did the Azure-to-S3 move. When Azure was the recipient of data from S3, the task took 10 times as long: that transfer was estimated at 40 hours. However, that was fast compared to moving from S3 to Rackspace -- which took “just under one week.” Going in the opposite direction -- moving from Rackspace to S3 -- was much speedier; the task was complete in five hours.
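To put those times in perspective, here is some back-of-envelope arithmetic on the figures above. This sketch is illustrative only -- it is not from Nasuni's report -- and it assumes decimal units (1 TB = 1,000 GB) and reads "just under one week" as roughly 168 hours.

```python
VOLUME_TB = 12  # size of the full storage volume in the test

def implied_rate_gb_per_hour(volume_tb: float, hours: float) -> float:
    """Effective throughput needed to move `volume_tb` TB in `hours` hours."""
    return volume_tb * 1000 / hours

# Estimated full-volume migration times reported in the article.
transfers = {
    "S3 -> S3": 4,
    "Azure -> S3": 4,
    "S3 -> Azure": 40,
    "S3 -> Rackspace": 7 * 24,  # "just under one week" taken as ~168 hours
    "Rackspace -> S3": 5,
}

for route, hours in transfers.items():
    rate = implied_rate_gb_per_hour(VOLUME_TB, hours)
    print(f"{route}: ~{rate:,.0f} GB/hour")
```

The asymmetry is striking: S3-to-Azure implies roughly a tenth of the throughput of Azure-to-S3 over the same data, which is consistent with the destination's write speed, not the source's read speed, being the bottleneck.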
Based on these results, the report concludes “the biggest limiting factor appears to be the cloud’s write capability.” The Amazon S3 results might have been faster had Nasuni worked with more resources. The company was limited to 40 machines for its tests, so “engineers couldn’t push Amazon S3 to its limit.”
You can download the white paper (which contains the test methodology and results) here; a short registration is required.
-- James E. Powell
Editorial Director, ESJ
Posted by Jim Powell on 03/23/2012 at 11:53 AM