Top 3 Trends in Server I/O: A Look Back, The Road Ahead
If there were so many server platform updates in 2012, why is I/O still held back?
By John Fruehe
In 2012, we saw major updates to server platforms, but in light of massive core upgrades and more memory capacity than applications can really take advantage of, why are we still being held back by I/O? Even with PCIe Gen 3 now hitting the server market, people are starting to see that I/O just isn't keeping pace with the other platform updates. Maybe the problem isn't inside the server. Maybe the future is in view if we just think outside the box.
2012 Trend #1: PCIe Gen3 gave us (allegedly) greater I/O bandwidth
On paper, who can argue that PCIe Gen 3 doesn't deliver more bandwidth? With double the theoretical bandwidth, shouldn't that mean that the I/O bottleneck is smashed? In reality, expanding PCIe bandwidth is really only opening up bandwidth on the board, not necessarily all the way up the chain.
Although PCIe Gen 3 has all of the specifications that spell success, customers are beginning to find that the problem isn't the PCIe bandwidth on the board but the I/O limitations upstream. Gigabit Ethernet in a Gen 3 slot operates pretty much the same as in a Gen 2 slot. The promise of greater bandwidth only materializes when you start looking at faster (and far more expensive) peripherals like 10GbE. The only peripherals that truly need Gen 3 speeds are some of the latest GPU compute cards and FDR (Fourteen Data Rate) InfiniBand. For the current crop of 10GbE and 8Gb Fibre Channel, PCIe Gen 2 delivers plenty of bandwidth. It's not the slots; it's what's down the road that you have to consider.
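The back-of-the-envelope arithmetic bears this out. A rough sketch (per-lane raw rates and encoding overheads are from the PCIe 2.0/3.0 specs; peripheral figures are nominal line rates, and the function name is mine):

```python
def pcie_gbps(gen, lanes):
    """Usable bandwidth in Gbit/s for a PCIe slot of the given generation
    and lane count, after encoding overhead (8b/10b for Gen 2,
    128b/130b for Gen 3)."""
    raw, efficiency = {2: (5.0, 8 / 10), 3: (8.0, 128 / 130)}[gen]
    return raw * efficiency * lanes

gen2_x8 = pcie_gbps(2, 8)   # 32 Gbit/s usable
gen3_x8 = pcie_gbps(3, 8)   # ~63 Gbit/s usable

ten_gbe = 10.0                        # 10GbE nominal line rate, Gbit/s
fdr_ib = 14.0625 * 4 * 64 / 66        # FDR InfiniBand x4, 64b/66b ≈ 54.5

print(f"Gen 2 x8: {gen2_x8:.1f} Gbit/s, Gen 3 x8: {gen3_x8:.1f} Gbit/s")
print(f"10GbE fits in a Gen 2 x8 slot: {ten_gbe < gen2_x8}")   # True
print(f"FDR IB fits in a Gen 2 x8 slot: {fdr_ib < gen2_x8}")   # False
```

A 10GbE adapter uses less than a third of a Gen 2 x8 slot, while a four-lane FDR InfiniBand HCA overruns it, which is exactly why only the fastest peripherals need Gen 3.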
2012 Trend #2: 10GbE down to the server wasn't prevalent
The notion that 2012 would be the inflection point, with 10GbE down to the server becoming the new standard, didn't pan out, and prospects for this happening next year aren't much better. 10GbE is a mainstay at the top of the rack, the de facto connection for uplinks between top-of-rack switches and the end-of-row or core switching. For that task, it makes perfect sense. In fact, 40Gb is already on the horizon as a top-of-rack replacement.
However, most servers are fine with a few GbE ports and very inexpensive Gigabit switches at the top of the rack. For the time being, 10GbE is the perfect solution for uplink bandwidth between the top-of-rack and core switches, but from the top-of-rack switch down to the server we can expect more lanes of Gigabit Ethernet instead of the 10GbE autobahn that people have been predicting.
2012 Trend #3: Converged infrastructure didn't converge with business
A single cable carrying both LAN and SAN traffic sounds like a great idea until you realize that while you now have only one network to manage, you have two competing data types to manage on that network. The gated communities we built up in our data centers for LAN and SAN, each with its own QoS guarantees and individually controlled bandwidth, are now vying for electrons over the same copper.
Again, the promise of the technology is running up against the cold realities of running a business. Combine this with a significant acquisition cost and it becomes harder to visualize the "soft cost" savings when you are trying to justify hard cost impacts that are very clear and in the forefront.
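The competing-traffic problem is what the data center bridging standards (Enhanced Transmission Selection in IEEE 802.1Qaz) try to solve on converged links. As a rough illustration only -- the function, names, and numbers here are mine, and real ETS runs per-frame in switch hardware -- a minimum-share allocator works like this:

```python
def ets_share(link_gbps, demands, min_shares):
    """Toy ETS-style allocator: grant each traffic class
    min(demand, guaranteed share of the link), then hand any
    spare capacity to still-backlogged classes in one pass."""
    grant = {c: min(d, link_gbps * min_shares[c]) for c, d in demands.items()}
    spare = link_gbps - sum(grant.values())
    for c, d in demands.items():
        extra = min(d - grant[c], spare)
        grant[c] += extra
        spare -= extra
    return grant

# A bursty LAN class cannot starve SAN traffic below its guarantee:
print(ets_share(10, {"lan": 9, "san": 4}, {"lan": 0.5, "san": 0.5}))
# -> {'lan': 6.0, 'san': 4}
```

Without the guarantee, the 9Gb LAN burst would leave the SAN only 1Gb of the 10Gb wire; with it, storage keeps its 4Gb and the LAN absorbs the squeeze -- the kind of policy work that the separate physical networks used to get for free.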
A Look Ahead: Top 3 Trends to Expect in 2013
2013 Trend #1: Disaster recovery will be sexy again
A few weeks ago I would not have written this, but watching the coverage of Hurricane Sandy pummeling the East Coast put a fine point on disaster recovery. Offsite storage is just the beginning; it's about facilities, infrastructure, and, quite frankly, cities. After 9/11, there was a dramatic rise in disaster recovery chatter. We saw companies clearly addressing the issue with plans, contingencies, co-location, and multiple locations, but how many people in Manhattan looked across the Hudson to Hoboken and thought that locating data centers on that side of the river would deliver business continuity? Even enterprises that chose such places as Vienna, VA or Boston are probably feeling the potential pinch if their backup site is the "go to" location -- but only until the storm creeps that far.
With billions being lost due to the storm right at the end of the year, disaster recovery will be front and center for many companies as they enter 2013.
2013 Trend #2: The cloud will continue to grow
The disaster recovery trend will clearly help drive the move to the cloud. Most companies that were hosting applications and data in the cloud found that, despite the hurricane, they could continue their businesses. For many, resistance to cloud-based technologies had to do with wanting to own their servers/data/applications, and now there is a clear indicator of why "everything, everywhere" can work. Cloud technology delivers resiliency: a hurricane may slow things down a bit, but being online and available all of the time trumps rapid access most of the time.
2013 Trend #3: Denmark or bust!
I once spent a week in Billund, Denmark. It's remote and unexciting, but it is home to the world headquarters of Lego. Those modular blocks symbolize the future of servers and technology. We've already seen virtualization platforms such as VMware and Hyper-V take the CPU and memory from physical to virtual. SAN and NAS technology have taken storage out of the server box and made it poolable and shareable. The last man standing in the server world is I/O. Although everything else in the server has become modular, shareable, and reconfigurable, I/O remains bolted to the system board, limiting the true potential of the server.
As we move into 2013, the server world will become more modular. I/O is moving out of the box and into the top of the rack through I/O consolidation appliances. The result is a rack full of modular building blocks: small-form-factor (1U) rack servers will become more prevalent as I/O devices migrate to external units, giving customers compute blocks, storage blocks, and I/O blocks that can be populated in the racks. That means more flexibility, more bandwidth, and the ability to pay for exactly what they need, driving down the cost of computing in 2013.
The future is modular, like Legos, and customers will build exactly what they need, when they need it.
As we look forward into 2013 and beyond, it is clear that budgets will continue to drive much of the decision making in the data center. The global economy has changed how technology is deployed over the last few years, and budgets will increasingly dictate deployments. There are a lot of "shiny objects" out there and people continue to hold out examples of exciting new trends, but the reality is that unless there is a clear ROI, those projects won't get past the planning phases.
A more practical and pragmatic wave has passed over the industry, and companies are making their decisions based on real benefit to the business. Look for financial scrutiny that causes people to take a more measured approach to what they put in their data centers.
Some of the promises for large-scale architectural changes will not happen in 2013, but that means that the dollars that are being allocated will be focused on higher ROI projects with a smaller scope. The good news is that this means quicker deployments and faster ROI, which should help to drive business forward and get the world economies back on track faster.
John Fruehe has been in the enterprise market for more than 20 years and is the vice president of outbound marketing at NextIO. He is responsible for helping the company roll out its marketing strategy for the vNET I/O Maestro, which helps companies affordably pool, share and manage I/O. You can contact the author at email@example.com.