In-Depth

Enterprises Look to 2012 for Return on Assets

With the limitations in IT budgets in 2012, the focus will be on investing in systems and services that optimize all the storage assets that enterprises already have rather than buying more capacity.

By Hu Yoshida, Vice President and Chief Technology Officer, Hitachi Data Systems

We're in the final stretch of 2011. The choices enterprises made this year will lead to big changes for 2012, so before we begin looking ahead to what's in store for the enterprise, let's first look back at the major shifts that occurred in IT this year.

In the beginning of 2011 we saw budgets open up, and organizations had the opportunity to invest in a host of new server and storage virtualization technologies, aiming to increase the agility and productivity of their data centers so IT could better align with business needs. This year, many companies took the first steps toward this goal through server virtualization and on-demand, pay-as-you-go cloud services.

This placed a new level of strain on storage infrastructures -- as the adoption of server virtualization accelerated, virtual machine per server ratios increased and IT transitioned to production applications. With this "all eggs in one basket" approach, high-availability enterprise storage systems were deployed to handle the significant increase in processing, reliability, and scalability requirements of virtual servers. Bottom line: customers began to transform their data centers with virtualization and place a higher priority on their storage assets. This is a trend that will continue next year.

The industry has, over the past year, shifted from cloud computing hype to early adoption. Customers are now looking at cloud computing not as a technology by itself but rather as the delivery layer. As organizations embrace the delivery of services, it will further enable a more sophisticated integration of computing, network, storage, and software elements, creating a centralized approach to data management. This transition to a centralized model won't happen overnight and cloud computing will take several years to reach full maturity, but we're seeing major steps being taken in the right direction.

Looking Ahead: What's on the Horizon?

There are many changes on the horizon for the IT industry. Just as quickly as we saw budgets open up in 2010 and 2011, we will see them come under pressure in 2012. Stagnation in the economy and possible supply constraints due to the floods in Thailand will impact the ability to buy storage capacity.

Due to these concerns, IT managers will look to capacity optimization technologies such as thin provisioning, dynamic tiering, deduplication, content platforms for archives, and storage virtualization to extend the life of existing storage assets. There will be a concerted effort to increase the utilization of storage assets from what has traditionally been 20 to 30 percent to 55 to 65 percent.
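To put that utilization target in perspective, here is a rough back-of-the-envelope sketch in Python. The 100 TB installed base is an arbitrary figure chosen only for illustration, and the midpoints of the ranges above are used.

# Rough illustration of why raising utilization defers new capacity purchases.
# The 100 TB installed base is an arbitrary assumption for this example.
installed_tb = 100.0

def usable_tb(utilization: float) -> float:
    """Capacity actually serving data at a given utilization rate."""
    return installed_tb * utilization

before = usable_tb(0.25)   # midpoint of the traditional 20 to 30 percent
after = usable_tb(0.60)    # midpoint of the 55 to 65 percent target

print(f"Usable before: {before:.0f} TB")   # ~25 TB
print(f"Usable after:  {after:.0f} TB")    # ~60 TB
print(f"Capacity reclaimed without new spend: {after - before:.0f} TB")

The point of the arithmetic is that the extra usable capacity comes from assets already on the floor, not from new purchases.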

Over the past few years, the focus has been on consolidation. Storage area networks helped consolidate multiple storage frames onto a single storage frame attached to multiple servers, and server virtualization consolidated many servers onto a single physical server. This has helped reduce costs, but most of the benefits of consolidation have already been achieved. To achieve further cost savings, data centers must shift the focus from consolidation to convergence: the convergence of servers, storage, networks, and applications. APIs that offload workload to the storage system can make server processors and memory more efficient.

Pre-configured, pre-optimized server, storage, and application solution packages with orchestration software will help to consolidate the management, automation, provisioning, and reporting across local, remote, and cloud-based server and storage infrastructures. Just as consolidation helped reduce CAPEX, convergence will help reduce OPEX.

In 2012, storage systems will need to become storage computers as new interfaces such as VAAI (VMware's vStorage APIs for Array Integration) offload more functions from the servers into the storage. Old storage architectures with general-purpose controllers that service all these new functions (along with the normal I/O workload) will not be able to scale. New storage architectures with separate pools of processors to handle these additional functions will be required. Additionally, server and desktop virtualization will increase the need for enterprises to scale up storage systems non-disruptively as physical server demands increase.
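As a purely conceptual illustration of what offload buys, the sketch below contrasts a host-side copy with an array-internal copy. The HypotheticalArray class and its methods are invented for this example and do not represent the actual VAAI primitives or any vendor interface; the point is only that an offloaded operation keeps the data path inside the array instead of routing every block through the server.

# Conceptual sketch only -- HypotheticalArray and its methods are invented
# for illustration and are not a real storage or VAAI interface.
class HypotheticalArray:
    def __init__(self):
        self.blocks = {}                      # logical block address -> data

    def read(self, lba):
        return self.blocks.get(lba, b"\0")

    def write(self, lba, data):
        self.blocks[lba] = data

    def copy_range(self, src_lba, dst_lba, count):
        """Array-internal copy: data never crosses the server's I/O path."""
        for i in range(count):
            self.blocks[dst_lba + i] = self.blocks.get(src_lba + i, b"\0")

def host_side_copy(array, src, dst, count):
    # Without offload: every block is read into server memory, then written back.
    for i in range(count):
        array.write(dst + i, array.read(src + i))

def offloaded_copy(array, src, dst, count):
    # With offload: the server issues one command; the array's own processors do the work.
    array.copy_range(src, dst, count)

The same logic explains why arrays need dedicated pools of processors: the work the server sheds has to land somewhere.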

Modular storage systems will need to be replaced by enterprise storage to service the increasing performance and availability demands of virtual servers. Scale-out storage architectures will not be able to meet the scale-up demands of the increased server and desktop virtualization that will occur in 2012.

Finally, the biggest hype for 2012 will continue to be around big data. The explosion of unstructured data and mobile applications will generate a huge opportunity for the creation of business value, competitive advantage, and decision-making support if this data can be managed and accessed efficiently. The massive size of data sets will make it impractical to replicate, back up, and mine this data through traditional means. The data will need to be virtualized from the application by storing it once, in an object format with metadata and policies in a content platform, and replicating it for availability. Once the data is virtualized in this manner, it can be accessed in place and analyzed by other applications. Big data will be more about the information that can be derived from the intersections of many data sets or objects. A content platform is the first step toward big data and also helps with the optimization of storage capacity by eliminating backups and multiple redundant copies.
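The following is a minimal sketch, under assumed names, of what storing data once as an object with metadata and policies might look like. The ContentObject structure and its fields are hypothetical; they simply illustrate keeping the data, its descriptive metadata, and its retention and replication policies together so other applications can query and analyze the object in place.

from dataclasses import dataclass, field
from typing import Dict

# Hypothetical illustration of an object stored once with metadata and policy.
@dataclass
class ContentObject:
    object_id: str
    data: bytes
    metadata: Dict[str, str] = field(default_factory=dict)  # e.g. source, type, owner
    retention_days: int = 365        # policy: how long the object is kept
    replicas: int = 2                # policy: copies kept for availability

    def matches(self, **criteria) -> bool:
        """Let other applications query objects in place by their metadata."""
        return all(self.metadata.get(k) == v for k, v in criteria.items())

record = ContentObject(
    object_id="obj-0001",
    data=b"...",
    metadata={"source": "mobile-app", "type": "clickstream"},
)
print(record.matches(type="clickstream"))   # True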

With the limitations in IT budgets in 2012, the focus will be on investing in systems and services that optimize all the storage assets that enterprises already have rather than buying more capacity. IT will be measured on ROA, the Return On All the Assets, rather than the ROI of a single investment.

Hu Yoshida defines the technical direction for Hitachi Data Systems and leads the company's effort to address the issues in aligning IT strategy with business requirements. He was instrumental in evangelizing the unique Hitachi approach to storage virtualization, which leverages existing storage services within the Hitachi Virtual Storage Platform and extends them to externally attached, heterogeneous storage systems. Hu is a popular keynote speaker and can be followed on Twitter (@HuYoshida).