In-Depth

Best Practices for Storage Management in 2011

These strategies will help you manage your growing storage needs in 2011.

By Rick Clark, CEO, Aptare

If you thought 2010 was a tough year to be a CIO, 2011 could be even tougher. Enterprises remain cautious about growing their data center infrastructure, even with the promise of tax incentives for capital purchases. Paradoxically, the demands on these data centers continue to increase, both in the physical resources and in the human capital necessary to manage an enterprise's information demands. Although budgets may be flat in 2011, data needs will still grow. CIOs are going to be challenged to find the resources necessary to handle increased data growth effectively -- especially storage resources. How are CIOs going to cope with this challenge in 2011?

One strategy to manage the growth in storage resources that is catching on is the use of “thin provisioning” of storage resources. Most enterprise storage systems can now allocate storage resources on a just-enough basis that can grow as the demand for storage grows. Of course, the downside of thin provisioning is that without a watchful eye on these storage resources, what was once thin can soon become fat. Enterprises need resource management tools that can forecast and send alerts about storage consumption so they can smartly manage thin-provisioned storage resource pools. With the proper storage resource management tools in place, storage administrators up to the CIO level can manage the enterprise’s storage resources proactively rather than reactively.
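The forecasting the article calls for can be surprisingly simple. The sketch below is illustrative only -- the pool name, capacity, and daily utilization figures are hypothetical, and it assumes a linear fit over recent daily samples is a good enough growth model to trigger an alert before a thin-provisioned pool runs out of physical capacity:

```python
def days_until_full(samples, capacity_gb):
    """Estimate days until a thin-provisioned pool exhausts its physical
    capacity, using a simple linear fit over daily utilization samples (GB).
    Returns None if there are too few samples or usage is flat/shrinking."""
    if len(samples) < 2:
        return None
    # Average daily growth across the sample window.
    growth_per_day = (samples[-1] - samples[0]) / (len(samples) - 1)
    if growth_per_day <= 0:
        return None
    return (capacity_gb - samples[-1]) / growth_per_day

# Hypothetical pool: 10 TB of physical capacity behind thin provisioning,
# with the last five days of utilization in GB.
usage = [6200, 6350, 6480, 6650, 6800]
remaining = days_until_full(usage, 10240)
if remaining is not None and remaining < 30:
    print(f"ALERT: pool projected full in {remaining:.0f} days")
```

A real storage resource management tool would pull these samples from array APIs and use a longer window, but the principle -- alert on projected exhaustion, not current utilization -- is what turns reactive management into proactive management.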

Another strategy that can help reduce capital outlays for new storage resources in 2011 is to allocate storage based on its information value. This is achieved by the judicious allocation of data to different classes of storage, assigning each class a different price point. Storage vendors refer to this concept of differentiated allocation as tiered storage. What’s important is to assign data based on its enterprise value to the correct storage tier along with the appropriate data protection policies.

Tiered storage solutions can be implemented by adding cheaper raw storage to existing enterprise storage systems or by acquiring mid-tier storage systems and moving non-mission-critical data to these storage arrays. The result is that space on expensive tier-one storage systems is freed up.

Keep it simple. Identify and keep mission-critical data on tier-one storage systems, and move everything else to a cheaper storage tier, assigning appropriate recovery point objectives to each tier. Do not fall into the trap offered by many enterprise storage vendors of relying on their automated tiered-storage capabilities. Today, these capabilities are simply not ready for prime time, because only your enterprise knows the true value of its data and where that data is stored.
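A tiering policy along these lines can be expressed as a small lookup table. The classes, tier numbers, and recovery point objectives below are hypothetical placeholders -- as the article stresses, only your own organization can supply the real classification:

```python
# Hypothetical policy mapping data classes to storage tiers and
# recovery point objectives (RPO, in hours). Illustrative values only.
TIER_POLICY = {
    "mission-critical": {"tier": 1, "rpo_hours": 1},
    "business":         {"tier": 2, "rpo_hours": 24},
    "archive":          {"tier": 3, "rpo_hours": 168},
}

def place(dataset_class):
    """Return the tier and RPO for a data class; anything unclassified
    defaults to the cheapest tier, pending review."""
    return TIER_POLICY.get(dataset_class, TIER_POLICY["archive"])
```

The design choice worth noting is the default: unclassified data lands on the cheapest tier rather than the most expensive one, which keeps classification gaps from silently consuming tier-one capacity.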

In 2011, reality may set in on the virtualization panacea. Granted, desktop virtualization can be, and will be, a big win for enterprises trying to regain control of PC proliferation. On the other hand, we have seen diminishing returns on server virtualization outside of testing and Web-services deployments. Virtual machine sprawl has created more problems than server virtualization has solved, not least the impact on storage of trying to manage ever-changing virtual machines alongside physical servers. Without the tools to monitor and manage both the physical and virtual environments, 2011 may see a negative return on investment for enterprise virtualization efforts.

This year we will also see continued growth of unstructured data within the enterprise. We seem to have a good handle on the structured information running our organizations stored in databases. However, unstructured data -- in billions of files -- is increasing at an exponential rate with no end in sight. The impact is a huge strain on storage resources in order to keep pace with this unstructured information growth.

CIOs should be asking, “What’s in my file systems?” Surprisingly, they may find that their file systems are being clogged by large amounts of archaic, duplicate, and even unnecessary files that are creating a compliance nightmare. Understanding what is really stored in these file systems and characterizing those findings is a challenge. However, the payback -- restored manageability of enterprise information and the recovery of wasted storage resources -- can be well worth the effort. Analysis tools that make sense of what’s stored in file systems can go a long way toward characterizing unstructured data.
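To make the “what’s in my file systems?” question concrete, here is a minimal sketch of the kind of analysis such tools perform: flagging duplicate files by content hash and stale files by modification time. It is a simplification -- a production scanner would hash only files whose sizes collide rather than reading every byte, and would apply retention rules far richer than a single age cutoff:

```python
import hashlib
import os
import time

def scan(root, stale_days=365):
    """Walk a file tree and flag duplicate and stale files.
    Returns (duplicates, stale): duplicates as (copy, original) path
    pairs, stale as paths unmodified for more than stale_days."""
    seen = {}        # content hash -> first path seen with that content
    duplicates = []
    stale = []
    cutoff = time.time() - stale_days * 86400
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    digest = hashlib.sha256(f.read()).hexdigest()
                if digest in seen:
                    duplicates.append((path, seen[digest]))
                else:
                    seen[digest] = path
                if os.path.getmtime(path) < cutoff:
                    stale.append(path)
            except OSError:
                continue  # skip unreadable files rather than abort the scan
    return duplicates, stale
```

Even this crude pass answers the two questions that matter for reclaiming capacity: how much of the data is redundant, and how much has not been touched in a year.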

If there is one recommendation I can make to CIOs for 2011, it is to take proactive measures toward monitoring and managing their storage resources. All too often, the answer to increased demand on storage resources is the acquisition and deployment of new storage systems. These, too, will be fully consumed within some finite period of time, often much sooner than planned. The result is that enterprise storage has become, in a word, obese. Where once we measured storage resources in the data center in terabytes, it’s now petabytes. If this trend continues, tomorrow it will be exabytes.

We need to go on a diet and regain control of these storage resources. The way we’re going to do this is to monitor, manage, and control the storage resources we have in our data centers to ensure they are being used effectively and efficiently.

Rick Clark is the CEO of Aptare, a company specializing in storage resource management and reporting.
