
5 Storage Management Trends to Watch

Five hot technologies are having a big influence on how you manage storage. The executive director of Dell's storage strategy explains why each one is worth your attention.

By Carter K. George

Storage hasn't always been a top priority for IT teams -- at least not until companies started running short of it. To avoid a potential crisis, many buy more than they need despite tight budgets. Although virtualization has increased efficiency in the server world, it has put more pressure on storage: an application that used to run on its own server has been consolidated onto a VM, and its data has moved from local disk to a SAN or NAS platform in the data center.

Questions arise: How much more capacity do I need to keep up with demand? How soon can I get it? Will I run out again or need to overhaul my environment? Are there ways to make my existing storage more efficient?

Luckily, advancements in areas such as thin provisioning and auto-tiering can help ease these concerns by creating more space, optimizing performance, simplifying management, and increasing IT staff productivity. In 2012, several key data management trends will also change the enterprise storage landscape, and new technologies will make the move from hype to real production use.

Trend #1: Cloud Computing

In 2012, the cloud computing buzz will continue, but the focus will shift. Analysts expect continued momentum for cloud adoption through increased consumption of cloud-based applications, cloud services, and cloud systems-management software. Research firm IDC expects cloud-related spending to reach $36 billion worldwide in 2012.

Cloud computing's continued growth is due largely to the widespread adoption of virtualization. Companies that have invested in virtualized data centers are best positioned to transition to the cloud. You can vMotion a running application to the cloud just as easily as you can move it from one server to another in your own data center. This lets companies thin-provision their server footprint -- instead of buying extra capacity "just in case" -- and burst to the cloud when processing requirements spike unexpectedly.

Similarly, cloud storage is not a replacement for data center storage; it becomes an extension of it. Replicating to the cloud not only provides a protected copy of data in another location, it's also what enables you to fire up an application in the cloud. Look for storage systems to start treating the cloud as though it were another (really big) tray of drives. The cloud will become another tier of storage, and you'll be able to back up, replicate, or tier to it as though it were part of your on-premises system.

Object storage technology is also gaining popularity in cloud environments, fundamentally changing the economics of storing files and archival data.

Today's cloud designers are building storage using object technology, which scales out nearly without limit and allows low-cost building blocks to be combined into very large storage pools at a cost close to the commodity price of disk drives. Object stores can be accessed natively through object interfaces or through layers that make them look like traditional storage.

Objects allow rich metadata to be stored, making them ideal for archives. By providing information on context and content, object storage enables organizations to understand the value of their data and how to better manage it.
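To make the idea concrete, here is a minimal in-memory sketch in Python of an object store pairing blobs with free-form metadata. The class and method names are invented for illustration and don't reflect any vendor's actual interface.

    # Minimal sketch of an object store: each object pairs an immutable
    # blob with free-form metadata, addressed by a flat key.
    # All names here are illustrative, not any vendor's actual API.

    class ObjectStore:
        def __init__(self):
            self._objects = {}  # key -> (data bytes, metadata dict)

        def put(self, key, data, metadata=None):
            """Store a blob along with arbitrary key/value metadata."""
            self._objects[key] = (bytes(data), dict(metadata or {}))

        def get(self, key):
            """Return (data, metadata) for a key."""
            return self._objects[key]

        def find(self, **criteria):
            """Search by metadata -- the kind of query rich metadata enables."""
            return [key for key, (_, md) in self._objects.items()
                    if all(md.get(f) == v for f, v in criteria.items())]

    store = ObjectStore()
    store.put("invoices/2011/0042.pdf", b"%PDF-1.4 ...",
              metadata={"department": "finance", "retention_years": 7})
    print(store.find(department="finance"))  # ['invoices/2011/0042.pdf']

Because the metadata travels with the object rather than living in a separate application database, an archive built this way can answer questions like "what belongs to finance and must be retained seven years?" directly.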

With lower costs than other storage options, a growing feature set, and broad market acceptance, object storage will see increased adoption in corporate data centers.

Trend #2: Virtual Desktop Infrastructure

Enterprise IT professionals know that rolling out a system update or a patch can be cumbersome and time intensive. A virtual desktop infrastructure (VDI) helps alleviate this pain and offers additional data and application management advantages where monitoring regulatory compliance is a priority.

The biggest obstacle to VDI is storage. For example, a 9:00 a.m. "boot storm," when hundreds or thousands of workers all boot their (virtual) desktops at once, can put a sudden, huge load on storage. With VDI, all those OSes live on your SAN. Can it keep up?

SSD and dedupe provide an elegant solution to the boot storm problem. First, with dedupe for primary storage, a thousand copies of the same OS might be reduced to a single copy on disk. That smaller footprint makes it feasible to keep the images on very fast storage. Enter SSD: memory-based storage that's much faster than mechanical disk.
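As a rough illustration of how dedupe collapses those copies, the following Python sketch stores blocks by content hash, so a thousand identical desktop images consume the space of one. Real primary-storage dedupe runs inside the array, but the principle is the same.

    import hashlib
    import os

    # Toy block-level dedupe: identical 4 KB blocks are stored once,
    # keyed by content hash. Purely illustrative.

    BLOCK_SIZE = 4096
    unique_blocks = {}   # sha256 digest -> block bytes
    volume_maps = {}     # volume name -> ordered list of digests

    def write_volume(name, data):
        digests = []
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            digest = hashlib.sha256(block).hexdigest()
            unique_blocks.setdefault(digest, block)  # store only unseen blocks
            digests.append(digest)
        volume_maps[name] = digests

    os_image = os.urandom(BLOCK_SIZE * 256)   # stand-in for one desktop image
    for n in range(1000):                     # a thousand identical desktops...
        write_volume(f"desktop-{n}", os_image)

    print(len(unique_blocks))                 # ...store only 256 unique blocks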

As storage technologies meet the demands of VDI, IT organizations will begin more seriously considering this approach, particularly in heavily regulated industries such as finance, health care, law, and government.

Trend #3: Memory-based Storage

There's no question that memory is faster than disk. Standard server memory, called DRAM, loses all of its data when power is turned off, making it unsuitable for storage. NVRAM (non-volatile RAM) preserves data when power is turned off, just like a disk, and is increasingly showing up as an alternative to disk-based storage. A solid-state disk (SSD) is NVRAM packaged to look like a disk drive. PCI Flash is NVRAM on a card that plugs into a PCIe slot.

However, SSD and PCI-based Flash cost significantly more per GB than disk. In most shops, 80 to 90 percent of the data stored on disk is idle -- no one would notice if it were on faster storage because no one is looking at it. The value of memory-based storage is for the other 10 to 20 percent. The trick is deploying memory-based devices efficiently: getting hot (active) data onto faster storage while buying only enough of it to get the job done.
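To put rough numbers on it, the back-of-the-envelope Python below compares an all-SSD array with a tiered mix. Every price is an assumption chosen only to show the shape of the savings, not a quote.

    # Illustrative sizing arithmetic -- every figure below is an
    # assumption, not a quoted price. The point: buy flash for the
    # hot slice only.

    total_tb = 100                 # total capacity of the array
    hot_fraction = 0.15            # 10-20 percent active data; take the midpoint
    disk_cost_per_tb = 500         # assumed $/TB for spinning disk
    ssd_cost_per_tb = 5000         # assumed $/TB for SSD

    all_ssd = total_tb * ssd_cost_per_tb
    tiered = (total_tb * hot_fraction * ssd_cost_per_tb
              + total_tb * (1 - hot_fraction) * disk_cost_per_tb)

    print(f"all-SSD array:   ${all_ssd:,.0f}")    # $500,000
    print(f"tiered SSD+disk: ${tiered:,.0f}")     # $117,500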

Auto-tiering and intelligent caching technologies unlock the potential of memory-based storage. Auto-tiering in the SAN automatically puts active data on the fastest storage while moving less-used data to less expensive tiers of disk. A good auto-tiering solution allows for a big impact from a relatively small tier of SSD or Flash.
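A simple policy illustrates the mechanics. The Python sketch below ranks extents by access count and pins the hottest few to a deliberately small SSD tier; the thresholds and tier names are made up, and production auto-tiering is far more sophisticated.

    from collections import Counter

    # Sketch of an auto-tiering policy: the most-accessed extents land
    # on the small, fast tier; everything else stays on cheap disk.

    SSD_CAPACITY_EXTENTS = 2          # the fast tier is deliberately small

    access_counts = Counter()

    def record_io(extent):
        access_counts[extent] += 1

    def rebalance():
        """Place the hottest extents on SSD; demote the rest to disk."""
        ranked = [e for e, _ in access_counts.most_common()]
        hot = set(ranked[:SSD_CAPACITY_EXTENTS])
        return {e: ("ssd" if e in hot else "disk") for e in ranked}

    for extent in ["db-log", "db-log", "db-log", "home-dirs", "archive"]:
        record_io(extent)

    # db-log and home-dirs land on ssd; archive stays on disk
    print(rebalance())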

Caching is complementary to tiering. Server speed has increased faster than storage speed, so to feed data at the rate modern servers can consume it, the data needs to move closer to the application. A PCI Flash card may be able to achieve 250,000 IOPS for a few thousand dollars; that kind of performance from an enterprise array might cost 100 times as much. But such a card has neither the capacity nor the protection mechanisms -- snapshots, replication, mirroring -- of a SAN. This year we will see the emergence of intelligent caching software that makes the PCI Flash cards in a server essentially part of the SAN, moving data closer to the application when it's about to be used and extending all the protection mechanisms of the array to data on the cards.
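One plausible shape for such software, sketched in Python below: reads are served from a small host-side flash cache when possible, while writes pass through to the SAN so the array's snapshots and replication still see every update. The names and the LRU policy are assumptions for illustration, not any vendor's design.

    from collections import OrderedDict

    # Sketch of host-side caching on PCIe flash, in front of a SAN.

    class FlashCache:
        def __init__(self, san, capacity_blocks):
            self.san = san                       # dict standing in for the array
            self.cache = OrderedDict()           # LRU order: block id -> data
            self.capacity = capacity_blocks

        def read(self, block):
            if block in self.cache:              # cache hit: flash-speed read
                self.cache.move_to_end(block)
                return self.cache[block]
            data = self.san[block]               # miss: fetch from the SAN
            self._insert(block, data)
            return data

        def write(self, block, data):
            self.san[block] = data               # write-through keeps the SAN
            self._insert(block, data)            # authoritative and protected

        def _insert(self, block, data):
            self.cache[block] = data
            self.cache.move_to_end(block)
            if len(self.cache) > self.capacity:  # evict least recently used
                self.cache.popitem(last=False)

The write-through choice is what preserves the array's protection: because every write reaches the SAN immediately, losing the card loses no data.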

Memory-based storage and clouds can be thought of as matching bookends. Both extend traditional storage rather than replace it.

Trend #4: Data Protection

Frustration with the inefficiency and ineffectiveness of the traditional backup and recovery process will likely spark a data protection revolution in 2012.

Most backup software used today was designed 10 or more years ago, when data lived on servers and one terabyte was a huge data set. The backup model was to freeze applications and, during that "backup window," copy all of the data off the server to a tape drive. Today, data sets are much larger, data sits on a SAN array or NAS filer rather than on the server, and applications need to run 24/7 -- there's no time for a backup window.

Many shops have extended the life of older backup systems by backing up to fast disk instead of tape. Dedupe technology has allowed fast disk to be a cost-effective replacement for tape, but it doesn't solve the fundamental problem: that most traditional backup software is doing the wrong thing to begin with.

Most shops realize they couldn't survive if they had to restore their whole system from a backup, so they also use snapshots and replication and plan on using the replicas in the event of a disaster. However, replicas often don't provide the reporting, catalogs, or archiving and compliance features of a backup.

In today's uncertain economic environment, it's increasingly difficult to justify redundant, uncoordinated protection systems. Expect startups and established vendors alike to seize on new infrastructure technologies -- and on growing customer frustration -- and deliver new solutions soon.

Trend #5: Deduplication and Compression

Dedupe in backup systems has been a hot technology for some time. As noted, dedupe has extended the life of legacy backup software by years, but dedupe and compression will also become an increasingly important part of primary storage systems. This year we will see more SAN and NAS platforms with dedupe and compression built in and usable for active data.

The benefits are obvious -- data growth is a challenge facing most organizations, and one way to counteract data growth is to shrink the data. If data is compressed, then key data center workflows (including backup, replication, tiering, clones, and other jobs) can be accelerated.

Dedupe plays a potentially enabling role for both memory-based storage and the cloud. By shrinking data, you can fit more of it on an SSD or PCI Flash card, lowering storage costs. The cloud offers cost advantages, but the "cloud" itself may be quite some distance from your data center. Dedupe and compression can make the cloud seem closer: if you're tiering or making a backup copy of data to the cloud, compression makes that WAN connection more efficient. Look for dedupe and compression systems in primary storage that can also be extended to, or used in, your preferred storage cloud.
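A quick, self-contained Python experiment shows the effect on the wire. The sample data and the 100 Mbit/s link speed are invented, and highly redundant data compresses unrealistically well, but the arithmetic carries over.

    import zlib

    # Back-of-the-envelope sketch: compressing a replication stream
    # before it crosses the WAN. All figures are assumptions.

    payload = (b"customer_record," * 4096) * 256   # ~16 MB of redundant data
    compressed = zlib.compress(payload, level=6)

    WAN_MBPS = 100                                 # assumed link speed

    def seconds_on_wire(nbytes):
        return nbytes * 8 / (WAN_MBPS * 1_000_000)

    print(f"raw:        {len(payload):>10,} bytes, "
          f"{seconds_on_wire(len(payload)):.2f} s on the wire")
    print(f"compressed: {len(compressed):>10,} bytes, "
          f"{seconds_on_wire(len(compressed)):.2f} s on the wire")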

A Final Word

With one quarter behind us, there's still plenty of innovation ahead in 2012. The continued data surge and advancements in technology will keep driving momentum in these key areas. As IT budgets remain under scrutiny, companies will turn to technologies that help them achieve business goals while operating efficiently and effectively.

Round Rock, Texas-based Carter George is executive director of Dell Storage strategy. You can contact the author at carter_george@dell.com.
