3 Top Trends in Data Protection for 2011
As data centers struggle to meet their backup and recovery objectives, what will managers look for in 2011?
By Dennis Rolland, Director of Advanced Technology, Office of the CTO, Sepaton, Inc.
In 2010, large enterprise data centers continued to grapple with explosive data growth, limited budgets, and increasing demand for higher levels of service. Although these pressures are not new to enterprise IT managers, a variety of factors outside the data center (such as continued economic uncertainty, mergers, acquisitions, and corporate reorganizations) added new levels of complexity and urgency to an already stressed data protection environment.
As a result, large enterprises in 2010 focused their data protection planning and spending on cost-controlling technologies such as data deduplication, disk-based data protection solutions, and bandwidth-optimized replication.
Although the majority of enterprises still used some physical tape for their data protection, most continued to accelerate their deployment of disk-based technologies with deduplication that could help them solve the ongoing challenge of backing up growing data volumes within shrinking backup windows. This move to faster, more flexible, disk-based data protection is driving a trend, begun in 2010, toward a more holistic approach to data protection that will continue into 2011 and beyond.
With this holistic approach, data center managers are moving away from “point solutions” to specific issues and toward solutions that will have the greatest positive impact on the overall data protection lifecycle. Hand in hand with this trend is an increased need for better reporting and management of data as it moves through the phases of the data protection lifecycle: backup, deduplication, replication, restore or recovery, erasure, and archive. With limited budgets and staffing resources, such reporting and management tools are key to ensuring that data protection service levels are met and that existing systems are tuned and optimized to operate at peak efficiency.
In 2010, there was also a heightened emphasis on the need to improve disaster protection. Many enterprises were not meeting their recovery-time and recovery-point objectives (RTO and RPO) in a consistent manner and set improving disaster recovery protection as their most important goal for 2010.
What’s Ahead in Data Protection for 2011
A key trend for 2011 will be a focus on reducing complexity and controlling costs in the data center in several ways. The complexity of data protection has increased exponentially over the past few years. Throw new computing paradigms, heterogeneous backup-application environments (such as NetBackup, Symantec Open Storage API (OST), and TSM, among others), and multiple network choices (e.g., Fibre Channel, Gb Ethernet, 10 Gb Ethernet) into the mix and that complexity has increased even more dramatically.
Enterprise data centers will be far less likely to “rip and replace” data protection technology or networks to meet their performance goals. Instead, IT managers will look for a data protection platform with features (such as storage pooling) that support consolidation and migration, enabling them to bring new technologies and networks online with a seamless, phased approach. With such a platform, IT managers have the flexibility to separate data into independent, secure pools of storage on a single system and to allocate resources (such as the disk type) to each pool as needed.
Storage pooling enables data managers to meet a variety of essential data protection requirements, such as data tiering, policy management, and classes of service. Migration to new technologies is simpler as well. For example, enterprises can continue to use Fibre Channel to back up data to one storage pool and use new 10 Gb Ethernet to back up to other pools in phases as the new network is implemented in the data center.
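The routing idea behind storage pooling can be sketched in a few lines. This is an illustrative model only, not any vendor's actual API: pool names, attributes, and the policy-to-pool mapping are all assumptions made for the example.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of storage pooling: independent pools on one
# platform, each with its own disk type, network, and class of service.
# All names here are illustrative, not a real product interface.

@dataclass
class StoragePool:
    name: str
    disk_type: str        # e.g. "FC" or "SATA" disk
    network: str          # e.g. "FC" or "10GbE" transport
    tier: int             # class of service: 1 = highest priority
    backups: list = field(default_factory=list)

class BackupPlatform:
    def __init__(self):
        self.pools = {}

    def add_pool(self, pool: StoragePool):
        self.pools[pool.name] = pool

    def backup(self, dataset: str, policy: str) -> str:
        # Route each dataset to a pool by policy, so legacy Fibre
        # Channel traffic and new 10 Gb Ethernet traffic can coexist
        # during a phased migration on the same system.
        pool = self.pools[policy]
        pool.backups.append(dataset)
        return pool.name

platform = BackupPlatform()
platform.add_pool(StoragePool("prod-fc", "FC", "FC", tier=1))
platform.add_pool(StoragePool("dev-10gbe", "SATA", "10GbE", tier=2))

platform.backup("oracle_db", "prod-fc")     # legacy path stays live
platform.backup("file_share", "dev-10gbe")  # new network phased in
```

Because each pool carries its own disk type, network, and tier, adding a 10 Gb Ethernet pool does not disturb backups still arriving over Fibre Channel, which is the phased-migration point made above.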
Another trend for 2011 will be the consolidation and automation of data protection. Many large data centers have had to buy numerous disk-based backup appliances to meet their growing demand for more capacity and performance. Although these point solutions met the immediate need for faster backups, the administrative complexity and cost required to load balance and tune them are prohibitive. Dividing data onto multiple “silos of storage” also limits the benefits possible through deduplication, and significantly increases the likelihood of human error as the complexity of the backup environment grows.
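The deduplication penalty of silos can be seen with a toy experiment. This sketch uses naive fixed-size chunking with SHA-256 fingerprints purely for illustration; it is not any appliance's actual implementation, and the chunk size and sample data are invented for the example.

```python
import hashlib

# Toy illustration (assumed, simplified): fixed-size chunking with
# SHA-256 fingerprints, comparing one consolidated dedup store
# against two separate "silos of storage".

def chunks(data: bytes, size: int = 4):
    return [data[i:i + size] for i in range(0, len(data), size)]

def unique_chunk_count(datasets) -> int:
    # A dedup store keeps only one copy of each distinct chunk.
    seen = set()
    for data in datasets:
        for c in chunks(data):
            seen.add(hashlib.sha256(c).hexdigest())
    return len(seen)

# Two backup streams that share most of their content.
stream_a = b"AAAABBBBCCCCDDDD"
stream_b = b"AAAABBBBCCCCEEEE"

consolidated = unique_chunk_count([stream_a, stream_b])
siloed = unique_chunk_count([stream_a]) + unique_chunk_count([stream_b])

print(consolidated, siloed)  # → 5 8
```

The consolidated store keeps each shared chunk once (5 unique chunks here), while the two silos each store their own copies of the shared chunks (8 in total), which is why splitting data across appliances limits deduplication benefits.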
In 2011, enterprise data centers will move away from these “silos” to data protection platforms that provide orders of magnitude more single-system capacity and performance. Data protection platforms with grid scalability enable enterprises to purchase a system that meets a data center’s immediate needs and to add capacity and/or performance as needed in convenient increments.
By consolidating data protection on these systems, enterprises streamline administration costs and enable improved deduplication efficiency as well as a more detailed level of reporting and management control. Coupled with storage pooling, these systems also enable cloud-enabling functions (such as multi-tenancy) and chargeback reporting.
Data centers will also put increased emphasis on disaster recovery. With many enterprises still missing their RTOs and RPOs on a consistent basis, there will be increased adoption of technologies that enable remote replication of large volumes of data in a bandwidth-optimized format. Faster, more automated replication will enable more consolidated disaster protection solutions, particularly for decentralized organizations.
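A back-of-envelope calculation shows why bandwidth-optimized (deduplicated) replication matters for meeting recovery-point objectives. The figures below are assumed for illustration; they do not come from the article.

```python
# Rough sizing sketch (assumed figures): the WAN bandwidth needed to
# replicate a day's changed data inside a replication window depends
# heavily on how well deduplication/compression reduce the payload.

def required_mbps(changed_gb_per_day: float,
                  reduction_ratio: float,
                  window_hours: float) -> float:
    """Average link speed in megabits/s needed to ship one day's
    changed data, after reduction, within the replication window."""
    reduced_gb = changed_gb_per_day / reduction_ratio
    seconds = window_hours * 3600
    return reduced_gb * 8 * 1000 / seconds  # GB -> megabits

# 500 GB of daily change, 8-hour nightly window:
raw = required_mbps(500, 1, 8)       # no reduction: ~138.9 Mbps
optimized = required_mbps(500, 10, 8)  # 10:1 reduction: ~13.9 Mbps
print(round(raw, 1), round(optimized, 1))
```

Under these assumed numbers, a 10:1 reduction cuts the sustained link requirement tenfold, which is what makes remote replication of large volumes practical for decentralized organizations.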
Enterprise data centers will continue to look to new technologies to solve their ongoing data protection challenges. However, as larger, more powerful data protection platforms are adopted, IT managers will look to gain a higher level of business value from these systems, with higher expectations for efficiency, flexibility, and disaster protection.
Dennis Rolland is the director of advanced technology, office of the CTO, at Sepaton, Inc. He oversees the architecture and future direction of the company’s data-protection technologies. Dennis has more than 20 years of storage experience in the areas of hardware and software development, having held senior-level engineering management and architecture positions at leading storage technology companies.