In-Depth
Using Distributed Computing to Reduce Downtime, Accelerate IT Recovery
Architecture changes and technologies are emerging that enable IT professionals to provide solid infrastructures, eliminate downtime, and deliver applications with consistently high availability.
By Frank Huerta, CEO and Co-founder, Translattice
A recent study shows that North American businesses suffer an average of 10 hours of IT downtime a year, plus an additional 7.5 hours of compromised operation while lost data is recovered. The study concludes that slow recovery from IT system downtime collectively costs North American businesses $26.5 billion in revenue each year. Similar studies estimate the average cost of a single hour of downtime at approximately $100,000.
The good news is that fundamental architecture changes and resilient technologies are emerging that enable IT professionals to provide solid infrastructures, eliminate downtime, and deliver applications with consistently high availability to global and mobile workers.
The complexity of small business networks today dwarfs that of large enterprise networks 15 years ago. Although replication, server virtualization, virtual machine migration, SAN arrays, converged networks, and other relatively new technologies provide benefits, implementing them comes with significant costs that many organizations overlook. Complexity makes implementation errors and system failures even more likely. The irony is hard to miss: enterprise systems are too complex and prone to failure, yet to prevent failure, you are asked to add even more complexity.
Organizations make significant investments to achieve high availability and business continuity. Every time a new application is deployed, these investment expenses increase as redundant infrastructure is scaled up. Because of the intrinsic complexity in current application deployments, attempts at redundancy are often ineffective and application availability suffers.
What’s now required is an application infrastructure that inherently provides high availability without the dedicated additional infrastructure that 2n or 3n redundancy demands. With geographic redundancy built in, applications and data remain available even when an entire site becomes unreachable during an outage.
Decentralizing Applications and Data Improves Disaster and Network Recovery
Emerging technologies that fundamentally decentralize applications and data greatly improve business resilience and simplify disaster and network recovery. They are designed to handle less-than-perfect performance from all components of the infrastructure.
One emerging approach to scalable application computing simplifies IT infrastructure by combining the required elements -- including storage, load balancing, database, and caching -- into easily managed appliances or cloud instances. Unlike conventional infrastructures, where redundancy and performance are increased by "scaling up" and adding tiers of components, this architecture "scales out": capacity and redundancy grow simply by adding identical nodes.
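A minimal sketch may make the contrast concrete. The hypothetical Python below models each node as a self-contained bundle of storage, caching, and request handling, so that "scaling out" means nothing more than joining one more identical node; the class and function names are invented for illustration and do not represent any vendor's actual code.

```python
import hashlib
from dataclasses import dataclass, field


@dataclass
class Node:
    """One identical appliance or cloud instance: storage + cache + request handling."""
    name: str
    store: dict = field(default_factory=dict)   # stand-in for the storage layer
    cache: dict = field(default_factory=dict)   # stand-in for the caching layer

    def handle(self, key, value=None):
        # A real node would also run the database and application tiers; here a
        # write goes to local storage and a read is served from the cache first.
        if value is not None:
            self.store[key] = value
            return value
        if key not in self.cache:
            self.cache[key] = self.store.get(key)
        return self.cache[key]


class Cluster:
    """Scaling out means adding another identical Node -- nothing else changes."""
    def __init__(self):
        self.nodes = []

    def add_node(self, name):
        self.nodes.append(Node(name))

    def route(self, key):
        # Simple hash-based routing stands in for the built-in load balancing.
        idx = int(hashlib.sha256(key.encode()).hexdigest(), 16) % len(self.nodes)
        return self.nodes[idx]


cluster = Cluster()
cluster.add_node("node-a")
cluster.add_node("node-b")                 # "scaling out": one more identical node
cluster.route("order:42").handle("order:42", {"total": 99})
print(cluster.route("order:42").handle("order:42"))
```

The point of the sketch is that there is no separate load-balancer tier, database tier, or storage tier to grow independently; every unit of capacity arrives in the same shape.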
These systems automatically place data across the nodes based on policy, usage, and geography, and intelligently deliver information when and where it is needed. All information is replicated across multiple nodes to ensure availability. If a node fails, users are re-routed to other nodes with access to their data, so productivity does not suffer. When the original node recovers, it resumes participating in the flow of data and applications, and local users are reconnected to it. The system synchronizes data in the background, so no data is lost and performance is not compromised.
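The placement, failover, and recovery behavior described above can be sketched in a few lines. The following is illustrative only: it assumes a simple region-prefix placement policy and a replication factor of two, and the node names are invented; a real policy engine would be far richer.

```python
class ReplicatedCluster:
    """Toy model of policy-based placement, failover routing, and background re-sync."""

    def __init__(self, nodes, replicas=2):
        self.nodes = {n: {"up": True, "data": {}} for n in nodes}
        self.replicas = replicas

    def placement(self, region):
        # Policy-driven placement: prefer nodes in the user's region, then fill
        # the remaining replicas from other regions for geographic redundancy.
        local = [n for n in self.nodes if n.startswith(region)]
        remote = [n for n in self.nodes if not n.startswith(region)]
        return (local + remote)[: self.replicas]

    def write(self, key, value, region):
        for n in self.placement(region):
            if self.nodes[n]["up"]:
                self.nodes[n]["data"][key] = value

    def read(self, key, region):
        # If the nearest replica is down, the request is re-routed to another
        # node that holds the data, so the user keeps working.
        for n in self.placement(region):
            if self.nodes[n]["up"] and key in self.nodes[n]["data"]:
                return self.nodes[n]["data"][key]
        return None

    def recover(self, node):
        # When a failed node returns, missing keys are copied back in the
        # background so it can rejoin without data loss.
        self.nodes[node]["up"] = True
        for other in self.nodes.values():
            for k, v in other["data"].items():
                self.nodes[node]["data"].setdefault(k, v)


c = ReplicatedCluster(["us-east-1", "us-west-1", "eu-1"])
c.write("profile:7", {"name": "Ada"}, region="us")
c.nodes["us-east-1"]["up"] = False        # simulate a node failure
print(c.read("profile:7", region="us"))   # still served from the surviving replica
c.recover("us-east-1")                    # background re-sync on recovery
```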
A New Approach to Application Deployment
Organizations today are more geographically dispersed than ever, and many IT organizations have dedicated significant resources to ensuring adequate response times for their remote offices around the globe. These organizations have usually invested heavily in infrastructure such as WAN optimization, federated applications, and high-speed network connections.
Today’s typical application infrastructure requires a variety of components -- a pair of hardware load balancers, application servers, and database servers, as well as storage for their data. Moreover, to attain redundancy, much of this infrastructure must be duplicated off-site.
The complexity of this type of infrastructure requires continual investment simply to maintain the systems and components. Yet poor performance and spotty availability are often a reality for those working in remote offices.
Taking a new approach to application deployment can result in significantly lower costs. Using inexpensive, identical nodes at each site and eliminating the need for a separate failover site could dramatically reduce initial capital expense. Another factor contributing to lower costs is the simpler, fully integrated stack, which makes applications much easier to deploy, manage and troubleshoot.
Successful Global IT Infrastructure Includes High Availability Access in Remote Locations
Despite business globalization, with customers, partners, and employees more likely than ever to be located around the world, IT organizations have been consolidating data centers. The underlying assumption is that consolidated data centers allow IT organizations to better control costs for space, energy, IT assets, and staffing. In the stampede to consolidate, concerns about availability and performance for users in remote locations are sometimes overlooked.
Unfortunately, the consolidation cost savings aren’t always as dramatic as anticipated and new problems are often introduced as a result. For example, maintaining availability and performance for remote workers is still a challenge. Additionally, high-speed WAN links used in attempts to address these problems can be prohibitively expensive, particularly outside North America.
If all the required application infrastructure components resided on comprehensive nodes, the nodes could be placed in small and remote locations. Because virtually all of the supporting infrastructure for an application would be included in a node, performance and responsiveness would improve at each site.
Ongoing support costs would also be reduced, because scaling an application this way is much easier than with traditional deployments. If a site is growing and needs greater scale, a node can simply be added at that site, as sketched below. This approach makes sense only if no additional IT staff is required at the remote sites; for instance, adding a node should be easy enough that non-IT staff can complete the work.
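The article does not name the mechanism that makes node addition so cheap, but consistent hashing is one common technique in systems of this kind, so the sketch below is an assumption for illustration: when a new node joins the ring, only the small share of data that re-homes to it has to move, and everything else stays where it is.

```python
import bisect
import hashlib


def h(s):
    """Map a string onto the hash ring."""
    return int(hashlib.sha256(s.encode()).hexdigest(), 16)


class Ring:
    """Consistent-hash ring: adding one node moves only a fraction of the keys."""

    def __init__(self):
        self.points = []                      # sorted (hash, node) pairs

    def add_node(self, node):
        bisect.insort(self.points, (h(node), node))

    def owner(self, key):
        hashes = [p[0] for p in self.points]
        i = bisect.bisect(hashes, h(key)) % len(self.points)
        return self.points[i][1]


ring = Ring()
for n in ("hq-1", "hq-2", "branch-1"):
    ring.add_node(n)

keys = [f"doc:{i}" for i in range(1000)]
before = {k: ring.owner(k) for k in keys}

ring.add_node("branch-2")                     # the site is growing: plug in one more node
moved = sum(1 for k in keys if ring.owner(k) != before[k])
print(f"{moved} of {len(keys)} keys re-homed")  # only the new node's share moves, not all
```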
The Future of IT Infrastructure Is Intertwined with Virtualization and Cloud Computing
As organizations look at ways to leverage the economics and efficiencies of virtualization and cloud computing, it is becoming painfully clear that the traditional approaches to infrastructure that underlie most of today’s cloud offerings do not effectively enable the potential agility of these new models.
Organizations are wrestling with ways to take advantage of cloud economics while maintaining control of their data and providing improved support for remote users. Now is the time for technology that offers the option of deploying on-premises, in the cloud, or in a combination of both.
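As a rough illustration of what "a combination of both" could look like in an identical-node model, the hypothetical policy below marks which classes of data may be placed on public-cloud nodes and which must stay on premises. The node names, data classes, and policy fields are invented for this sketch.

```python
# Illustrative-only placement policy for a mixed on-premises / public-cloud cluster.
nodes = [
    {"name": "dc-onprem-1", "location": "on_premises"},
    {"name": "dc-onprem-2", "location": "on_premises"},
    {"name": "cloud-east",  "location": "public_cloud"},
    {"name": "cloud-west",  "location": "public_cloud"},
]

policy = {
    "customer_records": {"allowed": {"on_premises"}},                   # keep in-house
    "session_cache":    {"allowed": {"on_premises", "public_cloud"}},   # may burst to cloud
}


def eligible_nodes(data_class):
    """Return the nodes where this class of data may be placed under the policy."""
    allowed = policy[data_class]["allowed"]
    return [n["name"] for n in nodes if n["location"] in allowed]


print(eligible_nodes("customer_records"))   # on-premises nodes only
print(eligible_nodes("session_cache"))      # on-premises and cloud nodes
```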
This is the next phase in enabling IT organizations to deliver applications with consistently high availability and performance to global and mobile workers while maintaining an elastic and robust infrastructure within the constraints of tight budgets.
Conclusion
The future of enterprise computing requires truly distributed computing that enables remote workers to be highly productive. Simplified, smarter application platforms that integrate disparate technologies such as data storage, databases, application servers, and load balancing will surpass existing solutions in cost, manageability, and reliability.
Frank Huerta is CEO and co-founder of Translattice, where he is responsible for the vision and strategic direction of the company. He started his career in engineering and product development at Hughes Aircraft Company, Santa Barbara Research Center, and served in product management roles at Seagate Software as well as VeriFone, Inc. Mr. Huerta earned an MBA from the Stanford Graduate School of Business and an undergraduate degree in physics, cum laude, from Harvard University.