The Data Services Revolution: The Changing Face of Service Delivery Management
How do enterprise information systems managers know if their users are satisfied with changing data services? More importantly, what can enterprises do to ensure that their carriers are delivering the goods more efficiently than they have in the past? The answers lie in the new dynamics of service delivery management.
How do enterprise information systems (IS) managers know if their users are satisfied with changing data services, and whether they are getting value for their money? Traditionally, data services have been bought on a monthly rental basis. Given the set-up costs and lead times required to deliver these services, changing carriers was difficult, and often considered only in extreme cases, such as when users were completely dissatisfied with the service they were getting.
Even with services that involve relatively simple site connectivity, it is difficult to determine actual service quality. Service level agreements (SLAs), as we know them, are limited in their ability to measure quality. Given the dynamic nature of data services, the term "SLA" has become somewhat discredited, as service providers claim that they cannot, in practice, prove that the services they deliver actually meet the required level of quality. In addition, the service quality information divulged by a service provider may be out of date or inappropriate.
What makes anyone think this situation is going to improve, given that we live and work in a world in which carriers will need to provide higher-level, value-added services and applications? With traditional services and a single-vendor infrastructure, carriers have been able to report on service quality in a relatively straightforward manner. Today, we’re seeing the proliferation of new, intelligent routers equipped to set up on-demand connections and deliver quality of service appropriate to the needs of specific applications. These new service delivery platforms sell on their ability to set up services at mouse-click speed. In this new environment, how can carriers offer service quality tracking? More importantly, what can enterprises do to ensure that their carriers are delivering the goods more efficiently than they have in the past?
The Drivers
Four factors driving the evolution in data services are:
Broadband access. Industry experts project that the U.S. will experience massive deployment of broadband access in the local loop over the next several years, effectively removing bandwidth as a constraint in the deployment of new-generation services and applications.
Competitive access in the local loop. Already happening in the U.S., and in the beginning stages in the European market, this is expected to further stimulate the emergence of competitive service providers.
The business Internet. Major global carriers and Tier 1 Internet service providers (ISPs) are building traffic-engineered backbone networks in anticipation of attracting premium business traffic. This evolving resource provides the necessary infrastructure to deliver committed quality of service to meet the diverse demands of different applications.
IP is the common language. The public Internet has resulted in IP being the "common language" for data services. For the first time, vendors can focus their efforts on developing creative access technology, without having to worry about upstream standards.
The public Internet has opened our eyes to the prospect of universal connectivity of a kind previously available only to corporations at premium prices. The connectivity model is in place, and technology is improving so quickly that business services delivered over the Internet will soon become a reality.
In the future, it’s clear that all connectivity, whether over a LAN or a WAN, will be broadband and based on IP and its derivatives. The universal adoption of IP makes affordable, near-universal connectivity a realistic prospect, which is forcing carriers to seek profits elsewhere. As the reliability and predictability of installed connectivity improve, a whole new networked applications market is emerging, in which we’ll see a broad range of services. Some of these services will be hosted by traditional carriers and/or application service providers (ASPs). Entirely new network architectures and companies are being created to bring this new, dynamic services marketplace to fruition.
Existing WAN infrastructures of high-end IP routers and asynchronous transfer mode (ATM) switches are slowly being equipped with features to support more services and greater service differentiation over much higher bandwidths. The WAN is a necessity for the provision of these services, but the technology, once improved, will take a back seat. The outcome for businesses is that they will be able to understand the services being offered, and will pay for them only when needed. Meanwhile, as bandwidth becomes increasingly commoditized in the WAN, including the last mile, the desire to outsource key business functions is on the rise, even for small- and medium-sized enterprises. After all, why invest in installing a customer relationship management (CRM) product in-house when the bandwidth exists to access it remotely?
The Growth of the ASP
In an effort to attract and retain customers, service providers are increasingly partnering with ASPs to offer a rich portfolio of solutions. According to CIMI Corporation, customer turnover can be as high as 70 percent when a customer subscribes to a single application or service. When customers subscribe to two services, this churn drops to 20 percent, and to virtually zero when customers subscribe to three or more services. This clearly illustrates the importance of a service provider "brokering" a range of application services to win and keep new revenues.
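To see what those churn figures imply for customer tenure, consider a back-of-the-envelope calculation. The sketch below (Python) assumes churn is a constant per-period rate, so expected tenure is simply its reciprocal; the stand-in figure for "virtually zero" is an illustrative assumption, not a CIMI number.

```python
# Back-of-the-envelope: expected customer tenure under constant churn.
# Assumes churn is a constant per-period rate, so tenure follows a
# geometric distribution with mean 1/churn. The 2% figure for "three
# or more services" is an assumed stand-in for "virtually zero."
churn_by_services = {1: 0.70, 2: 0.20, 3: 0.02}

for services, churn in churn_by_services.items():
    expected_tenure = 1 / churn
    print(f"{services} service(s): churn {churn:.0%} -> "
          f"expected tenure ~{expected_tenure:.1f} periods")
```

Under these assumptions, a single-service customer stays roughly 1.4 billing periods, a two-service customer five, and a three-service customer fifty, which is the arithmetic behind the brokering strategy.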
Who, then, will customers be buying their data services from? One school of thought points to the emergence of Building Local Exchange Carriers (BLECs) in the U.S. BLECs, aligned with real estate companies, target large multi-tenant buildings. They deploy technology in basements, allowing new tenants to purchase business-class connectivity to IP services on demand. BLECs point the way to a new breed of service provider, concerned more about packaging connectivity into application services than about bandwidth and site connectivity. In fact, the mantra of this new breed will be, "Owning the customer is more important than owning bandwidth." In this new environment, service providers will find it harder to retain their customers. They will need to move up the value chain to find new revenue sources, and seek new ways to foster customer loyalty. How? By offering differentiated services and quality levels.
Traditional Quality Assurance
One of the factors enterprises and users need to consider when choosing a service provider is the service level guarantees it offers. Typically, SLAs cover availability, mean time to repair, and delay measurements.
But how can enterprises be sure their SLA is being met? Take availability, for example: how is it being measured? More than likely, it will be based on reported faults and on when those faults are cleared by the network operations center. What is really needed is a way for the system to track true service availability, and then use this information to drive self-correcting processes. In this way, the report becomes a reliable confirmation that the quality agreed to was actually delivered.
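The gap between ticket-based and measured availability is easy to quantify. The following sketch contrasts the two using the standard availability formula (one minus downtime over total time); the outage windows are hypothetical, purely to illustrate how unreported or late-ticketed outages inflate the reported figure.

```python
from datetime import datetime

# Sketch: why ticket-based availability overstates true availability.
# All outage windows below are hypothetical illustrative data.

def availability(outages, period_start, period_end):
    """Availability = 1 - (total outage seconds / total period seconds)."""
    total = (period_end - period_start).total_seconds()
    down = sum((end - start).total_seconds() for start, end in outages)
    return 1 - down / total

start = datetime(2001, 1, 1)
end = datetime(2001, 2, 1)

# Outage the NOC recorded: ticket opened late, closed when the fault cleared.
ticketed = [(datetime(2001, 1, 10, 9, 30), datetime(2001, 1, 10, 11, 0))]

# Outages continuous measurement would have seen, including unreported ones.
probed = ticketed + [
    (datetime(2001, 1, 10, 8, 45), datetime(2001, 1, 10, 9, 30)),  # before ticket
    (datetime(2001, 1, 22, 2, 0), datetime(2001, 1, 22, 4, 0)),    # night outage
]

print(f"Ticket-based availability: {availability(ticketed, start, end):.4%}")
print(f"Probe-based availability:  {availability(probed, start, end):.4%}")
```

Here the ticket-based figure comes out near 99.80 percent, while continuous measurement of the same month shows roughly 99.43 percent, a difference that can decide whether an SLA was met.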
There is also the question of what enterprises can do when SLAs are violated. Typically, there would be some form of money-back commitment, but how useful would that be to a beleaguered enterprise IS manager? While they may want to discontinue the relationship and take their business elsewhere, in the old service paradigm this was simply not possible.
Another issue encountered by service providers deploying traditional SLA tools is the difficulty of collecting suitable quality data. Most SLA and network management tools rely on one of two techniques to acquire data: trap-based solutions and polling-based solutions.
Trap-based solutions simply respond to events that are asynchronously identified by network devices. They fail to identify "soft" conditions, such as over-utilization or congestion. By contrast, polling-based solutions demand that the software explicitly address the network hardware on a regular basis to request a measurement. This can result in poor accuracy when measuring variables such as availability. Few products have been able to successfully combine both data sources into coherent, service-level information. It’s easy then to see why the concept of SLAs may no longer work as well in the new data services market.
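A rough sketch of what combining the two sources might look like: asynchronous trap events mark hard faults, while polled utilization samples expose the "soft" congestion that traps never report. The event formats and the 90 percent congestion threshold are assumptions for illustration, not any product's actual data model.

```python
# Sketch of merging trap and poll data into one service-level timeline.
# Event shapes and the congestion threshold are illustrative assumptions.

traps = [  # (timestamp_sec, event) pushed asynchronously by devices
    (120, "linkDown"),
    (300, "linkUp"),
]
polls = [  # (timestamp_sec, utilization) sampled every 60 seconds
    (60, 0.45), (120, 0.00), (180, 0.00), (240, 0.00),
    (300, 0.97), (360, 0.93), (420, 0.60),
]

CONGESTED = 0.90  # a "soft" condition a trap alone would never report

# Normalize both sources into (time, state) records and merge by time.
timeline = sorted(
    [(t, "DOWN" if e == "linkDown" else "UP") for t, e in traps] +
    [(t, "CONGESTED" if u >= CONGESTED else "OK") for t, u in polls]
)
for t, state in timeline:
    print(f"t={t:4d}s  {state}")
```

Notice that the polls alone would miss the exact moment the link went down, while the traps alone would report the link "up" at t=300s even though it was badly congested; only the merged view reflects what the user actually experienced.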
The Service Provider Challenge
Service providers are trying to move out of commodity bandwidth and into value-added services. They are seeing the emergence of the ASP market, which provides connectivity to applications via the Internet, and they can see how this market is going to grow once business-class Internet connectivity emerges.
Many service providers are enhancing their infrastructures to incorporate hosting centers that will house e-commerce applications. Others are choosing to act as brokers, providing a portal with quality connectivity to a vast portfolio of applications.
For these service providers, the competition is going to get tougher as customer churn increases, further emphasizing the need for differentiation through quality of service (QoS) and value-added feature offerings.
One thing is certain: To deliver application connectivity, service providers are going to need to deploy new devices at the network edge that are application-aware, and that can deliver application-specific QoS. These devices will have the ability to establish a connection rapidly, on demand, to a selected ASP.
For example, in a NetMeeting conference, the service request would set up simultaneous connections, with appropriate bandwidth and latency characteristics, from each participant’s intelligent access device to the application server. The key is that connectivity to an application is delivered to the end user based on committed QoS for that specific flow.
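As a thought experiment, a flow request to such an access device might carry a destination plus committed bandwidth and latency bounds, one per participant. The sketch below is hypothetical: the class and function names, the server address, and the 384 kbps/150 ms figures are illustrative assumptions, not any vendor's actual interface.

```python
from dataclasses import dataclass

# Hypothetical request an intelligent access device might accept when a
# conference is launched. Names and fields are assumptions for illustration.

@dataclass
class FlowRequest:
    destination: str      # application server to connect to
    bandwidth_kbps: int   # committed bandwidth for this flow
    max_latency_ms: int   # latency bound for this flow

def setup_conference(participants, app_server):
    """Request one QoS-committed flow per participant to the server."""
    return [
        FlowRequest(destination=app_server,
                    bandwidth_kbps=384,   # assumed videoconferencing rate
                    max_latency_ms=150)   # assumed interactive latency bound
        for _ in participants
    ]

flows = setup_conference(["alice", "bob", "carol"], "conference.example.net")
for flow in flows:
    print(flow)
```

The point of the exercise is that QoS is committed per flow, at request time, rather than configured per circuit weeks in advance.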
In this scenario, the service provider’s challenges are compounded for two reasons. First, there will be a greater diversity of devices at the network edge, offering different methods of delivering QoS metrics. Second, services are going to be provisioned more dynamically, even auto-activated in some cases. In this dynamic environment, traditional SLA toolkits will simply not be enough. Rather, integrated flow-through provisioning will be required.
In this highly dynamic environment, it will be difficult to create tools to measure service performance. Not only are services changing conceptually, but, at any given moment, there may be different types of customers buying completely different types of services, possibly from multiple carriers. Yet all will share the same multi-vendor network infrastructure. Even so, making use of such tools will be the only way carriers can ensure that the services offered over their WANs evolve toward the levels of quality their customers demand.