
Next-Generation Business Service Management -- The Year Ahead

In the current economic climate, IT must align itself more closely with the business needs of its users to achieve success. We explain how this IT/business alignment can occur as IT becomes a marketer of business services to its user base.

By Scott Fagen, Distinguished Engineer for the Mainframe Business, CA Technologies

When looking at the trends that are driving the next generation of mainframe management, it is clear that achieving success given the pressures of the current economic, competitive, and technical forces requires an even closer alignment of business needs with the delivery of capability traditionally supplied by IT. The advent of cloud computing adds to this pressure, even for companies that choose not to take advantage of public cloud offerings, because it introduces a disruptive technology that intensifies competition by raising concerns in the business about the cost and agility of internal IT.

These trends cut across all of IT, not just the mainframe side of the house. To respond, IT must look for cost-effective ways to deliver more efficient services to the business. Although there's nothing new to report here, the trends are real and we will all have to react to them. In order of importance:

First, Business Results on a Shoestring

As is true across the business, IT is under pressure to reduce cost or, at the very least, produce more at the same cost. We sometimes forget that IT exists primarily as a means to reduce the cost of producing business results through the automation of tasks and processes that at one time were performed by people. It is also important to remember that the business doesn't care how a result is achieved; the primary concern is that the result is achieved in the most cost-efficient manner. There is no reason to deploy IT equipment to a task that is less expensively achieved through other means.

Second, Leverage Technology to Transform IT

IT must also continue to be more responsive to the needs of the business, make better use of the technology at its disposal, and be vigilant in the pursuit of new technologies that can be applied to business needs. For example, consider two technologies currently receiving significant hype -- cloud computing and the new IBM zEnterprise system -- not only as service delivery mechanisms but as paradigms for the internal deployment of IT resources.

It may even come to pass that these forces drive IT to take on a new name and new functions -- the next step in the evolutionary process from "data processing" to "information systems" to "information technology." This transformation might well be called "information logistics."

Third, Breaking from the Traditional Structures

Before being able to align with business needs, IT needs to realign itself internally. One of the things I see all the time in large data centers is the "silo mentality" that separates the mainframe and distributed camps. This is obviously a historical construct, from a time when business services were more discretely delivered by a single application on a single platform. Additionally, within these platform silos, teams are often aligned by product, subsystem, or even software vendor. This creates a situation that is not particularly conducive to agility, growth, or accurate accounting of cost (all of which are the hallmarks of a sales call by an external cloud service provider).

Consider the position that an application architect is placed in when faced with implementing new or changed functionality that crosses multiple platforms, disciplines, and underlying software. He or she not only has to understand the impact of the change across those platforms, but it is likely that additional capacity is going to be needed and there may also be additional "non-functional" requirements. For example, data might now need to be encrypted or additional storage must be included in the disaster recovery plan.

In addition, constructs must be implemented to ensure regulatory compliance for the new functionality. Implementing the change requires alignment of many groups -- groups whose measurements (and compensation) make them averse to any kind of change to the system. This kind of misalignment of goals puts negative pressure in the system -- the kind of pressure that makes IT look to the business like it is apathetic and unresponsive to their needs.

In response to this, I do see IT areas moving to collapse the walls between the platforms, trying to bridge the gaps between the management of the platforms and network, but this is only the first step. The next step is more radical -- aligning people and resources to the results that business is asking for.

In this environment, IT staffers will be able to distinguish themselves from their peers by becoming more sensitive to the business, first by understanding needs and eventually by anticipating them. Another important distinguishing factor will be the breadth of the resources that IT can manage. "Ambidextrous IT staffers," who are enabled to manage across IT platforms and disciplines through a combination of improved tools, training and knowledge transfer will be key to transforming IT for better business alignment.

How Does the Business (Want to) View IT?

Remember, the business doesn't especially care how a particular result is achieved. The goal is to efficiently move through the supply chain, taking external inputs (raw materials, data, etc.) through a set of processes, including both internal and external providers, to create a product.

Every step of that supply chain is a potential target for replacement. For example, most businesses use external providers for shipping. They will negotiate the best deal they can and contract for specific business outcomes -- the equivalent of service-level agreements (SLAs) to you and me. At the end of the contract, the business will evaluate the performance and value gained from the relationship with the shipping company and decide what to do next. For some businesses, such as "e-tailers," the relationship with the shipper is as close as, or closer than, the relationships among departments within the company. This is a simplistic view, but it can help us understand how the business wants to deal with IT.

Depending on the size and scope of the business, IT can be responsible for building and delivering tens or even hundreds of discrete services that are integral parts of the larger business supply chain. Each of these processes has a value to the business, an availability requirement and, in most cases, regulations around the data associated with the process. It is also very likely that there is wide variance in these values, requirements and regulations across the different processes.

It becomes very frustrating for the business to have to first restate requirements repeatedly across the various silos that IT presents, and then translate those requirements into specifications that are meaningful to the silo they are talking to. Very often, business people are not technology-savvy enough to articulate these requirements clearly. I'm sure you've been involved in conversations between "the IT guys" and "the business guys" that look more like a meeting at the UN on the day the translators went on strike than a clear conversation about translating business requirements into IT processes.

Think about how the business would deal with a shipping company. They are not going to specify how a package should be transported from point A to point B; rather, they're going to contract for timeliness of delivery and cost. It's up to the shipping company to determine whether packages will be delivered by air, rail, truck or on the back of a pack mule. The same is true for IT: the business doesn't care whether the result is achieved by a roomful of accountants with green eyeshades, a TRS-80 or a CICS transaction.

Bridging the Culture Divide

The most important step in solving these issues is to decouple discussions about platforms from discussions about the services IT delivers. IT and the business should get together and clearly describe the services being delivered and the quality of service (QoS) aspects required, e.g., performance metrics, SLAs, data encryption and retention requirements. Once these services have been defined and understood, the business and IT should interact based on these definitions and needs rather than on the underlying technology that delivers them. It then becomes incumbent on IT to determine the most economical and expeditious means of delivering these services to the business while still abiding by the SLAs and QoS requirements.
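As a thought experiment, such a service definition could be captured as data rather than as tribal knowledge. The sketch below is purely illustrative -- the field names, thresholds, and the "web-banking" service are assumptions, not a real catalog format -- but it shows the idea of recording SLAs and QoS aspects independently of any delivery platform.

```python
from dataclasses import dataclass

# Hypothetical sketch: recording a business service and its agreed QoS
# aspects as data. All names and values are illustrative assumptions.

@dataclass
class BusinessService:
    name: str
    response_time_ms: int    # SLA: maximum acceptable response time
    availability_pct: float  # SLA: e.g., 99.9 means "three nines"
    encrypt_at_rest: bool    # QoS: data encryption requirement
    retention_years: int     # QoS: data retention requirement

def meets_sla(svc: BusinessService, observed_ms: int, observed_avail: float) -> bool:
    """Check observed behavior against the agreed SLA, not the platform."""
    return (observed_ms <= svc.response_time_ms
            and observed_avail >= svc.availability_pct)

web_banking = BusinessService("web-banking", 500, 99.9, True, 7)
print(meets_sla(web_banking, observed_ms=420, observed_avail=99.95))  # True
```

The point of the exercise: once the contract lives in a definition like this, the conversation between IT and the business is about the numbers in it, not about which platform produces them.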

Getting Down to the Business of IT

Now that the business deliverables have been clearly specified, IT should create cross-disciplinary teams to be the human face of, and support organization for, these services. Clearly, each of these business services comprises many more discrete sub-services; a web banking application may touch web servers running on x86 and application servers running on RISC before finally connecting to transaction and database managers running on the mainframe. As is always done in complex systems design, these sub-services need to be defined and understood. It is important to catalog them as well, since many of these sub-services will likely be used across the various business services and should be shared.

To enable these teams to deliver these services with the appropriate flexibility and visibility, the tools used to manage the environment need to support the concept of a "multi-platform service" or "multi-platform application." Rather than a serial process where each platform designs, provisions and deploys its own discrete piece of infrastructure, it would be much more meaningful to deploy the application architecture just like it would be drawn up in something like Microsoft Visio. Architectural requirements such as data encryption would be managed as aspects within the diagram.
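To make the idea of deploying "the diagram" concrete, here is a minimal sketch of what a declarative multi-platform service description might look like. The structure, tier names, platform labels, and the "encrypt-at-rest" aspect are all illustrative assumptions, not a real deployment format.

```python
# Hypothetical sketch: a multi-platform service described declaratively,
# the way it might be drawn in a diagram. Each tier names its platform
# and any cross-cutting aspects it must carry.

web_banking = {
    "service": "web-banking",
    "tiers": [
        {"name": "web",  "platform": "x86",       "aspects": []},
        {"name": "app",  "platform": "RISC",      "aspects": []},
        {"name": "data", "platform": "mainframe", "aspects": ["encrypt-at-rest"]},
    ],
}

def platforms_required(service: dict) -> set:
    """Tell the infrastructure team which resource pools this service needs."""
    return {tier["platform"] for tier in service["tiers"]}

print(sorted(platforms_required(web_banking)))  # ['RISC', 'mainframe', 'x86']
```

With a description like this, provisioning can proceed from one model of the whole application rather than as a serial hand-off between platform teams.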

Once the service architecture is validated, the underlying tooling would then be able to access a pool of resources (including whatever mix of hardware and software platforms IT chooses to support) made available by the infrastructure team, in a manner much more consistent with a cloud paradigm. Optimizations can be discovered and managed at the infrastructure layer. For example, the distributed sub-services in the banking application that I described earlier might be better served being laid out on a zBX in a zEnterprise ensemble, versus an all-distributed application being implemented in a single blade chassis or on virtual images within a single server.

Along with maintaining the application architecture, monitoring and operational tools also need to align to these business services. The underlying tooling must surface technical (system and application) metrics hand-in-hand with business metrics so that both IT and the business can respond to system-level or business-level anomalies through policy, external automation or manual effort. This becomes much easier when the service is managed as a unit: the instrumentation sources are logically tied together at deployment time rather than by the individual towers. This also helps implement a "single source of truth" that is accessed by all disciplines, rather than many of today's silo management tools.
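The policy side of this can be sketched very simply. In the toy example below, technical and business metrics for a service are evaluated against one shared policy; the metric names and thresholds are illustrative assumptions, not a real monitoring product's schema.

```python
# Hypothetical sketch: one policy covering both technical and business
# metrics for a service, so a single anomaly check serves both views.
# Metric names and thresholds are illustrative assumptions.

policy = {
    "cpu_pct":           lambda v: v < 85,   # technical: CPU headroom
    "txn_latency_ms":    lambda v: v < 500,  # technical: response time
    "orders_per_minute": lambda v: v > 100,  # business: throughput
}

def anomalies(metrics: dict) -> list:
    """Return the names of metrics that violate policy, technical or business."""
    return [name for name, ok in policy.items()
            if name in metrics and not ok(metrics[name])]

snapshot = {"cpu_pct": 72, "txn_latency_ms": 630, "orders_per_minute": 140}
print(anomalies(snapshot))  # ['txn_latency_ms']
```

The latency violation here is visible to IT and the business alike, because both sets of metrics flow through the same service-level policy.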

An important side effect of this cloud-like management paradigm is that the sub-services are not necessarily beholden to any particular hardware and software stack. Many of today's sub-services can be placed on multiple platforms as well as potentially "cloud-burst" to an offsite provider. Again, tooling that manages through this business service approach is key to enabling this kind of agility within the data center.

By separating and aligning the "marketing" side of IT to the business, IT can serve the business with improved agility and at lower cost. This alignment can help IT identify areas where it is incurring additional cost by delivering higher QoS than necessary, or where it may achieve greater economy by sharing resources across systems that have different workload profiles.

In essence, the data center should be moving towards more of an internal cloud model, where IT markets and sells SaaS capability to the business and the infrastructure team internally brokers IaaS or PaaS with each of the various business service providers. Overall, it feels like the old days where everybody shared the mainframe, but today, there are more platform, implementation and service delivery choices at our disposal to help us usher in the next generation of business service delivery and management.

Scott Fagen is a distinguished engineer reporting to the CA Technologies Architecture Team. As chief architect for the company's portfolio of mainframe technology, he sets platform strategy and leads the team of engineers that sets the technical direction for the development of its mainframe products. Scott joined CA Technologies in 2007 after 21 years of mainframe engineering experience at IBM. Prior to his current role, Scott served as the principal architect for Mainframe 2.0, leading the design and implementation of many facets of that CA Technologies initiative. You can contact the author at scott.fagen@ca.com.
