A Framework for Platform Usage Models (Part 1 of 3)

Usage model analysis is essential to justify a deployment and quantify its value.

Performing a usage model analysis is essential for an individual or organization to justify a deployment and quantify its value. For vendors of IT-related equipment and services, the process is equally important for gaining insight into how their products will be bought. Usage analyses can also be applied to strategic technology deployments such as a grid computing system. However, traditional, direct approaches have led to less-than-satisfactory outcomes. This article series presents an alternative approach.

Let’s start with a hypothetical example.

At the recommendation of the CTO, the CIO of a large financial services company decides to deploy the product of a well-known grid computing middleware vendor. The economic justification is impeccable: the company runs deadline-sensitive analytic models that take hours to complete even on the most powerful PC workstations. The software vendor promises that these tasks can be offloaded to company servers. According to the latest measurements, those servers show an 8 percent utilization rate, prompting questions about the wisdom of current investments in server hardware. Offloading grid jobs onto these servers promises to raise utilization rates by at least 20 percentage points with almost no incremental hardware investment, while relieving the workstations now in use.

Fast forward one year. The situation is similar, if not worse. Some older servers have been replaced; the new servers consume less electricity and are twice as powerful, and their remote management features have reduced maintenance labor costs.

The bad news is that the grid system has not been adopted to any significant degree. In fact, the technical staff running the analytics applications are requesting a refresh of their workstations to match the server refreshes, because they can't turn around compute jobs fast enough. Meanwhile, the grid infrastructure languishes with little use, and because the new servers are so much more powerful than the ones they replaced (but are still mostly deployed as dedicated, single-application servers), utilization rates have actually gone down by a full percentage point, to 7 percent. When queried, the technical staff offer valid reasons for continuing to run the analytics programs on their workstations.

What prevented the initial projections for the proposed grid infrastructure from being realized? The economic analysis was right on the money, based on prior experience in similar shops in the industry. However, reporting to the CIO were a number of IT operations offices scattered around the world, not entirely trusting of one another and occasionally even competing against each other. Some of these organizations had come from other companies in prior mergers.

Grids in general have the pesky habit of crossing departmental boundaries. Unfortunately, this shop lacked a bill-back mechanism to charge one department for another department's usage or to credit a donor department. When a department was given cost targets in a round of cost cutting, the grid would be first on the chopping block. The grid functionality was essentially turned off by members of the same IT organization tasked with promoting it.

The grid infrastructure was not just sitting unused; it had been defanged. A post-mortem analysis indicated that all of the vendor's prior implementations had been confined to a single departmental unit, and that there was no precedent for addressing cross-organizational issues.

In addition to these challenges, the funds earmarked for application migration to the new grid environment were slashed in another local budget-cutting decision. The reason offered was that since the system would not be able to summon resources from other departments, the application conversion cost would be much higher than the expected benefit from using the grid.

The main issue in this botched technology adoption was the failure to recognize that in any complex transition exercise involving multiple organizations, the exercise carries business, organizational, and technical aspects that must be addressed concurrently. These considerations take place in the context of platforms.

Complex platforms are inherently layered and stacked—that is, they can be defined recursively. For instance, a set of hardware and a network constitutes a platform on which to host an operating system. In turn, and one level up, the OS defines a platform on which to run business applications, and so on.
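This recursive definition can be made concrete with a small sketch. The code below is purely illustrative (the `Platform` class and layer names are our own invention, not any vendor's API): each layer optionally sits on a lower layer, and walking the chain recovers the full stack.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Platform:
    """A platform layer that may itself be hosted on a lower-level platform."""
    name: str
    hosted_on: Optional["Platform"] = None  # None marks the bottom layer

    def stack(self) -> List[str]:
        """Return this layer and every layer beneath it, top to bottom."""
        layers = [self.name]
        if self.hosted_on is not None:
            layers.extend(self.hosted_on.stack())
        return layers


# The example from the text: hardware hosts an OS, which hosts applications.
hardware = Platform("hardware and network")
os_layer = Platform("operating system", hosted_on=hardware)
apps = Platform("business applications", hosted_on=os_layer)

print(apps.stack())
# ['business applications', 'operating system', 'hardware and network']
```

Each `Platform` can be examined on its own while the `hosted_on` link keeps the recursive relationship explicit, mirroring the layer-by-layer analysis the series develops.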

A common belief about platforms is that they are purely technical in nature. This is not always the case: platforms can also encompass organizational and business aspects. As we move up the abstraction layers, the notion of a platform will expand to encompass organizations, as we shall see in the third part of this series.

The layered nature of platforms suggests that each layer can be analyzed independently. This approach breaks a hopelessly intractable problem into manageable pieces, and in the process yields significant insight into cross-layer interactions and dynamics in the system.

In the opening example, the usage models at the upper layers (the CIO level), characterized by considerations of company-wide investment and ROI, were appropriately addressed, but the concerns of departmental directors and technical end users went unaddressed. The transition project was doomed from the start.

Next week we’ll explain the method for this layered approach to usage models, which we call the Hierarchical Usage Model. In the third article in this series, we will return to the grid example after having acquired familiarity with the notion of hierarchical usage models.

About the Author

Enrique Castro-Leon is an enterprise architect and technology strategist at Intel Solution Services, where he is responsible for incorporating emerging technologies into deployable enterprise IT business solutions.