Rethinking TCO and ROI
TCO and ROI efforts can be misleading -- in part because they tend to over-generalize the needs of particular customers and apply them to the market as a whole.
To many IT professionals, total cost of ownership (TCO) and return on investment (ROI) are no-brainer propositions: the latter is (or should be) a derivative of the former. But perhaps TCO and ROI aren’t quite as straightforward as that.
In fact, says industry veteran Wayne Kernochan, a senior IT advisor with consultancy Illuminata, many TCO and ROI efforts are misleading—in part, he argues, because they tend to over-generalize the needs of a particular set of customers—typically large enterprise customers—and apply them to the market as a whole.
"Many TCO/ROI studies do not speak the language of the IT buyer or of the user in general," he observes. "[T]hey provide a bewildering array of ‘costs’ that are hard to translate into the specific license, development, and operational costs that most users deal with." There’s a further wrinkle here, too, he says: "[T]hey do not clearly connect the needs of the business with the purchase of a particular piece of equipment and … infrastructure software."
He doesn’t reject TCO and ROI as altogether useless. "[I]t is possible to do a meaningful, real-world TCO/ROI study, to figure out which published TCO/ROI results can be relied on, and to use TCO/ROI to make better IT investment decisions. The key … is to focus on depth of per-user research rather than number of interviewees," Kernochan argues.
For starters, he suggests, organizations need to start taking a systemic (as opposed to an isolated) view of their infrastructure investments. Instead of simply calculating TCO or ROI for a specific software investment—e.g., a migration from one relational database management system (RDBMS) to another—organizations should measure the TCO and ROI of the software and hardware ecosystem in which that tool functions. "I believe that we should—where possible—measure the TCO of a solution," he explains. "[W]e collect the costs of one application and the software and hardware infrastructure and people supporting it, but not that portion of [software, hardware, and people] supporting other applications or the rest of a datacenter."
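Kernochan's solution-level view can be sketched in a few lines, assuming an organization can estimate what fraction of its shared infrastructure and staff supports the application in question. The function name and figures here are illustrative, not from the article:

```python
# A minimal sketch of solution-level TCO, assuming we can estimate the
# fraction of shared datacenter costs (hardware, software, people) that
# supports this one application. All names and numbers are illustrative.

def solution_tco(app_costs, shared_costs, share_of_shared):
    """Sum the application's own costs plus only its share of the
    shared infrastructure costs -- not the whole datacenter."""
    return app_costs + shared_costs * share_of_shared

# Example: $200K of application-specific cost, plus an estimated 25%
# share of a $400K shared infrastructure-and-staff budget.
print(solution_tco(200_000, 400_000, 0.25))  # → 300000
```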
Once they’ve done that much, Kernochan continues, organizations need to break down IT spending into categories that are meaningful to business users. "[F]or any application, you buy what you need, develop the rest, and then maintain the production application. Any of these costs can be zero, but this approach ensures that you will at least consider the general case," he points out.
"[L]ook at any TCO study and you will typically find [a lot of categories]. The key question is: Can you quickly and easily reduce all of these categories to sensible lifecycle components such as license, development, and maintenance?" If you can’t do as much, Kernochan counsels, you’ll have a difficult time figuring out what lower prices will mean, how important development speed is, and how important hiring administrators will be.
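The rollup Kernochan describes can be sketched as a simple mapping from a study's many cost categories down to the three lifecycle components he names. The category names below are invented for illustration; a real study's categories would differ:

```python
# A hypothetical mapping from a TCO study's detailed cost categories
# down to Kernochan's three lifecycle components. The category names
# are assumptions for illustration only.

LIFECYCLE = {
    "license":     ["software licenses", "hardware purchase"],
    "development": ["customization", "integration", "initial training"],
    "maintenance": ["administration", "support contracts", "upgrades"],
}

def reduce_to_lifecycle(study_costs):
    """Roll detailed cost categories up into license / development /
    maintenance buckets; categories missing from the study count as zero."""
    return {
        component: sum(study_costs.get(c, 0) for c in categories)
        for component, categories in LIFECYCLE.items()
    }
```

If a study's categories cannot be mapped this way at all, that itself is the warning sign Kernochan describes.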
There are also other considerations. For example, Kernochan says, not all TCO variables are equally important. Nor, for that matter, do all TCO variables vary by much. "Certain things complicate the TCO without changing the results materially—things like PC hardware costs in a software analysis, training productivity costs, and networking costs. Unless these factors differ materially from solution to solution—and often they do not—considering them doesn’t really advance the analysis. Looking carefully at some TCO studies, you will find that what is left out isn’t clearly spelled out, and the omission may not be justifiable."
IT must also beware of the most-favored-customer dodge: vendors tend to use only their best customer examples—complete with unique (or customer-specific) pricing breaks—as sources for their TCO studies.
"[Y]ou need to remind yourself that much is going on that is not visible: Favored-customer discounts not listed on the vendor’s Web site, ‘free’ advice and consulting based on use of the vendor’s other products, and so on. Since in the long run you can often get the same deal from any vendor, it’s often better to base TCO strictly on what each vendor has published," Kernochan comments.
If TCO is more complicated than many folks believe, what does that mean for ROI, which is notoriously tricky to pin down, even under the best of circumstances? Kernochan doesn’t mince words.
"Let’s face it, predicting the stream of revenues from any product—much less a software/hardware ‘solution’—is very iffy," he acknowledges. "[B]usinesses from Wal-Mart to Burlington Coat Factory have demonstrated that by achieving competitive advantage in both customer-facing applications and business-process-improving solutions, enterprises can increase revenues, cut costs, or both—and increase their chances of survival [versus] less IT-savvy rivals." In other words, Kernochan argues, "the game is worth the risk."
Nevertheless, he suggests, organizations should start thinking about ROI in a somewhat different way. "However, the real point of ROI as it can be used today is not predictive but comparative. ROI is not that good at predicting whether you will indeed get a positive return on your IT-solution investment. It can, however, suggest what platforms can deliver better returns than other platforms," he points out. "ROI complements TCO: if it is important to you to achieve competitive advantage, it is also important to consider not only IT expenses but also business opportunities, and to consider those platforms that may cost more, but will deliver a better net result."
Kernochan suggests a new way of calculating ROI: it’s what’s left over after organizations subtract both their TCO and their opportunity costs from their revenues.
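That definition reduces to a one-line formula. A minimal sketch, with illustrative figures and a hypothetical function name, shows how it would be used comparatively across platforms rather than predictively:

```python
# A minimal sketch of Kernochan-style ROI: revenues minus TCO minus
# opportunity cost. The platform figures are illustrative assumptions.

def simple_roi(revenues, tco, opportunity_cost):
    """What is left after subtracting both TCO and opportunity cost
    from the revenues a solution is expected to generate."""
    return revenues - tco - opportunity_cost

# Comparing two hypothetical platforms: the costlier one can still
# deliver the better net result if it enables more revenue.
platform_a = simple_roi(revenues=1_000_000, tco=400_000, opportunity_cost=100_000)
platform_b = simple_roi(revenues=1_300_000, tco=550_000, opportunity_cost=100_000)
print(platform_a, platform_b)  # → 500000 650000
```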
"The key thing to remember about revenues is that you can’t start generating them until you finish developing the solution. So the biggest difference in revenues today between pieces of software and hardware comes typically not from your application server or dual-core blade server, but from your development toolset and process. The effect of development improvements is growing as the product lifecycle shrinks. Cutting development time from 1.5 years to one year increased revenues by about 10 percent when the product lifecycle was five years and the competitive-advantage window was about four years; now, the same improvement increases revenues by about 17 percent, with the product lifecycle at three years and the competitive-advantage window at two years. In more competitive industries, such as financial services and electronic games, the advantage is even greater."
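Kernochan's percentages are consistent with a simple back-of-envelope model in which the revenue gained is roughly proportional to the development time saved relative to the product lifecycle. That model is an assumption on our part, not his stated formula, but it reproduces his figures:

```python
# A rough model (an assumption, not Kernochan's exact formula): revenue
# accrues over the product lifecycle, so shipping earlier adds roughly
# time_saved / lifecycle in extra revenue.

def revenue_gain(old_dev_years, new_dev_years, lifecycle_years):
    """Fractional revenue gain from cutting development time."""
    time_saved = old_dev_years - new_dev_years
    return time_saved / lifecycle_years

# Five-year lifecycle: cutting development from 1.5 years to 1 year
print(round(revenue_gain(1.5, 1.0, 5.0), 2))  # → 0.1  (~10 percent)

# Three-year lifecycle: the same half-year saving is worth more
print(round(revenue_gain(1.5, 1.0, 3.0), 2))  # → 0.17 (~17 percent)
```

As lifecycles shrink, the denominator shrinks with them, which is why the same development improvement keeps growing in value.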