In-Depth
Mainframes: The Gold Standard of Enterprise Computing Platforms
Mainframes offer scalability, availability, security, and compliance, and are energy efficient to boot. No wonder their popularity is growing.
By Chris O’Malley
When distributed platforms first emerged as an alternative to the mainframe in the 80s, no one could have anticipated how pressures on corporate IT were going to evolve. The growing importance of IT services to the business and the tremendous challenges associated with extension of those services over the Internet have resulted in requirements for scalability, availability, security, compliance, energy efficiency, and adaptability that simply weren’t foreseen in the early days of distributed computing.
In fact, much of the recent history of distributed computing can be viewed as an ongoing attempt to replicate the stability and performance of the mainframe.
Distributed platforms, however, haven’t been able to achieve this functional parity. This is why, despite continued predictions to the contrary, the mainframe remains a preferred platform for mission-critical IT services. It’s also why the mainframe has become a more attractive platform than ever for companies that want to maximize their bang for the IT buck.
What We Didn’t Realize 20 Years Ago
There were plenty of good reasons for corporate IT departments to embrace distributed computing back in the 80s. The key driver was the desire for departmental autonomy. Simply stated, business-line managers were tired of being dependent on traditional application development schedules and wanted more control over their own lines of business.
Although the productivity advantages inherent in PC user interfaces were partly responsible for the growth in distributed computing, the major factor was the desire to have more control over the computing environment and application development.
Much has changed, however, over the past 20 years. A few prescient observers may have foreseen some of those changes, but no one realized just how radically enterprise computing would be transformed.
Issues faced by IT today that weren't anticipated in the early days of distributed computing include:
Skyrocketing TCO. The fragmentation and complexity of distributed environments have made them increasingly expensive to own, so IT organizations now spend an unacceptably large percentage of their budgets just maintaining their existing server infrastructure.
Massive scalability requirements. IT organizations now have to serve more users, retain more data, and support more CPU-intensive applications than anyone ever anticipated. To cope with these requirements using distributed infrastructure, the industry has had to constantly engineer new solutions such as SANs and NAS, clusters and racks, and midrange-like multiprocessor systems.
The intolerability of downtime. Businesses and their customers have essentially become addicted to IT services. From the supply chain to the online store, revenue and productivity depend on the availability of these services. System “hiccups” and scheduled maintenance outages that were once tolerable have now become entirely unacceptable.
Overwhelming security threats. Back in the 80s, no one realized just how extensively enterprise systems would be exposed to the outside world, nor could they have predicted the rise of such a skilled and aggressive hacking community. As a result, IT organizations constantly have to patch their distributed environments and shore up their protection as they discover an endless stream of new vulnerabilities and new exploits.
The rise of compliance. Once upon a time, a company’s IT environment was its own business. This is no longer the case. IT organizations are now highly accountable to regulators, shareholders, customers, and others -- especially regarding issues such as privacy and data retention.
The green factor. When IT organizations first began to acquire distributed infrastructure, they didn’t think in terms of energy consumption because energy was relatively cheap and the total amount of hardware was not that great. Today, those costs are high and the number of devices is tremendous. As a result, IT organizations are being burdened with huge utility bills. They are also creating significant carbon footprints at a time when companies are trying to make those footprints smaller.
These are just a few of the issues that make enterprise computing very different today than it was in 1988 or even 1998. These issues undermine the business case for distributed computing while strengthening the business case for the mainframe.
What We Realize Now
IBM has continued to innovate on the mainframe as distributed computing vendors have struggled to respond to the escalating challenges of corporate IT. Mainframes are more compact and energy efficient than ever before. Evolving tools now allow developers to respond more quickly to emerging business needs. Mainframes can also now be managed in a common manner with existing distributed infrastructure, allowing IT organizations to treat them more as very big servers than as an entirely separate computing domain.
Just as significantly, the attributes that always made the mainframe attractive have now become more compelling than ever. These attributes include:
- Massive scalability
- Unmatched reliability and availability
- Tight systems security
- Low per-MIPS TCO
- Built-in virtualization
- Robust compliance auditing
In other words, the attributes that the IT industry as a whole continues to struggle to implement with distributed infrastructure are already present and mature in the mainframe.
Certainly all of these attributes serve to make the mainframe platform the favorite for storing “bet-the-business” data and, to a lesser degree, for hosting mission-critical applications. However, it’s classic economies of scale that really make it a good business decision to host such data and applications on the mainframe. The platform takes fewer people to manage, requires demonstrably less energy than a distributed server farm, and offers the maximum opportunity to lower costs through consolidation of software, hardware, and data storage (disk, tape, and so on).
One way to conceive of the difference between the mainframe paradigm and the distributed paradigm is to contrast a well-designed skyscraper offering several million square feet of floor space with a group of smaller, separate office buildings. The skyscraper can be quite efficient and capable of accommodating different tenants with different needs at different times. The owners of the building only have to buy one piece of land. They only have to deal with the zoning board once. They can design and run a single system each for heating, cooling, security, and power distribution.
The use of many separate buildings is far less efficient. Each site has to be purchased and prepared separately. Each requires an architect, a general contractor, and a separate process of zoning approvals. If the tenant in one building needs a bit more space, the decision must be made whether to renovate the existing building or move to a new one.
Of course, the tremendous advantages offered by the mainframe today in no way mean that companies should scrap their investment in distributed infrastructure. Today’s desktops, laptops, and handheld devices -- if properly managed -- offer tremendous capabilities to end users, especially for application presentation and local functionality. Windows, UNIX, and Linux servers similarly provide a useful middle tier for various types of application logic and processing.
In most cases, the right hosting platform for an application isn’t the mainframe, a midrange server, or a PC, but some combination of these. The right tool for the job isn’t always a hammer. Mainframe-based user interfaces, for instance, are generally user vicious. But the mainframe can rarely be beat as a data-hosting or application-hosting platform for high-volume, transaction-based applications. Furthermore, the mainframe is widely recognized and accepted as the premier platform for hosting bet-your-business data.
The time has come for IT organizations to dispense with the near-religious orthodoxy with which they have focused on distributed computing over the past two decades. Distributed computing is simply too inefficient -- and the pressures on IT organizations are too great -- for companies to forego the tremendous advantages offered by today’s mainframe. The continued proliferation of multiple, disparate moving parts in the corporate IT environment benefits no one. Distributed computing is still trying to grow up to become what the mainframe already is. The sooner IT decision makers fully realize this and act accordingly, the sooner IT will be able to get ahead of the demand curve and deliver its full potential value to the business.
Chris O’Malley is executive vice president and general manager of CA’s mainframe business unit. You can contact the author at Christopher.O'Malley@ca.com.