Three Enterprise Server Trends

Datacenters are leveraging maturity and turbulence in the high-end server market to wring more from enterprise data while stretching IT dollars for additional MIPS.

Faced with almost unlimited data, increasing demand, but limited budgets, IT managers are searching for enterprise-ready datacenter solutions. For some companies, the answer may be supplementing heavy mainframe iron with high-end servers whose architecture can help with the workload at a reasonable price.

In the high-end server market, the usual suspects are familiar: IBM, Sun, SGI and Unisys. One player, Hewlett-Packard, takes a triple-dip with its newer Superdome server and the well-established Tandem NonStop and Digital Alpha lines from its Compaq acquisition. Foreign vendors such as Fujitsu and Siemens play a smaller role in the high-end server game as well.

The worldwide market for high-end server units seems small—just under 16,500 units for 2001—compared to over 4 million sub-$25K servers sold during the same year. But the high-end group accounted for approximately half of the total server dollars spent in 2001, according to analyst firm IDC.

Determining the Definition
On the technology side, high-end servers use symmetric multiprocessing (SMP) or massively parallel processing (MPP) architectures to rival mainframes in scalability and robustness. With 20-plus years of history, the maturity of most of these servers almost equals that of mainframes.

However, the remaining criteria defining high-end servers are less clear. One consensus point is the cost of the system, generally pegged on the low side between $400,000 (the designation used in this article) and $1 million. But after adding in Cadillac options—such as hundreds of gigabytes of RAM and disk storage in the hundreds of terabytes—and adding high-performance maintenance contracts, a single-server system can run between $2.5 million and $10 million.

Using performance as the criterion with these evolving machines can be tricky. Computer Associates' VP John Pincomb, pointing to the IBM zSeries line, notes the difficulty. "Is a z800 series a mainframe or high-end server? The line between the two is very gray and isn't going to get any more black and white. The bottom line, [when] using price, that's a high-end server and, if using performance, it's a mainframe," he observes.

Putting the Servers to Work
If the criteria for these systems are evolving, so too are the tasks for which corporations deploy the boxes, which fall into three categories: Mainframe supplements (the server massages data from the mainframe), mainframe replacements (the server handles the data independently) and server consolidation (high-end servers replace many smaller server boxes).

For example, a major telecommunications company (which asked not to be named) uses Sun Microsystems Inc.'s Sun Fire servers to analyze call records from a mainframe to develop new customer products. General Mills Inc. deployed SAP's R/3 software for its organization on HP Superdome computers as a less-expensive alternative to the mainframe.

Evanston Northwestern Healthcare, a 720-bed hospital system near Chicago, replaced scores of application servers with a pair of IBM eServer p690 systems for its billing, patient record, pharmacy administration, and critical-care systems. Oracle practiced what it preached by consolidating its 40,000-person e-mail system from 120 different servers onto two HP Superdomes.

Server Trends Worth Watching
Three trends are rocking the high-end server industry: Vast hardware improvements, the open-source movement and a tough economy. The result: Vendors are interested in producing high-end beasts of burden that companies can deploy into core enterprise tasks for a total cost of ownership (TCO) that's a fraction (albeit a healthy one) of a mainframe's.

Trend #1: Cost Cutting with Itanium 2
On the hardware side of the high-end, the "cheaper, faster" mantra continues. That trend has been aided, for some hardware makers at least, by Intel Corp. Intel's Itanium 2, introduced this summer as the second member of its 64-bit family (co-designed with HP), is intended to merge the computing power of RISC processors with the pricing of mass-produced CISC processors.

One highly desirable trait of the Itanium 2 is its 64-bit address space and, more specifically, the ability to hold and process huge data sets in up to 17TB of main memory rather than extensively (and slowly) swap to disk or process only samples of the data.
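To put that address-space jump in perspective, a quick back-of-the-envelope comparison helps; the figures below are theoretical addressing limits, not any shipping system's configuration:

```python
# Rough comparison of 32-bit vs. 64-bit addressable memory.
# These are theoretical address-space limits, not vendor configurations.
GiB = 1024 ** 3

addressable_32bit = 2 ** 32   # the old ceiling: 4 GiB
addressable_64bit = 2 ** 64   # in theory, 16 exbibytes

print(addressable_32bit // GiB)   # 4 (GiB)
print(addressable_64bit // GiB)   # 17179869184 (GiB), i.e. 16 exbibytes
```

Even a 17TB main-memory configuration uses only a sliver of a 64-bit address space, which is why swapping huge data sets to disk becomes optional rather than mandatory.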

The other attraction is lower cost. How low? The processors themselves range between $1,400 and $4,200. Depending on the number of processors, typically 16, 32, or 64 in the upper-scale computers, a fully loaded system ranges from $350,000 to over $1 million.

But the performance, when used as SMP, MPP or clustered systems, may rival systems costing several times more. By using industry-adopted operating systems and widely available software, these systems offer a total cost of ownership that can beat mainframes.

Dozens of companies, from IBM to Unisys, are promising Itanium 2-based systems. HP is predicted to be the first in the field with an Itanium 2-based Superdome server this month.

Blades and Clusters:
Ready to Shave Big Computing Costs?

When it comes to server computing, more companies are "doing a Gillette" by installing the high-density computing devices known as blades.

Initially adopted under the banner of server consolidation, blades have gained floor-space in some telecom and corporate centers. But to go mainstream, both application and management software need more time in the oven.

Feeling the Edge
A blade is simply a server reduced to its bare essentials. The blade itself is the motherboard holding a processor, such as an Intel Xeon or Itanium 2, an IBM Power4, or a Sun Ultrasparc processor, plus RAM, a high-speed network interface, and associated electronics. The board goes into a rack-mount enclosure that holds some disk storage and networking connections. The rack holds the common resources for the blades, such as cabling, power supplies and cooling fans.

Stripping down to the effective minimum saves extra electronics, power supplies, fans, heat, and valuable floor space. Using industry-standard processors makes the hardware and software less expensive. The right management software reduces operational labor costs. When the advantages are combined, blades can deliver dramatic cost saving over a raft of individual server boxes.

The newest blades use dual processors to boost processing power and hot-pluggable architectures that allow blades to be swapped at will. The rack offers a high-speed backplane, such as Fibre Channel, and huge, sharable high-speed disk storage extending into the terabyte range.

That newer generation of blade servers will come from traditional server vendors such as HP (via Compaq), Sun, NEC, newcomer RLX Technologies—and even IBM later this fall. According to Mark Melenovsky, research manager at IDC, "every major vendor will have one [blade server] by the end of 2002."

The Benefits of Blades
What's the advantage of a blade over individual servers? Timothy J. Dougherty, program director of IBM's blade server marketing, gives an example. "A CIO I recently called on said it takes about three-plus days to install a new server into the Web farm. Looking at our bladed paradigm, he said adding a new blade takes about 10 minutes."

Another plus is the emerging software that automates the re-provisioning of a blade without an administrator physically touching the server. By downloading a complete software image to the blade almost at will, IT managers can fulfill computing-on-demand, in which a single blade can quickly (and cheaply) be switched between functioning as a Web server, a DNS server, a file server, a firewall, a load-balancer, or any other application. In this role, blades offer significant cost savings and longer life spans than dedicated single-task computer appliances.
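The re-provisioning pattern can be sketched in a few lines. Everything below is hypothetical—the image paths and the `deploy` helper are illustrative, not any vendor's actual API—but it shows the core idea: switching a blade's role means pushing it a complete software image.

```python
# Hypothetical sketch of image-based blade re-provisioning.
# Image paths and deploy() are illustrative, not a real management API.

ROLE_IMAGES = {
    "web":           "images/web-server.img",
    "dns":           "images/dns-server.img",
    "firewall":      "images/firewall.img",
    "load-balancer": "images/load-balancer.img",
}

def deploy(blade_id: str, role: str) -> str:
    """Select the complete software image for a role and assign it to a blade."""
    image = ROLE_IMAGES[role]   # full OS plus application stack
    # A real system would netboot or flash the blade with this image;
    # here we simply report what would be provisioned.
    return f"blade {blade_id} re-imaged as {role} from {image}"

print(deploy("rack1-slot7", "firewall"))
```

The point of the design is that no administrator touches the hardware: the same physical blade serves as a Web server in the morning and a firewall in the afternoon.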

IDC's Melenovsky sees current pilot projects pushing blades both overseas and domestically, where these 16- to 216-processor systems and Linux are replacing some heavy Unix iron. Melenovsky also points to newcomer Egenera, a company providing ultra-dense blade servers to the financial market.

Up to the Task?
But can blades take more of the mainframe load? Maybe.

To make the leap to the mainstream, Melenovsky asserts that distributed processing is still on the way and that enterprises won't commit until the command-and-control management software, middleware like J2EE, and major database software for blades mature. "When applications such as Oracle can be hosted on multiple nodes [on a blade server] rather than just one big-iron node (the mainframe)—[that] will be a signal. The market is progressing, but general market adoption won't be until next year (2003)."

Risk-averse IT departments may want to sit on the sidelines until then. Although Melenovsky sees lots of vendors backed by significant venture capital attacking all segments of this market, he foresees a storm of consolidation through mergers by the big players or smaller players teaming up with deep-pocketed partners. The carnage, however, should season the field into a strong group of capable hardware and software players.

Both 2003 and 2004 should bring denser racks and blades that use four-way Intel Itanium 2 or Sun Ultrasparc 4 processors. Meanwhile, enterprises will see declining hardware prices and better management capabilities. If the plan comes together, IT may find blades give the department and the enterprise a real edge.


The Key is Software
The real question for IT managers isn't about hardware but software: When will the application-software soup be ready?

For enterprise pilot and evaluation programs, the time is now, according to Barbara Grimes, public relations manager for Intel's Itanium line. But for IT managers looking for mainstream, ready-to-deploy solutions for large enterprises, the time is more like "soon."

The first round of Itaniums found major deployment in the high-performance-computing segment—to companies designing airplane wings or simulating car crashes. For the Itanium 2, Grimes notes, deployments to mainstream enterprises will depend on the availability of software from the Big Six of application vendors: IBM DB2, Microsoft SQL Server, Oracle 9i, SAP, SAS and i2 (supply-chain management).

The news for SAP deployments, for example, is good. At press time, SAP had Itanium-ready beta and early-developer versions of R/3 in the field, and a production release was scheduled for summer. SAS's base platform, version 9.1, became available for the Itanium 2 in July; vertical solutions will roll out later in the year.

IBM's new version 8 of DB2 supporting the Itanium entered its public beta this summer and will have its production release by the end of 2002. Oracle 9i release 2 now supports Itanium 2; the 9i Application Server release 2 is a few quarters away. About 100 additional applications, mostly developed in-house by enterprises, have been ported and are in production.

The remainder of the Big Six won't rush out the door soon. As Intel's Grimes admits, "Enterprise customers realize that software in this class goes through normal production cycles. It's during this cycle when publishers will add Itanium support." In other words, the majors will bring out Itanium-ready versions as their wares are ready. Also following this trend is Computer Associates, which will roll out Itanium 2-ready apps over the next few quarters.

Intel is hardly standing still, having newer Itanium versions (code-named Madison and Deerfield) slated for a 2003 introduction. Monticello, a socket-compatible processor, should follow in 2004, although historically, socket-compatible upgrades have offered dubious value.

However, IT managers shouldn't let the Itanium blind them to other choices. Both Sun and IBM will continue improvements in their Ultrasparc and Power processors. Microsoft's .NET software, along with HP's HP-UX, will be a major force against the Itanium 2.

Clustering, already used by mainframes and a variety of servers, may produce even more stunning results. Oracle's Larry Ellison notes how $364,000 worth of Dell servers (eight servers running 32 Xeon processors each) can have the same transactional power as a pair of IBM z900 mainframes priced over $14 million.
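Taking Ellison's figures at face value, the implied price gap is easy to quantify. The sketch below uses only the numbers quoted above and compares hardware list prices alone, ignoring software, support, and migration costs:

```python
# Price ratio implied by the quoted figures (hardware prices only;
# software, support, and migration costs are not included).
cluster_cost = 364_000        # the Dell Xeon cluster Ellison cites
mainframe_cost = 14_000_000   # pair of IBM z900 mainframes (quoted as "over")

ratio = mainframe_cost / cluster_cost
print(f"{ratio:.0f}x")        # roughly 38x the cluster's price
```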

Regardless of dropping hardware costs and impressive performance numbers, enterprises will probably continue to balk at the Itanium processors until large-scale applications are available.

Trend #2: Open Source Enhances Server Popularity
Unix (whether AIX, HP-UX, Solaris or another flavor) may be the OS du jour in the high-end market, but for a sure bet on the product with the biggest promise for future impact, put your money on Linux. [See "Big Business Embraces Linux"—Ed.]

Linux's evolution is covering key enterprise points: Improved reliability, better server manageability and built-in code for Beowulf multinode clustering. Vendor partnering, such as the HP-Red Hat combination for Linux Advanced Server and IBM-SuSE for versions for the S/390 and zSeries, brings better support. The result is turning Linux into a safe harbor for an enterprise's transportable code that runs from the smallest server to the mainframe—with a major sweet spot being the high-end server.

Stephen Josselyn, a research director with IDC, sees Linux gaining traction as industry and corporate development dollars have moved from a hardware emphasis to a software one. More important, Linux represents an important open alternative. "Customers want choice. Linux gives another option other than just Unix and Windows," Josselyn observes.

In fact, Linux is figuring prominently in replacement deployments. Steven Wanless, a long-time Unix observer and now a product manager at Dell Computer, notes, "Linux is mopping up market share from the also-ran Unixes, such as Data General and SGI, as customers are looking to turn systems over and move onto something else." He also sees Linux forcing Solaris, AIX and HP-UX further upmarket from the low- and medium-end servers to a home on the (more expensive) high-end.

IT is finding Linux machines useful for less processor-intensive tasks like business analysis of data already on the mainframe. However, that trend applies to lower-cost clusters as well as high-end servers.

IBM knows that Linux also functions nicely as a front-end partition to data residing on big iron. According to the company, 20 percent of the MIPS on IBM zSeries mainframes shipped in the second quarter of 2002 are running Linux workloads, a 50 percent growth over the first quarter of the year. However, Wanless echoes a variety of industry sources in admitting that Linux's enterprise seasoning is new.

IDC's Josselyn sees humor in the Linux evolution. "In the 1980s, everyone was complaining that the mainframe was proprietary, hence the desire for Unix. Now, the same group is thinking that Unix is proprietary and people want to move to Linux." Josselyn cautions IT managers looking hard at Linux to remember that "both the mainframe and Unix are still vibrant markets." In other words, don't give up your day operating system yet.

Companies Mentioned

Egenera Inc.
Marlboro, Mass.

Fujitsu Siemens Computers (Holding) BV.
Maarssen, Netherlands

Hewlett-Packard Co.
Cupertino, Calif.

IBM Corp.
Armonk, N.Y.

Intel Corp.
Santa Clara, Calif.

NEC Solutions (America) Inc.
Sacramento, Calif.

Oracle Corp.
Redwood Shores, Calif.

RLX Technologies Inc.
Houston, Texas

SAP America Inc.
Newtown Square, Pa.
(U.S. region office)

Silicon Graphics Inc.
Mountain View, Calif.

Sun Microsystems Inc.
Santa Clara, Calif.

Unisys Corp.
Blue Bell, Pa.

Trend #3: Patience Pays Off
Forget "shrinking budgets" and start thinking that "pressure to control spending is good." Although almost every IT manager faces a stagnant budget, waiting to spend those IT dollars on new deployments may have a significant reward.

IDC predicts flat revenue growth for the high-end server market through 2006. However, IDC also sees manufacturers continually turning the 30 percent per-year-improvement crank throughout this period. These high-end servers, as well as other systems, will either get less expensive for the same capability, or the same number of dollars will buy 30 percent additional capability each year.

That means enterprises get more bang for their buck, either in lower prices or more MIPS, when they finally make their deployment commitments.
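The arithmetic behind waiting is simple compounding. A minimal sketch, assuming IDC's 30 percent annual improvement rate holds steady over the deferral period:

```python
# Capability gained per dollar by deferring a purchase, assuming a
# steady 30% per-year price/performance improvement (IDC's figure).
annual_gain = 0.30

for years in (1, 2, 3):
    multiplier = (1 + annual_gain) ** years
    print(f"wait {years} year(s): {multiplier:.2f}x capability per dollar")
```

By this reckoning, a two-year deferral buys roughly 1.69 times the capability for the same budget, and three years more than doubles it.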

Servers Can Bewitch
IDC's Josselyn does issue warnings about the evolution of high-end servers. Although high-end servers look attractive for workloads that need high availability and high scalability, the world has been caught up before in trends that didn't pan out, such as the client-server computing paradigm of the 90s.

Josselyn believes IT managers have moved back toward the center and are paying more attention to the task than to the paradigm. "Managers are taking a closer look at the workload to determine what hardware/software combination works best." That look includes not just hardware cost but development, operational and maintenance costs as well.

Additionally, moving Unix- or Linux-based boxes into the datacenter calls for a different skill set than mainframes. Although Unix and Linux have the required features of a mature OS, their configuration and operation differ from the mainframe's, as does the support of the application software running under the OS. Josselyn says that finding workers with the right skills is the top challenge for enterprises moving to more high-end servers. However, today's crowded employment market is more conducive to finding skilled personnel.

The Bottom Line
With critical middleware and mainstream applications gradually moving onto distributed platforms, the capability to run applications on a variety of machines, and the ability to fully scale applications vertically or horizontally, high-end servers are taking their place in the datacenter—either supplementing the mainframe or offloading tasks completely.