NT in the Data Center: What Does Microsoft Need to Do?

GartnerGroup looks at the issues that Microsoft faces as it continues its march into the data center and increases its focus on supporting high-end applications.

With Windows NT, Microsoft Corp. is in the unusual position of trying to satisfy the requirements of mainstream desktop and notebook users and of high-end server applications with a single operating system. In furthering NT's move into the data center and its use as a high-end application platform, Microsoft faces several challenges. It must identify the opportunities that will drive more NT into the data center, rationally assess NT's weaknesses against data center requirements and then set priorities for closing the gaps in its NT technologies and strategies.

Defining a Data Center Platform
When most enterprises think of data center platforms, they think of IBM System/390 (S/390) mainframes and plug-compatible competitors. In reality, that type of pure mainframe-oriented data center is unusual today. Data centers at most large and midsize enterprises have a mix of mainframe, RISC/Unix and other midrange servers, as well as a host of other servers, including many running NT or NetWare. But the strengths of S/390 that endear it to the hearts of data center managers are not strengths of NT, and most Unix variants suffer from similar weaknesses, although usually to a lesser degree.

An optimal data center platform provides strong, flexible capabilities in the following areas:

  • High-end performance
  • Very high availability
  • Workload management
  • Job scheduling
  • Storage management
  • Systems management
  • Upgradability
  • Heterogeneous interoperability
  • High-quality service and support
  • Security
  • Disaster recoverability

With regard to most of these attributes, NT is weaker than S/390 and than many, if not most, Unix variants. This is due in some cases to NT's relative immaturity and, in other cases, to the immaturity of the third-party tool market around NT. Yet NT continues to penetrate the data center, driven by three major factors:

  • NT servers adopted to satisfy departmental or area-specific application requirements are handed off to the data center to operate and maintain, and to contain management costs
  • NT’s low implementation and hardware maintenance costs relative to its more mature competition -- see TPC-C data at www.tpc.org
  • The strong, and still increasing, independent software vendor (ISV) focus on the platform

Drivers Forcing Microsoft to Address Data Center Requirements
While the industry's focus on NT's scalability characteristics during the past two years might indicate that large-scale applications are the primary driver for large NT server sales, server consolidation and application consolidation will be the key drivers for high-end NT server sales between 1999 and 2003 -- a 0.8 probability. GartnerGroup expects that a focus on lowering the total cost of ownership (TCO) of NT servers via consolidation will be a key trend in the early part of the next decade. As a result, a majority of NT users in large enterprises will look for ways to consolidate their many NT servers onto fewer boxes. GartnerGroup has spoken to enterprises that have collected more than 100 NT servers in a single data center and others with more than 1,000 corporatewide, with no end to the growth in sight. While larger symmetric multiprocessing (SMP) configurations and faster processors and I/O will help users achieve this goal, optimal solutions can come only from Microsoft, in the form of workload management improvements and logical-partitioning capabilities.

Today, most packaged-application vendors demand -- as a condition of support -- or strongly suggest that users deploy a dedicated server or servers for their application. This puts users in an awkward position as they attempt to consolidate. We expect few application vendors to change this demand/recommendation until Microsoft greatly improves NT's workload management capabilities -- by 2003, a 0.8 probability. Most consolidation activity, therefore, will be application consolidation -- for example, creating very large file-and-print servers and mail servers.

The Inhibitors
Client feedback indicates that the primary technical inhibitor to NT deployments for critical applications in the data center is unquestionably the availability/reliability of the platform. One year ago, calls to GartnerGroup relating to NT availability and scalability ran at a ratio of about four scalability calls to every availability call. The ratio is now the reverse. Clearly, users do not think NT is robust enough for their most critical applications. Newly deployed enterprise resource planning (ERP) systems and other packaged applications need 24-hour-a-day, seven-day-a-week (24x7) uptime, or 18-hour-a-day, six-day-a-week (18x6) uptime with very close to zero downtime during those 18 hours. Emerging e-commerce applications have similar, and often stricter, requirements.

NT's high-end scalability continues to be a concern, but GartnerGroup believes about 90 percent of online transaction processing (OLTP) application instances and 60 percent of decision support system (DSS) application instances can be satisfied -- in terms of scalability -- by NT today. Often, the barrier to an NT-based application deployment is not NT itself, but rather the tools required to manage a large application. For instance, in many enterprises mail/messaging infrastructures are built on distributed architectures -- multiple distributed mail servers. While that is not a bad thing, many users would like to create larger, centralized mail servers without putting the mail/messaging applications back on S/390. Typically, the limiting factor to doing this on NT is not application performance; it is restore performance.

Enterprises can put 1,000 or more users on a single Exchange server, but it could then take more than eight hours to restore the large mail database in the event of a major server crash or disaster. Also, system-level recovery remains a weak point for NT. Because of NT's Registry, which stores all hardware and software configuration information, restoring a complete NT system image from backup tape onto new hardware requires that the new hardware be identical to the failed system. This does not lend itself well to "quick ship" or "hot site" disaster recovery solutions typically provided by disaster recovery or business resumption vendors.
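
To make the hardware dependence concrete: the machine-specific device and driver configuration lives under Registry keys such as HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Enum and HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services, so a restored copy of the Registry describes the failed machine's controllers and disks rather than the replacement's. The short sketch below is a hypothetical diagnostic, not a Microsoft tool; it uses the standard Win32 registry API to enumerate that device tree and show how much machine-specific state a full system restore carries along.

/* Hypothetical diagnostic sketch: list the top-level branches of the
   machine-specific device tree that a full Registry restore brings along.
   Uses the standard Win32 registry API; reading this key may require
   administrative privileges. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HKEY hKey;
    DWORD i;
    char name[256];
    DWORD cchName;
    FILETIME ft;
    LONG rc;

    rc = RegOpenKeyEx(HKEY_LOCAL_MACHINE,
                      "SYSTEM\\CurrentControlSet\\Enum",
                      0, KEY_READ, &hKey);
    if (rc != ERROR_SUCCESS) {
        fprintf(stderr, "RegOpenKeyEx failed: %ld\n", rc);
        return 1;
    }

    /* Each subkey (for example, Root or PCI) describes devices found on
       the original machine; on different replacement hardware these
       entries no longer match the devices actually present. */
    for (i = 0; ; i++) {
        cchName = sizeof(name);
        rc = RegEnumKeyEx(hKey, i, name, &cchName, NULL, NULL, NULL, &ft);
        if (rc != ERROR_SUCCESS)
            break;   /* ERROR_NO_MORE_ITEMS ends the enumeration */
        printf("%s\n", name);
    }

    RegCloseKey(hKey);
    return 0;
}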

NT Server can probably satisfy the scalability requirements of more than 95 percent of intranets, but a somewhat smaller share from an availability perspective. For externally accessed Web sites, availability is a more critical factor, with back-end database restart time being the major limitation on full NT deployments. Also, because Web commerce is still in its infancy, NT may be scalable enough to meet an application's current requirements, yet those requirements could outpace NT's scalability gains during the next few years.

To satisfy the call for server consolidation -- users want to run multiple applications successfully on a single server -- Microsoft must address NT's lack of workload management capabilities. Microsoft plans to provide process-to-processor affinity functionality, called Job Objects, but this feature will not emerge until Windows 2000. Job Objects will be an application programming interface (API); user control is not planned until a later release.
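
The following is a minimal sketch -- assuming the Win32-style Job Objects interface documented for Windows 2000 (CreateJobObject, SetInformationJobObject, AssignProcessToJobObject) -- of how an administrator-written launcher could confine one application to two processors on a consolidated server. The executable name, job name and processor mask are illustrative, and the shipping interface may differ.

/* Sketch: restrict an application and its child processes to
   processors 0 and 1 using the planned Windows 2000 Job Objects API. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    STARTUPINFO si;
    PROCESS_INFORMATION pi;
    JOBOBJECT_BASIC_LIMIT_INFORMATION limits;
    char cmd[] = "myapp.exe";   /* illustrative application name */
    HANDLE hJob = CreateJobObject(NULL, "AppServerJob");

    if (hJob == NULL) {
        fprintf(stderr, "CreateJobObject failed: %lu\n", GetLastError());
        return 1;
    }

    /* Limit every process in the job to processors 0 and 1 (mask 0x3). */
    ZeroMemory(&limits, sizeof(limits));
    limits.LimitFlags = JOB_OBJECT_LIMIT_AFFINITY;
    limits.Affinity = 0x3;
    if (!SetInformationJobObject(hJob, JobObjectBasicLimitInformation,
                                 &limits, sizeof(limits))) {
        fprintf(stderr, "SetInformationJobObject failed: %lu\n", GetLastError());
        return 1;
    }

    /* Start the application suspended, assign it to the job so the limit
       applies before it runs, then let it go. */
    ZeroMemory(&si, sizeof(si));
    si.cb = sizeof(si);
    if (CreateProcess(NULL, cmd, NULL, NULL, FALSE, CREATE_SUSPENDED,
                      NULL, NULL, &si, &pi)) {
        AssignProcessToJobObject(hJob, pi.hProcess);
        ResumeThread(pi.hThread);
        CloseHandle(pi.hThread);
        CloseHandle(pi.hProcess);
    }

    CloseHandle(hJob);
    return 0;
}

On NT 4.0, which lacks Job Objects, the closest native control is per-process affinity set manually through Task Manager or the SetProcessAffinityMask call, which offers no job-level grouping or accounting.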

Several vendors, including Data General, Sequent Computer Systems and Unisys Corp., have announced or delivered large, expensive systems that allow users to apportion a single server into multiple NT server partitions, each running its own copy of the operating system. Thus far, these vendors support, and have promised support for, only static partitions. Without a comprehensive set of multiserver management tools or dynamic NT partitioning capabilities, this approach does not differ significantly from a rack-mounted, clustered server solution. Dynamic partitioning will not be possible without direct support in the NT operating system, so it is up to Microsoft to provide this capability. We do not expect to see this feature in NT Server before 2003 -- a 0.6 probability on delivery in 2003.

Many "Wintel" hardware vendors have told users that they are working on improvements to the NT code. While this is true -- Microsoft is glad to have the hardware vendors play with NT code and offer suggestions back to Microsoft on potential improvements -- Microsoft will not offer specialized versions of NT for particular vendors or for specialized architectures, such as cache coherent nonuniform memory access or ccNUMA, through 2003 -- a 0.8 probability.

Microsoft Selling into the Data Center
Microsoft's business model is not optimized for the high-effort, long-cycle, high-end salesmanship required to sell into the data center. It does, however, have the unique ability to leverage its hardware partners, many of which are reliant on the success of NT. Microsoft could bolster the efforts of vendors, such as Sequent, Unisys and Data General, that focus on the high end of the market. These vendors face the danger of becoming less relevant and less visible as the Wintel server market consolidates around Compaq Computer, Hewlett-Packard Co., IBM Corp. and Dell Computer. Microsoft could aid these vendors' efforts, and its own data center efforts, by establishing a special high-end support center at Microsoft. Purchases of high-end systems from these vendors could entitle buyers to access to this support center.

Alternatively, access to this support center could be a feature of the forthcoming Windows 2000 Datacenter Server version. We expect data-center-oriented features, such as workload management and job scheduling, to appear first, and probably exclusively, in the Datacenter version of Windows 2000, although the first release of this version will add little beyond the Advanced Server edition other than support for more than four processors.

For Microsoft, the question is not, "Do we bother going after the top 5 percent of applications?" but rather, "How much of the bottom 95 percent do we lose -- or never get -- if we do not satisfy the needs of the top 5 percent?" Users prefer to minimize the number of platforms they support. Many users want to choose NT because their application vendors are telling them to, because NT implementations are generally less expensive than Unix-based alternatives and because they like the NT development environment. But they continue to evaluate other platforms because of NT's lack of high-end-oriented features. Given the coming emphasis on server consolidation in the midrange of the server market, Microsoft also must consider a post-2000 scenario in which a more consolidated Unix market -- with improved ISV enthusiasm, lower-cost IA-64 systems and better high-end features -- becomes an even stronger competitor to NT in the data center.

Bottom Line
In terms of overall maturity and data center readiness, we believe NT is about four years behind the major Unix variants. We do not expect NT to "catch up" to Unix during the five-year planning horizon, but we do expect significant improvements in most areas. In the meantime, to ease "NT in the data center" issues, GartnerGroup continues to recommend that enterprises consolidate NT servers by function or service. For example, enterprises can create the largest file-and-print and mail servers possible -- weighing the risk of a single point of failure and restore-time limitations -- rather than distributing servers into each department or workgroup. GartnerGroup also recommends that enterprises keep a close eye on third-party tools that fill some of NT's data center functionality and manageability gaps.
