Windows NT Scalability: The shortcomings of Windows NT 4.0 and the promise of Windows 2000

Assessing scalability in the marketing-crazed Windows NT world is difficult. To complicate matters, Microsoft simultaneously prescribes Windows NT 4.0 as the solution for enterprise needs of every ilk and champions the appearance of Windows 2000 as an operating system capable of competing with Unix systems and mainframes.

Exacerbating the situation are conflicting metrics for measuring scalability in the first place. What exactly constitutes scalability for Windows NT 4.0, and will Windows 2000 be measured by the same metrics? These are difficult questions to answer.

This is not a new problem. Through the years different performance metrics were developed to help IT organizations determine the performance or scalability of an operating system or hardware configuration in a given environment. These metrics were often used to compare different machines in a given product line or to gauge disparate machines from other manufacturers.

A frequently cited performance metric is the TPC-C benchmark from the Transaction Processing Performance Council (TPC). Vendors and analysts concur that the TPC-C benchmark can provide a valuable indication of system performance, but many caution against placing undue reliance or emphasis on any one benchmark.

"Most people usually talk about [scalability] by referring to TPC-C benchmarks," says David Flawn, vice president of marketing with Unix and NT server vendor Data General Corp. "But the benchmark that customers should really pay attention to is how well does a system run their applications, which is often quite a different metric altogether from the TPC-C benchmarks."

"The TPC-C measurement is really only useful for rough hardware measurements and in no way should it be taken as representative of real-world transactions," cautions Neil McDonald, a vice president and research director with the market research firm GartnerGroup.
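To make the caveats above concrete, here is a minimal sketch of how a TPC-C result is typically quoted: raw throughput in tpmC (new-order transactions per minute) and price/performance in dollars per tpmC. The figures below are invented for illustration and do not correspond to any published result.

```python
# Hypothetical illustration of the two headline numbers in a TPC-C result.
# None of these values are real benchmark data.

def price_per_tpmc(total_system_cost: float, tpmc: float) -> float:
    """Price/performance: total system cost divided by tpmC throughput."""
    return total_system_cost / tpmc

# An invented four-way NT server result: $1.2M system, 20,000 tpmC
print(f"${price_per_tpmc(1_200_000, 20_000):.2f} per tpmC")  # $60.00 per tpmC
```

The single dollars-per-tpmC figure is exactly the kind of "apples-to-apples" number the analysts warn about: it says nothing about how the system behaves under a customer's own transaction mix.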

Determining Scalability Requirements

Larry Gray, product manager for the performance of NetServer systems with Hewlett-Packard Co., says managers can best determine the scalability of NT systems by evaluating performance against known or existing real world requirements. But managers also need to factor in anticipated or potential changes in overall capacity requirements. A scalable system, in Gray’s account, is one that possesses enough power to handle existing tasks, as well as a sufficient reserve to grow with an enterprise or department as future needs arise.

"What [customers] should be asking themselves is ‘Do the systems that I’m looking at have the capacity to grow with my business needs? Do they have the capability to support my applications environment as the different aspects of it grow,’" Gray explains.

David Osborne, president of Micro Modeling Associates, a firm that specializes in building Windows NT-based financial solutions and providing design, development and integration services, concurs. He says to expect the scalability of a system to fluctuate from environment to environment.

"The key to determining scalability is modeling the real world in an environment that makes sense to a client," Osborne affirms. "TPC and other benchmarks are nice, but they’re apples-to-apples type things that don’t really equate to real world situations."
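The capacity-headroom question Gray and Osborne raise can be sketched numerically: given today's load and an assumed growth rate, how long before demand outgrows the box? The function and all of its inputs below are hypothetical, for illustration only.

```python
# Hedged sketch of a capacity-headroom estimate. The workload figures and
# growth rate are invented assumptions, not measurements of any real system.

def quarters_until_capacity(current_load: float, capacity: float,
                            quarterly_growth: float) -> int:
    """Number of quarters before load exceeds capacity,
    assuming compound growth each quarter."""
    quarters = 0
    load = current_load
    while load <= capacity:
        load *= 1.0 + quarterly_growth
        quarters += 1
    return quarters

# e.g. 600 transactions/sec today, system tops out at 1,000,
# business grows 15% per quarter
print(quarters_until_capacity(600, 1000, 0.15))  # prints 4
```

Even a back-of-the-envelope model like this answers Gray's question ("does the system have the capacity to grow with my business?") more directly than a vendor's benchmark score does.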

Osborne speaks from experience. Micro Modeling was recently awarded a high-profile contract from the Nasdaq Stock Market Inc. to implement a Windows NT-based securities monitoring solution that will replace an existing and highly scalable proprietary solution from Compaq Computer Corp. subsidiary Tandem Computers Inc. Osborne says Micro Modeling didn't win the Nasdaq contract by trumpeting impressive but essentially meaningless NT-based TPC-C scores. Instead, Micro Modeling demonstrated a model of a Windows NT securities monitoring environment that was capable of meeting real-world performance stipulations as determined by Nasdaq.

The Clustering Question

Most industry observers agree that if Microsoft is to continue to successfully push Windows NT into high-profile accounts, the software giant must come to grips with the problem of delivering a scalable, multinode clustering solution on the Windows NT platform. Established vendors, such as Tandem and Stratus Computer Inc., have staked their reputations on developing powerful, highly available transaction processing systems that are reliable and scalable precisely because they are clustered.

In September 1997 Microsoft introduced Windows NT-based clustering with version 1.0 of Microsoft Cluster Server (MSCS), which provides simple failover between two nodes. But the company has backed away from its previously aggressive clustering product time line. MSCS 2.0 was scheduled to ship with Windows NT 5.0, now called Windows 2000, in the 1998 timeframe, and to feature support for four-node clusters.

"We looked at Cluster Server for Nasdaq and for other applications. Although it’s a viable platform for departmental applications, especially those based around Exchange and SQL Server, for mission critical environments it’s simply too slow and not scalable," Osborne concludes.

Analysts expect that if a revamped MSCS release appears with Windows 2000, it will likely be a 1.1 release that doesn’t come close to supporting Microsoft’s announced goal of greater-than-two-node clustering.

GartnerGroup’s McDonald is doubtful that a serious multinode clustering solution will emerge from Redmond within the next several years. "Microsoft sorely underestimated the problem of achieving scalable, multinode clustering on NT," he contends. "Cluster Server is way behind, and the original vision that was painted as scalable multinode clustering is not going to happen any time within the next three years, and if it does happen it will not come from the operating system folks but from the SQL team."

Consequently, McDonald points to a shift in Microsoft marketing away from a position that champions horizontal scalability through clustering toward an approach that hypes the advent of scalable, SMP-based Windows NT solutions.

Microsoft’s SMP Gambit

With the debut of Windows NT Server, Enterprise Edition, in September 1997, Microsoft provided for the first time a Windows NT release that was tailored to some extent for greater than four-way SMP configurations. In late 1998, Microsoft dropped a bombshell. The company announced that not only was it re-christening its bread-and-butter Windows NT product line "Windows 2000," but also that Windows 2000 releases would be segmented along the lines of Standard, Advanced Server and Datacenter Server iterations.

"What you’re seeing is Microsoft’s marketing machine exercising some spin control, saying that you really don’t need clustering, that multiprocessor boxes are the key to scalability," McDonald concludes.

Windows NT Server, Enterprise Edition, ships with a license for four-way SMP right out of the box and includes a special license option for OEMs to provide support for eight-way and greater SMP configurations.

"What type of scalability, percentage-wise, do you get from additional processors?" McDonald asks. "Today, with a well-designed, well-written application on NT, you’re going to get reasonable scalability up to the level of four processors. Additional processors may give you some additional power, but it’s certainly not linear."
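McDonald's point about non-linear returns is the classic behavior described by Amdahl's law: any serial fraction of a workload caps the benefit of adding processors. The sketch below uses a hypothetical 10 percent serial fraction; it is an illustration of the general principle, not a measured property of Windows NT 4.0.

```python
# Amdahl's law: theoretical speedup on n processors when a fixed fraction
# of the work cannot be parallelized. The 10% serial fraction below is a
# hypothetical assumption chosen only to illustrate diminishing returns.

def amdahl_speedup(n_processors: int, serial_fraction: float) -> float:
    """Speedup = 1 / (s + (1 - s) / n), where s is the serial fraction."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_processors)

for n in (1, 2, 4, 8, 16):
    print(f"{n:2d} processors -> {amdahl_speedup(n, 0.10):.2f}x speedup")
```

With a 10 percent serial fraction, eight processors deliver roughly 4.7x the single-processor throughput, and sixteen only 6.4x, which matches the analysts' expectation that scaling past four-way is real but far from linear.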

McDonald and other analysts anticipate that Microsoft’s work to rearchitect and retune the Windows NT kernel for the Windows 2000 release will be a boon for SMP performance. McDonald expects Windows 2000 Advanced Server to be the first Windows NT/2000 operating system capable of demonstrating acceptable levels of near-linear scalability across more than four processors in an SMP configuration.

"It’s mostly a memory-management issue that seems to be the bottleneck with Windows NT 4.0," Flawn says. "The expectation is that [Windows 2000] will improve upon this a little bit and that we’ll probably see very reasonable scaling with eight-way systems."

Windows 2000 Advanced Server will provide support for up to eight processors in an SMP configuration. Microsoft will position Windows 2000 Datacenter Server as supporting 16 or more processors.

NT on SMP: Four-way and Beyond

Eight-way architectures based on Xeon are only now edging to market, with the first shipments coming from HitachiPC. Commodity-level eight-way architectures built on technology from Intel Corp.’s subsidiary Corollary Inc., originally slated for availability late last fall, are expected in the first half of this year. Other companies, such as Unisys Corp., have plans under way for systems that will scale well beyond 16 processors.

With lingering scalability concerns surrounding NT 4.0’s SMP performance, would users be better off waiting for Windows 2000 -- even in the face of standard, high-volume eight-way Xeon servers? Or should users bite the bullet and roll out NT 4.0 on eight-way Xeon hardware as it becomes available?

According to Bryan Cox, product manager for future NetServer systems at HP, the question of whether or not to leverage eight-way Xeon-based architectures on Windows NT 4.0 boils down to the common-sense issue of need. "If you’re hitting the wall with your ability to add users or applications or transactions, then an eight-way box is a great buy," Cox contends. "It all has to do with need: If you can improve your service levels and efficiency, then going to eight processors makes obvious sense."

But for many shops the issue is not about what to buy next, but how to consolidate numerous servers running on older systems that use Pentium and Pentium Pro processors, and ease management and maintenance concerns. Server consolidation is a hot trend in many IT shops these days. Unfortunately, it remains an unrealized dream in most Windows NT environments.

Dan Kusnetzky, director of worldwide operating environments at International Data Corp., says server consolidation using Windows NT highlights Windows NT’s scalability limitations.

According to Kusnetzky, Microsoft’s reliance upon a "functional server" approach -- where users are encouraged to deploy a single function, service or application per NT server -- will continue to be a source of problems going forward, regardless of the number of processors under the hood. "If an organization does not wish to adopt Microsoft's functional server approach as the only way to achieve a highly scalable environment, it would be best advised to use Unix, OpenVMS, OS/400, or even OS/390," Kusnetzky maintains.

But no one knows yet whether Windows 2000 will address this nagging shortcoming of Windows NT. Says Kusnetzky, "At this point it’s just not clear that Windows 2000 will make NT scalable enough for an organization's largest or most complex applications."
