NT or UNIX? What Is Right for Me?
The strengths and weaknesses of each operating system are explored, including a daring prediction of the position of these operating systems in the coming years.
Ever since its launch several years ago, Windows NT, Microsoft's New Technology operating system, has been held up against the venerable, tried-and-true UNIX. Both have their rabid proponents, whose sparring suggests their operating system is the only solution. This article explores the questions you might consider when choosing the best environment for your needs. Beginning with a review of the technical origins of the operating systems, we will explore the strengths and weaknesses of each operating system and conclude with a prediction of their positions in the coming years.
What is the "right" operating system for you? It depends on what you are trying to do! If you want a relatively inexpensive platform to run shrink-wrap software or the latest office-automation package, then NT is the obvious choice. If you need to support thousands of transactions feeding into terabytes of data - all in a mission-critical environment - UNIX is the obvious choice over NT. If your situation fits into one of these simple patterns, there is no need to read further. Otherwise, let us learn a bit about these systems before making an informed decision.
The Origin of NT
Windows NT is a relatively young operating system built upon the lessons learned from earlier operating systems, such as VMS and Mach. Led by David Cutler, whose resume included Digital Equipment Corporation's VMS and RSX-11M, the NT design team was challenged to build a next-generation operating system conforming to a range of stringent constraints, such as software and hardware backward compatibility. In addition to supporting earlier Intel chips, NT had to provide Win32 (NT's primary programming interface), MS-DOS, 16-bit Windows, POSIX 1003.1 and OS/2 APIs. The solution the design team chose was a variant of the Mach microkernel client/server design.
The Mach design separates many of the typical operating system tasks, such as the file system and the memory manager, from the kernel of the operating system. These separate tasks run as stand-alone server processes, fulfilling requests of other server processes or user programs. This design provides portability, ease of maintenance and support for distributed architectures. For example, various server components could run on separate nodes in a network.
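The request/reply pattern between a client and a stand-alone server task can be illustrated with a small in-process sketch. This is purely illustrative: real Mach servers exchange messages through kernel-managed ports, not Python queues, and the `file_server`/`lookup` names are hypothetical.

```python
import queue
import threading

def file_server(requests, responses, files):
    # Stand-alone "server" loop: fulfil lookup requests from
    # clients until a None shutdown message arrives.
    while True:
        name = requests.get()
        if name is None:
            break
        responses.put(files.get(name, "ENOENT"))

def lookup(name, files):
    # A "client" sends a request message and blocks on the reply,
    # much as a user program would message a file-system server.
    requests, responses = queue.Queue(), queue.Queue()
    server = threading.Thread(target=file_server,
                              args=(requests, responses, files))
    server.start()
    requests.put(name)
    reply = responses.get()
    requests.put(None)   # tell the server to shut down
    server.join()
    return reply
```

The point of the structure is that the "file system" logic lives entirely outside the caller: the client knows only the message format, which is what lets such servers be replaced or moved to other nodes.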
NT did not adopt the microkernel design, but did make use of the client/server model. NT is roughly divided into two parts: the NT executive, or kernel portion of the operating system, based on a layered design; and a set of "protected subsystem" servers that run in user mode. These servers provide the operating system environments in which users run their applications. Some of the theoretical advantages of this design include the ability to upgrade operating system environments separately from the operating system itself, easing the addition of new environments and the support of new hardware platforms.
In summary, Microsoft NT is a 32-bit operating system with multitasking and multi-user features, evolving from VMS and benefiting from the research of the Mach project. At present, NT supports only one interactive session per NT subsystem. The subsystems supported include: 16-bit Windows, 32-bit Windows NT (a superset of Win32s), POSIX (only for DoD) without a GUI, and OS/2 in character mode. NT services are based on the UNIX-style Remote Procedure Call (RPC) and not OLE. NT applications are based on NT threads, which differ substantially from UNIX-style threads.
The Origin of UNIX
UNIX is a fairly old operating system, initially coming out of AT&T in the late 1960s and going through a number of commercializations and owners - including several different paths of evolution. UNIX has run on everything from Intel 8086-based systems to multi-thousand-node supercomputers from Thinking Machines Corp. and the likes of the vector-style Cray supercomputers. Software ranges from freeware such as Linux, running very efficiently on old Intel 486 machines with limited memory, to commercial UNIX variants including Sun Microsystems' Solaris, IBM's AIX, SGI's IRIX, Hewlett-Packard's HP-UX, Santa Cruz Operation's SCO UNIX and even Microsoft's onetime product, XENIX. To earn the UNIX label, systems are branded by the X/Open Group against the Single UNIX Specification. Although implementations may differ, most variants of UNIX are based on a monolithic kernel, with most of the operating system and environment running in kernel mode and providing system calls to user programs. The UNIX branding is significant, since the UNIX standard only specifies the operating system's interface and functionality - in other words, how the programmer and programs "see" the environment. How the vendor implements this functionality may provide a proprietary benefit of choosing one UNIX vendor over another.
Scalability denotes the operating system's ability to perform well over a range of hardware and software configurations, particularly clustered systems and multiprocessor systems. Multiprocessor computers are systems that include more than one processor, connected via a communication network and sharing a single memory area. In this arrangement, several applications may run concurrently, each on a different processor, instead of time-sharing a single processor. In addition, a single application that is multi-threaded and has no data interdependencies - meaning it has several independent processing tasks (or threads) - can execute simultaneously across the processors for increased performance. A typical Web server (HTTP server) often is multi-threaded, performing transaction-based processing against a set of shared data.
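The "independent threads, no data interdependencies" idea above can be sketched briefly. This is a minimal illustration, not a benchmark: the function names and the squared-sum workload are invented for the example, and on a multiprocessor each thread could be scheduled on its own CPU.

```python
import threading

def process_chunk(chunk, results, index):
    # Each thread works on its own independent slice of the data,
    # so the computation itself needs no locking.
    results[index] = sum(x * x for x in chunk)

def parallel_sum_of_squares(data, num_threads=4):
    # Split the work into independent chunks, one per thread.
    chunks = [data[i::num_threads] for i in range(num_threads)]
    results = [0] * num_threads
    threads = [
        threading.Thread(target=process_chunk, args=(chunk, results, i))
        for i, chunk in enumerate(chunks)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()          # wait for every worker to finish
    return sum(results)   # combine the per-thread partial results
```

A multi-threaded Web server follows the same shape, except each thread services a client request rather than a data chunk.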
Though UNIX was not originally designed for multiprocessing, several vendors have implemented UNIX to support hundreds of processors. Some examples include Sun's Enterprise systems composed of up to 64 UltraSPARC processors connected via an Ultra Port Architecture crossbar switch, IBM's RS/6000 SP systems with 512 PowerPC or POWER2 Super Chip processors connected via an SP switch, and SGI's Origin S2MP system with up to 128 R10000 processors connected via the CrayLink Interconnect mesh network. This flexibility of system design underneath the UNIX "brand" creates an environment able to support databases larger than 50Gbytes, serving thousands of users in I/O-intensive, mission-critical applications.
Windows NT was designed from the outset with multiprocessor computing in mind. Out of the box, NT supports up to four processors (the Enterprise edition supports up to eight). According to NT Product Manager Gary Sharer, NT is limited to a total of 32 processors on all architectures (Intel and DEC Alpha). Unfortunately, the concept of multiprocessing for user applications has yet to bear fruit. At present, this author is not aware of any application obtaining a significant throughput advantage with NT in a multiprocessor environment.
Clustering is a technique used to provide fault tolerance and high availability to mission-critical systems. The idea is to connect a number of functionally identical systems together in a cluster. Software monitors the systems and, should there be a hardware or software failure, the affected systems are "failed over" to other running systems. Thus, you may have two identical Web servers, one running as the primary server and the other running as a backup. When the primary fails, the monitoring system moves the requests and ongoing processing to the backup processor, redirecting HTTP requests, rebinding IP addresses, notifying the operator of the failure, documenting the extent of failure, etc.
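At its core, the monitor's routing decision reduces to "send traffic to the first healthy member of an ordered list." The sketch below shows only that decision; a real cluster manager also handles IP rebinding, fencing and state recovery, and the `select_server`/`is_healthy` names are invented for illustration.

```python
def select_server(servers, is_healthy):
    # Walk the ordered member list (primary first, then backups)
    # and route to the first one that passes its health check.
    for server in servers:
        if is_healthy(server):
            return server
    # Every member failed its check: the cluster is down.
    raise RuntimeError("no healthy server available")
```

In practice the health check would be a heartbeat or service probe run on a timer, with the monitor re-evaluating the list on every failure event.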
Most of the UNIX vendors have provided clustering products for some time, including IBM's HACMP for AIX, DEC's TruCluster for Digital UNIX, SGI's Challenge cluster systems and Sun's Enterprise Cluster system based on the Scaleable Cluster Interface. Sun supports the ability to cluster Internet servers to act in concert as a single system, providing high availability by routing requests around failed or slowed servers.
Microsoft's cluster solution for NT is code-named "Wolfpack" and was released in the summer of 1997 to support two-node, Intel-only clusters. More nodes are promised this year, but Microsoft's clustering ability at this time is immature.
Windows NT vs. UNIX Performance
Windows NT performance excels:
· Low-end servers - 1,000 to 2,000 transactions/min. Some have stretched to the midrange level - 2,000 to 15,000 transactions/min.
· Workgroup/departmental LAN servers
· Low-volume Internet servers
· Comfortable support for 200 concurrent users
· Applications within an enterprise, such as software distribution or a small database, but not mission-critical applications

UNIX performance excels:
· Servers that perform more than 15,000 transactions/min.
· 800-1,000 concurrent users
· High-volume Internet servers
· Servers for mission-critical applications that need high availability and scalability
· Commercial and large-scale data processing, including terabyte-sized databases
· Companies that want to migrate from their mainframe or minicomputer strategies
· Support for fault-tolerant clusters, symmetric multiprocessing and massively parallel processing
· Management at a very low level through a character-based interface, making it easy to access all administrative functions via a properly secured network
Windows NT was the first operating system from Microsoft with networking integrated directly into the operating system; previous Microsoft systems layered networking as an add-on. Out of the box, Windows NT supports peer-to-peer networking services such as file copy, electronic mail and remote printing, based on Microsoft's Server Message Block (SMB) protocol and NetBIOS API. Other transports, such as TCP/IP, IPX/SPX, DECnet and AppleTalk, may be loaded or unloaded from the NT system. Windows NT follows the UNIX model of remote procedure calls (RPCs).
Most versions of UNIX have used TCP/IP as the default transport since its inception more than 20 years ago. Berkeley UNIX pioneered the socket interface design, providing a simple and reliable mechanism for network communication. Like NT, UNIX allows a number of other transports to be loaded into the kernel (e.g., SMB via Samba). The networking ability found in UNIX gave rise to many of the common functions we currently associate with the Internet and the Web, including the Network File System (NFS), the Domain Name System (DNS), the Simple Mail Transfer Protocol (SMTP) and the Network News Transfer Protocol (NNTP). While the main observation is that UNIX and UNIX development have been involved with client/server computing for a long time and Microsoft NT is relatively new to the network, there is a more fundamental difference - and that is how the operating systems use the network.
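The Berkeley socket interface mentioned above survives nearly unchanged in modern languages. The sketch below (in Python, whose socket module mirrors the BSD calls) shows the classic bind/listen/accept sequence on the server side and connect/send/recv on the client side, over the loopback interface; the function names are invented for the example.

```python
import socket
import threading

def echo_once(server_sock):
    # Server side: accept one connection and echo back what arrives.
    conn, _addr = server_sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data)

def socket_round_trip(message):
    # Create a listening socket on an ephemeral loopback port
    # (port 0 asks the kernel to pick a free port).
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))
    server.listen(1)
    port = server.getsockname()[1]

    worker = threading.Thread(target=echo_once, args=(server,))
    worker.start()

    # Client side: connect, send, and read the echoed reply.
    with socket.create_connection(("127.0.0.1", port)) as client:
        client.sendall(message)
        reply = client.recv(1024)

    worker.join()
    server.close()
    return reply
```

The same socket(), bind(), listen(), accept(), connect(), send() and recv() primitives underlie NFS, SMTP, NNTP and HTTP servers alike, which is exactly why the BSD design proved so durable.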
While NT is billed as a "multi-user" operating system, the label is misleading. An NT server can validate an authorized user, but once the user is logged into the NT network, all they can do is access files and printers. The average NT user can only run applications on the computer where they are sitting. Unless their application supports a client/server architecture, they cannot run their application on multiple NT machines in the network. (The third-party Citrix WinFrame product extends the client/server approach to conventional Windows NT applications, much like the X Window System in UNIX.) When a UNIX user logs into the UNIX environment, they can run any application, limited only by their security authorization. Microsoft is beginning to address these deficits through third-party software like Citrix's WinFrame and through multi-user support via its "Hydra" project.
Microsoft's NT server strategy appears to be a simple one of providing the operating system of choice to high-volume hardware vendors. By unifying primarily Intel-oriented hardware vendors under one software vendor, Microsoft can afford to continuously improve and leverage a single source-code base through incremental improvements. Unlike the branding of UNIX by the X/Open Group, the Microsoft "standard" is achieved through single-vendor dominance of the underlying software infrastructure. This reduces innovation from competition, but increases the richness of functionality due to the focus on a single software environment. Microsoft's business model appears to provide a lowest-common-denominator solution without optimizing for any specific market. Thus, its frequently released updates and patches appear to target the lower-end users and not the leading-edge, enterprise-class users who cannot afford to patch and recover from "Blue Screen of Death" crashes on a regular basis.
What Is Right For Me?
So what is the "right" system for you? It depends on what you want to do with the system. Is the goal to choose and deploy an operating system environment with adequate performance and availability characteristics for a single application or application class, such as office productivity? If moderate expense and widely available software applications are the key drivers in your choice, NT is probably the right choice for you. I expect the NT server will be the server of choice at the low-volume workgroup and departmental levels, especially in support of low-intensity software applications, such as office productivity software.
If your goal is to select a very high-end system that can serve many diverse application requirements, or you need a scalable system to support mission-critical applications, UNIX is likely to be the better choice. If high performance and reliability at very low cost are the driving issues, UNIX in the form of freeware Linux may be the option of choice. The Web is full of anecdotal reports of applications based on Intel 66MHz 486 Linux systems outperforming dual 166MHz Pentium systems running NT.
NT's Challenging Role Against Other Operating Systems
|Roles ||Top Selection Criteria ||Market Position of NT ||Competitor(s) |
|Messaging ||Lotus/Microsoft software support ||Leader ||OS/2, NetWare |
|File and Print ||Price/performance, manageability ||Strong challenger ||NetWare, OS/2 |
|Web Serving ||Availability, performance, manageability, software ||Low-end leader ||UNIX |
|DSS ||Compute-intensive and I/O performance, database scalability ||Low-end strong challenger ||UNIX |
|OLTP ||Scalability, availability, service and support, software ||Low-end challenger ||UNIX, AS/400 |
|Web Transaction Processing ||Security, scalability, availability, service and support, software ||Future challenger ||UNIX |
|Batch ||Scripting flexibility, compute-intensive and I/O scalability ||Non-competitive ||S/390, UNIX, AS/400 |
|Mixed Workload ||Task and resource management, plus above ||Non-competitive ||S/390, AS/400, UNIX |
ABOUT THE AUTHOR:
E. Loren Buhle Jr., Ph.D., is a Managing Associate with Coopers & Lybrand L.L.P. in their National Internet Practice.