Winning with Win2K

The much-anticipated and simultaneously much-dreaded Windows 2000 is here, and IT professionals now have to deal with it. Here is a primer on Windows 2000, its new features, and some reasons to adopt, or wait and see.

Microsoft Windows 2000 is finally upon us. The much-anticipated and simultaneously much-dreaded operating system is here, and information technology professionals now have to deal with it. Anticipated, because of new features that promise to make the implementation and maintenance of networks much simpler. Dreaded, because of the complexity, system requirements and uncertainty that Microsoft’s newest operating system creates.

The History and Versions

Of course, Win2K is the successor to Windows NT, Microsoft’s incredibly successful high-end OS that offered some enterprise-level functionality for a fraction of traditional enterprise OS costs. NT was an ambitious project that heralded Microsoft’s move beyond its traditional consumer orientation and fit a basic need as IT organizations began to push services closer to users and farther from traditional data centers.

While the first versions of NT were used primarily as basic file and print servers, it has become a well-rounded OS. Built-in features eventually grew to include remote access services, Web hosting and IP routing. But NT still had a long way to go to catch up with established enterprise operating systems. As Microsoft sought a bigger role in the enterprise, its flagship OS had to grow and mature as well, and Microsoft crafted plans for a bigger and better "bet the company" operating system.

Eventually, this turned into the various versions of Win2K. Win2K Professional is the replacement for NT Workstation. It’s designed for a computer with at least a Pentium 133 MHz processor, 64 MB of RAM and 2 GB of hard disk space. The server product now comes in three versions. Win2K Server requires a minimum of a Pentium 133 MHz processor, 256 MB of RAM and 2 GB of hard disk space, and supports machines with up to four processors. Win2K Advanced Server has the same minimum requirements, but supports machines with up to eight processors. Win2K Datacenter Server supports up to 32 processors.

Along the way, Win2K acquired a host of new and powerful features. Here’s a look at some of the more important ones.

Active Directory Service

Active Directory Service (ADS) is the biggest new feature of Win2K, and addresses one of the severe shortcomings of NT in large enterprises. It’s an effort to integrate any and all directory services into a single, unified system and reduce the time required to create and maintain the various directories on your network. At the same time, it’s designed to scale to much larger directory sizes than previously possible under NT.

The improved scalability is possible because of a hierarchical tree structure modeled on the Internet Domain Name Service (DNS). This structure distributes the directory service among multiple servers, each responsible for a particular portion of the namespace. This means that the DNS server responsible for hppro.com doesn’t have to store every name and address on the Internet. It only needs to know the addresses of the computers in the hppro.com domain and the location of the DNS servers that can answer questions about everything else. Organizations that wish to provide DNS information to the Internet are required to have two DNS servers, providing redundancy in case one fails.
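You can see this delegation from any machine by asking for a zone’s name server (NS) records, which identify the servers authoritative for that portion of the namespace. Here is a minimal sketch using dnspython, a third-party Python library chosen purely for illustration (nslookup shows the same thing):

    # Sketch: inspect DNS delegation with dnspython (pip install dnspython).
    # The zone name is the article's example; substitute any domain.
    import dns.resolver

    # The NS records name the servers authoritative for this zone;
    # everything else is found by following delegations from the root.
    for record in dns.resolver.resolve("hppro.com", "NS"):
        print(record.target)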

The DNS model is markedly different from the domain model currently used by NT. NT domains are directories primarily used to manage user security. Each domain features a Primary Domain Controller (PDC) that saves security information in the Security Accounts Manager (SAM), a database stored as an encrypted flat file. The SAM contains all the information about the domain: basic user information, such as account names, passwords and group memberships, and a list of the computers that are members of the domain. Having all this information in a single location helps ease management. However, it also means a single point of failure and a potential performance bottleneck.

Backup Domain Controllers (BDCs) alleviate some of these problems by storing backup copies of the SAM. A computer logging onto the network can be validated by any BDC, easing some of the validation burden. However, the BDC can’t perform all PDC functions and doesn’t completely eliminate the risk from a single point of failure. For instance, if the PDC is down, you may not be able to change a user’s group memberships. And BDCs do not automatically become PDCs. If the PDC is down, you must manually promote a BDC to take over. (Manual promotion is actually a good idea. Envision a WAN with a PDC in Chicago and BDCs in New York and Los Angeles. If the WAN goes down, both New York and Los Angeles would promote themselves. When the WAN comes back up, you have three PDCs and a large mess.) What’s worse, if you wish to have a regular member server become a domain controller, you have to reinstall NT.

NT domains are also not particularly scalable. Per Microsoft’s guidelines, domains are generally limited to 26,000 users and 250 groups. This limitation may require multiple domains for a single large organization, which in turn requires establishing trust relationships. Using a trust relationship, a trusting domain allows trusted domains access to its resources. Trust relationships can become very complicated very quickly, requiring a lot of maintenance. For instance, if you have four domains completely trusting each other, you need to establish 12 trust relationships; in general, n fully trusting domains require n x (n - 1) one-way trusts.
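To see why the count grows so quickly: under complete trust, every ordered pair of distinct domains needs its own one-way trust. A small illustrative sketch (the domain names are hypothetical):

    # Sketch: count the one-way trusts needed for a complete-trust model.
    from itertools import permutations

    domains = ["Sales", "Finance", "Manufacturing", "Corporate"]

    # Each ordered (trusting, trusted) pair is a separate one-way trust.
    trusts = list(permutations(domains, 2))
    print(len(trusts))  # 12 trusts for 4 domains; n * (n - 1) in general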

With Active Directory, the concepts of primary and backup domain controllers and trust relationships go away. Under Active Directory, there are only domain controllers. To make an NT Server computer a domain controller, you need only install and start the Directory Service. Each and every domain controller can be used to update all the directory data, eliminating the problems of downed PDCs. Domain controllers discover other domain controllers on the network, and a technique called multimaster replication is used to propagate changes to the other controllers. Each change in the directory on a domain controller is given an Update Sequence Number (USN), which works something like a time stamp. If a controller receives conflicting data from multiple controllers, it can use the Update Sequence Number to decide which is the latest.

Domain controllers have authority over a particular namespace, just as in DNS, and Active Directory domain names follow the Internet naming model to simplify naming. Currently, NT domain names are limited to 15 alphanumeric characters. Under Active Directory, domain names can be identical to Internet names; foo.com, for example, is a valid Active Directory domain name. Today, an organization might have several domains, such as Sales, Finance and Manufacturing.

To share resources, trust relationships would have to be established between the domains. Under Active Directory, these domains become sales.foo.com, finance.foo.com, etc. The traditional trust relationships are not necessary because the domains are now within the Active Directory hierarchy.
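To make the multimaster replication idea concrete, here is a rough sketch of last-writer-wins conflict resolution keyed on Update Sequence Numbers. This only illustrates the basic idea; Microsoft’s actual replication algorithm is considerably more involved:

    # Sketch: USN-based conflict resolution between replicating controllers.
    def apply_update(directory, attribute, value, usn):
        """Accept an incoming replicated change only if it is newer."""
        current_value, current_usn = directory.get(attribute, (None, -1))
        if usn > current_usn:          # higher USN = more recent change
            directory[attribute] = (value, usn)

    replica = {}
    apply_update(replica, "phone", "555-1000", usn=41)  # from controller A
    apply_update(replica, "phone", "555-2000", usn=57)  # from controller B
    apply_update(replica, "phone", "555-1000", usn=41)  # stale replay, ignored
    print(replica["phone"])  # ('555-2000', 57)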

Active Directory should offer a lot of possibilities for organizations to simplify the management of their users’ information. It’s also probably the most complex new feature in Win2K, and if you are planning to implement it, take time to consider how it will work in your organization.

Domain Name Services

Domain Name Services (DNS) have changed significantly in Win2K. They now incorporate the latest features specified in Internet Engineering Task Force Requests for Comments (RFCs). Specifically, dynamic DNS (DDNS) and service (SRV) resource records are now supported.

DDNS allows the table of hosts to be updated by nodes on the network, rather than through manual maintenance. This solves one of the more annoying problems associated with Dynamic Host Configuration Protocol (DHCP) on NT networks: name resolution. Before, system administrators were forced to create Windows Internet Name Service (WINS) servers so that hosts with dynamically assigned addresses could be reached via name. Implementing WINS meant more work for system administrators to solve what was really a problem specific to NT, not the Internet at large. With DDNS, Win2K can actually update the name server with the appropriate records when the IP address is assigned. Additionally, Win2K DHCP servers will register non-Win2K clients as they assign addresses, so the entire network is reachable via name. This is a simple idea to describe but difficult to implement, and Microsoft has done a good job of recognizing the shortcomings of NT in this area and resolving them (no pun intended).
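The update mechanism itself is the standard RFC 2136 dynamic update. As a rough illustration of what a registration looks like on the wire, here is a sketch using the third-party dnspython library; the zone, host name and addresses are made up, and a production server would also demand authentication:

    # Sketch: an RFC 2136 dynamic DNS update, the same mechanism a Win2K
    # client uses to register itself. Names and addresses are hypothetical.
    import dns.query
    import dns.update

    update = dns.update.Update("hppro.com")            # zone to modify
    update.replace("client42", 300, "A", "10.1.2.42")  # host A record, 300s TTL
    response = dns.query.tcp(update, "10.1.1.1")       # send to the DNS server
    print(response.rcode())  # 0 (NOERROR) on success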

SRV records allow you to name specific nodes in DNS as providing specific services. For instance, you can create an SRV record that defines the Web server for a domain. Simply create a record for the host name _http._tcp.hppro.com pointing to www.hppro.com. This allows SRV-capable browsers (like Internet Explorer 5.0) to automatically go to www.hppro.com whenever they are pointed to hppro.com. Similar SRV records are available for other standard Internet services, such as the Lightweight Directory Access Protocol.

Additionally, Win2K automatically creates SRV records for Active Directory controllers and legacy domain controllers. One other important feature is that multiple SRV records can be defined for a single service. So, in the above example, a second record can be defined for _http._tcp.hppro.com pointing to www2.hppro.com, providing some rudimentary fault tolerance and load balancing.
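Here is a hedged sketch of what an SRV-aware client does, again using dnspython for illustration. With the two records defined above, the lookup returns both targets, and the client can choose among them by priority and weight:

    # Sketch: resolve an HTTP service record the way an SRV-aware client would.
    import dns.resolver

    for srv in dns.resolver.resolve("_http._tcp.hppro.com", "SRV"):
        # priority orders the records; weight splits load among equals
        print(srv.priority, srv.weight, srv.port, srv.target)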

NTFS 5.0

NTFS, NT’s native file system, has some interesting new features in version 5.0, including Distributed Link Tracking, the Indexing Service, the Encrypting File System and the Distributed File System.

Distributed Link Tracking allows applications to access files that have been moved by tracking the link to the file. The new Indexing Service runs in the background, scanning files and indexing their contents for fast retrieval. This allows complex queries and quick location of files, and the indexes are accessible from other Win2K machines and Internet Information Server. The Encrypting File System automatically encrypts data on an NTFS volume, making encrypted files readable only to the user who encrypted them. (Of course, sending a file to a FAT volume, such as a floppy disk, defeats the encryption.) The Distributed File System, previously available as an add-in under NT 4.0, allows the creation of a single directory tree containing file shares from various volumes or servers. This can simplify some share management for users.
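Encryption can also be requested programmatically through the Win32 EncryptFile API. A minimal sketch using Python’s ctypes, runnable only on Windows, with a made-up path:

    # Sketch: ask NTFS to encrypt a file via the Win32 EncryptFile API.
    # Windows only; the path below is a hypothetical example.
    import ctypes

    advapi32 = ctypes.windll.advapi32
    advapi32.EncryptFileW.argtypes = [ctypes.c_wchar_p]

    if not advapi32.EncryptFileW("C:\\Reports\\forecast.xls"):
        raise ctypes.WinError()  # surfaces the underlying Win32 error code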

All in all, there are quite a few enhancements to NTFS. None of the features may be a "must have" upgrade on their own, but taken together they can provide a reason to upgrade in more complex environments.

Terminal Services

Win2K incorporates the features of NT 4.0 Terminal Server Edition. Terminal Services allow users to connect to a Win2K server and run applications on the server as if they were at the console. This is done with a special program called tsclient that can run on any Windows platform.

Terminal Services have been touted as a way to add mainframe-like functionality to Win2K. Basically, the idea is to permit thin clients and network appliances to run server-based applications without the hassle of deploying the software at each client. While the intention is good, it means that servers will need more processing power and memory to support the remote users.

Additionally, Terminal Services licensing is not a trivial matter. There are four types of licenses. Client Access Licenses are assigned to individual users and are required to access a server. Internet Connector Licenses are limited-use licenses for Web-enabled applications. Built-in licenses are single client access licenses included with Win2K Professional. Temporary licenses are issued dynamically by a server when no other licenses are available. Aside from the general confusion (Microsoft itself seems unsure how the Internet licenses work), there are costs associated with the additional licenses.

While there is undoubtedly some use for Terminal Services, there has not been a wholesale adoption of this model by IT departments. The strenuous hardware requirements necessary to give users decent performance still seem to be a problem. It seems more likely that Terminal Services will be used for remote management of servers. Since a license is included with Win2K Pro, it seems ideally suited for a network management station.

Disk Quotas

As the amount of network storage continues to grow at a rapid pace, most system administrators wonder how to bring it under control. Most networks have at least a few users who insist on saving every byte that crosses their screens. Until Win2K, there was no built-in way to automatically monitor and enforce limits on a user’s disk storage. Now, disk quotas can be set for individual users on a disk-by-disk basis.

There have been third-party applications that enforce and monitor quotas on NTFS disks under NT 4.0. With Win2K, you won’t have to spend extra money for this functionality. The quotas are easy to implement: when permissions are set on a disk or directory structure, you can set the maximum storage limit per user. The quotas are disk-based, not directory-based. If a user has multiple folders on a disk (say, one for an application share and one for a network home directory), the total of all folders on the disk is used to enforce the quota. This may require some individual adjustment of user quota levels, but it is not a difficult task. An administrator can even set a warning level so a user receives a pop-up message when nearing the limit. By using quotas, the system administrator may save a few dollars on new disks and help users with the discipline of storage management.
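The per-volume accounting is simple to picture. Here is a toy sketch of the bookkeeping, with made-up limits; the real enforcement happens inside NTFS itself:

    # Sketch: per-user, per-volume quota accounting with a warning level.
    from collections import defaultdict

    LIMIT = 500 * 2**20    # 500 MB hard limit per user, per volume
    WARNING = 450 * 2**20  # pop-up warning threshold

    usage = defaultdict(int)  # bytes charged to each user on one volume

    def charge(user, nbytes):
        """Charge a write against the user's total for this volume."""
        if usage[user] + nbytes > LIMIT:
            raise OSError("disk quota exceeded")
        usage[user] += nbytes
        if usage[user] > WARNING:
            print(f"Warning: {user} is nearing the quota limit")

    charge("jdoe", 460 * 2**20)  # home directory + application folders combined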

Offline Storage

Offline storage permits tape drives to be used in addition to online disk drives for storing data. While mainframe operating systems have offered offline storage for decades, the idea is new to Intel processor-based operating systems.

Offline storage is implemented through a complex rule-setting process that permits less frequently used files to be automatically moved from online devices (hard disks and RAID arrays) to offline devices (tape drives and libraries). The process is transparent to the user, who simply accesses the file as usual. The difference is in access time: if a file has been moved to offline storage, it may take significantly longer to retrieve.

Some planning is required for a successful implementation. The rule set must be thought through: files that have not been accessed in the last 90 days, for instance, would probably be good targets for offline storage. Additionally, if you are not using a tape autoloader, provisions must be made for manually mounting the tapes at the server.
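The selection rule can be pictured as a sweep over last-access times. A rough sketch, with an illustrative path and the 90-day threshold from above; a real offline storage system hooks into the file system rather than scanning it:

    # Sketch: find migration candidates that haven't been read in 90 days.
    import os
    import time

    CUTOFF = time.time() - 90 * 24 * 3600  # 90 days ago

    def migration_candidates(root):
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                if os.stat(path).st_atime < CUTOFF:  # last access time
                    yield path  # candidate to move to tape

    for path in migration_candidates(r"D:\shares"):
        print("migrate:", path)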

While offline storage is a nice new feature, demand may not be strong. It’s not a substitute for standard backup procedures and may interfere with backups if the server has only a single tape drive. And, of course, users will have a hard time understanding why file access is taking so long. With the cost of disk drives continually plummeting and the steep cost of decent tape autoloaders, offline storage won’t be for everyone.

Internet Connection Sharing & Network Address Translation

While NT 4.0 has the ability to route IP packets, this is not always the best solution for connecting networks to the Internet. With Win2K Server, there are two new features to help connect a network to the Internet: Internet Connection Sharing (ICS) and Network Address Translation (NAT).

ICS is a function of the Dial-up Connections tool. It requires a network interface and some form of external Internet connection, such as a modem, DSL adapter or cable modem. ICS is enabled on the external connection, and the internal connection is automatically configured with a private IP address (192.168.0.1). Other machines on the internal network get addresses from a DHCP service on the ICS server. Once ICS is configured, there aren’t many options: the DNS name service, address range and subnet mask are preconfigured by the ICS service. When internal clients access the Internet, the ICS server translates their internal addresses to a valid external address, so multiple users can share a single IP address. To the outside, the ICS server appears as a single, although busy, node.

What ICS lacks in configurability, it makes up for in ease of use. It’s very simple to activate: you can set up ICS with just a few clicks of the mouse. Even network administrators with no Internet or routing experience can easily set up a shared connection.

If you need more functionality, you can use NAT. It’s more complicated than ICS, but every aspect is configurable, making it very useful. Like ICS, NAT hides the addresses of the internal network and allows internal clients to access the Internet at large. Unlike ICS, you can use multiple valid external addresses and any internal addresses, and you can map specific ports. This is very useful. For instance, you can put a Web server on your internal network, say at address 192.168.1.1. Normally, this private address would not be accessible from the Internet. With NAT, you can map port 80 on a valid external IP address to 192.168.1.1, and everyone can access the internal Web server transparently. There are several reasons to want to do this. The first is security: if the internal network is numbered with private IP addresses, there is limited risk of a direct attack on the machines. Since the private addresses are not in the routing tables of any Internet routers, the network cannot be reached directly from a remote location. Another reason is flexibility: to move the Web site from one server to another, simply change the NAT mapping.
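A toy model of the translation table makes the port-mapping idea concrete. The Web server address is the article’s example, the mail entry is hypothetical, and this is a sketch of the concept rather than of how Win2K implements NAT:

    # Sketch: a static NAT port-mapping table. Inbound packets arriving
    # on the external address are rewritten to an internal host and port.
    inbound_map = {
        # external port -> (internal address, internal port)
        80: ("192.168.1.1", 80),   # public Web traffic -> internal Web server
        25: ("192.168.1.20", 25),  # mail -> internal mail host (hypothetical)
    }

    def translate_inbound(dst_port):
        """Rewrite an inbound packet's destination, or drop it if unmapped."""
        if dst_port in inbound_map:
            return inbound_map[dst_port]
        return None  # unmapped ports never reach the private network

    print(translate_inbound(80))  # ('192.168.1.1', 80)

Moving the Web site to another internal machine is then exactly the one-line change the article describes: edit the mapping for port 80.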

Final Analysis

There are plenty of other new features. Win2K Professional includes a wealth of new laptop settings. A built-in defragmentation tool is included. Support for roaming desktops has been improved. A new printer management system permits remote control of print queues via Web browser. The list goes on.

So, should you upgrade? The new features are certainly compelling, but at some point you may not have a choice. Eventually, NT 4.0 will no longer be supported, and it will be imperative to move to a supported OS. The industry will move to Win2K eventually, because the industry always moves to the latest version.

There are an estimated 30 million lines of code in Win2K, and that means a huge number of opportunities for bugs. While the code has gone through a long and very public testing process, there is still much uncertainty. The best advice may be to go slow. Start with a plan: Where can you adopt Win2K in your environment and use that opportunity to learn? What planning do you need to successfully implement Active Directory? What resources do you need to support new Win2K users?

Don’t make the mistake of upgrading simply to upgrade. To successfully implement Win2K, you will need some foresight and planning. By examining the new features, you’ll probably find some benefit to your organization. After that, you’ll be ready.

– Ryan Maley is a Microsoft Certified Systems Engineer and the Information Systems Manager for a Midwestern manufacturing company. He can be reached at ryan@maley.org.
