Maybe Not Mutually Exclusive Goals
In my Sept. 6 column, I presented a scenario from a customer who needed to allow limited public write access to Web-based data residing on internal servers. Readers responded with some good suggestions.
The challenges facing this customer are typical of many organizations. In this case, each Web server is a discrete system with its own set of disk drives. This means each Web server updates files on a local hard drive when members of the public fill out Web forms. But for proper analysis, the data must be combined into a central repository. The data also needs to be available immediately, and it is not acceptable to have one copy on the Web servers and another internal copy.
From an application perspective, without writing lots of expensive, custom code, the best alternative is for each Web server to map a drive letter to a share offered by the central file servers. This way, each Web server thinks it is writing to its own hard drive when, in fact, it is populating data into a central directory inside the file servers' disk farm.
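As a sketch, the mapping on each Web server might be a single command like the following (the server name FILESRV1 and share name WEBDATA are hypothetical, not from the customer's environment):

```shell
# Map drive F: on a Web server to a share on the central file server.
# FILESRV1 and WEBDATA are illustrative names; /persistent:yes restores
# the mapping at each logon, so the Web application always sees drive F:.
net use F: \\FILESRV1\WEBDATA /persistent:yes
```

The application then writes to F: exactly as it would to a local drive.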
Unfortunately, this solution creates gaping security holes, as documented in Microsoft Knowledge Base article Q179442. To provide this capability, the firewall must enable TCP and UDP ports 135, 137, 138, 139, and all ports above 1024. It may also be necessary to enable other ports for name translation and other services. In other words, providing this capability renders the corporate firewall useless.
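To see just how wide that opening is, here is a rough sketch of the required rules in iptables syntax (an illustration only; the column does not say which firewall product the customer uses, and eth0 as the Internet-facing interface is an assumption):

```shell
# Holes a firewall would need to pass SMB/NetBIOS file sharing -- the
# ports listed in Q179442 -- on its Internet-facing interface (eth0).
iptables -A INPUT -i eth0 -p tcp -m multiport --dports 135,137,138,139 -j ACCEPT
iptables -A INPUT -i eth0 -p udp -m multiport --dports 135,137,138,139 -j ACCEPT
# ...plus every TCP and UDP port above 1024:
iptables -A INPUT -i eth0 -p tcp --dport 1025:65535 -j ACCEPT
iptables -A INPUT -i eth0 -p udp --dport 1025:65535 -j ACCEPT
```

With nearly every high port open, the ruleset no longer blocks much of anything, which is the point of the Knowledge Base warning.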
Based on reader e-mail, the most popular choice is to put the Web servers between two firewalls. Here is the conventional topology: Put up a heavily restricted firewall between the Internet and the Web servers that only enables HTTP traffic. Set up appropriate DNS entries pointing the publicly visible IP address of the Web servers to the outside firewall. Use Network Address Translation on the firewall to redirect all HTTP traffic to the real Web servers. Use internal IP addresses for the Web servers, such as 192.168.n.n or 10.n.n.n, that don't route across the Internet. This way, nobody from the outside will know the real IP addresses of the Web servers. Even if a bad guy did somehow learn the Web server IP addresses, the only physical path to them is through the firewall, and the only protocol that will hit the Web servers is HTTP.
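The outside firewall's policy can be sketched in a few rules (again in iptables syntax purely as an illustration; 203.0.113.10 as the public address and 192.168.1.10 as a Web server's internal address are hypothetical):

```shell
# Redirect inbound HTTP aimed at the public address to the real Web server.
iptables -t nat -A PREROUTING -d 203.0.113.10 -p tcp --dport 80 \
         -j DNAT --to-destination 192.168.1.10:80
# Allow only that forwarded HTTP traffic through to the Web server...
iptables -A FORWARD -d 192.168.1.10 -p tcp --dport 80 -j ACCEPT
# ...and drop everything else by default.
iptables -P FORWARD DROP
```

The outside world sees only 203.0.113.10; the 192.168.1.10 address never appears in any packet that crosses the Internet.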
Next, put up an internal firewall between the Web servers and the internal network, but only enable traffic from the Web servers' internal IP addresses. The intermediate LAN between the two firewalls is called a demilitarized zone (DMZ).
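The inside firewall's job is simpler, and a sketch needs only a source-address filter (iptables syntax as an illustration; 192.168.1.10 and 192.168.1.11 are hypothetical DMZ Web server addresses):

```shell
# Permit traffic only when it originates from the DMZ Web servers.
iptables -A FORWARD -s 192.168.1.10 -j ACCEPT
iptables -A FORWARD -s 192.168.1.11 -j ACCEPT
# Deny everything else trying to enter the internal network.
iptables -P FORWARD DROP
```

In practice the rules would likely be tightened further to permit only the file-sharing ports on the central file servers, but the source-address restriction is the essential step.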
The DMZ approach is not perfect (no approach is perfect) because a determined bad guy might somehow exploit HTTP to compromise a Web server and ultimately get inside the internal network. Spoofing is also a possibility, but a bad guy would need to work very hard to make this happen.
A few readers suggested a very interesting twist on this idea. For the path from the Web servers to the central file servers, why not use another protocol? If the central file servers use Windows NT or Windows 2000, IPX/SPX or even NetBEUI might be a good choice. In my case, the customer was running Advanced Server for OpenVMS, so DECnet could also work.
This topology would not use a DMZ. It would use a private LAN that includes only the Web servers and central file servers. It would require an additional LAN adapter in each Web server and central file server, plus appropriate switches and/or hubs and wiring to connect them. In the Web servers, bind TCP/IP to the LAN adapters visible to the firewall, and bind IPX/SPX, NetBEUI, or DECnet to the inside LAN adapters visible to the file servers. Similarly, on the central file servers, bind IPX/SPX, NetBEUI, or DECnet to the LAN adapters visible to the Web servers, and bind TCP/IP to the LAN adapters visible to the corporate network.
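The binding plan can be summarized per machine like this (adapter labels are hypothetical; on NT-era systems the bindings are actually set through the Network control panel rather than from a command line):

```
Binding plan (illustrative; NIC1/NIC2 are placeholder adapter names)

Web server:
  NIC1 (faces the outside firewall):  TCP/IP only
  NIC2 (faces the private LAN):       IPX/SPX, NetBEUI, or DECnet only

Central file server:
  NIC1 (faces the private LAN):       IPX/SPX, NetBEUI, or DECnet only
  NIC2 (faces the corporate network): TCP/IP only
```

Because no adapter carries both TCP/IP and the private-LAN protocol, an IP packet has no protocol path from the outside world through to the file servers.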
This topology has lots of advantages. Most important, a bad guy would need intimate inside knowledge of this network to cause problems. He would have to find a way to route packets through an outside firewall into a Web server and somehow change network protocols inside that Web server to continue the journey to the central file servers.
As always, the biggest area of vulnerability is the Web application. If a poorly designed application provides an unsupervised path to critical data, a bad guy somewhere will find it and exploit it. No firewall or clever topology will stop him.
The weakness of this topology is that it is extremely intrusive. Each Web and file server requires a second LAN adapter and hand-tuning to bind the appropriate protocols to the correct adapters.
Thanks for the suggestions! I enjoyed reading them. --Greg Scott, Microsoft Certified Systems Engineer (MCSE), is chief technology officer of Infrasupport Etc. Inc. (Eagan, Minn.). Contact him at email@example.com.