Sticking to the Eight-Second Rule in Web Access

In the old days, when it was just techies and inside employees who were logging on to your system, slowdowns and shutdowns were excusable. Now, with outside users logging on via the Web, a lag in response time of eight seconds or longer is considered the kiss of death.
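The rule translates directly into a client-side budget: a request that has not completed within eight seconds is treated as failed. A minimal sketch in Python, assuming the budget applies to the whole fetch; note that `urlopen`'s timeout covers socket operations, a rough stand-in for total page response time, and the URL passed in would be hypothetical:

```python
import time
import urllib.request

EIGHT_SECONDS = 8.0  # the response-time budget cited above

def within_budget(elapsed, budget=EIGHT_SECONDS):
    """True if a measured response time stays inside the budget."""
    return elapsed < budget

def timed_fetch(url, budget=EIGHT_SECONDS):
    """Fetch url, aborting if socket operations exceed the budget.

    Returns (body, elapsed_seconds); raises on timeout, so callers can
    treat any budget overrun as a failed request.
    """
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=budget) as resp:
        body = resp.read()
    return body, time.monotonic() - start
```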

Back-end "connection cholesterol" is a health hazard to be avoided, and its consequences may be costly. Sluggish Web sites and slow download times cost major e-commerce sites about $362 million per month, or more than $4 billion per year, according to Zona Research.

"Anyone can run an e-commerce site by setting up a Dell 750 MHz Pentium III or a Sun Enterprise server," says Alf Weaver, Director of the Internet Technology Innovation Center at the University of Virginia. "But that's only the beginning of what it takes to run a serious e-commerce site."

Consider the systems behind the Nasdaq market, powered by Unisys and other large host systems, which are under intense industry pressure to move everything onto the Web. Nasdaq is facing fierce competition from electronic communications networks (ECNs), which are cheaper, faster and nimbler at handling trades. Gregor Bailar, Executive Vice President and CIO of Nasdaq, sums up the monumental task Nasdaq faces in boosting online performance. On a peak day, Nasdaq's Web sites will take more than 35 million hits. "In 1994, we handled 350 million shares of trading a day. Now, we handle 350 million in the first 30 minutes of the morning," he adds. The result, of course, has been "incredible growth in messaging" between Nasdaq and client systems.

How does Nasdaq manage this incredible surge and keep its response times within the eight-second timeframe? First, a stable of well-trained and knowledgeable IT staff and services. Second, lots of redundancy built into everything: duplicate servers, connections, storage and power supplies. In a successful e-commerce site, "everything is duplicated, everything's backed up multiple times [and in] multiple ways," says Weaver. "A serious e-commerce hosted site needs a lot of infrastructure."
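The "duplicate everything" idea can be sketched as a simple failover loop: a client tries each replica in turn, so no single box failure takes the service down. The server names and the request callback here are illustrative assumptions, not any particular site's API:

```python
def call_with_failover(replicas, request_fn):
    """Try each replica in order; return the first successful response.

    replicas: list of server identifiers, e.g. ["web1", "web2"].
    request_fn: callable that issues the request against one server and
    raises ConnectionError if that server is down.
    """
    last_error = None
    for server in replicas:
        try:
            return request_fn(server)
        except ConnectionError as exc:
            last_error = exc  # this box is down; fall through to the next
    raise RuntimeError("all replicas failed") from last_error
```

The same pattern applies one level down to duplicate connections, storage paths and power feeds: keep trying alternates, and fail only when every copy is gone.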

E-commerce sites fail for a number of reasons, but chiefly because they are too unreliable or too complex. "When servers crash, IT has to spend hours to decide if the problem is in its Cisco routers, HP servers, Microsoft operating system, Java middleware or ERP application," says Carl D. Howe, an Analyst with Forrester Research.
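One way to shorten that hunt is to probe each tier in a fixed order, so the first failing layer is identified automatically rather than debated between teams. A hedged sketch; the tier names and probe callables are illustrative, not a real monitoring API:

```python
def localize_failure(probes):
    """probes: ordered (tier_name, check_fn) pairs, outermost tier first.

    Each check_fn returns True if its tier is healthy. Returns the name
    of the first unhealthy tier, or None if every probe passes.
    """
    for tier, check in probes:
        try:
            healthy = check()
        except Exception:
            healthy = False  # a probe that blows up counts as a failure
        if not healthy:
            return tier
    return None
```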

Analysts agree on some best practices that are needed to ensure the highest possible Web site performance:

Keep the user interface as thin as possible. In Web-to-host deployments, developers often have to grapple with the size of the Java or ActiveX applet being downloaded. The same applies to Web pages in general. To keep downloads at a decent speed, Zona recommends keeping Web pages to 40 to 50 KB or less.
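That guideline is easy to enforce mechanically, for example as a pre-deployment check that flags pages over the budget. A sketch, assuming rendered pages are available as bytes; the 50 KB ceiling follows the upper end of Zona's figure:

```python
PAGE_BUDGET_BYTES = 50 * 1024  # upper end of Zona's 40-50 KB guideline

def oversized_pages(pages):
    """pages: mapping of path -> rendered page content in bytes.

    Returns the paths whose size exceeds the budget, so they can be
    flagged before deployment rather than discovered by slow users.
    """
    return [path for path, body in pages.items()
            if len(body) > PAGE_BUDGET_BYTES]
```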

Stick to standards. Assume the systems you are currently using may not be around in a few years, says Howe. Successful e-commerce sites typically consist of dozens of big UNIX and NT systems, powered by multiple switches.

Staff (or outsource) well. Most small- to medium-sized organizations simply don't have what it takes to run a serious e-commerce operation, says Weaver. "If a small- to medium-sized company isn't already inherently a computer-oriented company, an IT infrastructure is very expensive to acquire," he warns. "It makes much more sense to outsource."

Train heavily. "High-availability systems can't wait for problems to bounce from the server management group to the network help desk to the applications development team," says Howe.

Diversify. "Multiple systems keep the performance impact of any box failure small," says Forrester's Howe. "Despite CIOs' eagerness to standardize to reduce maintenance costs, e-commerce leaders use everything from Windows NT/2000 systems to UNIX-based multiprocessors in their e-commerce computer farms. This ensures that threats, such as viruses that infect Windows NT systems or hackers that target servers, can't bring down every system in the data center. Rabid standardization on single machine types has left whole Web sites vulnerable to OS-specific Internet viruses."