Trim the IT Budget

With the economy still cooling off and corporate profits well off their peaks, dollars for infrastructure improvements can be hard to come by.

IT investment, which many economists credit for the boom in productivity during the last decade, has been one of the areas hardest hit now that earnings have soured. Support for new IT projects has weakened, and some managers face difficult budget choices. Those choices can be made a little less painful, however, by challenging some popular assumptions and using existing resources more intelligently. Rather than simply laying off half the IT department, there may be ways to cut costs drastically and still keep vital projects on pace.

Upgrade Selectively
One way to reduce expenses is by extending the usable life of your equipment and squeezing all of the value you can out of your existing IT infrastructure. In a recent study, International Data Corp. (IDC) found that most companies upgrade their hardware every two to three years. But leaner times may call for a more economical upgrade schedule. Assess your inventory and determine if you actually need to upgrade your hardware. If not, you not only save money on new equipment, you also free up staff from the deployment process and you reduce the steady stream of help desk calls from users unfamiliar with the new machines.

According to the consultants at Compass America, the total cost of ownership (TCO) for workstations averages $10,000 the first year, $6,000 the second and $4,000 the third, for a total of $20,000 over three years. Make that system last a fourth year, rather than acquiring new workstations, and the savings are significant: TCO drops to $3,000 for that fourth year, as opposed to spending $10,000 to upgrade and restart the TCO clock. In a 1,000-workstation enterprise, that works out to a reduction in TCO of $7 million.
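
A quick script makes the arithmetic concrete. This sketch uses only the per-seat figures and the 1,000-workstation fleet quoted above:

```python
# Per-workstation TCO figures quoted by Compass America (USD).
tco_year_1, tco_year_2, tco_year_3 = 10_000, 6_000, 4_000
tco_year_4 = 3_000           # cost of keeping the existing machine a fourth year
replacement_year_1 = 10_000  # cost of replacing it and restarting the TCO clock

three_year_total = tco_year_1 + tco_year_2 + tco_year_3
per_seat_savings = replacement_year_1 - tco_year_4  # $7,000 per workstation

fleet_size = 1_000  # the article's example enterprise
print(f"Three-year TCO per seat: ${three_year_total:,}")          # $20,000
print(f"Fourth-year savings per seat: ${per_seat_savings:,}")     # $7,000
print(f"Fleet-wide savings: ${per_seat_savings * fleet_size:,}")  # $7,000,000
```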

Outsourcing on the Upswing

According to Cutter Consortium Senior Consultant and IT Metrics Strategies Editor Michael Mah, as the economy slows down, more companies may look to outsource as a cost-saving measure.

"Companies will look to cutting costs any way they can," says Mah. "If projects can be completed by an outsourcing vendor more quickly and less expensively, that's the route they'll take."

One way to prepare for a possible outsourcing project is to gather your data early. "You need to have a baseline of your own productivity in order to compare the numbers an outsourcing vendor is proposing for an upcoming project," Mah says. "After all, you're hiring them on the premise that they're going to be more efficient than your current organization. If you have no metrics evidence, you may end up hiring a supplier that doesn't achieve your goals."

Another way to avoid unnecessary upgrades is to analyze system and network resources and isolate the causes of overloads. In some cases, a simple policy change may be all you need. For example, students downloading MP3 files overloaded a number of college networks last year. Some campuses undertook expensive bandwidth and hardware upgrades, while others simply outlawed music downloads on campus resources.
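
Before paying for more bandwidth, it is worth measuring how much of the load is discretionary traffic. As a minimal sketch, a few lines of Python can total the bytes served for MP3 requests from a proxy log; the log path and its common-log-style layout here are illustrative assumptions, not the output of any product named in this article:

```python
# Sketch: total the bytes served for .mp3 requests in a common-log-format
# proxy log. The path and field layout are illustrative assumptions.
from collections import Counter

LOG_PATH = "/var/log/proxy/access.log"  # hypothetical location

mp3_bytes = 0
total_bytes = 0
top_hosts = Counter()

with open(LOG_PATH) as log:
    for line in log:
        fields = line.split()
        if len(fields) < 10 or not fields[9].isdigit():
            continue  # skip malformed lines
        host, url, size = fields[0], fields[6], int(fields[9])
        total_bytes += size
        if url.lower().endswith(".mp3"):
            mp3_bytes += size
            top_hosts[host] += size

print(f"MP3 share of traffic: {mp3_bytes / max(total_bytes, 1):.1%}")
for host, nbytes in top_hosts.most_common(10):
    print(f"{host}\t{nbytes:,} bytes")
```

If a handful of hosts account for most of the discretionary traffic, a policy change is likely cheaper than a hardware upgrade.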

Enterprises can also adopt policy-based solutions by eliminating certain types of resource-hogging activities, moving them to off-hours or using switches and routers to enforce policies and keep essential traffic moving. Workstation upgrades can be delayed by moving to a thinner client model. Although some people may resist such changes, they beat issuing pink slips.

As an added benefit, whenever you delay a purchase you can usually get more for your technology dollar, thanks to Moore’s Law. Ten years ago, a megabyte of RAM cost $800; now it’s a dollar. Every six months new processors are released, and the price of old ones drops. Computer hardware purchasing is one of the few things in life where a little procrastination sometimes pays off.

"We recommend acquiring new hardware with an understanding of cost trends," says John Schick, senior consultant at Compass America. "In the data center, for instance, we see hardware costs declining substantially year-to-year. Hence, your upgrade decisions should take into account market conditions."

Make Wise Hardware Choices
If an in-depth analysis shows it really is time to upgrade—what do you really need? Top-of-the-line features might be required by a few high-end users, but most can survive without them. And although there isn’t much cost difference between a 10GB and 40GB hard drive, if the files are being stored on a central server, purchasing the smaller disk may be optimal. Similarly, if software is being loaded remotely, does each computer need a CD-ROM, much less a CD-RW or DVD? What about a sound card? And do users need floppy drives if files are shared centrally and backed up over the network? None of these items adds much to the cost of an individual computer, but together they can mean several hundred dollars per machine. Multiplied by thousands of users, that adds up fast.
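
The fleet-wide effect is easy to illustrate. In the sketch below, the per-item prices and the fleet size are placeholders invented for illustration, not vendor quotes:

```python
# Illustrative placeholder prices (not vendor quotes) for optional
# components a centrally managed desktop may not need.
optional_parts = {
    "40GB drive instead of 10GB": 40,
    "CD-RW or DVD drive": 120,
    "sound card": 50,
    "floppy drive": 25,
}

per_machine = sum(optional_parts.values())
fleet_size = 5_000  # hypothetical enterprise size
print(f"Per machine: ${per_machine}")                      # $235
print(f"Fleet-wide: ${per_machine * fleet_size:,}")        # $1,175,000
```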

The biggest single item in determining cost, however, is the processor. Although Pentium 4s (P4) are wonderful with their 400 MHz bus and 1.5 GHz clock speed, they are expensive and only work with costly RDRAM at this time. (A version that works with SDRAM comes out later this year.)

Whether the P4 merits an immediate upgrade depends on whom you talk to. John Knox, an analyst at Gartner Inc., says, "For high-end and power user requirements, enterprises should immediately consider Pentium 4, as the performance benefits will often outweigh the price premium." The P4 excels at processing 3-D graphics for gaming, design, GIS and CAD applications. High-end users and graphics professionals will probably require the new processor. But for normal business applications, the 1.2 GHz AMD Athlon can match or better the P4, and at a lower cost.

Either of these, however, may be more than you need in a normal business environment, where processor speed typically doesn’t create a problem. For such uses, Knox recommends acquiring the slowest Pentium III available and adding more RAM to increase speed. As an alternative, many enterprises are switching to Athlon CPUs. A few years ago this wasn’t an option, as the major distributors only used these chips in consumer PCs. Recently, however, companies such as Micron and Gateway have started producing commercial units tuned to the Athlon processor. That provides the option of paying less for a computer or getting more processing power for the same amount of money.

Network Efficiently
Large-scale system and network management systems, such as HP’s OpenView or IBM’s Tivoli, offer a vast array of tools to monitor and fine-tune networks continuously. The problem is that these systems are complex, often cost hundreds of thousands or even millions of dollars depending on the size of your enterprise, and can take an age to implement. To address this problem, a number of simpler, lower-cost products have hit the market.

"While overall infrastructure management spending is growing at 20 percent or more annually," reports analyst firm Meta Group, "our research indicates a spending transitioning towards point infrastructure management tools to drive operational automation. This is a pushback on the high cost and long deployment rate of traditional management tools, and bodes well for niche/point vendors. We also note increased interest in change and asset management tools and believe this will continue during difficult economic times."

IPSwitch produces a $795 network and application mapping/monitoring/reporting application called What’sUp Gold. Agilent Technologies uses What’sUp Gold in conjunction with HP OpenView’s Network Node Manager. "The network device discovery and map diagramming capabilities allow me to customize network maps in a way that’s convenient and useful," says Pat Mahoney, a network engineer with Agilent.

Another useful tool for smaller sites, or medium-sized organizations planning to keep costs low, is Denika by Somix Inc. Priced at $495, it builds on the reporting features of MRTG (Multi Router Traffic Grapher, a popular utilization reporting tool) and What’sUp Gold. Essentially, Denika answers three questions: How much of the resource (network bandwidth, CPU and hard drive) is being consumed? What is the uptime of applications, servers and network devices? What is their response time?
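
Those three questions can be answered, in miniature, with a few lines of scripting. The sketch below is not Denika's API; it is a rough illustration of the kind of data such tools collect, using the third-party psutil package for local metrics and a placeholder hostname for the reachability probe:

```python
# Sketch of the three measurements such tools report: resource
# utilization, device/application uptime, and response time.
# psutil is a third-party package; the target host is a placeholder.
import socket
import time

import psutil

# 1. How much of the resource is being consumed?
cpu = psutil.cpu_percent(interval=1)   # percent over a one-second sample
mem = psutil.virtual_memory().percent
disk = psutil.disk_usage("/").percent
print(f"CPU {cpu}%  RAM {mem}%  Disk {disk}%")

# 2. Is the service up, and 3. how fast does it respond?
TARGET = ("intranet.example.com", 80)  # placeholder host and port
start = time.monotonic()
try:
    with socket.create_connection(TARGET, timeout=5):
        elapsed = time.monotonic() - start
    print(f"{TARGET[0]} is up, responded in {elapsed * 1000:.0f} ms")
except OSError:
    print(f"{TARGET[0]} is down or unreachable")
```

Commercial tools add what a script like this lacks: scheduled polling, historical graphing and alerting across hundreds of devices.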

"I use What’sUp Gold to monitor my routers, servers and applications, but I needed something to show me how usage changed over time," says Gareth Williams, systems manager for Thompson Heath & Bond Group, an insurance broker headquartered in the U.K. "Denika is simple to install and configure, and monitors vital elements such as CPU, memory, hard disk and network utilization."

While these applications may be more than sufficient for some sites, others require more comprehensive system and network monitoring. One cost-effective way to add management functionality is Somix’s WebNM, which costs between $20,000 and $40,000. It’s a Web-based enterprise network management suite that offers hardware performance reports; alarms for failing devices, applications and hardware; remote desktop management; and trouble ticketing. As an added bonus, it includes a Webcam that lets IT managers check on the operations center around the clock.

One user is IDEXX Labs, a pharmaceutical company with over 2,000 employees at 20 U.S. locations and in 15 countries across Europe and Asia. On the fifth day of installation, employees at a British branch office reported serious latency problems and timeouts. John Deterling, a database administrator, arrived at the office at 5 a.m. to resolve the situation. "We checked everything," he says. "The SAP server seemed to be operating fine, and we couldn’t understand why user connections were timing out. At 6:30 a.m. the problem was gone, and we had no idea what happened. And with the problem gone, we had nothing to troubleshoot."

When the installation engineer arrived that morning, he used historical reporting features to pinpoint the cause of the slowdowns—excessive traffic on the link between England and The Netherlands. "We used to experience difficulty determining the causes of latency across our frame relay network," says IDEXX Labs’ IT Director Paul Friedman. "WebNM removed the guesswork."

While these types of toolkits don’t provide the full functionality of the heavy-duty management frameworks, they are gaining an increasing share of the network management market. "By enabling companies to gain efficiency through better control of their technology, these tools address bottom-line results," says Meta Group. "Bottom line: With individual budgets being cut, IT organizations should focus on infrastructure and application management projects that result in shorter ‘mean time to value.’"

Defrag Routinely
A simple but often-overlooked way to avoid unnecessary hardware upgrades is to defragment all hard disks routinely. "Most Windows NT/2000 systems managers, as well as a growing number of users, know that fragmented files cause an overall degradation in system performance," says IDC Analyst Steve Widen. "What is less understood, however, is that effective use of defragmentation technology can produce comparable performance gains to costly system upgrades."

To back up its claims, IDC studied the effects of fragmentation in the enterprise. Tests on Windows NT/2000 workstations and servers running Excel, SQL Server 7.0 and Outlook/Exchange showed performance gains ranging from 19 percent to over 200 percent.

System managers’ experience backs up these numbers. "Strangely enough, NT or 2000 is fragmented even after a clean install," says Kevin L. Reiley of AT&T’s Premises and Desktop Support Division. "After I re-image a machine, as unbelievable as it sounds, fragmentation levels are 40 percent."

Reiley reports that performance on desktops and servers worsens steadily when fragmentation is left unchecked. "The obvious giveaway is the constant thrashing of the hard drive," he continues. "In my experience, fragmentation leads to slow reboots, sluggish operation and a generally slower machine." AT&T uses Diskeeper by Executive Software International to keep its machines performing at their peak.

Analysts at IDC calculated the economic impact of fragmentation. They discovered that many firms accept the condition as the natural order of things and solve the problem by replacing their machines every two or three years. IDC estimates that by defragmenting these machines regularly instead, to keep them operating smoothly, companies can save $350 per workstation per year. For a firm with a thousand workstations, "over five years that translates into a total of $1,750,000 saved using defragmentation software to increase performance," Widen says, "as compared to exclusively using hardware upgrades as a solution."
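
The arithmetic behind Widen's figure is straightforward:

```python
# IDC's estimate: regular defragmentation saves $350 per workstation
# per year versus replacing hardware to recover performance.
savings_per_seat_per_year = 350
fleet_size = 1_000
years = 5

total = savings_per_seat_per_year * fleet_size * years
print(f"Five-year savings for {fleet_size:,} seats: ${total:,}")  # $1,750,000
```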

IDC also examined the possibility of enterprises harnessing the manual defragmenter that is built into Windows 2000, rather than purchasing a full-featured networkable defragmenter. Unfortunately, the free version does not have any management functions and so cannot be scheduled to run automatically after-hours. IDC’s figures show that moving from machine to machine to defragment manually is time-consuming and considerably more costly in terms of labor. Overall, the analysts reported that the manual defragmenter was three to five times slower and far less thorough than third-party defragmenters.

"Network defragmentation clearly provides cost savings of several magnitudes when compared to manual defragmentation," says Widen. "This applies to both small businesses and global enterprises. When considering the significant impact on TCO, it is difficult to find any argument to position manual over network defragmentation."

Use Your Vendor Contacts
One final suggestion on cutting costs: Vendors are feeling the pinch just as much as the rest of us, and in some cases the vendor community has come up with creative ways to implement projects economically. Government agencies in particular have been successful at striking deals that sometimes involve little or no cash outlay. Cities that want to automate their parking ticket payment processing, for example, are lining up vendors to set up the service for free in exchange for service fees.

Orange County, Calif., struck a unique deal last year with Lockheed Martin IMS in Santa Ana, Calif., for the management of a data center. The difference: profit sharing. The shrinking size of computer equipment over the years has freed up about 15,000 square feet of space at the data center. Under the contract, Lockheed uses this empty space to service other clients and will pay the county a minimum of $21 million. County data center employees are also available to work on outside projects for Lockheed during slow times, and the county is reimbursed for their labor.

"With the revenue sharing, we are bringing into the county a significant revenue stream that we would not have been able to do if we had brought the operation in house," says Leo Crawford, the county’s assistant CEO for information and technology. "That will help reduce the cost of providing services to county departments and agencies. Also, there are other opportunities that we hope will lead to even more revenue to the county."

Despite the apparent popularity of large-scale layoffs, dismantling a functional IT department for the sake of economy can be costly. It can take years to replace all the experience that is downsized, and service levels invariably suffer.
