focus topic: Tuning An AS/400 And Keeping It Tuned
Capacity planning is the way to get from where you are to where you want to be. If you've got 500 happy, productive end users today, can you make 1,000 end users happy next year?
Capacity planning means assessing the requirements for growth and matching the organization's investment in technology to its future needs. In a growing organization, diminished performance is normally the first sign that a hardware upgrade is necessary.
This could mean a CPU upgrade; it could mean a faster networking infrastructure; it could mean more or higher speed storage devices; it could even mean a change in business processes. Or, according to Zohar Gilad, VP of product marketing for Mercury Interactive, based in Sunnyvale, Calif., performance improvement could mean no more than a software change.
"It's often possible to make a software change without putting money into hardware," Gilad says. "In fact, sometimes all the money in the world wouldn't buy the performance improvement you're asking for because the problem is in the software, not the hardware."
Capacity planning is a two-part process, says Bill Bullen, VP of MB Software and Consulting, based in Camillus, N.Y. First, gain an understanding of your current workload and capacity issues, and then forecast future needs. What is done to the workload today can have a dramatic impact on growth plans.
"There is an appropriate investment growth rate that can be forecast to accommodate an increase in users, an increase in data volume or an increase in system usage," Bullen says. "However, this investment is substantially less if the underlying performance issues are addressed first."
The normal assumption is that a workload is a given, a constant, and there isn't much that can be done about it. The approach taken too often, he says, is to tune the hardware around the workload, and do capacity planning based upon that. But leaving an underlying software problem uncorrected will only ensure that an upgraded system reaches capacity long before it should.
The 80/20 Rule
According to Bullen, there is often an 80/20 rule at work: 80 percent of performance problems are found in 20 percent of the programs. "Usually, it's something like a job doing an open query against four years of history, looking for a piece of information that was entered yesterday," he says. "The obvious solution is to delete three years of history. But because people don't have the tools to look at the job they assume they can't do anything about it."
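Bullen's example, a job that scans four years of history to find something entered yesterday, can be sketched in a few lines. The data layout and function name below are hypothetical, purely to illustrate why shrinking the scanned window (by archiving or deleting old records) cuts the work the job does:

```python
from datetime import date, timedelta

def find_recent_entries(history, keyword, window_days=365):
    """Scan only the last year of history instead of the whole archive.

    `history` is a list of (entry_date, text) tuples -- a stand-in for a
    four-year transaction file. Restricting the scan to a recent window
    mimics the effect of archiving the older three years: the job touches
    a quarter of the records for the same answer.
    """
    cutoff = date.today() - timedelta(days=window_days)
    return [(d, t) for d, t in history if d >= cutoff and keyword in t]
```

The point is not the code itself but the shape of the fix: the answer was always in the most recent slice of data, so the other three years were pure overhead on every run.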
Canandaigua Wine Company, a subsidiary of Canandaigua Brands, located in Canandaigua, N.Y., one of the leading wineries in the United States, acquired an AS/400 9406 Model 530 RISC box in 1997. By the spring of 1998, Jeff Mele, AS/400 systems specialist, discovered a job that was consuming an inordinate amount of CPU time. "It was an invoicing operation that ran every day, processing orders," Mele says. "It was constantly building indexes on top of files. It took five or six hours to run, and the run time was getting dramatically worse."
According to Mele, the invoicing operation delayed the other nightly processing jobs that needed to be done, which would then run into the winery's daily backups. "Since we're in the liquor business we have certain filing requirements that have to be done in a timely manner, or we face fines and penalties. As the systems administrator, I was faced with constant haranguing to speed it up."
Using tools from MB Software, Mele was able to pinpoint the application where the problem lay. Once the problem was identified, an experienced programmer at Canandaigua was able to reduce the run time of the invoicing program to about 20 minutes. "I needed something better than IBM's performance tools to look at individual jobs and analyze what was going on," Mele says.
There are a number of tools available, including good ones from IBM, offered with OS/400. According to Walter Schwane, senior programmer for performance tools at the AS/400 Brand in Rochester, Minn., among IBM's tools is the Management Central function offered with OS/400 V4R3, which provides a real-time look at currently running jobs and CPU utilization. Another, Performance Monitor, provides more detail over greater periods of time. Performance Management/400 monitors system performance over its lifetime.
The IBM performance monitoring tools are an inexpensive, easy-to-use aid to help identify possible problem areas, according to Darry Stansbury, president of Spanish Peaks Computer Services, a consulting group based in Trinidad, Colo.
"Performance Monitor, for example, provides transaction reports," Stansbury says. "It ranks programs in terms of numbers of transactions over the duration of the trace period. It gives the program name, the CPU utilization per transaction, and CPU utilization by program. You can identify which programs are candidates for review."
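As a rough illustration of the kind of ranking Stansbury describes (this is not IBM's actual report format; the sample data and function name are assumptions), a few lines can sort trace samples by total CPU consumed and compute CPU per transaction:

```python
def rank_by_cpu(samples):
    """Rank programs from a trace period by total CPU seconds consumed.

    `samples` maps program name -> (transactions, cpu_seconds), standing
    in for one trace period's data. Returns (program, cpu_seconds,
    cpu_per_transaction) tuples, heaviest CPU consumer first -- the
    candidates for review.
    """
    rows = [(name, cpu, cpu / max(txns, 1))
            for name, (txns, cpu) in samples.items()]
    return sorted(rows, key=lambda row: row[1], reverse=True)
```

A report like this answers "where is the CPU going?" but, as the next point notes, not "why is it going there?"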
What they don't show is what is causing the problem within the program. Most systems people will have a gut feel for which programs need work for performance reasons, Stansbury says, but greater detail can be provided by other tools. Once a job that needs to be looked at is identified, tools available from MB Software, Mercury Interactive, Macro 4, Open Universal (Montreal) and others can analyze what is causing the problem.
"Knowing where to look is the difficulty," says John Bishop, midrange support engineer for Macro4 Inc. (Parsippany, N.J.). "If a customer finds that a particular application program is consistently using more CPU than he would like it to use, he has looked at it and it can't be altered, and the end users are still complaining about a three-second response time when it should be a second and a half, then there's not much more he can do but upgrade the CPU."
Of equal importance to performance tuning is managing disk capacity, according to Spencer Elliott, VP of engineering at MIS Software, based in Oceanside, Calif. "By not managing one, you can influence the other," Elliott says. "If you are not managing the data files and objects on your system, that can affect the amount of time you spend doing backups, saves and restores, and it can adversely affect your production schedule."
Just buying more DASD is a short-term solution, Elliott says. Letting disk consumption grow unchecked only makes the situation worse down the road. "It's as if when your garage gets full, you just buy a new garage instead of cleaning it out," he says.
"A data processing department is responsible for data integrity," Elliott continues. "If you have old data that's not used anymore, then you're not following your responsibility to the company. You need tools that, on a periodic basis, look at the unused objects on disk, show you how fast your DASD is growing and where the growth is occurring."
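A minimal sketch of the periodic sweep Elliott describes might look like the following. The function name and threshold are assumptions, and a real DASD-analysis tool works on AS/400 libraries and objects rather than stream files; the sketch just shows the idea of surfacing stale, space-hungry candidates for archiving:

```python
import os
import time

def stale_objects(root, days=365):
    """List files under `root` not accessed in `days` days, largest first.

    A rough stand-in for a disk-analysis tool: it reports archiving
    candidates by last-access time and size; it deletes nothing itself.
    """
    cutoff = time.time() - days * 86400
    found = []
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            if st.st_atime < cutoff:  # untouched for a year or more
                found.append((path, st.st_size))
    return sorted(found, key=lambda f: f[1], reverse=True)
```

Run periodically and diffed against the previous run, a listing like this also shows where growth is occurring, which is the other half of Elliott's point.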
Canandaigua Wine Company was able to use tools to correct disk problems as well as its performance problems. According to Mele, "We were consuming huge amounts of disk because nobody wanted to manage the data, to decide what needed to be live and what could be archived. In the past we had just thrown more disk at it. A disk analysis tool provides a look at our overall growth and shows me where the biggest consumers are."
For a growing organization a hardware upgrade will be inevitable in the long run. Does that mean software fixes and disk management are only temporary? Yes and no, says Mercury Interactive's Gilad.
"Sometimes these are the only fixes. No hardware solution will help you out," Gilad says. "For example, if you have a problem in a software module on a client, no upgrade to the server will fix it. Plus, the software solution may enable you to postpone an investment decision that will save you money. Why invest now when you can invest later when prices for hardware, CPU and disk have gone down?"
Unfortunately, this approach is too seldom used. "Rarely have I seen an organization tackle bad programs as a focused project," Stansbury adds. "They may put them on a laundry list of projects for their application programmers, behind the other things users have asked for that are given higher priority. They'll say, 'Here are two or three programs I want you to look at when you have a chance.' But they never get a chance."
"In a good capacity planning environment you should monitor transaction levels, response times and CPU utilization at least quarterly," Stansbury says. "That way you'll get an idea of your growth rate, and when a hardware upgrade is unavoidable."
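The quarterly monitoring Stansbury recommends feeds a simple projection: given today's utilization and a measured growth rate, you can estimate when an upgrade becomes unavoidable. The figures and the 80-percent planning ceiling below are hypothetical, chosen only to show the arithmetic:

```python
def quarters_until_capacity(current_util, growth_per_quarter, ceiling=0.80):
    """Estimate quarters remaining before CPU utilization hits a planning
    ceiling, assuming a constant compound growth rate per quarter.

    e.g. a box 50 percent busy today, with load growing 10 percent per
    quarter, crosses an 80 percent ceiling in five quarters.
    """
    quarters = 0
    util = current_util
    while util < ceiling:
        util *= 1 + growth_per_quarter
        quarters += 1
    return quarters
```

Even a back-of-the-envelope forecast like this turns the upgrade from a surprise into a budget line item, which is the point of capacity planning.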
"We're on a growth track," says Canandaigua's Mele. "As the company grows our needs will change. But for now the use of these tools allows us to leverage our investment in technology, and will help us decide when we've exhausted all the other possibilities and an equipment upgrade is necessary. With the rate of change in technology as it is, you could conceivably change every year. But that doesn't make a lot of economic sense."