Block Sizes: What You Don’t Know Will Hurt You
Poor blocking of sequential files can have a significant, yet often hidden, negative impact on batch workloads. Oftentimes a file performs dismally even if the JCL has been coded correctly. This article will examine what causes files to be poorly blocked. It will also show how to quickly and easily identify poorly blocked files and offer suggestions on how to fix them.
In today's computer center, tasks such as balancing workloads, monitoring response times, and fixing immediate storage problems take up much of the performance analyst's and storage manager's time. Often, little attention is paid to the lowly batch world, especially if jobs are completing successfully and no one is complaining. That was the situation in our shop, at least until increasing volume caused the batch window to begin impacting our online availability. An efficiency effort was initiated in an attempt to gain back some batch clock time.
Taking a "top down" approach, we began by examining the jobs that had experienced the largest increases in run time. A few of the jobs in question seemed to be performing an inordinate number of EXCPs to sequential disk files. Often the file in question was deleted in a subsequent step, so looking at the file on DASD was not possible. The JCL was coded correctly (BLKSIZE=0), but we suspected the program was ignoring it. To quantify how many files were not being correctly blocked, several days' worth of SMF data was examined for inefficient DASD block sizes. SMF data was chosen because it is the most complete source for this information (VTOC-based information, such as DMS/OS or ISPF 3.4, is incomplete because the VTOC shows only those data sets that exist at the time of the analysis and misses "transient" files). Below is part of a SAS/MXG program to identify incorrectly blocked files:
SET PDB.TYPE1415 ;              /* SMF type 14/15 (data set activity) records */
IF DISP='NEW' ;                 /* newly created data sets only */
IF DEVICE='3390' ;
IF SUBSTR(DSNAME,1,1)='P' ;     /* only production files */
IF BLKSIZE+LRECL LT OPTBLK OR   /* another record would still fit in the block, or */
   BLKSIZE GT OPTBLK ;          /* the block size exceeds the optimum */
PROC SORT; BY DESCENDING OLDEXCPS;
To our surprise, the MXG analysis showed that there were a significant number of data sets that needed attention. Results were sorted by descending EXCPs, so that by working from the top down we could effect the largest change for the least effort. Now, at this point you may be thinking that this situation wouldn’t occur in your shop. My advice is to dump a few days’ SMF and see for yourself. Below is the output from an analysis run.
Figure 1: Output of SAS program that identified poorly blocked files.
Note the block sizes on the report. A block size of seven? What was going on?! It turns out there are two primary causes of BLKSIZE inefficiency: incorrectly coded JCL and programs that do no record blocking. Today, with system-determined block sizes, poor blocking resulting from incorrectly coded JCL is less common than it was in the past. It was puzzling that the files with the worst block sizes had their JCL coded correctly (BLKSIZE=0). It turns out that the problem with these files is that the creating program lacks a file blocking statement, always as a result of an oversight by the creating programmer. Omitting the BLOCK CONTAINS 0 RECORDS statement causes the file to be created with no record blocking (i.e., RECFM=F!), a very inefficient way to process a sequential file: an 80-byte-record file written unblocked takes one physical block per record, whereas half-track blocking on a 3390 packs 349 such records into each block. As stated in the VS COBOL II Application Programming Guide:
In a COBOL program, you can establish the size of a physical record with the BLOCK CONTAINS clause. If you do not use this clause, the compiler assumes that the records are not blocked. Blocking QSAM files on disk can enhance processing speed and minimize storage requirements.
This is quite an understatement!
The fix for this problem is simple and straightforward. For COBOL programs, add a BLOCK CONTAINS 0 RECORDS statement to the FD (file description) entry for the errant DD and recompile the program. For Easytrieve programs, add an FB(lrecl,blksize) parameter to the FILE ... DISK statement or, better yet, change the system option BLOCK0 to 'A' (use the system-determined block size).
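For the COBOL case, a minimal sketch of the corrected FD entry might look like the following (the file and record names here are hypothetical, and the record layout is illustrative only):

```cobol
      * Without BLOCK CONTAINS 0 RECORDS, the compiler assumes the
      * records are unblocked (RECFM=F). With it, the block size is
      * taken from the JCL, and coding BLKSIZE=0 there lets the
      * system pick an optimum block size (e.g. half-track on 3390).
       FD  REPORT-FILE
           BLOCK CONTAINS 0 RECORDS
           RECORD CONTAINS 80 CHARACTERS
           RECORDING MODE IS F.
       01  REPORT-RECORD      PIC X(80).
```

After the recompile, the data set should be created as RECFM=FB with a system-determined block size, rather than RECFM=F with the block size equal to the LRECL.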
The results of performing this simple change were amazing. As a sample case, I benchmarked the tuning of the following file, created by an Easytrieve program.
File name: PBCBSMN.STR9822.PRODB.ITSXREF.SCCFICN
Creating Job Name: PSTR152R Run Frequency: Daily
[Table comparing EXCPs (number of I/Os), CPU time (minutes), and clock time (minutes) before and after the file was blocked at half-track (standard); figures not reproduced.]
Figure 2: Performance improvement caused by correctly blocking the Easytrieve file.
Another example: Job PCRB1841 was identified as needing a BLOCK CONTAINS 0 RECORDS statement added for file PBCBSMN.CRB9842.STR.CRBDETL. The programmers made the program change, and the dramatic results are shown below. It is important to note that the performance improvements came in spite of a 13% increase in volume.
File name: PBCBSMN.CRB9842.STR.CRBREDL
Creating Job Name: PCRB1841 Run Frequency: Monthly
[Table comparing EXCPs (number of I/Os) and clock time (minutes) before and after the file was blocked at half-track (standard); figures not reproduced.]
Figure 3: Performance improvement caused by correctly blocking the COBOL file.
Changes that require a program recompile may need the involvement of the application development or support areas, but operators or operations analysts can easily fix incorrectly coded JCL. In our shop we enlisted the assistance of a second-shift operations analyst who worked on the problem data sets as time allowed. When we reran the report two months later, we were pleased to find far fewer problems. We are still fixing errant block sizes (on tape too) as part of our general tuning responsibilities. We recently began using allocation software to force optimum blocking of all test files; this eliminated the problems caused by incorrectly coded JCL.
To proactively address the problem of programs missing a blocking statement, we sent a "Tech Tidbit" communication to the programming staff. The response was favorable; most programmers did not know the effect of omitting a blocking statement. Below is the procedure the analysts used to fix data sets that appeared on the block size tuning report.
Before you assume that your shop does not have a hidden performance problem caused by inefficient file block sizes, run a quick analysis. If the analysis shows you have some poorly blocked files, start up a block size tuning effort. A BLKSIZE tuning endeavor can yield huge savings of CPU and DASD resources in your shop, along with significant reductions in batch run times, all for very little effort. Remember, there's no I/O like no I/O!
About the Author: Wayne Schumack is a Principal Architect for Blue Cross Blue Shield of Minnesota. He has over twenty years experience in data processing. He can be reached via E-mail at Wayne_A_Schumack@bluecrossmn.com.