
Block Sizes: What You Don’t Know Will Hurt You

Poor blocking of sequential files can have a significant, yet often hidden, negative impact on batch workloads. Often a file performs dismally even when the JCL has been coded correctly. This article examines what causes files to be poorly blocked, shows how to quickly and easily identify poorly blocked files, and offers suggestions on how to fix them.

In today's computer center, tasks such as balancing workloads, monitoring response time, and fixing immediate storage problems take up much of the performance analyst's and storage manager's time. Little attention is paid to the lowly batch world, especially if jobs are completing successfully and no one is complaining. This was the situation in our shop, at least until increasing volume caused the batch window to begin impacting our online availability. An efficiency effort was initiated in an attempt to gain back some batch clock time.

Taking a "top down" approach, we began by examining the jobs that had experienced the largest increases in run time. A few of the jobs in question seemed to be performing an inordinate number of EXCPs to sequential disk files. Often the file in question was deleted in a subsequent step, so looking at the file on DASD was not possible. The JCL was coded correctly (BLKSIZE=0), but we suspected the program was ignoring it. To quantify how many files were not being correctly blocked, several days' worth of SMF data was examined for inefficient DASD block sizes. SMF data was chosen because it is the most complete source for this information; VTOC-based sources, such as DMS/OS or ISPF 3.4, are incomplete because the VTOC shows only those datasets that exist at the time of the analysis and misses "transient" files. Below is part of a SAS/MXG program to identify incorrectly blocked files:

DATA BADBLK ;                         /* step name is illustrative        */
   SET PDB.TYPE1415 ;
   OPTBLK = 27998 ;                   /* half-track block size on a 3390  */
   IF DISP = 'NEW' ;
   IF DEVICE = '3390' ;
   IF SUBSTR(DSNAME,1,1) = 'P' ;      /* only production files            */
   IF BLKSIZE + LRECL LT OPTBLK OR    /* room for another record, or      */
      BLKSIZE GT OPTBLK ;             /* block exceeds half track         */

PROC SORT ; BY DESCENDING OLDEXCPS ;
   /* ... remainder of the program (reporting) omitted ... */
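The NEWBLK and NEW-EXCP columns in the report below are estimates of each file reblocked at half track. That part of the program is not shown in the excerpt above, but arithmetic along these lines (a sketch, with assumed variable names) reproduces the report's figures almost exactly:

   /* Sketch: the largest block that holds a whole number of      */
   /* records and still fits twice on a 3390 track (27998 bytes), */
   /* and the approximate I/O count after reblocking the data.    */
   NEWBLK  = FLOOR(OPTBLK / LRECL) * LRECL ;
   NEWEXCP = CEIL(OLDEXCPS * BLKSIZE / NEWBLK) ;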

To our surprise, the MXG analysis showed that there were a significant number of data sets that needed attention. Results were sorted by descending EXCPs, so that by working from the top down we could effect the largest change for the least effort. Now, at this point you may be thinking that this situation wouldn’t occur in your shop. My advice is to dump a few days’ SMF and see for yourself. Below is the output from an analysis run.

JOB       FREQ  DSNAME                                 DDNAME    RECFM  LRECL  BLKSIZE  NEWBLK  OLD-EXCP  NEW-EXCP
PAIU109C     3  PBCBSMN.AIU9725.PRD.AIUZPF10.TEMPPROD  TEMPPROD  FB       204      204   27948   1205829      8802
PCRB1841     9  PBCBSMN.CRB9842.STR.CRBREDL            CRBREDL   FB        43       43   27993    968436      1494
PMCS1417     2  PBCBSMN.MCS9749.MCS1417.EXTRACT.GXXX   SELECTED  VB      6392     6396   25568    590380    196794
PAIU108A     3  PBCBSMN.AIU9725.PRD.BLSHLDO.NODUPS     ONSF320O  FB       320      320   27840    480189      5520
PSTR152R     4  PBCBSMN.STR9822.PRODB.ITSXREF.SCCFICN  OUTFILE   FB        34       34   27982    415492       508
PAIU108G     3  PBCBSMN.AIU9725.PRD.AIUPZ002.UB92EDTA  OUB92O    FB       192      192   27840    302100      2085
S005411G     3  PBCBSMN.MCS9749.A05411.SELECTED        SELECTED  VB      6392     6396   25568    255567     85191
PCIS187D     3  BCBSM.CIS.CIS38001.TEMP                CIS3800   FB       400     6000   27600    241002     60252
PAIU108A     3  PBCBSMN.AIU9725.PRD.BLSHLDN.NODUPS     ONSF320N  FB       320      320   27840    232731      2676
PPAR192M     1  PBCBSMN.PAR9773.DELETE.PROV            PAY       FB        80       80   27920    204442       586
PSTRD23E     2  BCBSMV.STR.PRODB.GCLOG.BK.GXXXXV00     BKUPFILE  FB      1718     1718   27488    196292     12270
PMCS1433     2  PBCBSMN.MCS9749.MCS.MNMBR.REJECT.GXXX  REJECTO   FB       350      350   27650    180908      2290
PBCS156A    12  PBCBSMN.BCS9939.BCSSCCT                TEMPFIL1  FB         7        7   27993    165204        48
PAIU108R     2  PBCBSMN.AIU9725.PRD.AIUPZ024.OUTPAPER  OUTPAPER  FB       133      133   27930    163412       780
PAIU108R     2  PBCBSMN.AIU9725.PRD.AIUPZ023.VOUTFILE  VOUTFILE  FB       155      155   27900    141588       788
PPAR192M     1  PBCBSMN.PAR9773.PRVFILE.BCBSA.NEW      NEWPROV   FB       800      800   27200    109618      3225
PBKP175A    12  PBCBSMN.SEC9815.SYS1.RACF.BKUP.GXXX0   SYSUT2    F       4096     4096   24576    108000     18000
PAPL186D     2  BCBSM.APL.APLALL                       APLALL    FB       391     6256   27761    102722     25682
PSTRE40D     4  PBCBSMN.STR9822.PRODB.UPINPAYD.GXXX    UPINPAYD  FB       100      100   27900     88928       320

Figure 1: Output of the SAS program that identified poorly blocked files.

Note the block sizes on the report. A block size of seven? What was going on?! It turns out there are two primary causes of BLKSIZE inefficiency: incorrectly coded JCL and programs that do no record blocking. Today, with system-determined block sizes, poor blocking caused by incorrectly coded JCL is less common than it was in the past. It was puzzling that the files with the worst block sizes had correctly coded JCL (BLKSIZE=0). The problem with these files turned out to be that the creating program lacked a file blocking statement, invariably the result of an oversight by the creating programmer. Omitting the BLOCK CONTAINS 0 RECORDS clause causes a file to be created with no record blocking at all (i.e., RECFM=F, one record per block), a very inefficient way to process a sequential file: every block on DASD carries fixed inter-block overhead, so tiny blocks waste most of each track and multiply the I/O count. As stated in the VS COBOL II Application Programming Guide:

In a COBOL program, you can establish the size of a physical record with the BLOCK CONTAINS clause. If you do not use this clause, the compiler assumes that the records are not blocked. Blocking QSAM files on disk can enhance processing speed and minimize storage requirements.

This is quite an understatement!

The fix for this problem is simple and straightforward. For COBOL programs, add a BLOCK CONTAINS 0 RECORDS clause to the FD (file description) entry for the errant file and recompile the program. For Easytrieve programs, code DISK FB (lrecl blksize) on the FILE statement or, better yet, change the system option BLOCK0 to 'A' (use the system-determined block size).
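To make the COBOL fix concrete, here is a minimal sketch of a corrected file description; the SELECT, DD, and record names are hypothetical, and the BLOCK CONTAINS clause is the only part that matters:

       INPUT-OUTPUT SECTION.
       FILE-CONTROL.
      *    FILE AND DD NAMES BELOW ARE HYPOTHETICAL
           SELECT OUT-FILE ASSIGN TO OUTFILE.
       DATA DIVISION.
       FILE SECTION.
       FD  OUT-FILE
      *    THE CLAUSE BELOW IS THE ONE THE ERRANT PROGRAMS OMIT.
      *    IT LETS THE SYSTEM DETERMINE THE BLOCK SIZE (HONORING
      *    BLKSIZE=0 IN THE JCL), GIVING RECFM=FB INSTEAD OF F.
           BLOCK CONTAINS 0 RECORDS
           RECORDING MODE IS F
           RECORD CONTAINS 80 CHARACTERS.
       01  OUT-REC                      PIC X(80).

With the clause in place and BLKSIZE=0 (or no BLKSIZE at all) in the JCL, the system chooses the optimum half-track block size at allocation time.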

The results of performing this simple change were amazing. As a sample case, I benchmarked the tuning of the following file, created by an Easytrieve program.

File name: PBCBSMN.STR9822.PRODB.ITSXREF.SCCFICN

Creating Job Name: PSTR152R Run Frequency: Daily

SUMMARY

                                    SPACE (cyls)    EXCPs (I/Os)    CPU TIME (min)   CLOCK TIME (min)
Unblocked (LRECL=34)                         741         920,000               1.2               30.7
Blocked at half-track (standard)              37          10,632               .14                3.5
Savings                           (95% reduction) (99% reduction)   (88% reduction)    (95% reduction)

Figure 2: Performance improvement from correctly blocking an Easytrieve file.

Another example: Job PCRB1841 was identified as needing a BLOCK CONTAINS 0 RECORDS clause added for file PBCBSMN.CRB9842.STR.CRBREDL. The programmers made the program change, and the dramatic results are shown below. It is important to note that the performance improvements came in spite of a 13 percent increase in volume.

File name: PBCBSMN.CRB9842.STR.CRBREDL

Creating Job Name: PCRB1841 Run Frequency: Monthly

SUMMARY

                                    SPACE (cyls)    EXCPs (I/Os)      # Records   CLOCK TIME (min)
Unblocked (LRECL=43)                        2588       4,142,000      1,107,944              149.2
Blocked at half-track (standard)             183       1,061,000      1,250,200               73.3
Savings                           (93% reduction) (74% reduction) (13% increase)    (51% reduction)

Figure 3: Performance improvement from correctly blocking a COBOL file.

Changes that require a program recompile may require the involvement of the application development or support areas, but operators or operations analysts can easily fix incorrectly coded JCL, as shown in the sketch below. In our shop we enlisted the assistance of a second-shift operations analyst who worked on the problem datasets as time allowed. When we reran the report two months later, we were pleased to find far fewer problems. We are still fixing errant block sizes (tape, too) as part of our general tuning responsibilities, and we recently began using allocation software to force optimum blocking of all test files, which eliminated the problems caused by incorrectly coded JCL.
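For the JCL-side problems, the fix is simply to stop hard-coding a block size. A before-and-after sketch, with hypothetical dataset and DD names:

//* BEFORE: HARD-CODED, BADLY UNDERSIZED BLOCK SIZE
//OUTFILE  DD  DSN=PROD.SAMPLE.FILE,DISP=(NEW,CATLG),
//             UNIT=SYSDA,SPACE=(CYL,(50,10)),
//             DCB=(RECFM=FB,LRECL=80,BLKSIZE=80)
//*
//* AFTER: BLKSIZE=0 (OR OMIT BLKSIZE ENTIRELY) SO THE SYSTEM
//* DETERMINES THE OPTIMUM HALF-TRACK BLOCK SIZE
//OUTFILE  DD  DSN=PROD.SAMPLE.FILE,DISP=(NEW,CATLG),
//             UNIT=SYSDA,SPACE=(CYL,(50,10)),
//             DCB=(RECFM=FB,LRECL=80,BLKSIZE=0)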

To proactively address the problem of programs missing a blocking statement, we sent a "Tech Tidbit" communication to the programming staff. The response was favorable; most programmers did not know the effect of omitting a blocking statement.

Before you assume that your shop has no hidden performance problem caused by inefficient file block sizes, run a quick analysis. If it shows you have some poorly blocked files, start a block size tuning effort. A BLKSIZE tuning endeavor can yield huge savings of CPU and DASD resources in your shop, along with significant reductions in batch run times, all for very little effort. Remember, there's no I/O like no I/O!

About the Author: Wayne Schumack is a Principal Architect for Blue Cross Blue Shield of Minnesota. He has over twenty years' experience in data processing. He can be reached via e-mail at [email protected].
