
A Case for Dynamically Allocated Memory

Few issues with regard to application development have divided programmers more than the use of dynamically allocated memory. Dynamically allocated memory is a technique that causes mainframe application developers to draw a line in the sand. If you’re a mainframe developer, you’re on one side of that line or the other. Generally, within a mainframe shop, there is a specific policy regarding the use of dynamically allocated memory and pointers. Such constructs are either the status quo or abhorred by the entire populace of programmers in today’s mainframe world.

This article looks at the technique of dynamically allocated memory. What exactly is the advantage of using this construct? Is it a real advantage or just one of the many myths surrounding application development? Is the use of dynamically allocated memory just an attempt by programmers to be "cool" or are there genuine cost benefits to be realized with regard to application maintainability?

The Issue: Experience

How many readers of this magazine have maintained large-scale legacy COBOL, IMS, DB2 and/or VSAM, and CICS systems? These systems typically involve a batch and an online component. Many run 24 hours a day, seven days a week. And these systems must be routinely repaired and enhanced to meet new requirements from the user community.

Question: Given that you are responsible for caring for such a system, how many times have you walked into work, before getting your first cup of coffee, only to hear about the latest "Sev1" difficulty which involves exceeding the dimension of a COBOL array or table? Worse yet, how many times have you been beeped at 2 a.m. because a new version of a DB2 table now contains more rows than the number of elements in the application’s COBOL tables?

This type of defect occurs on a regular basis within these COBOL legacy systems, and the fix can be fairly involved. Generally, the use of a COBOL table spans many modules in an application system. When a 0C4 abend occurs because the dimension of an array has been exceeded, the maintainer must find every place in the legacy system where that table is declared and used, and increase the dimension by a sufficient amount. What that sufficient amount should be is often unclear. All such modules must be tested. If the data table causing the difficulty is central to the application, dozens and dozens of modules must be modified.

Bottom line: Exceeding the dimension of a table can be a very costly maintenance item. And such a problem always seems to occur at the wrong time. How many more times are legacy COBOL systems going to be repaired because of this difficulty? How much does the constant repair of this defect cost the typical application maintenance budget? Is it ever realistically possible to determine a maximum table size that does not waste memory, yet is large enough to hold additional data items during the lifetime of a legacy system? Doesn’t your experience with this issue indicate that these legacy systems run on forever? Doesn’t the exceeded array dimension crop up inevitably and repeatedly over time? And isn’t it a pain to repair?

The Theoretical Advantage of Dynamic Allocation

If a COBOL table is dimensioned at 1000 items to hold the anticipated number of items today plus future growth, then the difference between the memory actually used (actual number of items * size of a table item) and the memory declared (1000 * size of a table item) is wasted. Such wasted memory has implications with regard to paging by the operating system. The bigger the declared area for the table, the more pages needed to hold it, the more potential page swaps and the more CPU spent on paging. Oversized tables waste memory, and the processing of such tables can degrade overall system performance. A dynamically allocated area, on the other hand, uses just the amount of memory necessary to hold the data items actually present. A dynamically allocated area is thus efficient with regard to memory usage and operating system paging.

One more subtle advantage of dynamic allocation: The total amount of memory required by a routine at a given point in time may be less than that of an equivalent routine using statically allocated memory. Example 1 (see page 76) shows two subroutines, each statically declaring a 1000-item table; the total memory required to run the code in Example 1 is 2000 * the size of a table element. Example 2 dynamically allocates the first table, uses it, then frees the memory; only afterwards is the second table created and used. The most memory employed at any one time by the program in Example 2 is 1000 * the size of a table element. Hence, routine 2 uses half as much memory as routine 1. The operating system benefits as a result.
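Examples 1 and 2 themselves appear on page 76 and are not reproduced here. The C sketch below is only an analogue of the Example 2 pattern, with an assumed 100-byte table element and a stubbed-out use_table routine; the point is that each table is freed before the next one is allocated, so the peak footprint is one table rather than two:

    #include <stdlib.h>
    #include <string.h>

    #define ITEMS 1000

    struct item { char data[100]; };   /* stands in for "size of a table element" */

    static void use_table(struct item *table, size_t count)
    {
        memset(table, 0, count * sizeof *table);   /* placeholder for real work */
    }

    int main(void)
    {
        /* First table: allocated, used, then returned to the system. */
        struct item *table1 = malloc(ITEMS * sizeof *table1);
        if (table1 == NULL) return 1;
        use_table(table1, ITEMS);
        free(table1);                  /* freed before the second table exists */

        /* Second table: only now does its memory come into existence. */
        struct item *table2 = malloc(ITEMS * sizeof *table2);
        if (table2 == NULL) return 1;
        use_table(table2, ITEMS);
        free(table2);
        return 0;
    }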

The Real Advantage of Dynamic Allocation

The greatest benefit of dynamically allocated memory, from a legacy system maintenance perspective, is code that never has to be modified. The technique is best illustrated by example. In Example 3 (see page 76), the table is dimensioned at run time to exactly the number of data items to be stored, so the size of the table grows with the size of the data and the routine wastes no memory. If the routine’s memory requirement ever exceeds the amount of memory available in the region, the programmer simply increases the region size and reruns the module; the code itself does not change.
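Example 3 is likewise a listing on page 76. As a rough C analogue of the idea (the record layout, names and the use of a file here are assumptions, not the article’s code), the table is dimensioned at run time to exactly the number of items the caller says are present:

    #include <stdio.h>
    #include <stdlib.h>

    struct account { long id; double balance; };   /* illustrative record layout */

    /* The table is sized to the actual item count, so growth in the data never
       requires a code change -- at most a larger region. */
    struct account *load_accounts(FILE *in, size_t count)
    {
        struct account *table = malloc(count * sizeof *table);
        if (table == NULL)
            return NULL;               /* comparable to an 80A: storage exhausted */

        for (size_t i = 0; i < count; i++) {
            if (fread(&table[i], sizeof table[i], 1, in) != 1) {
                free(table);
                return NULL;
            }
        }
        return table;                  /* caller frees, per the rule discussed next */
    }

A caller would determine count first (for instance, from a SELECT COUNT(*) against the driving DB2 table) and free the table when finished with it.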

Note that table1 in Example 3 has been allocated but never freed. Typically, table1 would be freed (if necessary) in the calling routine. If a table is allocated and not freed, an application can run out of memory (the program ABENDs with an 80A). Any table which is allocated and not freed during the course of application execution is freed by the operating system when the application completes. However, a good rule to live by is: If you allocate memory, you should free it. Such a policy guards against running out of memory.

Memory Models and DSECTs

To understand dynamic memory allocation fully, one must understand what goes on behind the scenes when a program allocates memory. The concepts are best understood by considering the PL/I program memory model. COBOL, C, Assembler and FORTRAN programs follow similar principles with regard to memory allocation.

Any job running on MVS is allotted a region to run within. If you specify REGION=100K, MVS waits until 100K is available and then gives it to your job for the duration of execution. Within that 100K, a PL/I program is laid out as follows:

a) The 100K is divided into three separate areas: A, B and C.

b) The PL/I program (load module) is placed into area A.

c) The size of each of areas B and C equals (100K - size of the PL/I load module) / 2.

d) Based, controlled (dynamically allocated) and automatic variables are allocated from memory in area B.

e) File I/O buffers and other OS constructs reside in area C.

When you ALLOCATE a PL/I variable (one declared CONTROLLED or BASED), the memory is obtained from area B. If you ask for more memory than is presently available in area B, you receive an 80A ABEND. Note that when increasing the region size to alleviate the problem, remember that the amount of memory in area B is a function of the program (load module) size as well; it is not a straight one-for-one increase with the region size. For the sake of a complete discussion, static variables are allocated within the load module itself and reside in area A.

The question which remains is: How do you associate a data description with the memory obtained from the operating system? You do this by associating a pointer variable with a data description, or DSECT. A DSECT is an Assembler construct: a data description waiting to be associated with an area of memory. The area of memory is obtained with an ALLOCATE or GETMAIN statement. A pointer variable must be given the address of the allocated memory in order to make the fields of the DSECT accessible to the rest of the program. In PL/I, you do the following:

In Example 4 (see page 76), MYSTRUCT is simply a data description. It has no memory. When the ALLOCATE statement is executed, the operating system yields an area of memory of size 20 from area B and places the address of that memory in APTR. In this way, the field descriptions for FIELD1 and FIELD2 are "overlaid" on, and thereby define, the memory. If you have another DSECT (BASED structure), you can lay it over the allocated memory also. In fact, if you had a key field in the first few positions of several DSECTs, you could decide at run time which type of structure you’re dealing with and take appropriate action.
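Example 4 itself appears on page 76. As a hedged C rendering of the same idea (the two 10-byte fields below are an assumption chosen to add up to the 20 bytes mentioned above, not the actual declarations), the malloc call plays the role of ALLOCATE and the pointer plays the role of APTR:

    #include <stdio.h>
    #include <stdlib.h>

    /* A data description only -- no memory exists until something is allocated. */
    struct mystruct {
        char field1[10];
        char field2[10];
    };

    int main(void)
    {
        /* Obtain 20 bytes and place their address in aptr; the field
           descriptions are then "overlaid" on that memory via the pointer. */
        struct mystruct *aptr = malloc(sizeof *aptr);
        if (aptr == NULL) return 1;            /* comparable to an 80A abend */

        snprintf(aptr->field1, sizeof aptr->field1, "%s", "HELLO");
        snprintf(aptr->field2, sizeof aptr->field2, "%s", "WORLD");
        printf("%s %s\n", aptr->field1, aptr->field2);

        free(aptr);
        return 0;
    }

Casting that same allocated area to a different structure type would be the C equivalent of laying a second BASED structure or DSECT over the memory.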

Language Specifics

PL/I was designed for applications involving dynamically allocated memory. In terms of real-world applications, true PL/I programs apply dynamic memory allocation liberally. But these constructs are doable in other mainframe languages.

In 370 Assembler, one calls the GETMAIN macro to obtain memory dynamically. You specify the amount of storage to obtain (for a table, the size of each item times the number of items). GETMAIN indicates whether the allocation was successful with a return code in Register 15. Once again, the address of the allocated memory is associated with a DSECT, which allows fields to be defined over the allocated area.

In COBOL/CICS, one can call a GETMAIN function which does the same thing as the Assembler macro just described. You pass a pointer variable to the GETMAIN routine and it populates the pointer variable with the address of the allocated memory.

In a pure COBOL context, one must still associate a data description with an area of memory. This is analogous to REDEFINES in COBOL: with a REDEFINES, more than one data description is associated with a single area of memory, and each of those data descriptions (DSECTs, in effect) is a structure. To write true dynamic allocation routines in COBOL, however, one must drop into Assembler to perform the dynamic memory allocation itself. Such an Assembler subroutine would take as arguments a pointer variable to receive the address of the allocated area, the size of an individual item and the number of such items.

In the Assembler subroutine, Register 1 contains the address of the argument list. The subroutine would basically plug the size and number of items into the GETMAIN macro, execute the macro, check for any errors and return the address of the allocated area in the passed pointer variable, along with a return code which the calling COBOL program can check to ensure a successful allocation. The COBOL program would then use the pointer variable in a SET ADDRESS OF statement to establish the link between the memory allocated by the Assembler routine and whatever structure (or table of structures) is desired. The structure(s) would be placed in the LINKAGE SECTION of the COBOL program.
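The article does not show that Assembler subroutine. Purely as a sketch of the interface shape it describes, here is a hypothetical C rendering (the name alloc_table and the return-code values are invented for illustration):

    #include <stdlib.h>

    /* Caller passes a pointer variable, the size of one item and the number of
       items; the routine returns the address through the pointer plus a return
       code the caller can check, as the article describes. */
    int alloc_table(void **out_area, size_t item_size, size_t item_count)
    {
        void *area = malloc(item_size * item_count);   /* stands in for GETMAIN */
        if (area == NULL) {
            *out_area = NULL;
            return 8;          /* nonzero return code: allocation failed */
        }
        *out_area = area;
        return 0;              /* zero return code: success */
    }

In the COBOL caller, the returned pointer would then feed the SET ADDRESS OF ... TO statement to base the LINKAGE SECTION structure on the allocated storage.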

The C language follows the same ideas. C is used on MVS in a number of shops which produce commercial software products for the MVS platform. C is also the language of choice for applications which will run on the mainframe as well as other platforms such as DOS, OS/2, AIX, etc.; C is a relatively portable language. Within C, one calls malloc to allocate memory and free to return memory to the operating system. malloc returns a pointer to the allocated memory, and that pointer can be of a particular type. Then, one can iterate through a set of allocated items by simply coding ptr++, because C keeps track of pointer types.

A struct mystruct pointer, as displayed in Example 4, would be incremented by 20 bytes each time the ++ operator is applied. Programmers can therefore step through a set of dynamically allocated, contiguous items using nothing more than a pointer variable and the ++ operator.
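A minimal, self-contained illustration of that arithmetic, reusing the assumed 20-byte struct mystruct sketched earlier (not the article’s actual Example 4 declarations):

    #include <stdlib.h>
    #include <string.h>

    struct mystruct { char field1[10]; char field2[10]; };   /* 20 bytes, as above */

    int main(void)
    {
        size_t count = 1000;
        struct mystruct *table = malloc(count * sizeof *table);
        if (table == NULL) return 1;

        /* ptr++ advances by sizeof(struct mystruct) -- 20 bytes here -- because
           C scales pointer arithmetic by the pointed-to type. */
        for (struct mystruct *ptr = table; ptr < table + count; ptr++) {
            memset(ptr->field1, ' ', sizeof ptr->field1);
            memset(ptr->field2, ' ', sizeof ptr->field2);
        }

        free(table);
        return 0;
    }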

Take a Stand

The advantages of using dynamic memory allocation to reduce maintenance costs are clear, and the technique can result in more efficient applications with regard to operating system overhead. Dynamic memory allocation constructs are readily available in PL/I and C and are routinely used to create dynamically extensible aggregates within applications. Simply put, you won’t be beeped in the middle of the night because you’ve exceeded an array dimension in the system. The risks of employing dynamic memory constructs are minimal if you understand how the constructs operate.

On the open-systems side, C++ and Java routinely allocate and deallocate memory. Dynamic memory allocation has even shown up in the latest releases of FORTRAN 90. Given this state of affairs, the COBOL programming language seems like the odd man out. And legacy COBOL applications would benefit most by the introduction of some dynamic memory allocation constructs. Isn’t it about time that COBOL joined the rest of the world with regard to this issue? If you answer yes to this question, contact your local COBOL compiler writer and let him know how you feel. Eventually, the majority of the world’s COBOL programmers may come to ALLOCATE and FREE any time they please. And we can finally forget about those beepers going off at 2 a.m. due to "table dimension exceeded."

About the Author:

Dick Brodine has been a teacher, writer and software developer on mainframes and other platforms for 21 years. He can be reached at vendrpb@us.ibm.com.
