In-Depth

Accessing OS/390 Data from Windows Applications: Using an Authorized Subsystem as a Server

The ability to transparently access data in realtime between mainframes and distributed systems has been an elusive goal of the IT community for years.

Nor is the underlying reality likely to change: the mainframe’s performance, stability, security and scalability make it the most popular platform for storing both historical and current data. IDC estimates that 70 percent of all business data is stored on OS/390 mainframes. Fully 75 percent of that (about 50 percent of the total) resides in Virtual Storage Access Method (VSAM) files, representing 25 years of accumulated legacy data.

Getting realtime access to that information in a familiar, easy-to-use form is becoming more and more important to companies doing business on "Internet time," with its requirements for high levels of availability and for rapid retrieval of current information. The explosive growth of dotcoms, the movement of brick-and-mortar retailers online and the use of Internet/intranet technologies to conduct e-business are fueling this phenomenon.

The challenge is that this growth is based on open systems platforms, typically Windows NT or UNIX. Open systems, with their inherent support for TCP/IP networking, low cost and ease of implementation, are excellent choices for deploying solutions using distributed architectures.

But they fall short where the mainframe excels – in the centralized storage, management and dissemination of data.

The industry has been trying to marry the mainframe and distributed-systems environments for the past decade. Various offerings have been developed and presented as that holiest of grails, Universal Data Access.

Unfortunately, realtime Universal Data Access has remained unattainable. Different methodologies achieve part of the goal, but each falls short on important criteria: the access is not instantaneous, the data must be moved out of its core database, or the solution is expensive, proprietary, time-consuming or resource-intensive.

Clearly, a new approach is needed. This article describes such an approach, an enabling technology that uses an authorized OS/390 subsystem as the engine for high-performance access. Such a subsystem would interface at the operating system level to directly read stored data at the file or record level. The solution would operate transparently, taking native data, interpreting it and presenting it to distributed systems in industry-standard formats.

To understand why this would be the most effective way to achieve realtime data access between mainframes and open systems environments, it helps to contrast the proposed technology with present solutions.

Present Solutions

The current set of data-access solutions includes such diverse strategies as simple file extract, replication, SQL-based relational database operations, server-resident transformation engines and Internet-specific FTP solutions. Additionally, a number of announced "architectures" serve as marketing umbrellas for hardware storage sales, promising seamless integration and Universal Data Storage.

Unfortunately, these "solutions" do not always live up to their promises. A case in point is middleware. Presently, most organizations use middleware software to move, convert and/or translate information between software programs. The principal forms of middleware are database middleware, distributed transaction processing, remote procedure calls (RPCs) and message-oriented middleware (MOM). All of these solutions have their place. However, for various reasons, none of them (alone or together) completely meet the challenge of Universal Data Access.

One reason is that, for the sake of simplicity, all of these methods operate at the higher levels of the OSI model. The higher the level of abstraction, the more "wrapping" of the functionality required, and the slower the process. True realtime Universal Data Access requires that the process forgo the higher levels of abstraction and get closer to the data store, where speed is maximized.

Another reason is the relative maturity of the technologies. As a technology evolves, it inevitably becomes encumbered by interface requirements, protocols, standards and conditions that affect its capabilities and flexibility.

Also, most of these solutions are concerned with the structure of the data, rather than its content. True Universal Data Access requires a technology that focuses on the information itself, not on its format.

Emerging Enabling Technology

Ideally, what is needed is a solution that allows an enterprise to easily implement new open systems applications, instantly access multiple data sources throughout the organization, and present integrated realtime data to the application.

Such a solution would allow organizations to share data among all enterprise platforms – OS/390 mainframes, Windows, UNIX and Linux machines – with zero latency. This technology would provide transparent, realtime access to data wherever it resides, on mainframe storage, in server file systems, or on desktops.

In such an environment, an organization could receive realtime data from the mainframe and use that data in an off-the-shelf application, without the time-consuming and costly process of migrating or replicating that data. Source data could come from any type of platform and could come from multiple sources simultaneously. The data would be transformed and transported, but not moved from its storage location. The enabling technology would provide just the specific data records or fields required by the requesting application, integrating data from various sources if necessary.

This type of solution would have to support the major advantages mainframes have given businesses for years. The method would have to be secure. It would have to maintain the integrity of the data by keeping it in one place (not moving it) and keeping it in its native form (not modifying it). The method would have to be cost-effective, relatively easy to implement, and scalable to accommodate growth.

Subsystem Requirements

Using an OS/390 authorized subsystem as a server could meet these criteria. After all, it is only reasonable that the best way to access mainframe data is by using mainframe technology. An authorized subsystem would interface at the operating system level to directly read the stored data at the file or record level. Such an approach would meet the seven essential requirements of the mainframe environment.

24x7 Availability. The mainframe is known for its reliability and availability. An industrial-strength subsystem poses no threat to the computer’s bullet-proof operation.

Read-Only Environment. The information would be interpreted, transported and seamlessly mapped for the program’s use. The data, however, would stay on the host mainframe system. In this read-only environment, data integrity would be guaranteed.

Security. The majority of the processing would be done on the mainframe. Access would be controlled by the OS/390 operating system, with all its attendant benefits. Existing System Authorization Facility (SAF) mainframe processes, such as Resource Access Control Facility (RACF), would guarantee the integrity and security of the corporate data, without regard to the accessing platform.
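
To make the idea concrete, here is a minimal sketch in Python (illustrative only; the function and callback names are invented, and a real authorized subsystem would issue SAF/RACF calls on the host rather than a Python callback) of gating every read behind an authorization check, so no record is served unless the existing security rules permit it:

    # Illustrative only: every read is gated by a SAF-style authorization check.
    # The check_access callback stands in for the real SAF/RACF call an
    # authorized subsystem would issue on the host.

    from typing import Callable, Iterable

    def serve_records(user: str, dataset: str,
                      read_records: Callable[[str], Iterable[bytes]],
                      check_access: Callable[[str, str], bool]) -> Iterable[bytes]:
        """Return records from `dataset` only if `user` holds READ access to it."""
        if not check_access(user, dataset):      # e.g., a RACF dataset-profile check
            raise PermissionError(f"{user} is not authorized to read {dataset}")
        # Read-only by design: records are returned as-is; nothing is written back.
        return read_records(dataset)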

Operator Control. The solution would support the traditional role of mainframe systems management. Storage administration, security, performance management and capacity-planning procedures and methodologies would not be affected. The subsystem could be dynamically installed without requiring an IPL of the system. The technology would operate concurrently with other S/390 applications, such as Customer Information Control System (CICS), DB2 and Information Management System (IMS).

Data Sharing. Data could be shared, regardless of its source. Normal data-sharing conventions would be supported. IBM applications would be able to access server-based storage using standard IBM access methods and procedures. Conversely, Windows and UNIX users could use their familiar interfaces to access and manipulate data, regardless of its source.

Scalability. A scalable architecture would allow the system to expand from multiple clients accessing a single mainframe through a single server to a number of inter-networked clients, servers and mainframes. The technology would have to meet virtually any need in the enterprise environment, from simple departmental data sharing to an enterprisewide data-sharing solution. To provide this flexibility, the technology would need to be scalable in the number of threads on the mainframe, as well as in the number of servers, clients and access methods.

Performance. By using mainframe processing power to perform the bulk of the operation, the enabling technology would virtually guarantee instantaneous, realtime access to data. Because the subsystem isolates the direct-access operations, it would have no impact on other work. Requests would not pass through other mainframe applications and would not be subject to queuing delays.

The access would be synchronous and direct, resulting in zero latency. The technology would talk directly to the data store, using no intervening processes that would slow down the operation.

Elements of the New Technology

To use the subsystem to achieve Universal Data Access, a multitiered system – using the mainframe, servers and clients – is envisioned for maximum effectiveness and scalability.

Agents. Each tier of the system would run a software agent. Each agent would be aware of the platform hardware, operating system and application interface characteristics.

The agents would abstract the underlying operating system and hardware, and be responsible for ensuring security, providing communication, acquiring and transforming data, and presenting that data to the application in a ready-to-use format. The agents would enable new applications to have realtime access to existing data from all servers running agents, wherever they are located on the system. The agents would be able to simultaneously access data from multiple files (or multiple computers), and integrate that data with information from other data sources.

Each agent would be capable of being a data provider (the needed data resides in its local storage) or a data consumer (the needed data resides somewhere else). This would allow a distributed client to directly access data located on a mainframe, and also allow the mainframe to access data located on a client.
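
A rough sketch of that dual role might look like the following (Python is used here purely for illustration; the class and method names are hypothetical, not part of any actual product):

    # Illustrative sketch only: each agent can act as a data provider (serving
    # records it holds locally) or as a data consumer (forwarding the request
    # to a peer agent that does hold the data). All names are hypothetical.

    from typing import Iterable

    class DataAgent:
        def __init__(self, local_datasets: set, peers: dict):
            self.local_datasets = local_datasets   # datasets stored on this platform
            self.peers = peers                     # agents on other platforms, keyed by host

        def get_records(self, dataset: str) -> Iterable[bytes]:
            if dataset in self.local_datasets:     # provider role: read local storage
                return self.read_local(dataset)
            for peer in self.peers.values():       # consumer role: delegate to a remote agent
                if dataset in peer.local_datasets:
                    return peer.get_records(dataset)
            raise FileNotFoundError(dataset)

        def read_local(self, dataset: str) -> Iterable[bytes]:
            raise NotImplementedError              # platform-specific: VSAM, file system, etc.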

Templates. User-defined templates would be used to give instructions to the agents, informing the agents of the data’s location, the targeted records and fields and the desired format. The reusable templates would be stored on the server and could be used by any client that connects to the server. System administrators or application programmers could create and distribute the transportable, easy-to-operate templates.
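
One plausible shape for such a template is a simple declarative description of where the data lives and how its fields should be interpreted. The sketch below is illustrative only; the dataset name, field layout and formats are invented for the example:

    # Hypothetical template: tells the agents where the data resides, which
    # records and fields to extract, and how each field should be interpreted.

    ORDER_TEMPLATE = {
        "source": "PROD.ORDERS.VSAM",      # mainframe dataset (illustrative name)
        "record_length": 80,
        "fields": [
            {"name": "order_id", "offset": 0,  "length": 8,  "type": "ebcdic_text"},
            {"name": "customer", "offset": 8,  "length": 20, "type": "ebcdic_text"},
            {"name": "amount",   "offset": 28, "length": 5,  "type": "packed_decimal", "scale": 2},
        ],
        "output_format": "json",           # industry-standard format handed to the client
    }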

Bridge Interfaces. The emerging solution would need to adhere to the newest and most technologically advanced architectures and standards, including HTML, Dynamic HTML, XML and Java. The Windows NT Server components of the solution would need to be represented as OLE DB and ActiveX Data Objects (ADO), which represent Microsoft’s newest implementations of component software developed using the Component Object Model (COM). (This allows numerous Windows programs, including all Microsoft Office products, to directly view data without further manipulation.)

Bridge interfaces would be used at the client to assure adherence to standards. These interfaces would convert the data to a standard that the client application could use. Because of this support for standards, a proprietary API would not be needed.
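
The kind of conversion involved can be sketched in a few lines. The example below assumes the hypothetical template above and shows, in simplified form, how raw mainframe bytes (EBCDIC text and packed-decimal numbers) might become standard, ready-to-use values; a real bridge would handle many more data types:

    # Simplified field decoding: raw mainframe bytes -> standard values.

    def decode_packed_decimal(data: bytes, scale: int = 0) -> float:
        """Decode COMP-3 packed decimal: two digits per byte, sign in the last nibble."""
        digits = ""
        for byte in data[:-1]:
            digits += f"{byte >> 4}{byte & 0x0F}"
        digits += str(data[-1] >> 4)
        sign = -1 if (data[-1] & 0x0F) == 0x0D else 1
        return sign * int(digits) / (10 ** scale)

    def decode_record(raw: bytes, template: dict) -> dict:
        """Turn one raw record into a plain dictionary according to the template."""
        row = {}
        for field in template["fields"]:
            chunk = raw[field["offset"]: field["offset"] + field["length"]]
            if field["type"] == "ebcdic_text":
                row[field["name"]] = chunk.decode("cp037").strip()   # EBCDIC -> text
            elif field["type"] == "packed_decimal":
                row[field["name"]] = decode_packed_decimal(chunk, field.get("scale", 0))
        return row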

The interconnection among the three tiers (mainframe, client and server) would be via a standard TCP/IP network. Using TCP/IP would enable the use of standard connections such as Ethernet, Fast Ethernet, Gigabit Ethernet or Remote Access Service (RAS).
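
As a rough illustration of what a request over such a network might look like, the sketch below sends a length-prefixed JSON request to a server-side agent and reads back a JSON reply. The port number and message shape are invented for the example and do not describe any defined protocol:

    # Hypothetical exchange between a client agent and a server agent over
    # plain TCP: a length-prefixed JSON request, then a JSON reply.

    import json
    import socket
    import struct

    def request_records(host: str, template_name: str, port: int = 4000) -> list:
        request = json.dumps({"template": template_name}).encode("utf-8")
        with socket.create_connection((host, port)) as sock:
            sock.sendall(struct.pack("!I", len(request)) + request)   # 4-byte length prefix
            (length,) = struct.unpack("!I", _read_exact(sock, 4))
            return json.loads(_read_exact(sock, length).decode("utf-8"))

    def _read_exact(sock: socket.socket, n: int) -> bytes:
        buf = b""
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("connection closed before full message arrived")
            buf += chunk
        return buf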

Such a system could operate in virtually any open systems environment, including Windows, UNIX or Linux.

Operation. To fully appreciate how desirable this approach would be, it helps to contrast it with the present process, which typically works like this:

• A procedure is defined to move data off the mainframe.

• The data is moved to an extract file.

• The extract is moved to the new platform by FTP, by terminal emulation or on physical media.

• The data, now off the mainframe, is backed up locally.

• An application now translates, transforms or formats the data for client use.

• The client application uses the data, which by now can easily be out of date.

The proposed solution would operate like this (a client-side sketch follows the list):

• One-time templates are created, defining how the system will interpret the data.

• The client loads and executes the template.

• The data is interpreted and presented to the client application.

• The client application uses realtime data.
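
Tying the earlier sketches together, the client side of that flow might look something like this (illustrative only; it reuses the hypothetical ORDER_TEMPLATE and request_records() defined above, and the host name is invented):

    # Hypothetical end-to-end client flow: load a template, request the records
    # it describes from the host-side agent, and hand realtime rows straight to
    # the application. The data itself never leaves mainframe storage.

    def fetch_orders() -> list:
        # One TCP/IP round trip to the host agent; only interpreted records travel.
        return request_records("mainframe.example.com", ORDER_TEMPLATE["source"])

    if __name__ == "__main__":
        for row in fetch_orders():
            print(row["order_id"], row["customer"], row["amount"])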

In short, this transparent, straightforward process would deliver realtime information instantaneously to the requesting application in industry-standard data formats. At no time would the source data be moved, replicated or otherwise put at risk.

Applications. This type of enabling technology would be appropriate for any application where high-performance, fast access to realtime data is important, such as e-business, office automation, and imaging and modeling.

In e-business applications, such a system could be used to connect back-end or Web servers to a mainframe. The solution could be used to look up inventory, check customer records, validate credit, check bids or track orders.

In office automation, the solution could automate the reporting process, provide data in realtime, and allow standard office tools (instead of proprietary, custom-built report generators) to be used to view the data.

For imaging and modeling, the enabling technology could make working with large data files, such as images, animations or complex models, more convenient. The large files would remain on the mainframe, which has the horsepower and storage capacity to handle them.

Benefits

The benefits of such a technology would include:

• There would be high-performance access to realtime data throughout the enterprise.

• The process would operate with zero latency. There would be true realtime access with millisecond speed and no queuing delays.

• The data would remain at the mainframe. Problems of data replication, migration and synchronization would be eliminated.

• Security and data integrity would be supported. There would be a single data source and a single recovery point in the event of failure.

• Storage system costs and storage management costs would be lowered. Applications could be developed much faster. Existing applications could be easily adapted to industry-standard interfaces.

• Operator efficiency would increase; the processing window would open up because fewer processes would be required.

If Universal Data Access is ever to become a reality, a new approach to accessing and providing information is needed. Using an authorized OS/390 subsystem as a high-performance, data-access server represents just such a new approach. The subsystem could represent a highly optimized method for realtime data access between mainframe environments and open systems. This approach would capitalize on the mainframe’s strengths, making it even more valuable in the enterprise environment.

The result would be an organization that could enjoy the benefits of instantaneous access to realtime data, and thus could more effectively use its corporate data for e-business requirements.

About the Author: A.A. "Pete" Johnson is the Director of Software Architecture for Xbridge Systems (Sunnyvale, Calif.), managing the development for the Cross Platform Bridge product.
