In-Depth
Going Mobile: Introducing Your Mainframe to Modern Tools
With the proper process controls, you can now allow access for any class of device and "future-proof" yourself against what comes next.
By Ron Nunan
As major IT trends go, we're facing one of the biggest and most challenging shifts mainstream computing has seen since Windows replaced DOS. Mobile computing is completely changing the way business gets done. As enterprises embrace the rapid adoption and subsequent expansion of Internet-based computing -- the emerging cloud -- agile companies are seeing the wisdom in adopting mobile computing to accelerate transactions, outsource disruptive technology, and make significant strides in customer service.
Gartner, in its Predicts 2012 research report, echoes this trend and claims the IT landscape is changing at a dramatic rate, with cloud and mobility driving the change. According to the report, a modern workforce expects "to get access to personal, work, business applications and data from any device, anytime and anywhere." Upper management is quick to adopt these new "smart" platforms -- tablets, latest-generation laptops, smartphones, and other devices. Managers see the potential for increased productivity and reduced costs but also understand that the ramifications for IT systems complicate mobile adoption. In this emerging bring-your-own-device world, functionality, access, and security become daunting problems.
IT is now charged with bridging the gap between the new technologies mobile computing offers and the older mainframe technologies that were designed for a particular set of tasks and were never built for (or meant to be upgraded to) this new wave. If your enterprise secures critical data on a mainframe system or relies on legacy host-based applications, you're already well aware of the long-standing obstacles to integration. Still, with executive-level adoption and support for mobile's huge potential gains in productivity and service, IT is under pressure to find a solution.
Obviously, the big question is: "How?"
Forward-thinking IT departments will look to address this legacy inventory of applications not by rewriting or making wholesale changes but by employing wrapper-style services and middleware. Until now, IT has been scrambling to find a successful solution to each application problem arising from mobile computing, but this is quickly changing into a demand to work with all applications on all systems without regard to the device.
Enterprises are at various stages of coping with the need, but with no single answer, IT is struggling to bring its operational capabilities up to speed to meet demand.
For a department that can't simply ignore the problem, there are emerging solutions that will help those devices gain access to mainframes, AS/400s, VAXes, and other host systems. Solutions such as Attachmate's Verastream (full disclosure: I work for Attachmate), IBM's HATS/Broker, and Rocket's Seagull focus on putting the needed changes into the middle tier and don't require any direct modifications to the existing critical enterprise applications or systems. The associated risks are therefore low, allowing these solutions to be put in place quickly.
This middle tier runs on external servers -- PC or Unix -- that interface with the host applications through standard protocols, ranging from terminal protocols to direct memory access. Those protocols come not from the applications themselves but from the systems that run them, which is what makes this approach workable for older, inflexible applications.
These middleware solutions talk with the existing applications through the APIs offered by the system. That is the ability that lets you take advantage of truly legacy applications that were never intended for reuse -- system-level access lets you take control of applications that offer no API of their own. Access can be through shared memory (a method that can intercept and invoke application transactions); with some application types, through UI exit points in the logic, such as CICS Bridge Exits; or -- as a catch-all -- through direct programmatic control of the UI using the terminal protocols available to users.
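To make that catch-all concrete, here's a minimal sketch (in Java) of a screen-driven routine. The TerminalSession interface is a hypothetical stand-in for whatever terminal-emulation API your middleware product actually provides; the point is simply that the routine drives the host UI the way a user would and returns one well-defined result.

```java
// Hypothetical sketch: wrapping a 3270 screen flow as a single callable routine.
// TerminalSession is a placeholder for whatever terminal-emulation API your
// middleware product actually provides.
interface TerminalSession {
    void waitForScreen(String screenName);          // block until a named screen arrives
    void putField(String fieldName, String value);  // fill an input field on the screen
    void sendKey(String key);                       // send an attention key (ENTER, PF3, ...)
    String getField(String fieldName);              // read a field off the current screen
}

public class AccountLookupRoutine {

    private final TerminalSession session;  // drives the host UI over a terminal protocol

    public AccountLookupRoutine(TerminalSession session) {
        this.session = session;
    }

    /** Navigates the legacy inquiry screens and returns the balance field. */
    public String getAccountBalance(String accountNumber) {
        session.waitForScreen("ACCT-INQUIRY");        // wait for the inquiry screen
        session.putField("ACCT-NUM", accountNumber);  // fill the input field, as a user would
        session.sendKey("ENTER");                     // submit the transaction
        session.waitForScreen("ACCT-DETAIL");         // wait for the response screen
        return session.getField("BALANCE");           // pull the result off the screen
    }
}
```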
In any circumstance, the solutions need to provide comprehensive toolkits so you can define the level of control available through the access method and then create callable routines that act as subset services. Once this is defined, the better solutions not only publish these routines as consumable services but also let you compose them into higher-level, more business-focused routines using standard middleware constructs such as BPEL.
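As a hedged illustration of what "publishing as a consumable service" can look like, here is the hypothetical routine above exposed as a REST endpoint using standard JAX-RS annotations; a product-specific publisher, or a SOAP/BPEL binding, would serve the same role.

```java
// Hypothetical sketch: publishing the screen-driven routine as a consumable
// REST service with standard JAX-RS annotations. AccountLookupRoutine is the
// hypothetical routine sketched earlier; how it gets constructed and injected
// is product- and container-specific.
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/accounts")
public class AccountService {

    private final AccountLookupRoutine lookup;

    public AccountService(AccountLookupRoutine lookup) {
        this.lookup = lookup;
    }

    /** GET /accounts/{accountNumber}/balance returns the host-supplied balance. */
    @GET
    @Path("/{accountNumber}/balance")
    @Produces(MediaType.TEXT_PLAIN)
    public String balance(@PathParam("accountNumber") String accountNumber) {
        return lookup.getAccountBalance(accountNumber);
    }
}
```

Any device that can issue an HTTP GET -- a tablet, a smartphone, a cloud service -- can now consume the legacy transaction without knowing a green screen sits behind it.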
Although providing composition services may seem like an unnecessary layer for a legacy access tool to offer, the better solutions ensure you have control over basic orchestration of the exposed legacy routines. The reason is simple: exposing truly granular services -- discrete, asynchronous components derived from an unwitting and often synchronous set of application processes -- is risky. With good orchestration controls, defined by people who understand the application being manipulated, those services can be used safely.
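To picture what that orchestration buys you -- sketched here in plain Java rather than BPEL for brevity, with hypothetical routine names -- consider a composed process that owns the host session and forces the granular steps to run in the order the synchronous application expects, so callers can never invoke them out of sequence.

```java
// Hypothetical sketch of an orchestrated, business-level routine. The granular
// steps (sign-on, inquiry, sign-off) are never published on their own; only this
// composition is exposed, so callers cannot run them out of order.
interface HostSignOnRoutine {
    void begin();   // establish and authenticate a host session
    void end();     // release the session, even on failure
}

public class BalanceInquiryProcess {

    private final AccountLookupRoutine lookup;   // the screen-driven routine from earlier
    private final HostSignOnRoutine signOn;      // hypothetical sign-on/sign-off steps

    public BalanceInquiryProcess(AccountLookupRoutine lookup, HostSignOnRoutine signOn) {
        this.lookup = lookup;
        this.signOn = signOn;
    }

    /** The only published operation: runs the synchronous host steps in a fixed order. */
    public String run(String accountNumber) {
        signOn.begin();                                      // step 1: open the host session
        try {
            return lookup.getAccountBalance(accountNumber);  // step 2: the actual transaction
        } finally {
            signOn.end();                                    // step 3: always sign off
        }
    }
}
```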
In the end, the processes built with the access and composition tools will be available through standard service access -- you essentially normalize access to what are really proprietary subroutines built from your enterprise applications. With the proper process controls, you can now allow access for any class of device and "future-proof" yourself against what comes next.
Although it might seem that a solution of this magnitude would be prohibitively complex and expensive, it won't be. The new world will include the ability to integrate, expose, and control access to host applications without risk, leaving the applications untouched. The right solution will enable your host, mainframe or not, to become a back end to your own data-integration cloud. It must minimize implementation time, reduce risk and upfront costs, and improve long-term ROI by being highly adaptable and reusable.
Ron Nunan is a chief strategist and senior product manager for the Attachmate Corporation's mainframe integration products, playing a central role in the company's product development and applied technology solutions. He has also worked in the financial/accounting software industry and was a systems specialist, providing architectural support on enterprise-class, IBM-centric solutions. He can be reached at ron.nunan@attachmate.com. To read more about mainframe integration, visit Ron's blog, Application Integration.