To the untrained eye, a typical e-business environment may appear suspiciously similar to the more traditional IT environment: the rows of gleaming servers, the databases and associated business and financial applications, the array of management tools...
There is a simple way to tell the difference between the two: More often than not, the e-commerce environment is the one with the IT manager tearing his hair out.
The National Association of Securities Dealers (NASD), parent company of NASD Regulation and the NASDAQ and AMEX stock markets, recently implemented an Internet-based regulatory application (Web CRD(SM)) providing broker-dealers and regulators with online access to reports from a database containing records on more than 5,500 broker-dealers and 600,000 brokers. The reports can be accessed electronically through the end-user's browser. The system currently generates and distributes an average of 500 reports per day, many of them running to thousands of pages in length. Yet, while IT can predict general usage patterns (e.g., more reports will be requested during trading hours than late at night), it must be prepared to handle requests far in excess of its average capacity, any time of the day or night.
E-business environments like these exponentially increase the volume of data that must be handled by the IT infrastructure. In the course of processing, aggregating, analyzing, and reporting on transactional data, one Internet advertising company adds the equivalent of six million new rows to its Oracle database every day, 365 days a year. As the company grows, that number will only increase. Further complications arise when the immense volumes of data don't arrive at predictable times or in predictable quantities. To the contrary, they may arrive all at once, packing the vicious punch of a "data tsunami."
Such data tsunamis can be random events. At other times, companies unwittingly bring them upon themselves by undertaking a major Web-oriented marketing initiative and failing to clue in IT. Such sudden influxes of data can easily overwhelm even robust, high-performance, high-capacity environments. They also complicate the task of scheduling and automating processes, because companies don't know when the data will be arriving and when the batch processing will need to begin.
For many years, IT administrators have lamented the shrinking of the "batch window." In the Internet environment, this window shrinks to zero, as e-businesses must keep their doors open all day, every day, and provide immediate service to customers. There are no "off hours." Total availability means that IT must find new ways of handling the ever-fluctuating processing load, so that machines do not become overloaded, diminishing response time.
Because customers have come to expect total, 24x7 availability with reasonable response times, a system that is available but slow to respond is little better than one that has crashed. And when their expectations are not met, end-users need only click to be connected with a competitor. Executives at Princeton eCom check their site at 2:00 a.m. to ensure that customers are getting the service Princeton eCom promised and contracted to deliver.
The unpredictability of processing in the e-business environment has led some to argue that "there's no such thing as scheduling in the online world." To the contrary, effective scheduling offers perhaps the greatest opportunity to "backend-proof" the IT environment and to achieve increased efficiency through automation.
Until recently, scheduling has come down to a more-or-less "static" ordering of predictable data-processing events: generating invoices every Tuesday at 2:00 a.m. The e-business environment is far more dynamic, with events arriving at unpredictable times and in unpredictable volumes. Yet total online processing is simply not feasible for the vast majority of e-businesses. The solution is dynamic workflow automation: software enabling the system to intelligently handle scheduling "on the fly" in accordance with pre-established business rules.
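The rule-driven scheduling described above can be sketched in a few lines. This is a hypothetical illustration, not the design of any particular product; the rule, event fields, and job names are all assumptions made for the example.

```python
# Minimal sketch of "on the fly" scheduling driven by business rules:
# instead of launching jobs at fixed clock times, the scheduler evaluates
# pre-established rules against incoming events and launches matching jobs.
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

@dataclass
class Job:
    name: str
    run: Callable[[], None]

@dataclass
class RuleScheduler:
    # Each rule pairs a condition (a predicate over an event) with a job.
    rules: List[Tuple[Callable[[Dict], bool], Job]] = field(default_factory=list)

    def add_rule(self, condition: Callable[[Dict], bool], job: Job) -> None:
        self.rules.append((condition, job))

    def on_event(self, event: Dict) -> List[Job]:
        """Return the jobs whose rules match this event."""
        return [job for condition, job in self.rules if condition(event)]

# Hypothetical business rule: launch invoicing as soon as the day's
# transaction feed has completely arrived, whenever that happens to be.
sched = RuleScheduler()
invoicing = Job("generate_invoices", run=lambda: None)
sched.add_rule(lambda e: e.get("type") == "feed_complete", invoicing)

launched = sched.on_event({"type": "feed_complete", "rows": 6_000_000})
```

The point of the pattern is that the trigger is the state of the business data, not the wall clock.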
The NASD employs such a system for its Web-based reporting application, collecting incoming requests and distributing workload based on availability of processing and other criteria established by central IT. The system also serves as a "regulator valve" in the case of a sudden influx of report requests, ensuring that a data tsunami will not overwhelm the system.
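A "regulator valve" of this kind can be approximated with a bounded worker pool: requests queue up as fast as they arrive, but only a fixed number are processed at once, so a burst cannot swamp the back end. This is a simplified sketch under assumed parameters (four workers, an in-memory queue), not a description of the NASD's actual system.

```python
# Sketch of a regulator valve: incoming report requests are buffered in a
# queue, and a fixed pool of workers drains it. A sudden influx of requests
# lengthens the queue but never raises the concurrent processing load.
import queue
import threading

def make_valve(handler, max_workers=4):
    requests = queue.Queue()

    def worker():
        while True:
            req = requests.get()
            if req is None:          # shutdown sentinel
                break
            handler(req)
            requests.task_done()

    threads = [threading.Thread(target=worker, daemon=True)
               for _ in range(max_workers)]
    for t in threads:
        t.start()
    return requests, threads

results = []
req_q, workers = make_valve(lambda r: results.append(r))
for i in range(100):                 # simulated burst of report requests
    req_q.put(i)
req_q.join()                         # all 100 are eventually served,
                                     # never more than 4 at a time
```

Central IT tunes the worker count to available processing capacity; the queue absorbs the tsunami.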
Dynamic workflow automation also has the potential to increase operational efficiency. This requires automation tools that can interact with the data and output from the application, including the ability to read application data directly from the application's underlying RDBMS.
For example, a major hotel chain with an online reservation system has found that running a particular batch job on fewer than 100 reservation requests results in inefficient processing, yet it has no way of knowing when customers will make hotel reservations. The solution is to establish a "trigger" that automatically runs the job once a sufficient number of reservations have come in (subject to availability of processing resources and other dependencies).
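The hotel chain's threshold trigger can be sketched as follows. The threshold of 100 comes from the example above; the class and field names are invented for illustration, and a real implementation would also check resource availability and other dependencies before firing.

```python
# Hypothetical sketch of a volume trigger: accumulate incoming reservations
# and fire the batch job only once at least `threshold` of them are pending,
# so the job never runs on an inefficiently small batch.
class BatchTrigger:
    def __init__(self, batch_job, threshold=100):
        self.pending = []
        self.threshold = threshold
        self.batch_job = batch_job

    def add(self, reservation):
        self.pending.append(reservation)
        if len(self.pending) >= self.threshold:
            batch, self.pending = self.pending, []
            self.batch_job(batch)       # fire whenever volume warrants it

runs = []
trigger = BatchTrigger(batch_job=runs.append, threshold=100)
for i in range(250):
    trigger.add({"reservation_id": i})
# 250 reservations arrive -> the job fires twice on batches of 100,
# and 50 reservations remain pending for the next run.
```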
Another example of dynamic automation is having tools monitor output data files or reports and check them for error conditions. For instance, if the debits and credits on a ledger report don't balance, a tool should be able to detect the discrepancy directly and take corrective action, rather than waiting for a manual review of the data and the delay that it implies.
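The balance check itself is straightforward to automate. This sketch assumes ledger entries with illustrative `debit` and `credit` fields; the corrective action (paging an operator, rerunning a job) is left as a comment since it depends on the installation.

```python
# Sketch of automated output checking: total the debits and credits on a
# ledger and flag any imbalance immediately, instead of waiting for a
# manual review. Decimal avoids the rounding errors of binary floats.
from decimal import Decimal

def ledger_balanced(entries):
    debits = sum(Decimal(e["debit"]) for e in entries)
    credits = sum(Decimal(e["credit"]) for e in entries)
    return debits == credits

entries = [
    {"debit": "150.00", "credit": "0.00"},
    {"debit": "0.00",   "credit": "150.00"},
]
ok_before = ledger_balanced(entries)     # balanced ledger passes

entries.append({"debit": "0.01", "credit": "0.00"})
ok_after = ledger_balanced(entries)      # one stray penny is caught
# if not ok_after: take corrective action (alert an operator, rerun the job)
```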
Faced with exponentially increasing volumes of data and the need to ensure continuous availability in the face of unpredictable demands, e-businesses must plan ahead to ensure robust, efficient workflow. A tool for dynamic workflow automation can assist in this process, maximizing the efficiency of computing resources and serving as a single point of integration between an e-commerce application, back-end ERP and financial applications, and report delivery tools.
- Phil Sheridan is vice president of marketing at AppWorx (Bellevue, Wash.).