OOPs and Legacy Data
I read your article, "OOP Hits the Mainstream" (page 30, June Enterprise Systems) with great interest. Do you plan a follow-up? I had hoped you would elaborate on the following: "Object-oriented programming is helping integrate legacy and new applications, says Mike Howard, chief technology officer for emWare."
This is, of course, a huge issue in the world of legacy systems. Thanks.
Dave L. Jones
You hit on a key point regarding OOP: its ability to help integrate legacy data. We’ll delve deeper into that topic in upcoming issues.
Hold That Tiger!
Phil Britt and Tom Moore did a good job of spotlighting the importance of managing the enterprise printing environment in their article, "Enterprise Printing: Taming the Paper Tiger" (June ES, page 38). They made excellent points regarding the volume of paper reports organizations produce each year and the cost of printing those reports.
I wholeheartedly agree with their suggestion that the predictions of a paperless office will probably not become reality. And they’re right that many companies have yet to grasp the full cost of printing. I know this because I often talk to people who have not considered the total costs associated with their organizations’ printing.
Britt and Moore mentioned a few of the solutions available today to help organizations tame their "tiger." They mentioned solutions available from printer manufacturers and software vendors that address specific aspects of the overall problem. There are hundreds of niche solutions available today, and many are good at what they do.
Those niche solutions, however, won’t lead an organization to take the necessary global view of its printing system.
Any organization that attempts to tame the paper tiger by focusing solely on this hardware solution or that software product is trying to tame the beast by working only on its paws, its tail, or maybe its eyes. That’s no way to tame any animal, and it’s definitely not the way for a company to manage its printing problems. Rather than focus on individual parts, we advise companies to step back and take a global view, an enterprise-wide view, of their computing and printing systems. Taking that step helps companies begin to develop a strategy for managing output across the enterprise.
Why do they need a strategy? Without a strategy, organizations tend to implement a software solution in one division and a hardware solution in another, and in that process they create islands of automation. Worse, they often create incompatible islands of automation.
A sound strategy for managing print and nonprint output securely across the enterprise must consider the sources of output—all the Unix hosts, OS/390 mainframes, AS/400 systems and others—as well as the destinations for that output, including fax servers, e-mail addresses and Web browsers, as well as network and channel-attached printers. Only by developing an enterprise-wide strategy for managing output can an organization hope to truly tame its paper tiger.
Senior Vice President, Product Marketing
Levi, Ray & Shoup Inc.
Define the Enterprise
I’ve been asked to put together an enterprise solution for a client environment. Since it’s a government client, I must give a definition of what an enterprise information systems environment is.
Has Enterprise Systems ever put together a definition of what an enterprise IS environment is, and can I get that definition so I can quote it with attribution in my proposal?
z/OS Senior Systems Engineer
In its simplest terms, Enterprise Systems defines an enterprise as follows: Multiple integrated networks running on various platforms and operating systems in more than one physical location. By definition, enterprise systems tend to be large, distributed, heterogeneous and complex.
Of course, because of their size and critical importance to today's businesses, successful enterprises also encompass such complexities as data warehousing and analysis; knowledge management; middleware and security products; complex software for data, message and application sharing; and tools for customer relationship management, enterprise application integration, supply chain management and much more.
The True Value of SAN
After reading Jon Toigo's column in the April issue ("SAN April Foolery," page 18), I have several disagreements with his basic assumptions. He takes the stance that SAN technology is being deployed for the business rationale of "save money, reduce risk or make money."
Although these might be the PowerPoint bullets used for convincing upper management to let loose the purse strings, the true value of current SAN technology lies in the consolidation of DASD arrays and reduction of costly ports on those arrays.
The sales pitch that a SAN would allow greater data sharing was the work of the marketing analysts, not the technicians, since the ability to share data can't be expanded past the existing limitations of the operating systems you're working under. When the user community can convince Windows, Sun, HP, Mac and Linux designers that only a common standard will do, then we can discuss true data sharing across platforms. Until that day, we are forced to analyze each separately.
I agree with the assertion that the "Holy Grail" of SAN technology has yet to be realized. But if you invest wisely and don't skimp on the DASD array side of the equation, a high-availability SAN environment can give clustered environments the maximum bandwidth they need without wasting valuable fiber ports on the arrays.
In short, true SAN designs are not yet practical and do not benefit business, but SAN devices are providing multiplexing-style solutions that can easily reduce costs in new or consolidated environments.
Matthew W. Pennington
Union Carbide Corporation
First, I do endorse the value of a true SAN as a mechanism for storage consolidation and optimization. It’s a still-maturing approach that offers considerable promise in the future. You hit the nail on the head when you said my column offered a criticism of the "marketecture" around SAN, rather than its fundamental architectural premise.
Many of the gaps between marketing and reality have to do with the use of Fibre Channel as an interconnect. The protocol was pressed into service as the "only" interconnect capable of delivering the speeds and feeds required for SAN. The deficits of the protocol derive from the fact that Fibre Channel was never intended to be a network, but only a fabric of point-to-point connections.
Provision was never made (and in fact was deliberately excluded from the protocol) for "IP stack-like" functions such as management and security services. FCIA and SNIA are frantically trying to work these capabilities back into the design of the protocol to make it more of a network.
Until they do, management and security—not to mention virtualization—require a kludge of proprietary third-party products.
I must disagree that the key to SAN salvation is simply wise array investment. (Otherwise I would work for EMC!) A SAN is ultimately a leveler of technology—a "commoditizer." Most server vendors are wary about it because it guts server platforms of 60 percent to 80 percent of their sales value (the storage components). Most large-scale array vendors are wary about it because a true SAN would allow all arrays to be treated simply as boxes of hard disks—Joe's JBODs would perform just as well as XYZ's expensive high-end array.
We are really looking, as you correctly observed, for a SAN to virtualize the array controller, eliminating the pricey, proprietary nature of current storage equipment and enabling smart guys like you to configure all storage as an array.
Jon William Toigo
...And a Cup of IBM
Your article "Java Anyone?" (June ES, page 18) fails to mention IBM. Its WebSphere environment, with J2EE support for Java on the IBM mainframe, has been successful for a number of companies. Several software companies offer products for accessing mainframe applications through the WebSphere and Java environment. There is also Linux/390 for use as a Web server.
Bruce E. Högman
Preventing Data Theft
Regarding Sam Albert’s column in the June issue ("When Firewalls Go Up in Smoke," page 50): The quote from me had an incorrect word that changed its meaning. Here’s how it should have read:
"[Protegrity has] evaluated countless unauthorized penetrations of firewalls and concluded, virtually without exception, that our solution would have prevented the theft of data."
Peter Nilsson, Senior Vice President
Enterprise Systems welcomes ideas and suggestions from readers. We may edit your comments for clarity and length. Reach us at firstname.lastname@example.org.