Predicting Your Network's Future
Is Neural Network Technology Unnerving IT Managers?
Every once in a while a change in conventional technology comes along that strikes a nerve. Computer Associates' (CA; Islandia, N.Y.) latest predictive network management philosophy, "neural network technology," may be just such a change.
CA makes use of neural network technology in its new Windows NT server-based network agents, branded Neugents. When combined with Unicenter TNG, the agents study the target environment, become familiar with its daily behavior, identify when the system is going out of its normal operating state and notify network administrators. And, in a break from traditional rules-based network management tools, they do it without human intervention.
A typical prediction of a problem issued by Neugents may look like: "At 11:00, Unicenter TNG Neugents predict a 95% probability that server AB232 will run out of virtual memory in approximately 45 minutes." It goes on to itemize all the monitored variables that led it to that conclusion.
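Neugents' internal model is proprietary and CA has not published it. Purely as an illustration, here is a minimal sketch of how a resource-exhaustion warning like the one quoted above could be produced by extrapolating the trend of recent usage samples; the server name, capacity figure, and sampling interval are invented for the example.

```python
# Hypothetical sketch only -- Neugents' actual algorithm is not public.
# Estimates time-to-exhaustion by fitting a linear trend to recent samples.

def predict_exhaustion(samples, capacity_mb, interval_min=5):
    """Given recent usage samples (MB, one per interval), estimate minutes
    until usage reaches capacity, assuming the current trend continues."""
    n = len(samples)
    if n < 2:
        return None
    # Least-squares slope of usage versus sample index.
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples)) / \
            sum((x - mean_x) ** 2 for x in xs)
    if slope <= 0:
        return None  # usage flat or falling: no exhaustion predicted
    intervals_left = (capacity_mb - samples[-1]) / slope
    return intervals_left * interval_min

# Invented data: virtual memory usage climbing toward a 512 MB ceiling.
usage = [400, 410, 422, 431, 440, 452]
minutes = predict_exhaustion(usage, capacity_mb=512)
print(f"server AB232 may exhaust virtual memory in ~{minutes:.0f} minutes")
```

A real predictor would of course weigh many variables at once, as the article notes, rather than a single linear trend.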
With these capabilities in mind, is it time to scrap network management technologies such as HP OpenView in favor of this nouveau nervous system? Steven Foote, senior vice president of market research firm Hurwitz Group, touts Neugents' low overhead by stating, "... large-scale implementation [of rule-sets] leads to loading up a system with a large number of policies and creates a maintenance effort in maintaining all the individual policies for each system." He adds, "[the] use of neural networking technology for predicting system and application state changes provides an attractive, possibly compelling, alternative."
Maryville Data Systems (St. Louis, Mo.) is an enterprise integration management firm that specializes in enterprise systems management and technology training. Charlie Henke, Maryville's solutions marketing manager, says, "From an integration point of view, [Neugents] fits well into a proactive approach to infrastructure planning. [Its] predictive capabilities are complementary to the business-critical environments that we design."
CHANGE IS NOT GOOD
"It's a really good idea if they can make it work," says Jasmine Noel, an analyst with D.H. Brown Associates, Inc. (Port Chester, N.Y.). Noel explains that network management is made up of many pieces other than servers, and any predictive technique needs to take all of them into consideration. Handling change in the network may also be a stumbling block. The more change, the more time Neugents will need to learn what a normal state is. Enough change and it may never learn. "A stable network with just PC changes will work fine," she says.
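Noel's objection can be made concrete with a toy example. Any monitor that learns "normal" statistically needs a reasonably stable history before deviations stand out; the sketch below uses a simple mean-plus-three-sigma band, which is an illustrative stand-in and not CA's actual algorithm.

```python
# Hypothetical sketch (not CA's algorithm): a monitor that learns a "normal"
# band from observed CPU readings and flags values outside mean +/- 3 sigma.
import statistics

class BaselineMonitor:
    def __init__(self):
        self.history = []

    def observe(self, value):
        """Record a reading; report whether it falls outside the band
        learned from the history so far."""
        anomalous = False
        if len(self.history) >= 10:
            mean = statistics.mean(self.history)
            stdev = statistics.stdev(self.history)
            anomalous = abs(value - mean) > 3 * stdev
        self.history.append(value)
        return anomalous

monitor = BaselineMonitor()
for v in [50, 51, 49, 50, 52, 48, 50, 51, 49, 50]:  # stable workload
    monitor.observe(v)
print(monitor.observe(90))  # a clear spike against a tight baseline
```

Feed the same monitor a workload that never stops changing and the learned standard deviation keeps inflating, which is exactly Noel's point: with enough change, almost nothing looks abnormal, and the system "may never learn."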
The nature of Neugents' predictions also troubles Noel. "Its point is to predict the immediate future." Knowing that a server is about to run out of a critical resource in the next hour may not be good enough. "CA doesn't want to talk about predictive capacity planning. They see it as a separate issue. Their main focus is on not having operators write rules and crash prevention."
The proposed rollout of Neugents will include HP-UX, Sun Solaris and IBM AIX during the first half of 1999, with the intention of including the entire Unicenter TNG product line by year-end 1999.
A MODEL NETWORK
HP's answer to the question of predictive network management has come from its recent alliance with MIL 3, Inc. (Washington, D.C.).
Steve Johnson is MIL 3's director of marketing and business development. Not surprisingly, he sees trouble ahead for the future of neural network technology.
"[Neugents'] initial focus is on systems, primarily NT, with no support for the network/application infrastructure." As to network change, "What about new patterns [of behavior]? It takes time to understand a new condition. And, it's not able to recognize arbitrary change." On the claim of artificial intelligence at its base, "How many real-world AI solutions are in production today? Not many." On predictability, "Output is based on data from the past -- not the future. Once detected, it's generally already too late."
In summary, Johnson says, "[Neugents] doesn't take into account the interdependencies of multi-level protocol stacks involved in every networked application transaction. Even when a signature or pattern is learned, it's a theoretical impossibility for the behavior of the pattern to remain the same as the application environment load increases."
HP's product to emerge from the pairing with MIL 3 is HP OpenView Service Simulator 6.0, which automatically integrates network topology and network traffic information with projected network service loads. IT managers can then use this information to run simulations of their environment against service levels to determine if current resources will support future requirements.
But, Service Simulator 6.0 notwithstanding, OpenView has been in the business of predictive network management for some time with the combination of HP MeasureWare 'listening' agents and the HP PerfView monitoring tool, says Ed Gillis, one of HP's solutions specialists for OpenView. "Neugents works very well in a black and white environment. But, network performance is all gray."
Gillis adds that HP believes there is value in following the traditional rules-based methodology of using pre-defined metrics. He explains that OpenView users can customize their library of rules and assign different weights to different variables according to their own specifications. For instance, he says an operator may want to place a higher priority on a particular server's CPU utilization than on another's disk usage.
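OpenView's actual rule engine is not documented here; as a hypothetical sketch, the weighted-rules approach Gillis describes might reduce to something like the following, where each metric carries a threshold and an operator-chosen weight, and breached rules sum into an alert severity. All metric names and values are invented.

```python
# Illustrative sketch only -- not OpenView's actual rule engine. Each rule
# pairs a metric threshold with an operator-assigned weight; alert severity
# is the weighted sum of all breached rules.

rules = [
    # (metric name, threshold %, weight chosen by the operator)
    ("db_server.cpu_utilization", 85, 3.0),   # high priority, per Gillis
    ("file_server.disk_usage",    90, 1.0),   # lower priority
    ("db_server.memory_usage",    80, 2.0),
]

def evaluate(readings):
    """Return total alert severity and the list of rules that fired."""
    severity, fired = 0.0, []
    for metric, threshold, weight in rules:
        if readings.get(metric, 0) > threshold:
            severity += weight
            fired.append(metric)
    return severity, fired

severity, fired = evaluate({
    "db_server.cpu_utilization": 92,
    "file_server.disk_usage":    75,
    "db_server.memory_usage":    83,
})
print(severity, fired)  # CPU and memory rules fire; disk does not
```

The maintenance cost Foote complains about is visible even here: every monitored system needs its own `rules` list kept current by hand, which is precisely what the neural approach promises to avoid.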
Like Noel, he also expressed concern about the limited warning time afforded by CA's neural network technology. "It's one hour notification vs. six months to get more CPUs, more disk, more memory, etc."
So are neural network technology and Neugents the long-term answer to network nervous disorder, or just an ultimately sputtering synapse in the network body topology?
Once more from Hurwitz Group's analysis: "With business applications becoming more complex and mission-critical ... it's more necessary than ever for IT staff to proactively predict and address performance." The questions still to be answered are: How much notice is enough notice, and how much control should operators have in determining what to monitor?