Systems Management Frustrations in 2011 Will Lead to 2012 Improvements
IT is demanding that the tools it uses provide higher levels of automation. What automation will we see in 2012?
By Travis Greene, Service Management Strategist, NetIQ
In 2011, the discipline of systems management and monitoring continued to experience pressure from management to use fewer resources. Because it is a mature technology, systems management is expected to require fewer personnel to maintain over time, and that expectation was accelerated over the past year by an economy that prevented IT organizations from staffing to the levels they would prefer. As personnel changed positions or departed for other jobs, those roles were not backfilled. In turn, the staff who remained were asked to take on additional roles.
In these situations, tools are typically expected to provide higher levels of automation to take up the slack. Although automation has been used for decades to provide better and faster services to business customers, in 2012 it will be applied more routinely to systems management, specifically to address three challenges that IT administrators faced in 2011.
In 2011: Monitoring gaps developed
With fewer staff to monitor and maintain systems and applications, gaps in coverage and maintenance appeared, exposing IT organizations to the risk of outages. These gaps manifested themselves in several ways:
- The pace of change was accelerated by virtualization. This haste, while beneficial to the completion of projects, often led to monitoring becoming an afterthought.
- There was a false sense of security that monitoring policies were accurately in place. This often led to chaos when outages occurred, as the various management teams used their tools to claim absolution.
- Outages became more painful and frequent. Highly publicized outages from service providers such as RIM, Salesforce.com and Amazon are just the tip of the iceberg. Having accurate monitoring may not prevent change-induced problems, but it would certainly reduce troubleshooting time and, therefore, total downtime.
In 2012: Monitoring gaps will be closed with automation
Systems management tools that automatically discover systems and apply the proper monitoring according to policy will be in greater demand. The pace of change is only going to increase, so the tools must be able to keep up on their own. This requires detecting changes that trigger the deployment of monitoring according to pre-determined policies, with the flexibility of designated exceptions.
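To make the idea concrete, here is a minimal sketch of change-triggered, policy-driven monitoring deployment. The policy rules, host attributes, and exception list are illustrative assumptions, not any specific vendor's API:

```python
# A minimal sketch of policy-driven monitoring deployment: when discovery
# detects a new or changed system, matching policies decide which monitoring
# templates to deploy, and designated exceptions are skipped.
# All names here (roles, templates, hosts) are invented for illustration.

POLICIES = [
    # (match predicate, monitoring template to deploy)
    (lambda host: host["role"] == "web", "http_response_check"),
    (lambda host: host["role"] == "db", "query_latency_check"),
]

EXCEPTIONS = {"staging-db-01"}  # hosts explicitly excluded by policy


def on_change_detected(host):
    """Called when discovery detects a new or changed system."""
    if host["name"] in EXCEPTIONS:
        return []
    deployed = []
    for matches, template in POLICIES:
        if matches(host):
            deployed.append(template)  # stand-in for a real deployment call
    return deployed


print(on_change_detected({"name": "web-07", "role": "web"}))
```

The key point is that no human is in the loop for routine cases: the trigger comes from change detection, and the policy table, not an administrator, decides what monitoring gets applied.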
In 2011: Qualified systems management staff were harder to find
As fewer experts were employed in the discipline of systems management, skills became more difficult to replace, and those who did provide systems management support were less skilled than previous generations. This skill deficiency resulted in:
- Improper reaction to problems. When events or outages were identified, overworked and less-experienced staff were likely to compound problems with improper reactions.
- Too much information. Many tools made it easy to set up monitoring and then overwhelm administrators with a flood of information that was quickly ignored.
- Inability to build custom monitoring. The custom code that many business applications are built on requires custom monitoring. Who was building and maintaining this?
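The "too much information" problem in the list above is often tackled with simple deduplication: repeated identical events are collapsed into a single alert with a count. A minimal sketch, in which the event fields are illustrative assumptions:

```python
from collections import Counter

# A minimal sketch of alert deduplication: identical (host, check) events
# are collapsed into one notification carrying an occurrence count, so a
# flood of repeats does not drown out distinct problems.

def deduplicate(events):
    """Collapse repeated (host, check) events into single alerts."""
    counts = Counter((e["host"], e["check"]) for e in events)
    return [
        {"host": host, "check": check, "occurrences": n}
        for (host, check), n in counts.items()
    ]


flood = [{"host": "web-01", "check": "cpu"} for _ in range(50)]
print(deduplicate(flood))  # one alert with occurrences=50, not 50 alerts
```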
In 2012: Automation of routine but complex tasks will be prioritized
The need to reduce the risk of human error will result in a greater share of tasks being handled by IT process automation engines. These processes usually live in the heads of administrators; getting them on paper is a first step, but more must be done to automate the steps where possible to improve efficiency. Putting more control of monitoring into the hands of the administrators who manage specific systems and applications will also be necessary to ensure they get the right information, without meaningless noise drowning out important data and events.
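Once a process is on paper, automating it can be as simple as encoding the documented steps in order and escalating to a human the moment one fails. A minimal sketch of that pattern, with hypothetical placeholder steps:

```python
# A minimal sketch of codifying a documented runbook as an automated
# process: steps run in order, and the first failure halts the run and
# escalates to a human. The step functions below are hypothetical.

def check_disk_space(ctx):
    return ctx.get("disk_free_pct", 0) > 10  # assumed threshold

def restart_service(ctx):
    ctx["restarted"] = True  # stand-in for a real service restart
    return True

RUNBOOK = [
    ("check disk space", check_disk_space),
    ("restart service", restart_service),
]

def run(runbook, ctx):
    for name, step in runbook:
        if not step(ctx):
            return f"escalate to on-call: step '{name}' failed"
    return "resolved automatically"


print(run(RUNBOOK, {"disk_free_pct": 42}))  # resolved automatically
print(run(RUNBOOK, {"disk_free_pct": 2}))   # escalates at the first step
```

The benefit is exactly the one described above: routine incidents are handled the same way every time, and overworked staff are only pulled in when the codified process cannot finish on its own.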
In 2011: Information was condensed onto fewer screens and reports
With systems management specialists taking on more roles and becoming less specialized, there was a need to visualize data and events across multiple domains (such as networks, servers, applications, and storage) in a consolidated format across physical, virtual, and cloud environments. This centralized information enabled other management disciplines such as:
- Capacity Management: Identifying unused resources (physical or virtual) that can be better utilized, and potential for capacity bottlenecks based on growth trends, particularly with storage.
- Problem Management: Identifying trends of problems that are the underlying cause of one or more incidents. Centralized reporting is particularly useful for seeing the big picture trend for proactive problem management.
- Service Level Management: Service performance and availability are often measured from an end-user perspective, but service components often have their own operational level objectives that must be met. Seeing all of these in a single report helps to identify areas that are under-performing.
In 2012: Management solutions will address the cloud from a single console
As management tools extend their breadth of coverage from physical to virtual machines, the cloud is the next great frontier. Cloud providers need to do a better job of exposing performance data, but the data that is available can still be managed in a unified way. Bringing management information together into a unified view across physical, virtual, and cloud infrastructure will help decision makers understand where resources are really needed.
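The core of such a unified view is normalization: each environment reports metrics in its own shape and units, and the console maps them onto one schema. A minimal sketch, in which the source record formats are invented for illustration (real providers expose data in many different shapes):

```python
# A minimal sketch of normalizing metrics from physical, virtual, and
# cloud sources into one consolidated view. The field names and sample
# records below are assumptions made for illustration only.

physical = [{"hostname": "db-01", "cpu_pct": 80}]
virtual = [{"vm": "app-vm-3", "cpu": 0.35}]            # fraction, not percent
cloud = [{"instance_id": "i-abc123", "CPUUtilization": 12.5}]

def unified_view():
    """Map each source's records onto one common schema."""
    rows = []
    for m in physical:
        rows.append({"resource": m["hostname"], "env": "physical",
                     "cpu_pct": m["cpu_pct"]})
    for m in virtual:
        rows.append({"resource": m["vm"], "env": "virtual",
                     "cpu_pct": m["cpu"] * 100})
    for m in cloud:
        rows.append({"resource": m["instance_id"], "env": "cloud",
                     "cpu_pct": m["CPUUtilization"]})
    return rows


for row in unified_view():
    print(row)
```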
A Final Word
While 2011 was a challenging year for many systems management professionals, expect to see these challenges addressed in the year ahead. Improvements in automation and consolidation of information will enable the remaining dedicated systems management personnel to get a better grip on the tasks at hand.
Travis Greene is a service management strategist at NetIQ. You can contact the author at