A new survey by CA Technologies, Mainframe as a Mainstay of the Enterprise 2012, reveals just how important the mainframe remains (and will continue to be) in IT’s computing strategy.
For example, when asked where their company’s corporate data resided, 59 percent of U.S. respondents said on the mainframe, 27 percent said the data was distributed, and 14 percent said it was in the cloud (using a SaaS vendor). Percentages were similar for all respondents: 54 percent said data resided on the mainframe, 29 percent said it was distributed, and 17 percent said it was in the cloud.
CA asked Decipher Research to poll 623 IT professionals from 11 countries (251 from the U.S., 372 from 10 primarily European and Asian countries) in August about how IT investments will change over the next 18 months: 30 percent of U.S. respondents said spending on mainframe hardware would increase, and 49 percent said it would remain the same. Mainframe software also showed strength: 51 percent of U.S. respondents said they planned to increase spending, and 35 percent said spending would remain constant. More than a third (37 percent) said they’d be spending more on mainframe-related services; only 16 percent said they planned to spend less.
What part does the mainframe play in IT’s strategy? U.S. respondents said “The mainframe is a highly strategic part of our current and future IT strategy and will expand in use and growth” (29 percent) or “The mainframe is strategic to our current and future IT strategy, and we will maintain current use with limited growth” (48 percent). The percentages were similar for the respondents as a whole: 32 percent and 49 percent, respectively.
The mainframe is also key for cloud initiatives. More than half (52 percent) of U.S. respondents and 59 percent of all respondents said that they were “evaluating or planning to deploy new tools that enable the rapid deployment and cost-efficient provisioning of private and hybrid cloud services on the mainframe platform.”
There’s also good news for IT staff: 27 percent of U.S. respondents said they’d be spending more on staff in the next 18 months (the same percentage as all respondents overall), and 53 percent said IT staffing levels would remain the same (50 percent for all respondents).
Those hoping to hire staff will be looking for employees with both “mainframe and distributed skill sets.” In fact, 84 to 89 percent “of organizations will be implementing a hybrid, cross-platform management model, with a shared budget, staffing, and leadership.”
The report warns that “There are indications of trouble in this area, albeit with a silver lining.” More than a quarter (27 percent) of U.S. respondents and 29 percent of all respondents believe their organization “will face a shortage of critical mainframe skills in just one to three years,” and another 28 percent (27 percent globally) “say the same about the next four to five years.” This problem was pointed out in last year’s Mainframe as a Mainstay report, which noted that although workers entering the workforce today were “likely the most tech-savvy ... history, with little memory of a pre-Internet world, the available skills didn’t necessarily translate to the enterprise infrastructure.”
I asked Mark Combs, distinguished senior vice president for the mainframe at CA Technologies, about his reaction to the survey results.
“In the past, enterprises have made it clear they are concerned about the transitioning workforce, but they weren’t very pragmatic about it. Now, enterprises are facing up to the problem and have a handle on it, implementing hybrid groups to help turn over institutional knowledge as older workers approach retirement.”
Combs added that “Despite this staff transition, 98 percent of survey participants said their organization is prepared to ensure mainframe operations will continue.”
The mainframe’s importance didn’t surprise him. “Enterprises recognize that mainframes can handle massive amounts of data and do it efficiently. IT is facing massive volumes of data -- and the mainframe is equipped to handle it.”
What will the next survey reveal? Combs notes that “five years ago there was no ‘cloud’ in the results. Technology develops and changes all the time, and I expect that next year we’ll see more about mobile computing.” He also expects to see more interest not just in collecting big data but in using it to target audiences and improve the user experience.
Combs also anticipates greater attention to end-to-end management of complex applications in hybrid environments. “IT found it difficult to analyze such environments, but CA tools are looking at applications from end to end to see where problems are. I expect adoption of such management tools to grow.”
You can view an analysis by CA Technologies of the survey results here (short registration required).
-- James E. Powell
Editorial Director, ESJ
Zscaler, a secure cloud gateway solutions provider, has released the results of an analysis by its research arm, ThreatLabZ. It’s not pretty.
The study reveals that “up to 10 percent of mobile apps expose user passwords and login names, 25 percent expose personally identifiable information, and 40 percent communicate with third parties.”
The data was gathered using the company’s Zscaler Application Profiler (ZAP), a free Web-based tool that lets end users assess mobile apps for security risks.
According to a news release from the company, “there are over one million mobile applications, and more than 1,500 new apps being released every week. Users who download these apps, even from trusted sources, assume security measures are built in.”
The ThreatLabZ team’s examination of “hundreds” of applications found that “many popular apps leave user names and passwords unencrypted, while others are insecurely sharing personal information -- such as names, e-mail addresses, and phone numbers -- as well as communicating with third parties, including advertisers.”
Zscaler’s online tool, Application Profiler, lets users enter the URL for any Apple iTunes or Google Play app to receive an evaluation of the security and privacy risks the application poses, along with an overall risk score. That’s the big picture, but ZAP can get more personal: the company says “users can also use ZAP to scan traffic from an app installed on their device to see whether their own data is being exposed.”
The application uses crowdsourcing to build its database: queries of apps not already in the database trigger analysis by ThreatLabZ.
-- James E. Powell
Editorial Director, ESJ
It came as something of a surprise to many that after 25 months of Bureau of Labor Statistics (BLS) reports showing IT job growth, September’s report “reveals a net loss of 1,700 jobs across four industry job segments commonly associated with IT professionals -- the first monthly decline since August 2010 ... not associated with a labor strike or similar temporary market anomaly.” That analysis, from Foote Partners LLC, includes news that “1,800 jobs were lost in Management and Technical Consulting Services following 17 consecutive months of job growth during which 95,400 jobs were added in this segment to the national economy (5,612 per month average).”
That wasn’t the only grim segment. In the Information industry job category, the Telecommunications and Data Processing, Hosting and Related Services segments lost a total of 2,800 jobs in September, adding to the 15,000 jobs lost in these segments between January and August of this year and the 41,800 lost in 2011.
The news was better in another segment of the same category (Professional and Technical Services): Computer Systems Design/Related Services added 2,900 jobs, extending its growth streak to 24 consecutive months.
Foote also highlights that 2,400 jobs were lost in Data Processing, Hosting and Related Services, “the largest single monthly decline in 40 consecutive BLS reports going back to June 2009.”
According to David Foote, CEO of the IT analyst firm:
I couldn’t help but expect the reaction to this month’s BLS employment report would be mainly political, with the presidential election just 31 days away. And sure enough that is what is dominating the news cycle. But for me it was the stunning, sudden reversal in IT job expansion. Overall national employment numbers were disappointing in July and August, but for IT they were spectacular, the greatest monthly job gains in five years.
The company has been keeping a close eye on IT labor trends for over 15 years.
His report isn’t all doom and gloom; he’s optimistic about the future:
At the jobs level, I wouldn’t be surprised if this turned back around soon because the truth is that many of the IT job segments in the government jobs reports, in particular those in IT services, have been on strong and sustained growth runs for nearly two years. There is absolutely no structural shift taking place to account for these September job losses; if it were structural you would see it playing out slowly over many months, not in one sudden market burp. The fact is that companies are actively searching for talent and hiring for the future, though with considerable selectivity. They are making investments in new people, not simply filling gaps at the project level: They’re building for the future, putting a premium on versatility and multi-dimensionally skilled individuals who can grow and develop within the company culture.
Foote also points to economic uncertainty and the barrage of “tax cut” plans put forth in the U.S. presidential election. In addition,
...nearly three quarters of U.S. businesses are on a calendar fiscal year and they just entered into their 4th quarter starting October 1st. It’s perfectly normal for them to step back in September and recheck their budgets and hiring plans for the rest of the year and the beginning of the next fiscal year. This year we [are] noting two important differences in our interviews with CIOs and others responsible for IT hiring: employers were feeling the nervousness just described and consequently acting more deliberately, including placing temporary freezes on IT job requisitions in September. We are now hearing that they have just lifted those freezes and they’re back to active searches on all requisitions.
So it appears this September reversal may have been a result of a temporary suspension of hiring during the customary “step back and recheck our thinking” phase. Foote Partners expects a resumption, in the October and November BLS employment reports, of the overall robust hiring trend for IT professionals that our firm has been tracking in the BLS reports for two years running.
You can read more here.
-- James E. Powell
Editorial Director, ESJ
Serena Software just released the results of a short survey of attendees of the Agile 2012 conference held recently in Dallas, Texas. The company wanted to know the extent of agile adoption, how agile is being used, and what challenges developers are facing.
As this graphic of the overall results indicates, adoption stands at just under half of the enterprises surveyed, but those projects aren’t enjoying great reviews. Only 52 percent of customers say they’re happy with their agile projects; 15 percent say they’re unhappy and 28 percent are neutral. That’s not an overwhelming endorsement of the agile approach.
The biggest challenge involves communication with the user community -- a challenge that has faced developers for decades. According to Serena, four of the top five agile roadblocks involve “working with other teams and customers.”
The survey results are available at no cost; no registration is required for access.
-- James E. Powell
Editorial Director, ESJ
Two years ago, when I was writing stories about outsourcing, my focus was on application development -- should an enterprise hire programmers in India, Russia, or South America to get applications built more quickly?
My, how times have changed. Now when you mention outsourcing, you’re more likely talking about outsourcing data center functionality, including the new push for cloud and enterprise agility.
That’s part of the focus of a new study of 550 IT executives around the world about current and planned IT environments. Savvis (a global managed hosting and colocation provider with expertise in outsourced IT and cloud services) issued its third annual report, Fast-Forward to 2012: 550 Global IT Execs Share Their IT Outsourcing Strategies, and as you might expect, cloud is grabbing an “ever-larger mindshare of IT leaders as a way to accomplish” business agility and ensure IT can keep up with technology.
Most interesting is that the survey found executives are shifting their focus from cost cutting to business strategy. As the report points out,
Specifically, over the next 12 months, organizations told us that they need their IT departments to facilitate three critical goals: first, increase collaboration both internally and throughout the extended enterprise; second, enable operational efficiencies at all organizational levels and all departments; and finally, achieve a competitive advantage in the marketplace.
Other good news: IT budgets are becoming less limiting. In last year’s survey, the company found that budgets were less restrictive than in 2010. This year, the trend continues, with only one in four organizations saying they faced IT budget constraints, though that figure varies by country.
That’s not to say that all IT purchases were wise ones. “Looking back over the past year’s IT decisions and activities raised some regrets. CIOs and VPs of IT reported that they made mistakes when spending their IT dollars. Specifically, most admitted that purchasing their own IT assets turned out to be a mistake. In retrospect, outsourcing IT infrastructure would have been the wiser choice.”
OK, sure, Savvis is in the outsourcing business, so that IT sentiment is music to their ears. However, other findings are in line with studies from other organizations -- or just reflect common sense. One of the highly touted benefits of outsourcing is that it makes more money available for other strategic initiatives. According to the survey, leading the pack of those initiatives are “increasing collaborations across the organization” and “gaining efficiencies throughout the company.”
Savvis also found that, on average, most organizations are outsourcing “slightly more than a quarter of their IT infrastructure,” with savings estimated at about 26 percent of their IT budgets -- that’s after factoring in the cost of the outsourced service. With numbers like these, it’s no wonder that organizations surveyed say that within five years at least 40 percent of their infrastructure will be outsourced. The smaller an organization’s global revenues, the more likely it is to outsource a higher share of its IT tasks.
Why isn’t outsourcing growing faster? For most enterprises, it’s because contractual obligations are getting in their way. (In the UK, the company culture is the biggest problem.)
Organizations are most likely to be outsourcing test and development, Web site operations, and backup and disaster recovery tasks. Asked to name which applications are most popular in the cloud, respondents overwhelmingly chose e-mail and intranet tasks. Of the 38 percent of respondents investigating outsourcing, infrastructure for big data analytics tops their evaluation list.
I asked Brian Stillwell, senior director, global solutions management ITO at Savvis, what he thought about the growth of outsourcing over the next five years. What percentage of IT infrastructure did he expect to see outsourced, and what aspect of that infrastructure would move to the outsourcer first?
“Globally, we expect outsourcing to increase 15 percent over the next five years as organizations move 40 percent of their infrastructure to service providers. We think U.S. trends will parallel this global movement, with 50 percent of American enterprise infrastructure outsourced in the next five years.
“Test and development, Web applications, and back-up/recovery are primary drivers of today’s outsourcing growth. Looking ahead, we expect continued strong growth in back-up recovery services, as well as big data analytics and regional applications, pushing the trend upward.”
Of course, outsourcing isn’t just about money -- there’s also the need to stay competitive. Roughly half of the organizations surveyed said they’re outsourcing to improve agility, ranging from 59 percent of those in the U.S. down to 50 percent of those in the UK. In Germany, outsourcing is being driven by the lack of “qualified IT personnel.”
As more organizations outsource more of their infrastructure, will today’s drivers be the same in five years?
“The need for competitive, agile performance will always be there,” Stillwell said. “However, as the focus shifts from declining legacy maintenance, we expect to see more businesses leveraging cloud and managed service to drive efficiency, collaboration, and strategies that foster growth and innovation.”
Executives and “business decision makers” in the United States, the United Kingdom, Germany, Japan, Hong Kong, and Singapore participated in the study. The full report is available here; a short registration is required for access.
-- James E. Powell
Editorial Director, ESJ
BMC’s yearly mainframe survey takes a look at the growth and pain points for IT professionals working with the hardware.
Three key issues stand out in Mainframe is Poised to Fuel the Future of Business:
- The mainframe "continues to be a critical business tool." Translation: the mainframe isn’t dead. The advantages cited remain pretty much the same as in last year’s study: high availability, security strengths, throughput, and superior centralized data servers are key to its popularity and explain why "90 percent of global respondents view mainframes as a long-term business solution."
- Over the last year, IT’s top priorities haven’t changed much -- they’re still reducing IT costs, disaster recovery, and application modernization, in that order. What’s new is that business/IT alignment has moved from seventh place to fourth since last year’s survey.
- When asked about areas of growth, MIPS, specialty engine adoption, and combating outages topped the list. For example, 15 percent of those surveyed believe MIPS have grown more than 10 percent in the last year; 44 percent put that figure at between one and 10 percent. Of these two groups, 31 percent attribute the growth to legacy and new applications, 19 percent say it’s due to legacy applications alone, and 9 percent say it’s due to new apps.
BMC’s survey veered into other areas, too. For example, the survey asked about respondents’ "specific priorities and objectives for application modernization." Almost half (46 percent) say they want to extend legacy code in applications using SOA or Web services with little re-architecting, and 50 percent want to "increase flexibility and agility of core apps." Coming in third, at 39 percent, is the desire to "reduce the cost of legacy application support."
Of course, the mainframe isn’t perfect: 39 percent of those surveyed said they had one or more unplanned production mainframe outages in the last year. Hardware failure accounted for 31 percent of those outages, followed by system software failure (30 percent), in-house application failure (28 percent), and a failed change process (22 percent).
The full report of the survey of 1,264 respondents in the Americas, EMEA, and Asia-Pacific regions is available here; registration is required.
-- James E. Powell
Editorial Director, ESJ
Clemens Pfeiffer, Power Assure’s CTO, posted on his company’s site a list of 10 steps IT and facility managers can take to “improve efficiency without compromising reliability.” Here’s a look at the first three.
Tip #1: Reduce power consumption for cooling
In a typical data center, roughly half of the available power is used by equipment; the other half is devoted to cooling. Pfeiffer says a good bit of this cooling power can be eliminated by correcting “cooling inefficiencies, upgrading the cooling system to allow for variable cooling, and/or making greater use of outside air.” For further savings, right-size your UPS and power distribution equipment.
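For a sense of the stakes, here’s a minimal sketch (in Python, with hypothetical numbers) of the arithmetic behind this tip: when equipment and cooling split the power evenly, the facility’s PUE is roughly 2.0, and every kilowatt shaved off cooling drops straight to the bottom line. The 30 percent improvement figure below is purely illustrative, not from Pfeiffer’s post.

```python
def pue(it_load_kw: float, total_facility_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_load_kw

it_load = 500.0   # kW drawn by servers, storage, and network gear (assumed)
cooling = 500.0   # kW drawn by cooling -- the "other half" of the power

print(f"Baseline PUE: {pue(it_load, it_load + cooling):.2f}")           # 2.00

# Assume variable cooling and outside-air economization cut cooling 30%.
improved_cooling = cooling * 0.70
print(f"Improved PUE: {pue(it_load, it_load + improved_cooling):.2f}")  # 1.70
print(f"Power saved:  {cooling - improved_cooling:.0f} kW")             # 150 kW
```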
Tip #2: Consolidate and virtualize
One of the biggest benefits of virtualization, as we all know, is that servers can run more efficiently. Forget idle or underused servers. According to Pfeiffer, “Virtualizing the servers can increase overall utilization from around 10% (typical of dedicated servers) to between 20% and 30% and over 50% with more dynamic management systems.” Another benefit: regaining precious rack space.
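To see why that utilization jump matters, here’s a rough consolidation sketch using Pfeiffer’s utilization figures; the workload size and the one-busy-server capacity unit are assumptions for illustration only.

```python
import math

def hosts_needed(workload_units: float, utilization: float,
                 capacity_per_host: float = 1.0) -> int:
    """Physical servers required to carry a workload at a given average utilization."""
    return math.ceil(workload_units / (capacity_per_host * utilization))

# Aggregate demand equivalent to ten fully busy servers (assumed figure).
workload = 10.0

for label, util in [("dedicated (~10%)", 0.10),
                    ("virtualized (20-30%)", 0.30),
                    ("dynamic management (50%+)", 0.50)]:
    print(f"{label:>26}: {hosts_needed(workload, util):>3} servers")
# dedicated (~10%)          : 100 servers
# virtualized (20-30%)      :  34 servers
# dynamic management (50%+) :  20 servers
```

The same workload that sprawls across 100 dedicated servers fits on a few dozen virtualized hosts, which is where both the power and the rack-space savings come from.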
Tip #3: Calibrate and monitor cold aisle temperatures
Pfeiffer suggests you adopt “a hot/cold aisle configuration and increase the cold aisle inlet temperatures to 80.6°F (27°C) as recommended by the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE).” He admits that hot/cold aisle configurations “can create hot spots that waste power and cause outages,” so you’ll have to balance your equipment -- experiment, measure, and recalibrate. “Continuously monitoring the cold aisle temperature maximizes cooling efficiency and minimizes problems.”
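As a concrete illustration of what “continuously monitoring” might look like, here’s a minimal polling sketch. Everything here is an assumption for illustration: read_inlet_temp_c() is a hypothetical stand-in for whatever sensor API, SNMP query, or DCIM feed your facility actually exposes, and the sensor names are invented.

```python
import random
import time

ASHRAE_MAX_C = 27.0  # recommended cold-aisle inlet ceiling: 27°C (80.6°F)

def read_inlet_temp_c(sensor_id: str) -> float:
    """Hypothetical sensor read -- replace with your real telemetry source."""
    return random.uniform(22.0, 29.0)  # simulated reading for this sketch

def check_cold_aisle(sensors: list[str]) -> None:
    """Poll each cold-aisle inlet sensor and flag readings above the ceiling."""
    for sensor in sensors:
        temp = read_inlet_temp_c(sensor)
        status = "HOT SPOT -- rebalance equipment" if temp > ASHRAE_MAX_C else "ok"
        print(f"{sensor}: {temp:.1f}°C [{status}]")

if __name__ == "__main__":
    while True:
        check_cold_aisle(["aisle-1/rack-03", "aisle-1/rack-07", "aisle-2/rack-01"])
        time.sleep(60)  # re-measure every minute; tune to your environment
```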
To read the complete list, visit http://www.powerassure.com/press/pr166-10-tips-to-improve-data-center-efficiency.
-- James E. Powell
Editorial Director, ESJ
SolarWinds conducted a survey with Network World about the state of disaster preparedness among IT professionals. Some of those results are, as you’d expect, disturbing.
For instance, one quarter (25 percent) of respondents don’t have a disaster preparedness and response plan for their business or their IT department. Worse, only slightly more than half (57 percent) say it’s “very likely” or “certain” that they’ll create one in the next year -- a figure that should be 100 percent. As to “certainty,” we all know that one’s best intentions (when it comes to non-revenue-generating planning) can be put on hold when higher-priority (read: income-producing) tasks come along. At least 9 percent of respondents are honest enough to say they don’t know if their enterprise will be creating a plan in the next year (they probably won’t).
The reason may stem from the attitude of senior management about disaster preparedness and their willingness to put their money where the enterprise’s income is. Seven percent said upper management was unconcerned, saying “They appear willing to gamble and hope that a disaster never occurs.” Exactly a third say management gives disaster recovery (DR) a low priority. “They display some concern for being able to recover from a disaster, but dedicated funding for IT investments is scarce and projects/processes are tackled on an ad hoc basis.” Only a dismal 2 percent said DR was a top priority, and that “projects are carefully planned, funded, and executed, and people/budget resources are plentiful.” Boy, are they lucky.
I asked Sanjay Castelino, VP and market leader at SolarWinds, if that was a fair assessment.
“As environments are becoming more complex, budgets are being cut, and staffing levels are decreased, IT professionals are being asked to take on more and more responsibility,” said Castelino. “They are often responsible for networks, security, storage, virtualization, and much more. Given their workload, we weren’t surprised that many haven’t made time to plan and prepare for disasters.”
Of respondents with plans, only 44 percent update their plans yearly. Perhaps their IT environments are different from everyone else’s, but I can’t begin to imagine a company that can remember everything that has changed in just one year. Then there’s the 15 percent of enterprises that update their plans once every two years; given staff turnover and the highly fluid IT environment, that spells disaster itself. Worst of all, almost 8 percent never update their plans -- good luck to those enterprises.
Your plan is only good if it works, and here, too, IT isn’t doing a particularly commendable job. According to the survey, 22 percent never “significantly” test their plan, 12 percent test it once every two years, and 23 percent test it once a year. Only a little more than one in ten enterprises (12.8 percent) perform two tests a year.
Let’s face it: disasters are real and not as rare as you might hope. For example, 27 percent of respondents have experienced a disaster that kept them from working in their office.
I asked Castelino if there were any surprises among the survey results.
“We were not surprised to find that in many cases it took a disaster actually occurring for companies to take disaster preparedness planning seriously. One IT professional said, ‘Given that we have already seen one incident, we have adopted cautious and diligent attention to prevention and management of similar incidents in the future. Backup facilities have been upgraded and a new data center has been constructed,’” said Castelino.
“What we found most interesting was the ways that respondents are incorporating technology into their disaster preparedness plans. The most commonly included technologies were virtualization, VPNs, offsite backup and storage, remote access, and the use of company-supported mobile devices.”
When asked about their confidence in their enterprise’s ability to recover in “a reasonable amount of time” (between 12 and 18 hours) from a “significant disaster,” 17.3 percent are certain they can, 28 percent are “very confident,” and 24 percent are “somewhat” confident. Nearly one in three (29 percent) are “not at all confident.”
Sure, the survey isn’t scientific, but it presents thought-provoking information about real-world practices that should have every C-level executive cringing. If (down)time is money, IT is putting their enterprises at great financial risk.
The full results are available here at no charge; no registration is required.
-- James E. Powell
Editorial Director, ESJ