Security in the University, Part 1 of 2

Ohio State University’s CISO Charles Morrow-Jones explains how the distributive, collaborative environment of a university affects security.

Charles Morrow-Jones is Director of CyberSecurity for the Ohio State University in Columbus, Ohio. An 18-year veteran of the institution, he has served in various information-technology administrative roles since 1990 and is a graduate of OSU. He has also taught information technology courses at the University of Colorado, Boulder.

Ohio State's Columbus campus has the highest student enrollment of any university campus, with 51,818 students. The university employs almost 32,200 people and includes an academic hospital, the Ohio State University Medical Center, which has its own IT staff and CISO.

Mr. Morrow-Jones spoke with Enterprise Strategies’ Chris DeVoney, an IT specialist within the University of Washington’s School of Medicine, about security issues in educational environments, which bear a striking resemblance to security issues faced by enterprise and government IT.

ESJ: OSU has the usual university environment, student dormitories, and a medical center.

CMJ: And cows. OSU also has a presence in all 88 counties through the Ohio Extension Agency, whose computing is our responsibility.

How is a university environment different from that of a “normal” enterprise?

The most salient [issue] is the high level of decentralization, including IT policy and governance. At Ohio State, and I think we are typical, we have eighteen colleges, such as the College of Education and the College of Business. [Editor’s note: college and school are interchangeable terms for educational subdivisions within a university.]

Each college is semi-autonomous, so unlike a business operation where I can force standards, each college has its own choice of hardware, operating systems, and software. Security is a coordinating effort rather than one of enforcing standards.

The second big difference is the presence of tenured faculty members, [who] are given secure appointments to ensure academic freedom. They are extremely difficult to terminate, even for non-compliance with policies that they believe either don’t apply to them or require too much effort.

We had a minor breach in which a tenured faculty member had a laptop computer containing student information stolen. Our Board of Trustees is made up mostly of businessmen, and when we talked to the board’s audit committee, one of their questions was who would be terminated for this breach. The answer was “probably no one.” It took them a while to understand the difference between a private-sector employee and tenured faculty.

Third, the whole security policy process is negotiated. The developed policy may be weaker than I would like, but the quid pro quo is that people, all the way up through faculty and upper administration, sign off on a policy they believe is appropriate for the entire university. The whole process is much more collaborative. Most people in the private sector would find it a frustrating way to work.

How do you keep a handle on activities normally associated with students, such as file sharing or insecure computing, and keep them and the rest of the campus secure?

Our residence halls have their own information technology folks and we work closely together. A couple of years ago, that IT group opted for tighter control of the dorm environment and implemented Cisco’s Clean Access. We are seeing far fewer problems caused by students that simply don’t take adequate care of their computers.

The staff in the residence halls is very aggressive with students reported as copyright violators. They ask for the immediate removal of the material and then educate the student about the potential pitfalls of illicit sharing. A second offense can result in a temporary disconnection from the dorm network, and a third offense can result in a permanent disconnection. The last time I talked with them about this issue, nobody had gone past a second offense.

What about other computers on campus, including those owned by the university and those brought in by visitors or others?

We are starting to form a policy to address those issues. One issue concerns “restricted information” such as student names and Social Security numbers—information that, if exposed, can result in sending notifications to the person and taking other appropriate actions.

A policy effective in January 2007 prohibits placing any restricted information on non-University-owned devices. The problem is we have no right to examine a personally owned machine, so by restricting such data to machines we control, we avoid the agony in case of a compromise. Another piece in progress is a standard requiring encryption for restricted information on university equipment.

A university has normal business operations, such as finance and human resources. Are these elements the same or different than any other business?

There are some differences. One radically different thing is the assumption of academic freedom. That assumption at OSU means I cannot look at content. I can look at e-mail headers, but I can’t look at the content of the message even though it might help diagnose a problem. We can look at the metadata elements of a file system, but the content is off limits. It doesn’t mean we can’t solve problems, but we tap-dance around them in a different way.

One thing that has caused some changes in that arena is HIPAA. Because of the Medical Center’s HIPAA responsibility to ensure we are not leaking patient information, their IT staff and security office are evaluating intrusion detection products that do examine content. So that whole dynamic may change. For now there is little support for looking at content outside the medical arena.

I presume the university has both central business systems and each college or group has their own systems, correct?

There are enterprise systems that are housed centrally and are what you would expect—financial systems and human resources systems. We are currently converting from one student system to another. These systems are used throughout the university and in the various departments and colleges. And all of this is a PeopleSoft environment. One of my charges is that we adequately secure the enterprise systems. My staff and I spend time with the folks who operate those central systems to make sure they are in compliance with all the various laws and regulatory obligations, for example, the PCI standard from the payment card industry.

Changes Ahead

When you look at the enterprise security environment, what are your concerns, what things have you addressed, and what do you see as the future in that area?

One of the things we are seeing differently is a change in attack mode.

For the last several years, the primary attack mode was at the operating system level and almost exclusively Microsoft. You know—the “flaw of the week” club. But I will say that Microsoft has done an impressive job of improving the overall security of their environment.

With the improved security, the hacker community moved to something easier to attack: the applications level. In particular, we are seeing a whole lot of attacks trying to get at sensitive information through flawed code in Web servers.

Three or four years ago, the attacks were largely for attacks’ sake. Now the attacks attempt to acquire information that has value, like credit card information or name, Social Security number, and birth date—the precursor information used to steal identities and get credit cards, driver’s licenses, or other credentials.

We have been seeing a flood of bad guys trying to attack through Web servers that are attached to databases that have sensitive information on them. Because somebody did not pay real close attention to coding that Web page in a secure way, the bad guys get in.

Lately we have been seeing a lot of break-ins through the database interface from the Web server. They enter through an SQL injection and essentially go on a fishing trip.

This is where the distributed environment factors in. I don’t control who can put up a Web page. Some group can hire a $5.50-an-hour undergraduate to code the Web page. Up goes a flawed Web page written by someone who is not as experienced and not paying as much attention. Attackers can get in through SQL injection, cross-site scripting, or a number of other Web page vulnerabilities—and they do get in.
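[Editor’s note: The SQL injection attacks Mr. Morrow-Jones describes exploit code that pastes user input directly into a database query. A minimal sketch of the flaw and the standard fix, using Python’s built-in sqlite3 module (the staff table and field names here are hypothetical, for illustration only):]

```python
import sqlite3

# Hypothetical staff table of the kind described in the interview.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE staff (name TEXT, ssn TEXT)")
conn.execute("INSERT INTO staff VALUES ('Alice Example', '000-00-0000')")

def lookup_vulnerable(name):
    # Flawed: user input is concatenated straight into the SQL string,
    # so the input can rewrite the query itself.
    query = "SELECT name, ssn FROM staff WHERE name = '%s'" % name
    return conn.execute(query).fetchall()

def lookup_safe(name):
    # Parameterized query: the driver treats the input strictly as data.
    return conn.execute(
        "SELECT name, ssn FROM staff WHERE name = ?", (name,)
    ).fetchall()

# An attacker-supplied "name" that turns the WHERE clause into a tautology
# ('x' OR '1'='1'), dumping every row -- the "fishing trip" described above.
payload = "x' OR '1'='1"
print(lookup_vulnerable(payload))  # returns every row in the table
print(lookup_safe(payload))        # returns no rows
```

The vulnerable version executes `SELECT name, ssn FROM staff WHERE name = 'x' OR '1'='1'`, which matches all rows; the parameterized version searches for the literal string and finds nothing.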

For example, there was a Web site that had staff information (it doesn’t now) but was a target for a successful attack earlier this year. One day last week, we still saw 25 different IP addresses trying SQL injections on that site and those addresses were scattered all over the eastern hemisphere. So we are not only seeing a lot of probing for sensitive data, but it’s pretty clear those guys are all talking to each other.

The battle at the application level is a tougher fight because all kinds of talent levels are developing not just Web applications but all applications. Application teams face timeline constraints; for example, the people doing our PeopleSoft student system are under a lot of pressure to bring the project in on time and on budget. In situations like that, security is often an afterthought. It’s a grim picture.

Given that any group within the university can put up a Web page, what solution will keep information safe?

One thing we are offering is the curative approach: if people actually understand that they’re causing a problem, we can educate them to the level where they will develop secure code.

One of our current endeavors is bringing SANS to campus to teach a two-day course on proper coding of Web servers and Web pages. I think that will help us overcome some percentage of the currently flawed pages. Also, one of our proposed policies requires that developed Web pages must be reviewed by someone who attended this SANS class before the page is deployed.

What systems got hit in the breach?

It was one of the distributed department systems. One of the other things that had been going on for years is that departments get feeds of the institutional data. A department will say “I need HR data for XYZ,” the feed starts, and it was never reviewed because that was the easy thing to do. It was a departmental system that had been getting an HR feed over the years which included names, Social Security numbers, and birth dates.

One positive outcome of the breach is that the HR group, for example, reviewed every outbound feed they provide. Now they know where every feed goes and what information is being fed, and they negotiate with each department to feed only the minimum elements required.

- - -

In Part 2 of our interview, to be published next week, we’ll explore security costs, best practices, and how educational institutions and corporations can work together.