
Top 10 Data Center Dangers

Many organizations skimp on the upkeep of their data centers, putting security at risk

When it comes to information security, how secure is the place where most of it is stored: your data center?

According to consulting firm Infrastructure Development Corp., many organizations skimp on the time and budget needed to maintain healthy data centers.

“Many enterprises fail to pay close attention to their data centers’ physical infrastructure, updating them at most once every five years,” says Steve Ritzi, director of marketing at Infrastructure Development. As a result, IT infrastructures are too often “in serious danger of exceeding temperature specifications, or failing to meet cabling and power supply requirements.” The end result can be equipment failure and, ultimately, data loss.

To highlight the dangers of poor data center management, Infrastructure Development compiled a list of the top 10 dangers it found in data centers in 2004:

1. Lack of well-managed cabling. New cable may be deployed in older data centers without removing pre-existing cable. “If these bundles are running beneath a raised floor, the accumulation of cable can dramatically reduce the efficiency of cooling systems,” says Ritzi. Other common problems include overly bent cables, freestanding cables, and unlabeled cables.

2. Uninterruptible Power Supply (UPS) confusion. Buyer beware: organizations often don’t choose the right UPS for their environment. Part of the problem is confusion over the four types of available UPS products: “standby, line interactive, true online double conversion, [and] ferro resonant,” says Ritzi. Which to buy? Companies often just pick the least expensive one, he notes. Choosing the wrong UPS can compromise data center effectiveness.
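
To make the sizing side of that decision concrete, the arithmetic a buyer might run before comparing topologies looks roughly like the following Python sketch. The load figures, the 0.9 power factor, and the 25 percent growth headroom are illustrative assumptions, not vendor guidance.

    # Rough UPS sizing sketch -- all figures are illustrative assumptions.
    loads_watts = {"servers": 4200, "network gear": 800, "storage": 1500}
    power_factor = 0.9      # assumed; check actual equipment nameplates
    growth_headroom = 0.25  # assumed 25 percent room for future equipment

    total_watts = sum(loads_watts.values())
    required_va = total_watts / power_factor        # convert watts to VA
    sized_va = required_va * (1 + growth_headroom)  # add headroom

    print(f"Load: {total_watts} W -> {required_va:.0f} VA minimum")
    print(f"Look for a UPS rated at {sized_va:.0f} VA or larger")

A unit that passes this check still has to match one of the four topologies above to the site's power quality and budget; the arithmetic only rules out undersized candidates.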

3. Contaminated slab or sub-floor. Plenums beneath raised floors need regular cleaning, or conductive zinc “whiskers” can grow on galvanized surfaces and contaminate the data center, says Ritzi. Should they be sucked into the air-circulation system, they may also cause short-circuits inside equipment.

4. Lack of fire-suppression systems. “Many businesses are unaware of modern, mission-critical fire-suppression systems, including specifically manufactured computer-room fire extinguishers,” says Ritzi. These systems are easier to maintain and less costly than earlier models. Other companies simply deactivate their existing fire-suppression systems for operational or cost-related reasons.

5. Data-center overheating. Older data centers have difficulty handling new computing equipment, especially as new generations of equipment run hotter than old ones. “Often, dated facilities are designed for mainframe equipment, and not intended to be used for high density PC-based or blade server technology,” says Ritzi.
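
The mismatch is easy to quantify: every watt of IT load becomes roughly 3.412 BTU per hour of heat the cooling plant must remove. Here is a minimal Python sketch; the rack count and per-rack draw are hypothetical, chosen to reflect blade-era densities.

    # Convert IT power draw into a cooling load -- hypothetical rack figures.
    WATTS_TO_BTU_PER_HR = 3.412  # standard power-to-heat conversion
    BTU_PER_HR_PER_TON = 12000   # one ton of cooling capacity

    racks = 20              # assumed rack count
    watts_per_rack = 5000   # assumed high-density load per rack

    total_watts = racks * watts_per_rack
    btu_per_hr = total_watts * WATTS_TO_BTU_PER_HR
    tons = btu_per_hr / BTU_PER_HR_PER_TON

    print(f"{total_watts} W of IT load -> {btu_per_hr:,.0f} BTU/hr "
          f"(about {tons:.1f} tons of cooling)")

A room engineered for mainframe-era loads can fall well short of the resulting figure, which is how dated facilities end up overheating.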

6. Poor electrical grounding techniques. The Institute of Electrical and Electronics Engineers recommends data centers maintain a grounding resistance of two to five ohms, yet many data centers exceed that limit. “Keeping grounding grid resistance to a minimum, in accordance with sound engineering principles, can prevent almost all transient voltage surges,” Ritzi observes.
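
Checking measured readings against that range is trivial to script. A minimal Python sketch follows; the grid names and resistance values are made up for illustration.

    # Flag grounding-grid readings above the IEEE-recommended 2-5 ohm range.
    RECOMMENDED_MAX_OHMS = 5.0  # upper bound of the range cited above

    readings = {"grid A": 3.2, "grid B": 7.8, "grid C": 4.9}  # hypothetical

    for grid, ohms in readings.items():
        status = "OK" if ohms <= RECOMMENDED_MAX_OHMS else "EXCEEDS LIMIT"
        print(f"{grid}: {ohms:.1f} ohms -- {status}")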

7. Poorly maintained standbys. Standby generators and automatic transfer switches keep power flowing when utility power fails. Yet if they’re not properly maintained, “they will fail when needed most,” Ritzi warns. “Typical maintenance procedures are simple and inexpensive, including items such as regular oil changes, filter replacements, hose and belt replacements, and test running the generator at least one hour per week.”
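
Those intervals are simple enough to track in software. The Python sketch below flags overdue tasks; the weekly test run comes from Ritzi’s list, while the other intervals and all the dates are illustrative assumptions.

    # Flag overdue generator maintenance -- intervals and dates partly assumed.
    from datetime import date, timedelta

    tasks = {  # task: (assumed interval, hypothetical last-done date)
        "test run generator (1 hr)": (timedelta(weeks=1), date(2004, 11, 1)),
        "oil change": (timedelta(days=180), date(2004, 6, 15)),
        "filter replacement": (timedelta(days=180), date(2004, 3, 1)),
        "hose and belt inspection": (timedelta(days=365), date(2004, 1, 10)),
    }

    today = date(2004, 11, 15)  # fixed date so the example is reproducible
    for task, (interval, last_done) in tasks.items():
        due = last_done + interval
        flag = "OVERDUE" if today > due else "ok"
        print(f"{task}: last done {last_done}, due {due} -- {flag}")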

8. Substandard power-distribution systems. Power-distribution systems often fall short of current standards; many, for example, lack emergency power-off switches. Data center administrators also frequently fail to recalculate a rack’s total amperage when adding new equipment, which can overload circuits.
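
The recalculation itself is one line of arithmetic. A minimal Python sketch, assuming 120-volt circuits and the common practice of loading a breaker to no more than 80 percent of its rating for continuous loads; the device wattages are hypothetical.

    # Recheck a rack circuit's amperage before adding equipment.
    CIRCUIT_VOLTS = 120.0  # assumed branch-circuit voltage
    BREAKER_AMPS = 20.0    # assumed breaker rating
    SAFE_FRACTION = 0.80   # common 80 percent rule for continuous loads

    existing_watts = [350, 350, 450, 600, 400]  # hypothetical equipment
    new_device_watts = 500                      # the server being added

    total_amps = (sum(existing_watts) + new_device_watts) / CIRCUIT_VOLTS
    limit_amps = BREAKER_AMPS * SAFE_FRACTION

    print(f"Projected draw: {total_amps:.1f} A vs. safe limit {limit_amps:.1f} A")
    if total_amps > limit_amps:
        print("Adding this device would overload the circuit.")

Run against these numbers, the projected 22.1 amps exceeds the 16-amp safe limit, exactly the kind of overload that skipping the recalculation invites.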

9. Leaky raised-floor systems. “Air bleeding through poorly sealed flooring can result in the loss of up to 25 percent of the pressure created to cool equipment cabinets,” Ritzi says. At the same time, too-large cable bundles may block airflow, compromising even the best cooling systems and resulting in hotspots.
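
That worst-case figure translates directly into lost cooling margin. A one-step Python sketch, with a hypothetical design pressure:

    # Effective underfloor pressure after leakage -- hypothetical figures.
    design_pressure_in_wc = 0.05  # assumed static pressure, inches of water
    leakage_loss = 0.25           # worst case cited above

    effective = design_pressure_in_wc * (1 - leakage_loss)
    print(f"Design pressure {design_pressure_in_wc} in. w.c. -> "
          f"{effective:.4f} in. w.c. after leakage")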

10. Poor data center management. Regardless of design, “uptime is ultimately dependent on the people working in the center,” notes Ritzi. Many data center failures are “human error” problems relating to “poor training or enforcement of operating protocols.”

About the Author

Mathew Schwartz is a Contributing Editor for Enterprise Systems and writes its Security Strategies column, as well as being a long-time contributor to the company's print publications. Mr. Schwartz is also a freelance writer covering security and technology.
