Q&A: Adopting a Modular Data Center Strategy
Benefits, challenges, and best practices for incorporating a modular data center into your environment.
Modular data centers provide a controlled and standardized environment that can benefit your data center’s growth. We explore the benefits of a modular design, examine the costs, and discuss when it’s an appropriate strategy for an enterprise and how best to transition, with Aaron Peterson, senior vice president of product management at IO, a company that designs, engineers, and delivers modular data center solutions.
Enterprise Strategies: We frequently see the terms containerized, pod, and modular data centers used interchangeably. Are they the same thing?
Aaron Peterson: Although the terms are used interchangeably, no, they are not the same thing. Containerized can be used to describe a higher-level strategy to create a controlled environment for compute capability. The traditional data center has a container -- a building -- but the challenge is that each and every container is unique. The traditional data center lacks standardization and is essentially the product of a construction-driven process.
Pod is a term that may represent an actual container or a standard facility capability, such as electrical or cooling capability. Modular is a higher-level strategy where you have standard units of capability that can be deployed as needed. Early modular solutions leveraged ISO shipping containers as self-contained data centers in a box. Newer innovations have evolved data center capacity further into purpose-built modular components that can be easily integrated into existing facilities. These purpose-built modular components offer several advantages, including higher capability and better access. The combination of purpose-built, standardized containerized technology deployed in a modular format is the future of compute platforms.
We have learned that by creating a controlled and standardized unit, you can significantly reduce the time and cost of a data center build-out. Beyond time and cost, we have also seen an opportunity to better monitor and control a standardized environment. For example, it is much easier to provide real-time visibility and control of a standardized environment through software. Furthermore, the software can enable intelligent control: it can detect, report, and fix an internal problem before it becomes a critical issue. The use of standardized compute containers is a big piece of bringing the data center and the IT stack together.
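To make that concrete, here is a minimal sketch of the kind of monitoring-and-remediation loop such software might run. The sensor names, thresholds, and remediation hook are illustrative assumptions for this sketch, not IO’s actual product or API.

```python
# Minimal sketch of module-level monitoring with automated remediation.
# Sensor names, thresholds, and the remediation hook are illustrative
# assumptions, not any vendor's actual API.
from dataclasses import dataclass

@dataclass
class SensorReading:
    module_id: str
    sensor: str        # e.g., "supply_air_temp_c"
    value: float

# Because every module is a standardized unit, a single threshold table
# applies to the whole fleet -- the point of standardization.
THRESHOLDS = {
    "supply_air_temp_c": (18.0, 27.0),   # illustrative operating band
    "humidity_pct": (20.0, 80.0),
}

def report(reading: SensorReading) -> None:
    print(f"[ALERT] {reading.module_id}: {reading.sensor}={reading.value}")

def remediate(reading: SensorReading) -> None:
    # Placeholder: in practice this would adjust setpoints or fan speeds.
    print(f"[ACTION] adjusting {reading.sensor} on {reading.module_id}")

def check(reading: SensorReading) -> None:
    low, high = THRESHOLDS[reading.sensor]
    if not (low <= reading.value <= high):
        report(reading)      # surface the event to operators
        remediate(reading)   # attempt an automatic fix before it escalates

check(SensorReading("module-07", "supply_air_temp_c", 29.5))
```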
When should an enterprise consider a modular data center over a traditional facility-based approach? What are the benefits of such a modular design and how do the costs compare?
An enterprise should set a modular data center approach as its default option for all new data center capacity, whether it will be housed in an existing facility or a new one. A modular data center should be considered whenever the enterprise needs to minimize capital expenditure, decrease operating costs, and reduce timeframes.
A modular data center can be thinly provisioned, meaning you build only what you need at the time and expand when demand requires it. This can significantly reduce capital expenditures. Modular data centers can also be expanded quickly, with some providers able to build and deliver additional capacity in as little as 90 days. A traditional facility-based approach requires site preparation, engineering, mechanical, and architectural design work; modular data centers use standardized configurations that can be manufactured in a matter of weeks.
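As a back-of-the-envelope illustration of the thin-provisioning argument, the sketch below compares a full day-one build against adding modules as demand grows. Every dollar figure and module count is invented for the example.

```python
# Back-of-the-envelope comparison: build full capacity up front vs.
# add modular capacity as demand grows. All figures are illustrative.
FULL_BUILD_COST = 40_000_000              # traditional facility, day one
MODULE_COST = 5_000_000                   # one modular unit
MODULES_NEEDED_BY_YEAR = [2, 3, 4, 6, 8]  # cumulative demand forecast

spent = 0
deployed = 0
for year, needed in enumerate(MODULES_NEEDED_BY_YEAR, start=1):
    new_units = needed - deployed         # buy only what this year requires
    spent += new_units * MODULE_COST
    deployed = needed
    print(f"Year {year}: {deployed} modules deployed, cumulative capex ${spent:,}")

print(f"Traditional day-one build: ${FULL_BUILD_COST:,}")
```

Even if total spend eventually converges, deferring most of it until demand materializes reduces stranded capacity and carrying costs.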
Innovative firms adopting and deploying modular data centers have benefited from quality and quantity improvements across KPIs of speed to capacity; cost to acquire, operate, and maintain; and efficiency and performance in the use of space, power, cooling, and labor. Modular data centers offer a step-function change, with improvements of 20 to 80 percent across these KPIs.
What are the key criteria that a data center manager should use when evaluating modular data centers?
The biggest point here is to make sure you get a solution that fits your current and future IT needs. Key questions you should ask include:

- Is the solution integrated and tailorable?
- Can it be thinly provisioned in units of capacity, across both mechanical, electrical, and plumbing (MEP) infrastructure and whitespace?
- Is it a scalable platform that can grow with your needs?
- Is the solution an integrated hardware and software product that gives you full control of the compute environment from the generator all the way to the physical and virtual IT stack?
- Will it accommodate your required IT hardware, such as x86, HPC, and more?
- Is the solution energy efficient, with a verifiable, real-time low power usage effectiveness (PUE)?
- Is the solution concurrently maintainable?
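One simple way to work through these questions is a weighted scorecard. The criteria keys mirror the list above, but the weights and scores below are invented placeholders, not a recommendation.

```python
# Hypothetical weighted scorecard for comparing modular data center
# solutions against the criteria above. Weights and scores are placeholders.
CRITERIA_WEIGHTS = {
    "integrated_and_tailorable": 0.15,
    "thin_provisioning": 0.20,
    "scalability": 0.15,
    "generator_to_vm_control": 0.15,
    "hardware_compatibility": 0.10,
    "realtime_pue": 0.15,
    "concurrent_maintainability": 0.10,
}

def score(vendor_scores: dict[str, float]) -> float:
    """Weighted sum of 0-10 scores, one per criterion."""
    return sum(CRITERIA_WEIGHTS[c] * vendor_scores[c] for c in CRITERIA_WEIGHTS)

# Example: a vendor scoring 7 everywhere except a strong real-time PUE story.
vendor_a = {c: 7.0 for c in CRITERIA_WEIGHTS} | {"realtime_pue": 9.0}
print(f"Vendor A: {score(vendor_a):.2f} / 10")
```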
When an enterprise adds a modular data center, can it co-exist with an existing facility or does the organization need to take a “rip and replace” approach?
Organizations certainly can exploit the capabilities of a modular data center approach in an existing facility. Firms should think in terms of a plug-and-play approach that immediately delivers more capacity with less cost, less time, and less risk. There is no need to “rip and replace.”
What role does data center infrastructure management (DCIM) software play in helping enterprises manage their modular data center environments? Can the same DCIM software manage both modular and real estate-based data centers?
The first item I would like to talk about here is the term data center infrastructure management. Infrastructure management is absolutely critical in the data center world but so is management of the IT stack. Typically, this has been done by different resources using different tools with little or no integration between the two. The latest versions of DCIM software allow you to monitor data center infrastructure as well as monitor critical IT functions -- and make intelligent decisions based upon the true work being performed.
DCIM should provide telemetry and instrumentation that give you a full and complete view of the data center. It should also house this data and provide easy access to it in a structured format, allowing you to do consistent analysis to drive optimization. This structured data should also allow you to perform simulations that drive continual improvement.
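As a sketch of what structured, analyzable telemetry can look like, the example below uses invented field names; a real DCIM product would define its own schema.

```python
# Sketch: telemetry stored as structured records so the same data feeds
# dashboards, trend analysis, and what-if simulation. Field names invented.
from dataclasses import dataclass
from statistics import mean

@dataclass
class Telemetry:
    timestamp: float       # epoch seconds
    module_id: str
    it_load_kw: float      # measured IT load
    rated_kw: float        # module's rated capacity

readings = [
    Telemetry(0, "module-01", 95.0, 200.0),
    Telemetry(60, "module-01", 98.0, 200.0),
    Telemetry(120, "module-01", 92.0, 200.0),
]

# Consistent structure enables consistent analysis -- here, average
# utilization, which could feed a what-if simulation of consolidation.
utilization = mean(r.it_load_kw / r.rated_kw for r in readings)
print(f"module-01 average utilization: {utilization:.0%}")
```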
Intelligent control is the ultimate goal of DCIM: combining the data center stack with the IT stack through automated software that makes intelligent decisions. For example, the temperature of a traditional data center is statically controlled at a fixed point to ensure all equipment can function appropriately. With intelligent control, real-time CPU temperatures can be monitored and cooling can be provisioned precisely and accurately based upon the real-time needs of the CPUs, not on the data center as a whole. The same DCIM software should intelligently control both modular and real estate-based data centers.
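A toy control loop illustrates the idea. The setpoint, gain, and clamping below are assumptions made for this sketch, not how any particular DCIM product implements cooling control.

```python
# Toy intelligent-control loop: provision cooling from real-time CPU
# temperatures rather than a static room setpoint. Numbers are illustrative.
TARGET_CPU_C = 70.0   # desired steady-state CPU temperature
GAIN = 0.5            # proportional gain: percent fan speed per degree C

def adjust_cooling(cpu_temps_c: list[float], fan_pct: float) -> float:
    """Return a new fan speed driven by the hottest CPU, not the room."""
    hottest = max(cpu_temps_c)
    error = hottest - TARGET_CPU_C
    # Raise cooling only when a CPU actually needs it; back off otherwise.
    new_fan = fan_pct + GAIN * error
    return max(20.0, min(100.0, new_fan))   # clamp to a safe operating range

fan = 40.0
for temps in ([68.0, 71.0, 74.0], [67.0, 69.0, 70.5], [66.0, 67.0, 68.0]):
    fan = adjust_cooling(temps, fan)
    print(f"hottest CPU {max(temps):.1f} C -> fan at {fan:.1f}%")
```

The contrast with static control is that cooling rises and falls with actual CPU demand instead of holding the whole room at a worst-case setpoint.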
Do you see DCIM software as operating standalone or should enterprises be looking to integrate it with other applications, such as IT management and ticketing software, which manage data center assets?
As I mentioned, I believe very strongly in the integration of DCIM and the IT stack. Firms should think of the data center as the factory and IT as the supply chain; when integrated, they support the value chain of the business. The approach should be “user/event click all the way to the CPU,” operating in an automated, self-monitoring, self-adjusting, self-healing manner.
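As a sketch of what that integration might look like at the seam between DCIM and ticketing, the snippet below pushes a facilities alert into a ticketing system. The endpoint URL and payload schema are hypothetical; substitute your ticketing system’s actual API.

```python
# Sketch: push a DCIM alert into a ticketing system so facilities events
# enter the same workflow as IT events. The URL and payload schema are
# hypothetical; substitute your ticketing system's actual API.
import json
import urllib.request

TICKETING_URL = "https://tickets.example.com/api/incidents"  # hypothetical

def open_ticket(module_id: str, summary: str, severity: str) -> None:
    payload = json.dumps({
        "source": "dcim",
        "module": module_id,
        "summary": summary,
        "severity": severity,
    }).encode()
    req = urllib.request.Request(
        TICKETING_URL, data=payload,
        headers={"Content-Type": "application/json"}, method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        print(f"ticket created, HTTP {resp.status}")

# Called by the DCIM layer when an automated remediation fails, e.g.:
# open_ticket("module-07", "Supply air temp out of band after auto-adjust", "high")
```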
Energy costs are the top operational expense of data centers. How can modular data center hardware and software help enterprises reduce energy requirements?
Energy is certainly a key variable operational expense of a data center, though space, asset depreciation, and labor are in many cases a much larger percentage of a firm’s data center operating costs. Best-in-class modular data center approaches that include integrated software and hardware can contribute directly to large-scale energy cost reduction. For instance, intelligent control can significantly reduce energy costs through accurate and precise provisioning. Beyond energy, modular data centers can have a positive impact on many KPIs, including space, depreciation, labor, and capital leverage.
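Power usage effectiveness is simply total facility energy divided by IT equipment energy, which makes the savings from precise provisioning easy to estimate. The load, utility rate, and PUE figures below are illustrative assumptions.

```python
# PUE = total facility energy / IT equipment energy. A perfect facility
# scores 1.0; everything above that is overhead (cooling, power losses).
# All figures are illustrative.
IT_LOAD_KW = 1000.0
ENERGY_PRICE_PER_KWH = 0.10      # assumed utility rate, USD
HOURS_PER_YEAR = 8760

def annual_energy_cost(pue: float) -> float:
    facility_kw = IT_LOAD_KW * pue
    return facility_kw * HOURS_PER_YEAR * ENERGY_PRICE_PER_KWH

before = annual_energy_cost(1.8)  # static, whole-room cooling
after = annual_energy_cost(1.3)   # demand-driven, precise provisioning
print(f"annual savings: ${before - after:,.0f}")
```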
What mistakes do enterprises make in transitioning to a modular data center approach? What best practices can you recommend to avoid these problems?
Traditionally, data center capacity delivery has been one of the poorest-performing return-on-assets components of an enterprise’s IT operation. This is not a failing of data center and facilities personnel or their traditional approach to capacity design and build. Rather, it is a matter of disconnected requirements, decisions, and actions between business application users and data center capacity personnel within the enterprise.
The discipline and effort to map and link application service profile requirements with IT infrastructure and data center capacity deployment approaches are key to realizing an optimal delivery and fulfillment model. For example, many large organizations operate paired data centers in expensive metro regions. The default selection by the application user is the very best resiliency and capacity fulfillment model: fully synchronous replication of applications and data, operating hot/hot with rack- and row-level separation from other applications, deployed in fully maintained, concurrently resilient data center capacity.
This disconnected, default deployment approach results in tremendous over-provisioning; stranded capacity; wasted space, power, and cooling; increased labor; increased taxes; duplicative capital costs; funding costs; depreciation; and carrying and operating costs.
Firms should think big and start small. They should deploy thinly provisioned modular data center capacity units and migrate workloads onto the new infrastructure in an iterative, “plug and play” capacity-refresh approach. Companies should leverage intelligent control, monitoring, and management of old and new capacity in a global operating system to gain an objective, data-driven understanding of the consumption and performance behavior of their existing runtime. They should drive a top-down/bottom-up linkage of service requirements, translated into demand modeling and tradeoffs that link to granular service units of data center capacity delivery options -- options enabled by a modular data center and an integrated DCIM approach, as sketched below. This approach puts a firm on the path toward continuous, business-aligned data center portfolio optimization with maximum efficiency.
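As a closing sketch of that top-down/bottom-up linkage, the example below translates a per-quarter demand forecast into granular modular capacity units, deploying only the shortfall each period. All numbers are invented for illustration.

```python
# Sketch: translate a per-quarter demand forecast (kW) into the number
# of modular capacity units to deploy each period. Numbers are invented.
import math

MODULE_KW = 200.0                # capacity of one modular unit
forecast_kw = {                  # top-down demand per quarter
    "Q1": 350.0, "Q2": 520.0, "Q3": 610.0, "Q4": 900.0,
}

deployed_units = 0
for quarter, demand in forecast_kw.items():
    required = math.ceil(demand / MODULE_KW)    # bottom-up unit count
    delta = max(0, required - deployed_units)   # deploy only the shortfall
    deployed_units = max(deployed_units, required)
    print(f"{quarter}: demand {demand:.0f} kW -> {required} units "
          f"(deploy {delta} new)")
```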