
WHITE PAPER #30
QUALITATIVE ANALYSIS OF
COOLING ARCHITECTURES FOR
DATA CENTERS
EDITOR:
Bob Blough, Emerson Network Power
CONTRIBUTORS:
John Bean, APC by Schneider Electric
Robb Jones, Chatsworth Products Inc.
Mike Patterson, Intel
Rich Jones, Rich Jones Consulting
Rob Salvatore, Sungard
Executive Summary
Many different cooling architectures exist today that can be used to cool data centers. Each of these
architectures has its own advantages and disadvantages that can have a major impact on energy efficiency as
well as on other aspects of the facility. This white paper offers a qualitative comparison among the popular
architectures available today, providing readers with insight to help determine the cooling architecture that
most appropriately fits their data center strategy.
©2011 The Green Grid. All rights reserved. No part of this publication may be used, reproduced, photocopied, transmitted, or stored
in any retrieval system of any nature without the written permission of the copyright owner.
Table of Contents
Table of Contents
I.    Introduction
II.   Assumptions
III.  Definitions
IV.   Airflow Management Strategies
V.    Equipment Placement Strategies
VI.   Heat Rejection Strategies
VII.  Conclusion
VIII. References
IX.   About The Green Grid

I. Introduction
The Green Grid (TGG) is a global consortium of companies, government agencies, and educational
institutions dedicated to advancing energy efficiency in data centers and business computing
ecosystems. The Thermal Management Working Group within TGG seeks to provide the industry with
increased awareness of data center fundamental relationships, particularly when those relationships
can be used to increase energy efficiency and reduce a data center’s total cost of ownership (TCO).
Of particular benefit for energy efficiency is a proper understanding of data center cooling
architectures.
This white paper covers the most common cooling architectures used for data center applications and
provides a qualitative assessment of each architecture. Similar qualitative information for common
power configurations can be found in TGG White Paper #4, Qualitative Analysis of Power Distribution
Configurations for Data Centers.1
To better present a qualitative comparison of the commonly deployed cooling architectures, they
have been organized into three groups. Each group represents a distinct attribute to choose from,
but, in a number of instances, two or more can be combined to achieve highly energy-efficient
results.
The three cooling architecture strategy groups are:

▪ Airflow management strategies. Taking into account your anticipated equipment airflow
  requirements, delivery methods, and room layout, how do you want to manage airflow?
  − Open
  − Partially contained
  − Contained
▪ Equipment placement strategies. Based on the physical room layout and building
  infrastructure, where is the optimum location(s) to place the cooling equipment?
  − Cabinet
  − Row
  − Perimeter
  − Rooftop/building exterior
▪ Heat rejection strategies. Considering the maximum equipment densities and projected
  overall cooling requirements, what are the viable options for rejecting heat?
  − Chilled water
  − Direct expansion (DX) refrigeration system
  − Economization

1 http://thegreengrid.com/~/media/WhitePapers/TGG_Qualitative_Analysis.ashx?lang=en
The following qualitative analysis parameters will be discussed for each of the cooling architectures:
▪ Highlights
▪ Advantages and disadvantages
▪ Future outlook

Each cooling architecture’s advantages and disadvantages are presented in a table that contains the following
five key parameters:
▪ Current usage and availability (IT/data center type, new versus retrofit, climate, and location)
▪ Energy efficiency (as compared to open baseline configuration and power usage effectiveness [PUE™])
▪ Reliability
▪ Equipment (TCO and capacity range)
▪ Standardization and acceptance (standards compliance, sales growth trend, and installed base)
The discussion below is strictly qualitative; any quantitative discussion is beyond the scope of this
document. As a follow-on to this publication, The Green Grid may develop a quantitative data center
cooling architecture white paper that provides further numerical evidence to support the distinctions
offered here.
A common question in the discussion of cooling architectures is that of air cooling versus liquid
cooling. In practice, data centers generally use not one or the other but a combination of both.
Consider that—for the majority of cases—the individual IT components are air-cooled and, somewhere
in the thermal management system, a liquid will be used to remove the heat from the air through
some form of heat exchanger. For example, such a heat transfer may be designed to occur at the rack
level—with a liquid-cooled rack or a rear-door heat exchanger—or at the room level in a perimeter
computer room air conditioner (CRAC). In both cases, air and a liquid are both used as a heat transfer
fluid. Therefore, this paper does not include a specific discussion comparing air versus liquid cooling
because the decision is generally not which coolant to use but rather where to have the air/liquid heat
transfer occur.
There are, however, limiting cases where this is not true. One extreme example is a full air-side
economizer architecture (discussed later in this paper), where filtered outdoor air is used as the
cooling medium and no liquid is used at all. The other end of the spectrum is deploying liquid cooling
all the way into the IT equipment, where the components are liquid-cooled themselves. Often the
incorrect assumption is made that if the CPU is liquid-cooled, then the “cooling problem is solved,” but
in most typical servers, the CPU only represents somewhere between 25% and 40% of the server heat
generation. So if liquid cooling within the IT equipment only cools the CPU, the remainder of the load
must still be cooled, often through standard air cooling. Fully liquid-cooled systems can be very efficient
if fully integrated into the building’s cooling system, but their cost and complexity limit them to specialty
applications, and they are outside the scope of this paper.
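To make the CPU-share point concrete, the residual air-cooled load is simple arithmetic. The 25%–40% CPU share comes from the text above; the 30 kW rack load and the function name are assumptions chosen for illustration:

```python
def residual_air_load_kw(rack_load_kw, cpu_fraction):
    """Heat that must still be air-cooled when liquid cooling
    captures only the CPU share of the server load."""
    return rack_load_kw * (1.0 - cpu_fraction)

# A hypothetical 30 kW rack with CPU-only liquid cooling: even at the
# high end of the 25%-40% CPU share, most of the load still reaches
# the room's air-cooling system.
for cpu_fraction in (0.25, 0.40):
    print(f"CPU share {cpu_fraction:.0%}: "
          f"{residual_air_load_kw(30.0, cpu_fraction):.1f} kW still air-cooled")
```

Even in the best case here, 18 kW of the hypothetical 30 kW rack remains an air-cooling problem, which is why CPU-only liquid cooling does not by itself "solve" the cooling problem.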
As IT equipment’s level of manageability and controls implementation increases, another important
topic will be the dynamic nature of IT and rack-level power consumption, workload placement, and
specific localized cooling requirements. This is best demonstrated with an example. Consider a typical
legacy, medium-density data center that has added server virtualization. Rather than low, evenly
distributed power consumption, the load will now be much more dynamic. Instead of the vast majority of
servers ramping between idle and some low utilization, the server pool may experience much higher
levels of variation. Moreover, server idle power is dropping rapidly as manufacturers focus on
the low end of the power/operations scale. In the extreme case, the workload that had been lightly
distributed across the floor may now be concentrated in a smaller number of virtualized servers
running at much higher utilization and higher heat densities, with other servers (or racks of servers) in
a sleep state. In addition, depending on the specifics of the workload scheduler, that high-density
workload could routinely move across the data center. The challenge for thermal management will be
addressing/supporting the localized hot spots while not wasting cooling on the racks that are in a very
low idle or sleep state. The issue affects all of the cooling architectures discussed below; the main
point is to consider the dynamic nature of the load when selecting cooling architectures.
II. Assumptions
The intended audiences for this publication are IT professionals, facility engineers, and CxOs who have a basic
working knowledge of popular cooling architectures. The paper’s content is meant to be used in the initial
evaluation of data center cooling equipment and focuses on the cooling delivery path to the IT equipment. It
does not include cooling loads outside of the data center (e.g., the remainder of the building).
III. Definitions
▪ Blanking panel. A metal or plastic plate mounted over unused IT equipment spaces in a cabinet to
  restrict bypass airflow. It can also be called a filler plate.
▪ Bypass. Cooled airflow returning to air conditioning units without passing through IT equipment.
▪ Cabinet. An enclosure for housing IT equipment, a cabinet is also sometimes referred to as a rack. It
  must be properly configured in order to contain and separate supply and return airflow streams.
▪ Computational fluid dynamics (CFD). A scientific software tool used in the analysis of data center
  airflow scenarios.
▪ Close-coupled cooling. Cooling architecture that is installed adjacent to server racks. It reduces the
  airflow distance and minimizes the mixing of supply and return air.
▪ Computer room air conditioner (CRAC). Unit that uses a compressor to mechanically cool air.
▪ Computer room air handler (CRAH). Unit that uses chilled water to cool air.
▪ Delta T. Delta temperature, the difference between the inlet (return) and outlet (supply) air
  temperatures of air conditioning equipment.
▪ Direct expansion (DX) unitary system. A refrigeration system where the evaporator is in direct contact
  with the air stream, so the cooling coil of the air-side loop is also the evaporator of the refrigeration
  loop.
▪ Economizer. Cooling technologies that take advantage of favorable outdoor conditions to provide
  partial or full cooling without the energy use of a refrigeration cycle. Economizers are divided into two
  fundamental categories:
  − Air-side systems. Systems that may use direct fresh air blown into the data center with hot air
    extracted and discharged back outdoors, or they may use an air-to-air heat exchanger. With the
    air-to-air heat exchanger, cooler outdoor air is used to partially or fully cool the interior data center
    air. Air-side systems may be enhanced with either direct or indirect evaporative cooling, extending
    their operating range.
  − Water-side systems. These systems remove heat from the chilled water loop by a heat exchange
    process with outdoor air. Typically, a heat exchanger is piped in series with the chilled water
    return and chiller, and piped either in series or in parallel with the cooling tower and chiller
    condenser water circuit. When the cooling tower water loop is cooler than the return chilled
    water, it is used to partially or fully cool the chilled water, thus reducing or eliminating demand
    on the chiller refrigeration cycle.
▪ Installed base. The number of commercial installations in operation.
▪ Kilowatts (kW). A measurement of cooling capacity. (3.516 kilowatts = one ton.)
▪ Packaged DX system. This system features components of the DX unitary system refrigeration loop
  (evaporator, compressor, condenser, expansion device, and even some unit controls) that are factory
  assembled, tested, and packaged together.
▪ Power usage effectiveness (PUE). A measure of data center energy efficiency calculated by dividing
  the total data center energy consumption by the energy consumption of the IT computing equipment.
▪ Rack. A metal structure consisting of one or more pairs of vertical mounting rails for securing and
  organizing IT equipment, a rack is also sometimes referred to as a cabinet. Racks are more often open
  structures without sheet metal sides or a top, whereas cabinets feature partially or fully enclosed
  sides and top to improve security and airflow characteristics.
▪ Recirculation (or air short-circuiting). Hot return air that is allowed to mix with cold supply air,
  resulting in inefficient, diluted warm air entering IT equipment.
▪ Retrofit. The process of upgrading an existing system’s performance, a retrofit is also known as a
  brown field installation. This is the opposite of a new (green field) installation.
▪ Return air. The heated air returning to air conditioning equipment.
▪ Ride through. A term to describe the time required for the secondary/backup cooling system to fully
  replace the primary cooling system in handling cooling demands.
▪ Sales growth trend. A sales trend that is measured at regular intervals, typically month-to-month or
  year-to-year, to indicate increasing, flat, or declining sales revenue trends.
▪ Sensible cooling. The removal of heat that causes a change in temperature without any change in
  moisture content.
▪ Standards compliance. A system’s ability to comply with recognized government and industry best
  practices and minimum requirements.
▪ Supply air. The cooled airflow discharged from air conditioning equipment.
▪ Total cost of ownership (TCO). A financial decision-making analysis tool that captures the total
  installed and operating costs.
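Two of the definitions above, PUE and the kW/ton capacity conversion, are simple enough to show as a worked sketch. The facility and IT energy figures below are invented for illustration; only the formulas come from the definitions:

```python
def pue(total_facility_kwh, it_equipment_kwh):
    """PUE = total data center energy / IT equipment energy.
    Dimensionless; an ideal facility approaches 1.0."""
    return total_facility_kwh / it_equipment_kwh

def kw_to_tons(kw):
    """Convert cooling capacity in kilowatts to refrigeration tons,
    using 3.516 kW = 1 ton from the definition above."""
    return kw / 3.516

# Hypothetical facility: 1,500 kWh total draw against 1,000 kWh of IT load.
print(f"PUE = {pue(1500.0, 1000.0):.2f}")
# A 35.16 kW CRAC unit is a nominal 10-ton machine.
print(f"{kw_to_tons(35.16):.1f} tons")
```

Note that PUE improves (falls toward 1.0) as cooling overhead shrinks, which is why the airflow management strategies below are compared against an open baseline.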
IV. Airflow Management Strategies
STRATEGY 1A: OPEN AIRFLOW MANAGEMENT
Highlights:
▪ This traditional strategy consists of cabinets whose arrangement has not been optimized for cooling.
  The cabinets have been placed within a room where no intentional airflow management is deployed to
  reduce bypass and recirculation between supply and return airflows.
▪ The significant mixing of cold and hot air streams results in the least efficient of all cooling strategies.
▪ This is the traditional configuration in use today and will be the basis for all other comparisons.
▪ This configuration may be acceptable for existing, small, and/or low heat density data centers where
  other configurations may not be cost justifiable.
▪ Open airflow management is not recommended for new or retrofit installations.
Table 1. Open airflow management advantages and disadvantages

Current usage and availability
  Advantages:
  − This is the legacy configuration still widely used today.
  − This configuration does not incorporate any airflow management methodology, so there are no
    related equipment availability concerns.
  Disadvantages:
  − Open airflow management presents usage concerns related to the high cost of maintenance and
    operation.

Efficiency
  Advantages:
  − Installation costs are below average because no additional equipment is installed to manage and
    separate intake and exhaust airflows.
  − There may be a low first-time cost for adding load to available white space.
  Disadvantages:
  − This configuration is not optimized for efficiency.
  − Uncontrolled mixing of supply and return airflow paths makes this the least efficient cooling
    strategy.
  − Studies show that most cooling configurations of this type are significantly over-deployed to
    overcome their inefficiencies.
  − Adding redundant equipment to increase capacity is very expensive and does not guarantee
    effectiveness.

Reliability
  Advantages:
  − Extensive industry experience has found that equipment functions reliably and maintenance is
    consistent with other strategies.
  − The inefficiency of this approach does provide some inherent redundancy.
  Disadvantages:
  − This strategy is very susceptible to hot spots.
  − Open airflow management is problematic with mixed heat loads.

Equipment architectures
  Advantages:
  − This strategy may be acceptable for small, low-density (<3 kW per cabinet) applications.
  Disadvantages:
  − The amount and relative size of the cooling equipment in this configuration increases TCO and
    requires more floor space.

Standardization and acceptance
  Advantages:
  − Because this is the traditional low-density architecture, by default it is the standard baseline
    configuration.
  Disadvantages:
  − As knowledge grows within the user community, this configuration is becoming unacceptable for
    all retrofit and new installations.
Future Outlook:
The vast majority of existing data centers built prior to 2004 use the open airflow management
strategy, which results in underutilized cooling capacity, inadequate redundancy, and the worst energy
efficiency of any cooling architecture. As more organizations become aware of the significant energy
savings that can be gained by employing any of the other strategies described in this paper, it is
unlikely that this strategy will continue to be used. This configuration may be acceptable for existing,
small, rapidly changing, and/or low heat density data centers where other configurations are not cost
justifiable, but this architecture should be avoided for new designs.
STRATEGY 1B: PARTIALLY CONTAINED AIRFLOW MANAGEMENT (E.G., HOT AISLE/COLD AISLE)
Highlights:
▪ This airflow management strategy consists of some intentional, partially contained airflow
  management using any one, or a combination, of airflow management techniques. However, no
  complete segregation, isolation, or containment between supply and/or return airflows exists in
  this configuration.
▪ Most commonly deployed is the hot aisle/cold aisle methodology, but partially contained airflow
  management also consists of one or more of the following components: return air (plenum) ceilings,
  supply air raised floors, patch panels, cable ingress grommets, enclosed cabinets and barriers, and
  internal configuration of cabinets, including blanking panels, air dams, and cable management to
  minimize airflow obstructions.
▪ This configuration has a moderate TCO, is cost-effective to install and maintain, can be used for new
  or retrofit applications in low- to medium-density (<10 kW per cabinet) environments, and is widely
  used in many data centers.
▪ This configuration is commonly used in conjunction with perimeter room cooling, where other
  infrastructure considerations have a significant impact on energy efficiency and should be given
  serious consideration, including room density, size, and layout, along with proper quantity, location,
  and sizing of perimeter CRAC/CRAH units, ducting, and vents.
Table 2. Partially contained airflow management advantages and disadvantages

Current usage and availability
  Advantages:
  − This configuration is commonly deployed globally.
  − This configuration is possible without any major changes to infrastructure.
  − Components are readily available from many suppliers.
  − It is relatively easy to upgrade/retrofit.
  − Some combination of partially contained airflow management components should be used in all
    new construction.
  Disadvantages:
  − Unless this configuration is consistently and thoroughly implemented throughout the data center,
    results will vary dramatically.
  − Some implementation issues may exist due to interference between return ducting and cable
    trays or other existing infrastructure.

Efficiency
  Advantages:
  − Partially contained airflow management offers improved efficiencies over an open configuration.
  − This strategy offers a moderate TCO.
  Disadvantages:
  − Because the cooling is not fully efficient, this configuration employs extra cooling units to
    overcome mixing, thereby resulting in lower-than-optimum efficiency.
  − As heat density increases, a cooling density limitation may occur.
  − CFD analysis may be required to achieve higher equipment densities and to optimize efficiency.

Reliability
  Advantages:
  − This configuration has no active components, resulting in a low probability of failures.
  Disadvantages:
  − As heat loads increase, the data center is more susceptible to hot spots.

Equipment architectures
  Advantages:
  − The equipment is common and has a large installed base.
  − Many manufacturers make equipment for this configuration.
  − Both procurement and installation are cost-effective.
  − Many combinations of components can be used to achieve cost and efficiency objectives.
  Disadvantages:
  − Airflow may be limited by perforated access floor tiles.
  − It can be difficult to reconfigure the cooling system to accommodate room or IT load changes.

Standardization and acceptance
  Advantages:
  − Many facility managers, consultants, and IT operators are familiar with this strategy.
  Disadvantages:
  − In barrier-contained configurations, the return side area can exceed 100°F, causing potential
    operator safety concerns.
Future Outlook:
Many existing data centers use this strategy and, depending upon the amount of recirculation and/or bypass,
will suffer from capacity limitations, inadequate redundancy, and energy inefficiencies. Because it is so
commonly used, relatively simple, modular, and cost-effective to implement and maintain, this
configuration is likely to remain popular well into the future. Many equipment manufacturers and industry
groups are striving to increase efficiencies and educate data center owners/operators on ways to use these
products as efficiently as possible; however, as cooling densities increase above 10 kW per cabinet, other
strategies will have to be deployed.
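The recirculation and bypass penalties discussed above can be illustrated with a first-order mixing model. This sketch is not from the paper; the temperatures, the 20% recirculation fraction, and the assumption of ideal mixing are all illustrative:

```python
def it_inlet_temp_c(supply_c, return_c, recirculation_fraction):
    """Approximate IT inlet temperature when a fraction of hot return
    air recirculates and mixes with the cold supply stream, assuming
    ideal (complete) mixing of the two streams."""
    return (1.0 - recirculation_fraction) * supply_c + recirculation_fraction * return_c

# With an 18 C supply, a 35 C return, and 20% recirculation, the servers
# see noticeably warmer air than the CRAC supplies, so the operator must
# lower the set point (spending extra energy) to keep inlets within limits.
print(f"{it_inlet_temp_c(18.0, 35.0, 0.20):.1f} C at the server inlet")
```

Driving the recirculation fraction toward zero is exactly what the contained strategy in the next section accomplishes, which is why it permits higher supply set points.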
STRATEGY 1C: CONTAINED AIRFLOW MANAGEMENT
Highlights:
▪ Containment is achieved by using any one airflow management technique, or a combination of them,
  that results in complete isolation and segregation between the cooling source supply and the
  heat-generating equipment return airflows.
▪ A contained airflow management configuration consists of a combination of one or more of the
  following: chimney cabinets ducted to return air plenum spaces, raised-floor supply ducted
  underneath enclosed cabinets, overhead ducting to provide cool air/remove warm air, cold- and
  hot-aisle barriers, cable ingress grommets, and internal configuration of cabinets, including blanking
  panels, air dams, and cable management to minimize airflow obstructions.
▪ This passive containment strategy, when used with popular active architectures, can result in extreme
  energy efficiency because, without recirculation and bypass, 100% of the supply air is provided as
  intake for the IT equipment.
▪ This architecture is cost-effective to install and maintain, resulting in a low TCO, and it can be used
  for new or retrofit applications.
▪ When properly controlled, this architecture is extremely robust, and it can handle variations from rack
  to rack, as well as poor room or under-floor airflow distribution and transients. The IT equipment will
  generally take the air it needs from the cold containment or cold room (and exhaust it into the
  hot-aisle containment). Containment schemes are ideal for supporting the increasingly dynamic
  workload challenges found in today’s virtualized data centers.
▪ Cold-aisle containment can be fed from above or from underneath the floor. Hot-aisle containment
  and/or chimney cabinets typically direct the hot air to local cooling or a ducted return overhead. Both
  cold-aisle containment and hot-aisle containment/chimney cabinets have advantages. The specific
  nature of the subject data center will generally dictate which choice is right.
  − Cold-aisle advantages include:
    • Simplest implementation in a retrofit, particularly in a typical raised-floor data center with hot
      aisle/cold aisle
    • Easier to ensure a consistent IT equipment inlet temperature
    • Somewhat simpler integration with outdoor air economizer schemes
    • Quicker response in the event of a fire (prevention/gas suppression system operation)
  − Hot-aisle advantages include:
    • More of the data center’s open space is at a more tolerable working temperature
    • Larger cold air thermal mass provided for a greater buffer/longer ride-through time during a
      cooling system upset
    • Somewhat more efficient in practice (with less than 100% isolation and some leakage from
      cold to hot side)
  (Note that the efficiency difference between the two is small compared to the significant
  difference between containment and other cooling airflow strategies such as hot aisle/cold aisle.
  The best path is to choose the containment method that fits best with your existing or planned
  infrastructure.)
▪ This architecture is capable of cooling heat load densities in excess of 30 kW per cabinet.
▪ A contained configuration may enable airflow management without utilizing the raised-floor plenum
  for air distribution, potentially lowering facility costs by eliminating the raised-floor architecture.
Table 3. Contained airflow management advantages and disadvantages

Current usage and availability
  Advantages:
  − This strategy is in common practice today.
  − Many manufacturers offer either contained supply or return cabinets.
  Disadvantages:
  − The perception is that this strategy only works for low to medium densities.

Efficiency
  Advantages:
  − Isolating the airflow paths minimizes the cooling system losses, achieving the highest level of
    airflow efficiency.
  − Containment allows less mixing, higher set points, more cooling capacity, higher densities, and
    higher A/C Delta T efficiencies.
  − Higher return temperatures allow for higher evaporator temperatures and a higher compressor
    coefficient of performance (COP).
  − This strategy has a low installation and maintenance TCO.
  − It eliminates perforated access floor tile airflow limitations, depending on the architecture.
  − When contained airflow management is used with economization, more free cooling hours are
    available.
  − Contained airflow management minimizes temperature variance across server inlets, allowing
    room supply temperature to be increased.
  Disadvantages:
  − Barriers generally have to be custom built in order to fit a particular room.
  − This configuration requires more planned system engineering.
  − The CRAC/CRAH Delta T between supply and return air may need to be managed at extreme
    densities and at high utilization levels.
  − Other infrastructure considerations (such as room density, size and layout, and proper quantity,
    location, and sizing of air conditioning units, ducting, and vents) affect energy efficiency.

Reliability
  Advantages:
  − Using non-fan-assisted return airflow ducting results in the highest reliability of all strategies.
  Disadvantages:
  − Ducted systems employing fans add a potential point of failure, decreasing reliability consistent
    with other fan-powered strategies.

Equipment architectures
  Advantages:
  − Passive ducted chimney (return air) cabinets can be used.
  − Fan-assisted ducted (return air) chimney cabinets can be used.
  − Supply air can be contained and delivered via raised floor through openings in cabinets’ bases.
  − Physical barriers such as curtains and solid panels containing cold aisles can be used.
  − Physical barriers such as curtains and solid panels containing hot aisles can be used.
  − This configuration eliminates the need for CFD airflow analysis.
  − Row-based cooling solutions can be used with either hot- or cold-aisle barriers.
  Disadvantages:
  − Fire sprinkler systems may need to be relocated to avoid obstruction from chimney cabinets and
    barriers.
  − Chimney cabinets require deeper footprints to create rear air ducts.
  − Barriers typically need to be customized to fit unique spaces.
  − Barriers may be viewed as obtrusive and unattractive.

Standardization and acceptance
  Advantages:
  − This configuration complies with all current standards.
  − There is growing acceptance of this configuration by the industry.
  − Contained airflow management requires no maintenance or user training.
  Disadvantages:
  − Containing cold supply (versus hot return) air reduces the thermal cold air mass available during
    ride-through conditions.
  − In barrier-contained configurations, the return side area can exceed 100°F.
  − The payback period may be longer due to higher capital expense requirements, as compared with
    a partially contained solution. If used without a raised floor, CAPEX may be lower.
Future Outlook:
A fully contained passive cooling strategy, when used in conjunction with any active cooling systems mentioned
in this paper, allows those systems to operate at a very high efficiency level because all of the cooled supply air
is being delivered to the intake of the IT equipment. The potential efficiency gains, high reliability, and low
install and maintenance TCO make this architecture comparable to the best of other architectures. Contained
airflow management is a newer strategy, is currently used in a number of installations globally, is readily
available from a number of leading manufacturers, and will continue to be an economical choice for many
cooling applications for the foreseeable future.
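The efficiency advantage claimed above rests on the sensible-heat relation: for a fixed heat load, the airflow a cooling system must move is inversely proportional to the air-side Delta T that containment makes achievable. A minimal sketch, with an assumed 100 kW room load and assumed Delta T values (the air properties are standard near-room-condition constants):

```python
# Sensible-heat relation for air: Q = rho * cp * flow * dT, so
# flow (m^3/s) = Q / (rho * cp * dT).
# rho ~ 1.2 kg/m^3 and cp ~ 1005 J/(kg K) for air near room conditions.
RHO_CP = 1.2 * 1005.0  # volumetric heat capacity, J/(m^3 K)

def required_airflow_m3s(heat_kw, delta_t_c):
    """Air volume flow needed to carry away a sensible heat load
    at a given air-side Delta T."""
    return heat_kw * 1000.0 / (RHO_CP * delta_t_c)

# Same assumed 100 kW room load: an open room that sustains only a
# 6 C Delta T needs twice the airflow of a contained layout holding
# 12 C, and fan power grows much faster than linearly with flow.
for dt in (6.0, 12.0):
    print(f"dT = {dt:>4.1f} C -> {required_airflow_m3s(100.0, dt):.1f} m^3/s")
```

Halving the airflow is the mechanism behind the table's claims about higher set points, higher Delta T efficiencies, and more free cooling hours.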
V. Equipment Placement Strategies
STRATEGY 2A: COOLING EQUIPMENT – CABINET
Highlights:
• Cabinet, in-rack, or closed-loop cooling equipment is wholly contained within a cabinet or is immediately adjacent to and dedicated to a cabinet.
• Cabinet architecture provides high-density spot cooling by augmenting and increasing capacity/redundancy of room-based perimeter cooling systems.
• It features shorter, more predictable airflow paths, allowing high utilization of rated air conditioner capacity. It can also reduce CRAC fan power requirements, which increases efficiency, depending upon the room's architecture.
• This strategy allows for variable cooling capacity and redundancy by cabinet.
• Because airflow distances are shortest and well-defined with this strategy, airflows are not affected by room constraints or installation variables, resulting in high air conditioning capacity utilization.
• This architecture handles power or equipment densities of up to 50 kW per cabinet.
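As a rough sanity check on densities of this order, the airflow a cabinet-level cooler must move scales with load and air-side temperature rise (Q = ρ · cp · V̇ · ΔT). A minimal sketch, with the constants and the 11 K (about 20°F) Delta T chosen for illustration:

```python
# Sketch (not from the paper): approximate airflow needed to remove a
# sensible heat load at a given air-side temperature rise,
# from Q = rho * cp * V * dT.
RHO_AIR = 1.2         # kg/m^3, air density at typical data center conditions
CP_AIR = 1.005        # kJ/(kg*K), specific heat of air
M3S_TO_CFM = 2118.88  # cubic meters per second -> cubic feet per minute

def required_airflow_m3s(load_kw: float, delta_t_k: float) -> float:
    """Volumetric airflow (m^3/s) to carry `load_kw` at a rise of `delta_t_k`."""
    return load_kw / (RHO_AIR * CP_AIR * delta_t_k)

# A 50 kW cabinet at an 11 K Delta T needs roughly 3.8 m^3/s (~8,000 CFM):
flow = required_airflow_m3s(50.0, 11.0)
print(f"{flow:.2f} m^3/s  (~{flow * M3S_TO_CFM:.0f} CFM)")
```

This also shows why the high Delta Ts mentioned later in this paper matter: doubling the air-side temperature rise halves the airflow, and therefore the fan power, needed for the same load.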
Table 4. Cabinet cooling equipment advantages and disadvantages

Current usage and availability
Advantages:
− This configuration is readily available, with equipment from a number of leading manufacturers.
− The system provides a flexible, modular deployment for use at the cabinet level.
Disadvantages:
− This configuration may be limited to applications where there is ultra-high density in specific cabinets with low- to medium-density room-cooling capacity.

Efficiency
Advantages:
− Cabinet cooling equipment closely couples heat removal with the heat-generation source to minimize or eliminate mixing.
− The airflow is completely contained within the cabinet.
Disadvantages:
− This configuration is not as efficient in large-scale deployments when compared with other strategies.
− Cabinet cooling equipment requires additional rack-level electrical and cooling infrastructure.
− This configuration may result in stranded capacity when the cooling resource is used by a single rack.

Reliability
Advantages:
− Risk to IT equipment from a single cooling failure is localized, given the one-to-one (dedicated unit per cabinet) architecture.
Disadvantages:
− Many rack cooling systems are not capable of redundancy, thus limiting resiliency.
− As with any active cooling, there is the possibility of liquid leaks.
− In-cabinet cooling requires additional piping, which creates additional potential leakage points.
− Due to the increased number of air conditioner units and related fans, reliability will be compromised if the system is not properly maintained.
− Redundancy requires a CRAC/CRAH system.

Equipment architectures
Advantages:
− The modular and scalable design allows equipment density to vary in size, up to the available cooling capacity.
− This architecture is immune to room effects and reconfiguration; rack layout can be completely arbitrary.
− Heat is captured at the rear of the rack, minimizing mixing with cool air.
− This configuration is capable of handling extreme densities.
− The architecture provides the flexibility to locate IT equipment in rooms and spaces that were not intended for such equipment.
− If sized such that high-density cooling can be applied only where needed, this architecture could have a lower TCO than those that provide high-density cooling everywhere (including areas where it's not necessary).
Disadvantages:
− Each cabinet requires a dedicated air conditioning unit along with associated chilled water or refrigerant piping, resulting in high total costs for installation and maintenance when compared with other strategies.
− There have been some concerns over increased potential for chilled water or refrigerant leakage.
− The cooling capacity cannot be shared with other cabinets.

Standardization and acceptance
Advantages:
− Standardized solutions can be operated with minimal user training.
Future Outlook:
Cabinet, in-rack, or closed-loop cooling equipment design allows for dedicated, highly contained
cooling at the individual cabinet level. This type of architecture allows for extremely high heat density
cooling of IT equipment, independent of the ambient room conditions. As a result, it provides the
greatest flexibility for cabinet locations. The modular, rack-oriented architecture is the most flexible, is
fast to implement, and achieves extreme cooling densities, but at the cost of efficiency. Cabinet
cooling is a newer architecture that is currently used in a number of installations. It is readily available
from multiple leading manufacturers and will continue to be the popular choice for most extreme
cooling density applications for the foreseeable future.
STRATEGY 2B: COOLING EQUIPMENT – ROW
Highlights:
• Row-based cooling is an air distribution approach in which the air conditioners are dedicated to a specific row of cabinets.
• Configurations consist of row-based air conditioners installed between cabinets or rear-door and overhead heat exchangers.
• This strategy features shorter, more predictable airflow paths with less mixing, which allows high utilization of rated air conditioner capacity.
• It also allows for variable cooling capacity and redundancy by row.
• This architecture uses either refrigerant or chilled water cooling coils. Both require remote heat rejection.
• It provides high-density spot cooling by augmenting and increasing capacity/redundancy of non-contained, room-based perimeter cooling systems, thus increasing efficiency.
• Row-oriented architecture handles heat densities of up to 30 kW per cabinet.
Table 5. Row-based cooling equipment advantages and disadvantages

Current usage and availability
Advantages:
− This strategy is widely used, particularly in medium-sized and high-density applications, and it is available from a number of leading manufacturers.
Disadvantages:
− This strategy may not be ideal for very small, low-density server spaces or the very largest deployments.

Efficiency
Advantages:
− Airflow paths are shorter than comparable room configurations, which improves efficiencies.
− Significant efficiency improvements have been shown, especially when deployed in high-recirculation/low-containment environments.
− When used with a containment strategy, even greater efficiency can be achieved.
Disadvantages:
− This configuration has shown diminishing efficiency returns for larger, higher-resiliency or lower-density applications.
− Most row-based cooling systems, by their nature, minimize mixing. However, airflows may not be completely contained and independent of the room environment, and the extent of mixing of row and room air streams will reduce efficiency.
− This strategy requires additional row-level electrical and cooling infrastructure.

Reliability
Advantages:
− Row-based cooling has proven to be as reliable as other active cooling architectures.
− Additional row air conditioning units can be added as needed to meet resiliency requirements.
Disadvantages:
− When compared with room architecture, row cooling requires additional piping, creating additional potential leakage points.
− The increased number of air conditioner units and related fans, compressors, pumps, valves, controls, etc. creates more potential failure points.
− Redundancy requires a CRAC/CRAH system.

Equipment architectures
Advantages:
− The system provides a flexible, modular deployment that is easily scaled with equipment of various sizes.
− Organizations can add cooling capacity to their existing systems.
− Cooling capacity is well-defined and can be shared within the row.
− This strategy is excellent for retrofitting existing installations.
− Overhead heat exchangers can be deployed over any manufacturer's cabinets.
Disadvantages:
− Row-based configurations require additional floor space so rows can accommodate air conditioner units. This increases compute costs per square foot, which is an important consideration in most data centers.
− This strategy is not as flexible where heat loads vary from one end of the floor or row to another.
− Row-based configurations can require proprietary cabinets.
− Rear-door exchangers must be designed to match host cabinets.

Standardization and acceptance
Advantages:
− This architecture has wide acceptance and significant global deployment.
− Standardized solutions can be operated with minimal user training.
Future Outlook:
Row-based cooling is an ideal solution for high-density configurations where additional capacity is
needed to eliminate hot spots while improving overall energy efficiency in open or partial airflow
management environments. It does so by bringing the heat transfer closer to the IT equipment source
near the cabinet. Moving the air conditioner closer to the cabinet ensures a more precise delivery of
supply air and a more immediate capture of exhaust air. The modular, row-oriented architecture
provides many of the flexibility, speed, and density advantages of the rack-oriented approach, but
with a cost similar to room-oriented architecture. Row cooling is a newer architecture, is commonly
used today, is readily available from a number of leading manufacturers, and will continue to be a
popular choice for most applications for the foreseeable future.
STRATEGY 2C: COOLING EQUIPMENT – PERIMETER
Highlights:
• Perimeter or room-oriented architecture consists of one or more air conditioners placed around the perimeter of the data center to supply cool air via a system of plenums, ducts, dampers, vents, etc. This architecture must be designed and built into a building's infrastructure.
• Additional efficiencies are possible when this architecture is used in conjunction with partial or full-containment airflow strategies, such as hot/cold aisles with raised floors to distribute air and drop-ceiling plenums for return air.
• A CRAC contains an internal compressor, fans, and a coil and uses the direct expansion of refrigerant to remove heat, whereas a CRAH contains only fans and a cooling coil and typically uses chilled water to remove heat.
• Both CRACs and CRAHs utilize outdoor heat rejection sources; CRACs use remote condensers, while CRAHs traditionally use chiller systems.
Table 6. Perimeter cooling equipment advantages and disadvantages

Current usage and availability
Advantages:
− This configuration is widely accepted and is available from many well-known suppliers.
Disadvantages:
− This configuration may not be ideal for small or very high-density environments.

Efficiency
Advantages:
− In partial containment environments, floor tiles can be quickly and easily relocated to optimize cool air distribution.
− Very efficient high heat densities are possible when used in conjunction with robust containment and economization.
Disadvantages:
− High heat densities require minimization of recirculation or bypass air, or full containment.
− Unless a plenum solution is used, the heat return is not dispersed evenly.
− Sub-floor obstructions can affect static pressure.
− This configuration is more susceptible to recirculation and bypass than row or cabinet strategies.

Reliability
Advantages:
− Perimeter cooling units have been in use for a long period of time and have proven dependable.
Disadvantages:
− As with any active cooling, there is the potential for liquid leakage.

Equipment architectures
Advantages:
− This configuration promotes an efficient use of computer room floor white space because cabinets can be placed side-by-side in long rows.
− The physical layout of CRACs or CRAHs can vary.
− The perimeter configuration allows for cabinet equipment to be positioned side-by-side, rather than broken up with cooling units in the IT rack space, as is the case with some in-row cooling configurations.
− This architecture is applicable for retrofit or new installations.
Disadvantages:
− This configuration requires more upfront design engineering to ensure that future capacity requirements are met.
− Room constraints, such as CRAC/CRAH location, room shape, ceiling height, under-floor obstructions, etc., affect airflow and therefore overall efficiency.
− Redundancy schemes need careful review to ensure that the data center environment remains supported under all failure modes.

Standardization and acceptance
Advantages:
− This is the current standard in most data centers around the world.
Disadvantages:
− This is not as scalable as other architectures for higher-density applications.
− Supplemental techniques, such as containment or row-based cooling, are routinely used to augment perimeter cooling.
Future Outlook:
Perimeter or room-oriented architecture is the traditional methodology for cooling data centers. It can
offer a well-tested and cost-effective means of providing high efficiency and reliability, especially when
deployed in medium-sized applications in conjunction with a high-containment strategy. Even higher
efficiencies are possible when this architecture is used in conjunction with air or chilled water
economization. There are a number of leading manufacturers offering highly efficient designs that
feature high Delta Ts and variable speed fans, and these designs will continue to be popular for the
foreseeable future.
STRATEGY 2D: COOLING EQUIPMENT – ROOFTOP/BUILDING EXTERIOR
Highlights:
• This strategy utilizes the building's central air-handling units to cool the data center.
• The cooling equipment located outside the computer room typically consists of roof chillers and towers associated with the central plant, and it could very well support cooling equipment within the white space.
• Equipment can also be installed at ground level and ducted through walls.
• This strategy represents the most efficient use of computer room floor white space because the cooling equipment is located on the roof or exterior land area, allowing cabinets to be placed side-by-side in long rows.
• It can be used in conjunction with one of the previously described airflow management strategies, affecting the data center's energy efficiency accordingly.
• As with room-oriented architecture, the data center is supplied cool air via a system of plenums, ducts, dampers, vents, etc.
• This architecture must be designed and built into a building's infrastructure.
Table 7. Rooftop/building exterior cooling equipment advantages and disadvantages

Current usage and availability
Advantages:
− This is the traditional approach for building HVAC systems and is therefore widely available from many suppliers.
− This architecture has been widely and effectively adapted for extremely large data center applications.
Disadvantages:
− Roof penetrations create potential for water leaks into the data center.

Efficiency
Advantages:
− Exterior systems can be very efficient, especially when used with robust containment and economization.
− Larger fans are typically more efficient.
Disadvantages:
− Exterior cooling units occupy land area, requiring additional facility costs.
− This configuration is not cost-effective for smaller applications.
− Without full containment, hot spots can be problematic.
− Organizations can incur increased energy costs for fans to move air from the roof through ducts to the data center.

Reliability
Advantages:
− The reliability of rooftop/exterior cooling equipment is similar to that of other liquid cooling offerings on the market today.
Disadvantages:
− As with any active cooling, there is the possibility of liquid leaks.
− Redundancy is a greater challenge because of the larger capacity of the exterior units and the ductwork/dampers necessary to provide overlapping cooling in the data center.
− Units are exposed to outdoor elements, potentially reducing life expectancy.

Equipment architectures
Advantages:
− This strategy reduces data center requirements for floor space.
− This configuration is cost-effective; roof space costs less per square foot than interior space.
Disadvantages:
− This strategy requires more upfront design engineering to ensure that future capacity requirements will be met.
− This configuration is less flexible than other architectures.
− For new applications, this strategy must be designed into the building's infrastructure.
− Extended pipe and duct runs are required to connect to non-self-contained HVAC systems.
− Additional structural work on the building may be required to support this architecture.
− This is not cost-effective for retrofit applications.

Standardization and acceptance
Advantages:
− This architecture is common, available globally, and widely accepted.
Future Outlook:
Rooftop architecture is the traditional methodology for cooling buildings and has been well-adapted to cooling
data centers. It can offer a well-tested and cost-effective means of providing high efficiency and reliability,
especially when designed into medium- to large-scale applications in conjunction with a robust containment
strategy. Even higher efficiencies are possible when this approach is used in conjunction with air or chilled
water economization. There are a number of leading manufacturers offering highly efficient designs that
feature high Delta-Ts and variable speed fans, and these designs will continue to be popular for the
foreseeable future for new installations.
VI. Heat Rejection Strategies
STRATEGY 3A: HEAT REJECTION – CHILLED WATER SYSTEM
Highlights:
• A chilled water heat rejection system uses chilled water rather than refrigerant to transport heat energy between the air handlers, chillers, and the outdoor heat exchanger (typically a cooling tower in North America or a dry cooler in Europe). The system uses either water or a water/glycol solution to carry heat away from the air handlers serving the data center. This fluid may then be cooled by a chiller using mechanical refrigeration, heat exchange with a cooling tower water loop, or dry coolers operating in conjunction with air-cooled chillers.
• Chilled water–cooled HVAC systems are inherently more efficient at removing the larger heat loads typically found in large data centers, assuming proper airflow management is used.
• The components of the chiller (evaporator, compressor, air- or water-cooled condenser, and expansion device) often come pre-installed from the factory, reducing field labor and installation time and improving reliability.
• Because they are installed in a central location, chilled water systems feature centralized maintenance and improved control characteristics, minimizing leak potential and simplifying containment compared with equipment located on the roof.
• Chilled water systems offer reduced TCO from lower installed costs for larger systems over 300 tons (1055 kW) and lower operating costs when using a cooling tower or an evaporative condenser.
• Water-cooled units make less noise, offer more cooling per square foot, and, depending upon their environment, usually require somewhat less routine maintenance than Strategies 3B and 3C described below.
• Depending upon the geographic location, which affects average ambient temperature and humidity conditions based on annual statistical weather variation data, both chilled water and direct expansion architectures may achieve significant additional energy savings by taking advantage of air or water economization.
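The 300-ton threshold quoted above can be cross-checked against the standard unit definition (1 ton of refrigeration = 12,000 BTU/h ≈ 3.517 kW of heat removal). A minimal conversion sketch:

```python
# Sketch: convert between tons of refrigeration and kW of heat removal.
# 1 ton of refrigeration = 12,000 BTU/h = 3.517 kW (thermal), which is why
# the text pairs "300 tons" with "(1055 kW)".
KW_PER_TON = 3.517

def tons_to_kw(tons: float) -> float:
    return tons * KW_PER_TON

def kw_to_tons(kw: float) -> float:
    return kw / KW_PER_TON

print(round(tons_to_kw(300)))  # ~1055, matching the figure in the text
```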
Table 8. Chilled water system advantages and disadvantages

Current usage and availability
Advantages:
− This is the traditional approach for building HVAC systems and is therefore widely available from many suppliers.
− This architecture has been widely and effectively adapted for use in data center applications.

Efficiency
Advantages:
− Proven technology provides high overall efficiency.
− Although this strategy generally makes for a more expensive plant to build, floor units offer more efficient cooling per square foot.
− This architecture is less costly to operate than other architectures, particularly as scale increases over 300 tons (1055 kW).
Disadvantages:
− Efficiency is dependent upon the year-round outside temperature.

Reliability
Advantages:
− Maintenance is approximately equivalent to other active systems.
− Centralized systems can provide redundancy by installing multiple chillers and pumps.
Disadvantages:
− This configuration has many components when compared with some other systems, which may result in lower reliability.
− Leaks require containment.
− Many operators do not like IT equipment near water.
− Large chiller building blocks require redundant chillers and pump systems.

Equipment architectures
Advantages:
− The centralized design reduces maintenance and improves control.
− This strategy offers flexibility; adding new chilled water circuits to the existing system is a relatively simple operation, and more chillers and pumps can be added to the existing system to increase cooling capacity.
− This architecture is more suitable for cooling large, multi-story buildings and for very long distances along the same floor level.
− Chilled water configurations can operate at lower noise levels than other systems.
Disadvantages:
− Maintenance and water treatment costs are the main disadvantage of evaporative cooling towers and/or evaporative condensers.
− Organizations incur higher installed costs for systems less than 300 tons (1055 kW), compared with DX systems.
− Due to high-volume water consumption, cities and municipalities may enact limits that result in availability concerns and increased costs. (Chillers are also available in air-cooled versions, which do not use excess water.)

Standardization and acceptance
Advantages:
− This architecture is common, available globally, and widely accepted.
Future Outlook:
Chilled water systems are another traditional methodology for cooling buildings and have been well
adapted to cooling data centers. They can offer a well-tested and cost-effective means of providing
high efficiency and reliability, especially when designed into medium- to large-scale applications in
conjunction with a high-containment strategy. There are a number of leading manufacturers offering
high-efficiency designs, which will continue to be a popular choice for the foreseeable future for new
installations.
STRATEGY 3B: HEAT REJECTION – DIRECT EXPANSION (DX) REFRIGERATION SYSTEM
Highlights:
• This cooling strategy uses refrigerant as part of a direct compression/expansion cooling system.
• In a direct expansion unitary system, the evaporator is in direct contact with the air stream, so the cooling coil of the air-side loop is also the evaporator of the refrigeration loop. The term "direct" refers to the position of the evaporator with respect to the air-side loop.
• In DX systems, the treated air stream passes through the outside (fin side) of the evaporator coil such that it is directly cooled by the expansion of refrigerant passing through the tubes of the coil.
• DX systems, especially packaged DX systems, are more economical for smaller building cooling loads of less than 300 tons (1055 kW), where installation costs are lower compared with chilled water systems because DX requires less field labor and fewer materials to install.
• Packaged DX systems using air-cooled condensers generally take up less floor space, so they are frequently installed on a building's roof, in a small room adjacent to a data center, or along the perimeter of a data center where condenser air can be ducted in and out of the building.
• Depending upon the geographic location, which affects average ambient temperature and humidity conditions based on annual statistical weather variation data, both chilled water and direct expansion architectures may achieve significant additional energy savings by taking advantage of air or water economization.
Table 9. Direct expansion (DX) refrigeration advantages and disadvantages

Current usage and availability
Advantages:
− This technology is readily available from many major suppliers.
− This architecture has been widely and effectively used in data center applications.

Efficiency
Advantages:
− DX is generally more cost-effective for cooling smaller rooms/heat loads under 300 tons (1055 kW).
− This architecture requires less labor and fewer materials for installation, which reduces costs, especially with packaged systems.
− The energy usage of individual packaged DX units can easily be measured.
Disadvantages:
− In systems above 300 tons, optimally designed DX systems may be less efficient than optimally designed chilled water or economized systems.

Reliability
Advantages:
− DX system refrigerant leaks evaporate and therefore do not require a containment area, as is the case with chilled water systems.
− With multiple units, there is no single point of failure, and redundancy strategies are more efficiently applied.
− Maintenance is generally easier than for other architectures due to the simpler system design.

Equipment architectures
Advantages:
− New systems are generally less expensive to build for data centers that are less than 300 tons (1055 kW).
Disadvantages:
− Distance constraints between the condenser and the air handling unit and on refrigerant piping limit DX systems to smaller buildings or rooms on a single floor, with no option for multi-story high rises.
− This configuration is noisier than other architectures.

Standardization and acceptance
Advantages:
− This architecture is common, available globally, and widely accepted.
Future Outlook:
Direct expansion architecture is a popular methodology for cooling data centers and has been
deployed in data centers throughout the world. It is a well-tested and cost-effective means of providing
high efficiency and reliability, especially when deployed in small- to medium-sized applications in
conjunction with a robust containment strategy. Even higher efficiencies are possible when used in
conjunction with economization. There are a number of leading manufacturers offering high-efficiency
designs, and these will continue to be a popular choice for the foreseeable future.
STRATEGY 3C: HEAT REJECTION – ECONOMIZATION SYSTEM
Highlights:
• Economizers are commonly used in data centers to complement or replace cooling equipment such as CRACs or chillers by taking advantage of outside air temperature differences to achieve energy efficiency improvements.
• Economizers are most useful for data centers located in cool and moderate climates. Efficiency gains depend upon the geographic location, which affects the average ambient temperature and humidity conditions. (See The Green Grid Free Cooling Tools in the References section.)
• Data center economizers generally have one or more sets of filters to catch particulates that might harm hardware. These filters are installed in the ductwork connecting the outside environment to the data center.
• There are two economizer versions used in data centers: air-side economizers and water-side economizers.
• Air-side economizers pull cooler outside air directly into a facility to prevent overheating. They work best in mild climates such as those found along the Pacific coast.
• Outside air must also be monitored and conditioned to appropriate humidity levels; per ASHRAE, the humidity should be kept between a 5.5°C dew point and either a 15°C dew point or 60% relative humidity (RH), whichever is lower.
• Water-side economizers use cold outside air to cool water in an exterior cooling tower; that chilled water then supplies the data center air conditioners instead of less efficient mechanically chilled water. Water economization is best used in climates with a cool fall and spring and a cold winter.
• Air-side economizers require more complicated engineering in the design phase in order to develop the complex operations necessary for the systems to respond to a fire or smoke event occurring inside or outside the facility.
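The ASHRAE humidity guidance above can be expressed as a simple envelope check. This sketch computes dew point with the Magnus approximation (a standard psychrometric formula, not from this paper); the function names and sample conditions are illustrative:

```python
# Sketch: check air conditions against the ASHRAE recommended humidity
# envelope quoted above (dew point between 5.5 C and 15 C, and at most
# 60% RH, whichever limit is lower). Dew point via the Magnus approximation.
import math

A, B = 17.62, 243.12  # Magnus coefficients (valid roughly -45 C to +60 C)

def dew_point_c(temp_c: float, rh_pct: float) -> float:
    """Approximate dew point (C) from dry-bulb temperature and relative humidity."""
    gamma = (A * temp_c) / (B + temp_c) + math.log(rh_pct / 100.0)
    return (B * gamma) / (A - gamma)

def in_ashrae_envelope(temp_c: float, rh_pct: float) -> bool:
    dp = dew_point_c(temp_c, rh_pct)
    return 5.5 <= dp <= 15.0 and rh_pct <= 60.0

# 24 C supply air at 45% RH: dew point ~11.3 C, RH under 60% -> inside envelope
print(in_ashrae_envelope(24.0, 45.0))
```

An economizer controller would run a check like this against outside-air sensor readings before opening the dampers, falling back to mechanical cooling (or mixing/conditioning the air) when the envelope is violated.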
Table 10. Economization advantages and disadvantages

Current usage and availability
Advantages:
− This technology is readily available from many major suppliers.
− It has been widely and effectively used in data center applications.
Disadvantages:
− This architecture presents some geographic limitations, depending upon average year-round outside air temperature ranges.

Efficiency
Advantages:
− Significant energy efficiency gains are possible, depending upon the number of days that the outside air temperature is below the cooling equipment's set temperatures.
− Air-side economizers work best in mild conditions; they also work well in more variable environments.
− In optimal environments that have cold, dry weather, it's possible to experience cooling energy cost savings of up to 78%.
Disadvantages:
− Efficiency gains depend on the outside air temperature being lower than the cooling equipment's set temperatures.
− Water economizers do not work well in hot or humid climates.
− Air-side economizers are not justifiable in locations that have high year-round temperatures.

Reliability
Disadvantages:
− Filtration systems that bring outside air into a facility require maintenance, and the filtration systems can be complex.
− Reliability and maintenance are more problematic as compared with other architectures, due to system complexity.
− Tower freezing can be a problem in extremely cold environments.
− Air filtration must also be properly maintained.

Equipment architectures
Advantages:
− Economization can be used as stand-alone or supplementary cooling, depending upon outside air temperatures.
− Economization should be considered in new building designs.
Disadvantages:
− Retrofitting economizers may not be feasible. Not only is it generally disruptive, but retrofitting an existing facility, especially one that is fairly new, may not offer sufficient return on investment.
− There is a potential for increased risk from airborne contaminants in air-side economizer deployments.

Standardization and acceptance
Advantages:
− This architecture is common, available globally, and widely accepted.
Future Outlook:
Economizers have been successfully deployed in data centers in many parts of the world.
Economization presents a well-tested and cost-effective means of providing high efficiency and
reliability, especially when deployed in conjunction with a high-containment strategy. With the growing
industry emphasis on energy efficiency, economization can provide significant near-term power cost
savings and defer the capital investment required to build new data centers. A number of leading manufacturers offer high-efficiency designs, and growing use of this architecture will continue for the foreseeable future. Economization should be considered wherever possible.
VII. Conclusion
Ten common data center cooling architectures have been discussed, each having inherent advantages and disadvantages, and the reader has been provided with a high-level qualitative comparison to assist in assessing future data center cooling architecture needs. All but the open airflow management strategy have application in modern data center cooling architecture. However, to achieve the energy efficiency and high heat densities required for future data center applications, critical factors must be considered, such as installed cost, energy consumption and costs, IT heat density, space requirements, temperature and humidity ranges, building height, size, and shape, airflow containment, retrofit versus new construction, maintenance, and control requirements. Ductwork must also be properly sized, or cooling capacity can be compromised. Newer technologies such as virtualization, economization, and variable-speed fans significantly reduce energy use and should be given serious consideration. In summary, no one strategy fits all applications. Each strategy discussed in this paper may be optimal depending upon factors unique to the data center owner, application, and location, and therefore each must be thoroughly vetted to determine which one is best.
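One simple way to vet the critical factors listed above is a weighted scoring matrix. The architectures, criteria, weights, and 1-5 scores below are invented for illustration; the paper itself offers only qualitative comparisons, and real weights must reflect each owner's priorities:

```python
# Hypothetical weighted scoring matrix for comparing cooling architectures.
# All weights and scores are illustrative, not taken from the paper.

WEIGHTS = {"installed_cost": 0.20, "energy_efficiency": 0.35,
           "heat_density": 0.25, "maintainability": 0.20}

SCORES = {  # architecture -> criterion -> score (5 = best)
    "air-side economizer": {"installed_cost": 3, "energy_efficiency": 5,
                            "heat_density": 3, "maintainability": 3},
    "chilled-water CRAH":  {"installed_cost": 2, "energy_efficiency": 3,
                            "heat_density": 4, "maintainability": 4},
}

def weighted_score(scores):
    """Sum of criterion scores weighted by the owner's priorities."""
    return sum(WEIGHTS[c] * s for c, s in scores.items())

# Rank the candidate architectures, best first.
for arch in sorted(SCORES, key=lambda a: -weighted_score(SCORES[a])):
    print(f"{arch}: {weighted_score(SCORES[arch]):.2f}")
```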
VIII. References
1. ASHRAE, Thermal Guidelines for Data Processing Environments, Second Edition (2009)
2. ASHRAE, Datacom Equipment Power Trends and Cooling Applications (2005)
3. ASHRAE, Design Considerations for Datacom Equipment Centers, Second Edition (2009)
4. The Green Grid Free Cooling Tool (North America), http://cooling.thegreengrid.org/namerica/WEB_APP/calc_index.html
5. The Green Grid Free Cooling Tool (Europe), http://cooling.thegreengrid.org/europe/WEB_APP/calc_index_EU.html
6. The Green Grid, Proper Sizing of IT Power and Cooling Loads, White Paper #23 (2009), http://www.thegreengrid.org/en/Global/Content/white-papers/Proper-Sizing-of-IT-Powerand-Cooling-Loads
IX. About The Green Grid
The Green Grid is a global consortium of companies, government agencies, and educational institutions
dedicated to advancing energy efficiency in data centers and business computing ecosystems. The Green Grid
does not endorse vendor-specific products or solutions, and instead seeks to provide industry-wide
recommendations on best practices, metrics, and technologies that will improve overall data center energy
efficiencies. Membership is open to organizations interested in data center operational efficiency at the
Contributor, General, or Associate member level. Additional information is available at www.thegreengrid.org.