Precision Cooling Industry Fails the CIO

Data Center White Paper
March 2, 2011
Carl Cottuli
Vice President, R&D/Service
Executive Summary
The evolving densities of IT loads in the data center have quickly outpaced the capacity of current cooling systems and strained the limits of chaos cooling methods. Aggravating the problem, supplemental cooling programs have increased data center energy costs and carbon footprints. To keep pace with increasing data center loads, the best response is a flexible, cost-effective and green directed-airflow solution, one that can grow with data center needs and that returns flexibility and control to the data center manager.
The Data Center’s Evolution
A few short years ago data center managers became aware of a
new and growing problem. The rising density of IT loads was fast
outpacing the ability of their cooling systems to keep data center
temperatures in check. The cooling limitations of the data center’s
physical environment put IT equipment at risk.
Legacy cooling system designs were not working to support
growing data center IT loads. The situation surprised data center
managers because when they compared cooling needs and capacity
on paper, there appeared to be more than enough cold air capacity
to meet the demand. So what went wrong?
At the time, air delivery methods were totally reliant on a “chaos air
distribution strategy.” Namely, massive amounts of cool air were
supplied through a jet stream to stir up stagnant or warm air in the
data center. This supply of air served a dual purpose: cooling the IT equipment and moving the warm air mass toward the A/C return, or at least away from IT inlets. The hope was that the newly supplied, jet-streamed air would reach all of the data center’s IT equipment, and that the air conditioning system – relying on the same chaos strategy – would extract the warm air generated by the IT equipment. The strategy didn’t work because increasing data loads kept driving up the amount of warm air in the data center.
The thought leadership of the vendor-based community responded
to this new challenge. Its members quickly made an assessment
of the problem and issued the edict that best-in-class data centers
should employ a hot aisle/cold aisle arrangement of the IT racks.
The hot aisle/cold aisle arrangement was a workable theory if the goal was only a marginal improvement on the chaos method of air distribution. That marginal improvement, however, was soon overwhelmed by ever-increasing heat loads.
Now the problem became how to make the recently adopted hot/
cold aisle system perform better. Consequently, the focus shifted
to developing a new breed of “supplemental” cooling products.
Supplemental cooling products were then added to drive up the total
supply of cold air capacity in close proximity to the load.
All this really accomplished was to widen the gap between the data center’s cold air supply capacity and the IT equipment demand within the racks. The original on-paper problem got worse: a data center equipped with both traditional and supplemental cooling systems further widened the imbalance between the energy consumed to get the job done and the cool air actually delivered to prevent equipment overheating.
The adoption of the hot aisle/cold aisle strategy came with a high
price tag and a large business impact in the relocation of racks
within the existing data center. So painful was this relocation
concept that many data center managers have yet to take this step.
Worse, every subsequent development left the data centers that had completed the relocation behind, still unable to solve the problem for which they had originally sought the industry’s help.
This “tale” of data center cooling was built on false assumptions
and poor problem analysis. The hot aisle/cold aisle arrangement
sounded like a good idea at the time, but it has proven less than ideal for keeping pace with the cooling demands of high density equipment, while also adding expense and unwanted constraints to the data center. Who could have guessed that the late adopters would come to look wise? The early adopters have started down a road that steadily limits the flexibility of their IT staffs.
Compounding the Problem
A wider group on the supply side of the data center industry was
quick to join the hot aisle/cold aisle movement, as well. After all,
who doesn’t want to be part of the next great thing? That thing
turned out to be the supply side’s introduction of a completely
new cooling platform that was dependent on the hot aisle/cold
aisle arrangement. The platform became a new industry segment
category called “supplemental cooling”.
Supplemental cooling products came from vendors in many sizes
and shapes. All of these products further limited the flexibility of the
data center manager. They all required significant installation work, displacing racks to make room for the cooling product or demanding heavy structures to support overhead mountings.
Some of these products made it impossible to pass interconnections
of data cables from rack to rack, forcing the use of longer cables
and additional rack-based exit/entry holes. Other supplemental cooling products created environmental health hazards through noise, physical confinement or high temperatures. All of these issues combined to drive up human error rates, as employees working in the data center hurried to escape the hostile environment.
Industry Progress?
Over time, IT progress – in the form of increasingly dense devices
– made this data center problem worse. In response to the denser
IT devices, the thought leadership issued another edict which
recommended putting all the high density equipment in one corner
of the data center and even installing extra supplemental cooling
units in the new area. To add to the problem, the thought leadership
also recommended putting nothing but supplemental cooling units in
the area and renaming them “base cooling units”.
This last arrangement, again, further limited the choices available
to the data center manager and intensified the previous relocation
problems. IT equipment deployments that had been configured by business process or business unit – such as processing, storage and networking in contiguous racks or rows – then had to be divided by high density and low density consumption and spread across the data center.
As an industry, the supply side has continued to offer solutions that
limit the options available to the IT manager; and it shows no signs
of letting up. In fact, the situation could worsen.
For instance, let’s look at a real possibility. During an upcoming
three-year period, a data center, which has been divided into high,
medium and low density segments, now has to accommodate a
typical technology refresh cycle. IT demands require the installation of new high-density equipment in the lower density segment areas.
How does one proceed? Put in more supplemental cooling or base
cooling units?
Another issue to be addressed is that the high density equipment
has to be changed out in the live data center. That is, new
equipment has to be installed before old equipment can be
removed. The overall steady-state demand doesn’t change because it is an even swap, but the new high density equipment still has to go somewhere. Once the original high density equipment is removed, its rack and cooling horsepower sit idle and go to waste. Before long, the data center in question is right back to the problem it had at first, and the situation could again force a mandatory rearrangement of the entire data center.
Greening the Data Center
In addition to a cooling methodology that enables data center
flexibility, the technology industry is also looking for a green solution.
A green solution can be defined as a system that cools the greatest load with the least energy and the lowest CO2 emissions while intelligently matching cooling supply capacity to demand. A data center running a hot aisle/cold aisle strategy may be more efficient than one running chaos model cooling, but a radical imbalance remains between maximum supply capability and actual demand. The problem only worsens when redundancy is factored in.
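To make that imbalance concrete, the hypothetical sketch below compares the cooling capacity typically installed once supplemental units and redundancy are layered on against the IT load it actually serves. Every figure is an assumed value chosen for illustration, not a measurement drawn from this paper.

```python
# Hypothetical illustration of supply/demand imbalance; all figures below
# are assumptions for the sketch, not data from this paper.

it_load_kw = 300.0                # assumed steady-state IT heat load
perimeter_units = 4               # assumed room-level air conditioners
perimeter_kw_each = 100.0
supplemental_units = 6            # assumed in-row / overhead supplemental units
supplemental_kw_each = 30.0
redundancy_factor = 1.25          # assumed N+1 style allowance on the whole plant

installed_kw = (perimeter_units * perimeter_kw_each
                + supplemental_units * supplemental_kw_each) * redundancy_factor

print(f"Installed cooling capacity: {installed_kw:.0f} kW")
print(f"Actual IT demand:           {it_load_kw:.0f} kW")
print(f"Oversupply ratio:           {installed_kw / it_load_kw:.1f}x")
```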
Going Green and Growing Costs
It should also be understood that going green can bring additional costs, environmental trade-offs and limitations to the overall data center cooling solution.
For instance, consider the environmental impact of manufacturing supplemental cooling equipment for the hot aisle/cold aisle method. Before the hardware even has a chance to be inefficient, a factory has emitted tons of CO2 to produce it.
The simplest way to look at this is the CO2 produced per pound of equipment manufactured. Point of origin also matters, because a product made in the USA carries lower CO2 emissions than the same product made in a less regulated country.
For instance, manufacturing a metric ton (approximately 2,200 lbs.) of steel equipment in America would generate approximately 7.04 lbs. of CO2 [1], while in a country such as China the same manufacturing process would produce about 4,400 lbs. of CO2 [2]. This does not take into consideration the oftentimes inferior gauge of foreign steel, or the additional cost and time to transport the product from the international factory to its final destination.
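As a rough worked example, the sketch below applies the two cited per-ton figures to a hypothetical quantity of cooling equipment steel; the equipment weight is an assumption made purely for illustration.

```python
# Embodied-CO2 comparison using the per-ton figures cited in the text;
# the equipment steel weight is a hypothetical assumption.

CO2_LBS_PER_TON_USA = 7.04        # lbs CO2 per metric ton of steel [1]
CO2_LBS_PER_TON_CHINA = 4400.0    # lbs CO2 per metric ton of steel [2]

equipment_steel_tons = 5.0        # assumed steel content of a supplemental cooling fit-out

for origin, rate in (("USA", CO2_LBS_PER_TON_USA), ("China", CO2_LBS_PER_TON_CHINA)):
    print(f"{origin:>5}: {equipment_steel_tons * rate:,.1f} lbs CO2 "
          f"for {equipment_steel_tons} metric tons of steel")
```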
Early On – The Right Answer
Many of the supplemental cooling solutions have created more
problems. So how should today’s data center manager solve
the problem? The root of the problem is the chaos model of air
distribution. The solution is to eliminate the chaos method of airflow and employ an organized system. Early on, the industry as a whole was on the right track, but then it went off on a tangent.
Initially, when the IT density overload problem became apparent, the
only “experts” to turn to were those in the precision cooling product
segment. There were only a handful of providers supplying these
products then. During that period there was also the larger “comfort cooling” industry, which, at the time, could not meet the tight tolerances and rigorous demands required to cool the fledgling IT products.
However, over the years, advances in comfort cooling have resulted
in products that are more than capable of handling today’s IT load. In
addition, today’s more robust IT equipment can handle a wider range
of environmental conditions.
Where do we go from here?
It seems that the two factors cited above that originally drove the development of new products in the precision cooling industry (the comfort cooling segment’s level of capability and the environmental tolerance of IT equipment) may now also lead to its demise. Precision cooling as an industry will slowly become less relevant. What today’s data center needs is a simple, scalable and organized airflow system that addresses the failures of the chaos method.
The typical embodiment of this heat containment design strategy is a rack exhaust system that connects to a return plenum, with cold air supplied either by flooding the room or from under the floor. Employing this type of system allows cooling demand and supply capacity to be aligned much more closely. The strategy yields not only lower energy costs but also less cooling product to be manufactured in the first place.
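As an illustration of matching supply to demand, the sketch below uses the common sensible-heat rule of thumb (CFM ≈ 3.16 × watts / ΔT °F), which is not taken from this paper, to estimate how much supply air a contained rack actually needs.

```python
# Minimal sketch using the standard sensible-heat rule of thumb
# CFM ≈ 3.16 * watts / delta_T(F); not a formula from this paper.

def required_cfm(it_load_watts: float, delta_t_f: float = 25.0) -> float:
    """Approximate supply airflow needed to remove it_load_watts of heat
    at an assumed rack inlet-to-exhaust temperature rise of delta_t_f."""
    return 3.16 * it_load_watts / delta_t_f

# Hypothetical rack loads: with contained exhaust, supply can track each rack.
for rack_kw in (4, 8, 16):
    print(f"{rack_kw:>2} kW rack -> ~{required_cfm(rack_kw * 1000):,.0f} CFM of supply air")
```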
Another benefit of this type of system is its independence from the physical arrangement of the enclosures. Equipment can be organized in any row configuration, and high density loads can be spread across the data center instead of being confined to a dedicated location. This kind of intelligently designed system returns flexibility to the IT manager and removes the pain of rearranging existing equipment.
The legacy data center, with its front-facing-back-of-rack arrangement, can also benefit from a heat containment strategy. In a well-designed system with an open cold air supply and a contained exhaust airflow, both tiles in front of a rack can be dedicated to supplying cold air to that one rack, rather than split between racks facing each other. This allows the legacy floor plan – the method long considered inefficient – to become the most efficient way to meet the cold air demands of today’s data center.
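The sketch below, building on the same rule of thumb as the previous example, compares the rack load one shared tile can support against two dedicated tiles; the per-tile airflow figure is an assumption, not a vendor specification.

```python
# Hypothetical tile comparison for the legacy front-facing-back layout
# with contained exhaust; the per-tile airflow is an assumed figure.

TILE_CFM = 500.0          # assumed delivery of one perforated floor tile
DELTA_T_F = 25.0          # assumed rack inlet-to-exhaust temperature rise

def supportable_kw(cfm: float, delta_t_f: float = DELTA_T_F) -> float:
    # Inverse of the rule of thumb CFM ≈ 3.16 * watts / delta_T(F)
    return cfm * delta_t_f / 3.16 / 1000.0

shared = supportable_kw(1 * TILE_CFM)      # hot/cold aisle: tile shared between facing racks
dedicated = supportable_kw(2 * TILE_CFM)   # legacy layout: both front tiles feed one rack

print(f"One shared tile per rack:     ~{shared:.1f} kW supportable")
print(f"Two dedicated tiles per rack: ~{dedicated:.1f} kW supportable")
```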
The efficiency gains of a heat containment strategy give the organized airflow method a significant additional advantage, one that far exceeds that of any other system on the market today.
The Better Solution
In this new era of competition for data center cooling, the concept
of heat containment is emerging as the most logical and practical
solution. The organized delivery of intake and exhaust airstreams, as
well as advances in the comfort cooling industry, have combined to
make new levels of operating efficiency available to the IT manager.
A key benefit of heat containment – with a rear plenum implementation aggregating all of the heat into a single location – is that it allows the data center to take best advantage of air side economizers (ASEs). An ASE simply introduces cool outside air into the data center, reducing energy use. The decision to run the system on outside or recycled air depends on a comparison of conditions between the two possible air supplies.
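A minimal sketch of that comparison appears below; the temperature thresholds and the dry-bulb-only test are assumptions made for illustration, not a description of any particular product’s control logic.

```python
# Simplified economizer decision: bring in outside air only when it is both
# cooler than the contained return air and within an assumed usable range.
# Thresholds are hypothetical; real controls would also weigh humidity,
# enthalpy and filtration.

def use_outside_air(outside_temp_f: float,
                    return_temp_f: float,
                    min_outside_f: float = 35.0,
                    max_outside_f: float = 75.0) -> bool:
    cooler_than_return = outside_temp_f < return_temp_f
    within_usable_range = min_outside_f <= outside_temp_f <= max_outside_f
    return cooler_than_return and within_usable_range

print(use_outside_air(outside_temp_f=55.0, return_temp_f=95.0))  # True: economize
print(use_outside_air(outside_temp_f=98.0, return_temp_f=95.0))  # False: recirculate
```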
If intelligently integrated into a data center’s cooling systems, ASEs can save energy across the operating spectrum, ease operational issues and reduce carbon footprints. For ASEs to provide these gains, however, they must be used in combination with a heat containment methodology. Without heat containment, the ASE advantage is lost.
Overall, in today’s data center, a heat containment methodology combined with a smart ASE strategy can optimize airflow, reduce overall energy costs, expand hardware flexibility options and improve data center reliability for the entire IT community that relies upon it.
About the Author
As Vice President R&D/Service at Eaton, Carl Cottuli is a recognized
data center airflow management expert. With over 20 years of experience providing innovative solutions that solve real customer problems, Mr. Cottuli has published several articles in the technology marketplace and has been an invited speaker and panelist at industry conferences worldwide.
Eaton Corporation
Electrical Sector
1111 Superior Avenue
Cleveland, OH 44114
United States
877-ETN-CARE (877-386-2273)
Eaton.com
© 2011 Eaton Corporation
All Rights Reserved
Publication No. 11-13/3-11
References
[1] U.S. Environmental Protection Agency, Climate Leaders (June 2003), Greenhouse Gas Inventory Protocol Core Module Guidance: Direct Emissions from Iron and Steel Production. Available at: http://www.epa.gov/stateply/documents/resources/ironsteel.pdf
[2] Zhang Qun, Beijing University of Science and Technology (2006), Talk about the Chinese Iron and Steel Industry Development and Environment Protection. Available at: http://www.hm-treasury.gov.uk/d/final_draft_china_mitigation__iron_and_steel_sector.pdf