Dell™ PowerEdge™ 1650:
Rack Impacts on Cooling for High Density Servers
Enterprise Systems Group (ESG)
Dell White Paper
By Paul Artman, David Moss, and Greg Bennett
August 2002
Contents
Introduction
Server Internal Cooling
Rack Impact on Server Cooling
Thermal Data
Conclusion
Appendix - Test Pictures
Section 1: Introduction
The growth of the Internet fueled the need for rack-dense servers as service
providers (ISPs/ASPs), dot-coms, enterprise customers, and other organizations
leveraging Internet technologies all struggled with the same fundamental issue –
data center space. Even as the dot-com bubble burst and the economy weakened,
rack-dense servers remained popular and have continued to gain share with
respect to the rest of the server market (rack servers currently account for
approximately 40 percent of the overall market [1] and over 50 percent of Dell
sales). The primary reason for this growth is that data center space is scarce,
expensive, or both for most organizations; so whether customers build their own
data centers or lease space from a service provider, companies must maximize
their return by deploying as many servers as possible in the smallest space
possible.
These factors have made 1U and 2U [2] servers particularly attractive. Moving
forward, servers will become even denser with the advent of server blades and
modular blades. With this increased density, however, come increasing power
and thermal concerns as data center managers struggle to power and cool these
rack-dense configurations.
This paper provides general guidance regarding data center design, deployment,
and cooling, and uses the PowerEdge 1650, Dell's dual-processor 1U server, as a
basis for examining the impact of power and cooling in an extremely rack-dense
environment.
[1] Based on market data from the IDC Quarterly Server Tracker, Q1 2002.
[2] A "U" is a widely accepted unit of rack measurement. It represents 1.75 inches
and refers to the height of a server/storage system or of the rack itself.
Section 2: Server Internal Cooling
The actual temperatures seen by internal server components will vary from
server to server depending on the configuration, application, position in the rack,
position in the data center, the amount of cabling, and so on. Dell servers are
designed to cool from front to back and are tested at elevated temperatures
exceeding what is commonly found even in the worst-case locations in a data
center. All Dell servers are designed for a 35°C (95°F) inlet temperature (at the
front server surface) at maximum component power dissipation. This means that
when run at full load, internal components are maintained below their
recommended temperature guidelines, or below the more stringent guidelines
imposed by Dell.
In a redundantly cooled system, the components meet these temperature
requirements even in the event of a fan failure. For processors, servers are
usually designed to cool future processor speeds as well, up to the maximum
speed expected (based on the Intel specification). So, for a server component to
exceed its allowable operating temperature, the server must be operating at
maximum power (a maximized application at maximum processor speed) in an
environment exceeding 35°C (95°F). Since most data centers are cooled to the
low 20°C (68°F) range, there should be significant margin. The remainder of this
white paper quantifies some of the variables introduced by the data center that
may affect that margin.
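As a rough, hypothetical illustration of that margin (not part of the original paper), the following Python sketch subtracts a measured inlet temperature from the 35°C design point; the sample inlet temperatures are assumptions.

```python
# Minimal sketch: remaining margin between a measured server inlet temperature
# and the 35 C design inlet described above. The sample inlet temperatures are
# assumptions, not measurements from this paper.

DESIGN_INLET_C = 35.0  # design inlet temperature (35 C / 95 F)

def inlet_margin_c(measured_inlet_c):
    """Degrees C of margin left before the design inlet temperature is reached."""
    return DESIGN_INLET_C - measured_inlet_c

for inlet in (21.0, 25.0, 30.0):
    print(f"inlet {inlet:.0f} C -> margin {inlet_margin_c(inlet):.0f} C")
```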
Section 3: Rack Impact on Server Cooling
The effect of installing a rack full of high-density servers is a moderate rise in
system component temperatures. This white paper discusses the impact of
racking servers on system component temperatures compared with bench-top
temperature measurements. In addition, the impact of, and variance among,
different rack locations is investigated. Finally, the reasons for increased
component temperatures are discussed.
High-density servers often have reduced system airflow due to the added impact
of the rack, cables, and cable management arm. Factors that reduce system
airflow include the following:
• Blockage due to cable management arms
• Blockage due to cables
• Rack doors
In addition, there can be variance in the inlet temperature to the rack due to the
following:
• Low data center cooling flow rates
• Poor flow balancing under the raised floor
• Proximity to perforated tile locations, resulting in temperature variance from
rack to rack
• Temperature variance within the rack (top to bottom)
In summary, component temperatures can be higher due to decreased flow rate
through the server and increased temperature in front of the server. To try to
quantify cumulative effects within a rack full of PowerEdge 1650 1U servers, Dell
ran a series of thermal tests.
Section 4: Thermal Data
A Dell PowerEdge Rack 4210 42U (2 m) rack was loaded with 29 PowerEdge 1650
1U servers to determine the impact of installing multiple servers in a rack on
system component temperatures. The servers were each configured with dual
Intel Pentium III processors running at 1.26 GHz with 512 KB of L2 cache. In
addition to the servers, the rack contained a 1U keyboard/monitor tray and three
3U UPSs. Each server was cabled with dual NIC cables, dual power cords, and
keyboard, video, and mouse (KVM) cables. This represents an above-average
amount of cabling for a 1U server (see pictures of the test environment in the
Appendix). Cable management arms were also installed on all servers. For the
testing, the front and rear doors of the cabinet were closed, and the side covers
were in place. The test was conducted in a lab with no raised floor at room
temperature (approximately 21°C/70°F).
From this testing, three general observations were made:
• Server inlet temperature could vary by as much as 6°C (11°F), depending on
location up and down the rack. Temperature increased toward the top.
• Dense rear cabling reduces server flow rates. Reduced flow rates could raise
component temperatures by up to 5°C (9°F).
• Component temperature increases were not uniform; component temperatures
at the rear of the server increased more than those at the front.
Inferences about these three points:
• Higher Top Location Entrance Temperature
Within any enclosure, whether it is a rack or a room, there is always a natural
tendency toward a temperature gradient, with cooler air lower and warmer air
higher. Outside the rack, the gradient might be minimized or virtually eliminated
by the high-velocity air shooting up from the floor tiles. Inside the rack, each
server imparts a small amount of heat to the surrounding air and surfaces
through convection and radiation. Because of this natural tendency toward a
temperature gradient, the air surrounding and being drawn into the servers will
be warmer toward the top of the rack. Because the Dell lab test lacked a raised
floor, the observed 6°C variance up the rack may have been exaggerated by a
room gradient outside the rack. It is important to note that the flow rate through
the floor tiles should be at least equal to the cumulative flow rate expected
through the servers. If it is not, air drawn into the top servers (furthest from the
cooling tiles) may be pre-heated air. It is also important to note that attempts to
expel air from the top of a rack should be closely scrutinized. Apart from
providing venting at the top of the rack, Dell does not recommend other means
of expelling air from the top of the rack without a complete analysis of the effect
on each server component. In some cases, this has been known to raise server
temperatures by creating enough backpressure in the rack to reduce the flow
rates through the servers.
• Heavy Cable Blockage Reduces Server Flow Rates
Most 1U servers should not be as densely cabled as the test case, so they should
not exhibit the full 5°C temperature increase seen in the test. Because the amount
of cabling is large relative to the height of a 1U chassis, 2U and larger servers
should see less of an impact from cabling.
• Rear Components Experience Greater Temperature Increases
When the overall flow rate through the server is decreased by added flow
resistance (cables, rack doors, and so on), the air does not carry the heat away as
effectively. The air flowing through a server is heated, and rises in temperature,
as it passes each successive component or heatsink. The resulting air temperature
rise at any point in the server is inversely proportional to how fast the air is
moving through the server: if the speed is cut in half, the rise in air temperature
at any point within the server doubles. For instance, if the air heats up 1 degree
as it passes the first component and is 5 degrees warmer as it passes the last, then
with the flow rate halved it would heat up 2 degrees past the first component and
10 degrees past the last. So, on a relative basis, the increase in air temperature
around the rear components is larger than around the front components for a
server with a decreased flow rate.
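To make this relationship concrete, the following Python sketch applies the standard air-heating relation ΔT = P / (ρ · Q · cp) to a hypothetical string of components; the heat loads and flow rates are illustrative assumptions, not measurements from this paper. Halving the flow rate doubles the cumulative temperature rise at every point in the chassis.

```python
# Illustrative sketch (not from the white paper): cumulative air temperature
# rise along a server's airflow path, using dT = P / (rho * Q * cp).
# The component heat loads and flow rates below are made-up example values.

RHO_AIR = 1.16   # approximate air density at ~30 C, kg/m^3
CP_AIR = 1005.0  # specific heat of air, J/(kg*K)

def cumulative_rise(component_watts, flow_cfm):
    """Return the cumulative air temperature rise (deg C) after each component."""
    flow_m3s = flow_cfm * 0.000471947   # convert CFM to m^3/s
    mass_flow = RHO_AIR * flow_m3s      # kg of air per second through the chassis
    rises, total = [], 0.0
    for watts in component_watts:
        total += watts / (mass_flow * CP_AIR)
        rises.append(round(total, 1))
    return rises

# Hypothetical heat loads (W) encountered front to back in a 1U chassis.
loads = [20, 30, 30, 20]

print(cumulative_rise(loads, flow_cfm=20))  # full flow
print(cumulative_rise(loads, flow_cfm=10))  # halved flow: every rise doubles
```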
Section 5: Conclusion
At least three factors inherent in data centers can lead to an increase in internal
server temperatures:
• Decreased flow rate through the server due to cabling and rack doors
• Uneven temperatures in front of each rack:
  o Proximity to the nearest cooling tile
  o Flow imbalance below the raised floor
• An increased temperature gradient from bottom to top in a rack
This paper discussed and quantified the first and third factors. The second factor
is specific to each data center.
It is important to note that the component temperature increases discussed in
this paper are considered extreme: systems dissipating maximum power (at the
full Intel speed/thermal specification), a greater-than-average number of cables
on a rack-dense platform (1U), and a location at the top of the rack. Based on the
results in this paper, a system operating in the top space in a rack and blocked by
heavy cabling might see temperatures as much as 11°C higher than in a bench
test of the same system. A system like the PowerEdge 1650 is rated to a bench-
test inlet temperature of 35°C (95°F) and will maintain reliable component
temperatures even with an internal fan failure. An equitable comparison to the
35°C (95°F) bench test would therefore be a server under maximized operation,
located at the top of a closed rack with heavy cabling, where the temperature at
the bottom of the rack is 24°C (75°F). Most data centers will measure much lower
temperatures at the bottom of each rack.
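As a rough illustration of this comparison, the following sketch adds the paper's worst-case deltas (6°C rack gradient plus 5°C from heavy cabling) to a bottom-of-rack inlet temperature and checks the result against the 35°C bench rating. Treating the two deltas as directly additive is a simplification in the spirit of the 24°C example above, and the sample inlet temperatures are assumptions.

```python
# Rough worst-case check based on the figures quoted in this paper: a 6 C
# bottom-to-top rack gradient and a 5 C penalty from heavy rear cabling are
# added to the bottom-of-rack inlet temperature and compared against the
# 35 C bench-test rating. The sample inlet temperatures are assumptions.

BENCH_RATED_INLET_C = 35.0   # PowerEdge 1650 bench-test inlet rating
RACK_GRADIENT_C = 6.0        # worst-case bottom-to-top inlet rise observed
CABLING_PENALTY_C = 5.0      # worst-case component rise from heavy cabling

def bench_equivalent_c(bottom_of_rack_inlet_c):
    """Approximate bench-test equivalent for a heavily cabled server at the
    top of a closed rack, treating the two deltas as additive."""
    return bottom_of_rack_inlet_c + RACK_GRADIENT_C + CABLING_PENALTY_C

for inlet in (21.0, 24.0, 27.0):
    equivalent = bench_equivalent_c(inlet)
    status = "within" if equivalent <= BENCH_RATED_INLET_C else "beyond"
    print(f"bottom of rack {inlet:.0f} C -> ~{equivalent:.0f} C equivalent "
          f"({status} the {BENCH_RATED_INLET_C:.0f} C rating)")
```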
Suggestions for optimizing cooling within a data center include:
• Data centers should be set up with hot and cold aisles. In other words, racks
should be arranged back-to-back (hot aisle) and front-to-front (cold aisle).
• Care should be taken to ensure uniformity of temperature in front of each rack
by balancing flow rates under and through the floor (a rough flow-balance
sketch follows this list).
• Perforated tiles should be used in cold aisles. Different perforation levels
(e.g., 25%, 50%) can help balance airflow within a data center. Hot aisles
should not contain perforated tiles, because this lowers the cooling ability of
the cold aisle.
• If there are discrepancies between advertised system operating temperatures,
systems with lower operating temperatures should be located lower in the
rack.
• The addition of rack fans or fan trays is not recommended. In some cases,
additional top-mounted rack fans have actually impeded server thermal
performance, though this may not be the case in every environment.
• Servers are designed to expel their own heat; therefore, the data center should
optimize the efficiency at which that expelled heat is picked up and
transported to the HVAC system.
NOTE: The above suggestions are guidelines only. Results will vary with specific
data center attributes and server configurations. Dell recommends running tests
to help determine ways to optimize cooling within a particular data center
environment.
THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL
ERRORS AND TECHNICAL INACCURACIES. THE CONTENT IS PROVIDED AS IS, WITHOUT EXPRESS OR
IMPLIED WARRANTIES OF ANY KIND.
Dell and PowerEdge are trademarks of Dell Computer Corporation. Other trademarks and trade names may be used in
this document to refer to either the entities claiming the marks and names or their products. Dell disclaims proprietary
interest in the marks and names of others.
©Copyright 2002 Dell Computer Corporation. All rights reserved. Reproduction in any manner whatsoever without the
express written permission of Dell Computer Corporation is strictly forbidden. For more information, contact Dell.
Information in this document is subject to change without notice.
Section 6: Appendix - Test Pictures
This photo illustrates the front of the tested rack. The rack consists of the
following:
• 29 x 1U PowerEdge 1650 servers
• 3 x 3U UPSs
• 1 x 1U keyboard/monitor tray
This photo illustrates the rear of the tested rack. Each 1U server had the
following 7 cables:
• 2 x power cords
• 2 x Gigabit NIC cables
• 3 x keyboard, video, and mouse (KVM) cables
This represents a greater than average cabling scenario from a cable
management perspective.
The picture above is a close-up of the back of the rack with the cable management
arms. It also shows the thermocouples attached to one of the servers in the rack,
used to gather the test results.