A White Paper from the Experts
in Business-Critical Continuity
Energy Logic: Reducing Data Center Energy Consumption by
Creating Savings that Cascade Across Systems
Executive Summary
A number of associations, consultants and vendors have promoted best practices for
enhancing data center energy efficiency. These practices cover everything from facility lighting
to cooling system design, and have proven useful in helping some companies slow or reverse
the trend of rising data center energy consumption. However, most organizations still lack a
cohesive, holistic approach for reducing data center energy use.
Emerson Network Power analyzed the available energy-saving opportunities and identified the
top ten. Each of these ten opportunities was then applied to a 5,000-square-foot data center
model based on real-world technologies and operating parameters. Through the model,
Emerson Network Power was able to quantify the savings of each action at the system level, as
well as identify how energy reduction in some systems affects consumption in supporting
systems.
The model demonstrates that reductions in energy consumption at the IT equipment level
have the greatest impact on overall consumption because they cascade across all supporting
systems. This led to the development of Energy Logic, a vendor-neutral roadmap for optimizing
data center energy efficiency that starts with the IT equipment and progresses to the support
infrastructure. This paper shows how Energy Logic can deliver a 50 percent or greater reduction
in data center energy consumption without compromising performance or availability.
This approach has the added benefit of removing the three most critical constraints faced by
data center managers today: power, cooling and space. In the model, the 10 Energy Logic
strategies freed up two-thirds of floor space, one-third of UPS capacity and 40 percent of
precision cooling capacity.
All of the technologies used in the Energy Logic approach are available today and many can be
phased into the data center as part of regular technology upgrades/refreshes, minimizing
capital expenditures.
The model also identified gaps in existing technologies that, if addressed, could enable greater energy
reductions and help organizations make better decisions regarding the most efficient
technologies for a particular data center.
Introduction
The double impact of rising data center energy consumption and rising energy costs has elevated the importance of data center efficiency as a strategy to reduce costs, manage capacity and promote environmental responsibility.

Data center energy consumption has been driven by the demand within almost every organization for greater computing capacity and increased IT centralization. While this was occurring, global electricity prices increased 56 percent between 2002 and 2006.

The financial implications are significant; estimates of annual power costs for U.S. data centers now range as high as $3.3 billion.

This trend impacts data center capacity as well. According to the Fall 2007 Survey of the Data Center Users Group (DCUG®), an influential group of data center managers, power limitations were cited as the primary factor limiting growth by 46 percent of respondents, more than any other factor.

In addition to financial and capacity considerations, reducing data center energy use has become a priority for organizations seeking to reduce their environmental footprint.

There is general agreement that improvements in data center efficiency are possible. In a report to the U.S. Congress, the Environmental Protection Agency concluded that best practices can reduce data center energy consumption by 50 percent by 2011.

The EPA report included a list of Top 10 Energy Saving Best Practices as identified by the Lawrence Berkeley National Lab. Other organizations, including Emerson Network Power, have distributed similar information, and there is evidence that some best practices are being adopted. The Spring 2007 DCUG survey found that 77 percent of respondents already had their data center arranged in a hot-aisle/cold-aisle configuration to increase cooling system efficiency, 65 percent use blanking panels to minimize recirculation of hot air, and 56 percent have sealed the floor to prevent cooling losses.

While progress has been made, an objective, vendor-neutral evaluation of efficiency opportunities across the spectrum of data center systems has been lacking. This has made it difficult for data center managers to prioritize efficiency efforts and tailor best practices to their data center equipment and operating practices.

This paper closes that gap by outlining a holistic approach to energy reduction, based on quantitative analysis, that enables a 50 percent or greater reduction in data center energy consumption.
Data Center Energy Consumption

The first step in prioritizing energy saving opportunities was to gain a solid understanding of data center energy consumption. Emerson Network Power modeled energy consumption for a typical 5,000-square-foot data center (Figure 1) and analyzed how energy is used within the facility. Energy use was categorized as either "demand side" or "supply side."

Demand-side systems are the servers, storage, communications and other IT systems that support the business. Supply-side systems exist to support the demand side. In this analysis, demand-side systems—which include processors, server power supplies, other server components, storage and communication equipment—account for 52 percent of total consumption. Supply-side systems include the UPS, power distribution, cooling, lighting and building switchgear, and account for 48 percent of consumption.

Information on the data center and infrastructure equipment and operating parameters on which the analysis was based is presented in Appendix A. Note that all data centers are different and the savings potential will vary by facility. At a minimum, however, this analysis provides an order-of-magnitude comparison of data center energy reduction strategies.

The distinction between demand and supply power consumption is valuable because reductions in demand-side energy use cascade through the supply side.
Energy use breakdown in the model data center (Figure 1):

Computing equipment (demand side), 52 percent of total: processor 15%, server power supply 14%, other server components 15%, storage 4%, communication equipment 4%.
Support systems (supply side), 48 percent of total: cooling 38%, UPS 5%, PDU 1%, lighting 1%, building switchgear/MV transformer 3%.

Power draw* by category: computing 588 kW; lighting 10 kW; UPS and distribution losses 72 kW; cooling power draw for computing and UPS losses 429 kW; building switchgear/MV transformer/other losses 28 kW; total 1127 kW.

Figure 1. Analysis of a typical 5,000-square-foot data center shows that demand-side computing equipment accounts for 52 percent of energy usage and supply-side systems account for 48 percent.
* This represents the average power draw (kW). Daily energy consumption (kWh) can be captured by multiplying the power draw by 24.
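For readers who want to work with the breakdown directly, the short Python sketch below recomputes the demand/supply split and the daily energy figure from the Figure 1 values. It is purely illustrative and introduces no data beyond the figures above.

# Rough sanity check of the Figure 1 breakdown (values taken from the model above).
# This is an illustrative sketch, not part of the Energy Logic analysis itself.

power_draw_kw = {
    "computing": 588,                      # demand side: servers, storage, communications
    "lighting": 10,
    "ups_and_distribution_losses": 72,
    "cooling": 429,                        # cooling for the computing load and UPS losses
    "building_switchgear_and_transformer": 28,
}

total_kw = sum(power_draw_kw.values())     # 1127 kW
demand_kw = power_draw_kw["computing"]
supply_kw = total_kw - demand_kw

print(f"Total facility draw: {total_kw} kW")
print(f"Demand side: {demand_kw / total_kw:.0%}, supply side: {supply_kw / total_kw:.0%}")

# Average power draw (kW) x 24 gives daily energy consumption (kWh), as noted in Figure 1.
print(f"Daily energy consumption: {total_kw * 24:,} kWh")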
For example, in the 5,000-square-foot data center used to analyze energy consumption, a 1 Watt reduction at the server component level (processor, memory, hard disk, etc.) results in an additional 1.84 Watts of savings in the power supply, power distribution system, UPS system, cooling system and building entrance switchgear and medium voltage transformer (Figure 2). Consequently, every Watt of savings that can be achieved at the processor level creates a total of 2.84 Watts of savings for the facility.

Figure 2. With the Cascade Effect, a 1 Watt savings at the server component level creates a reduction in facility energy consumption of 2.84 Watts: cumulative savings grow from 1.0 W at the server component to 1.18 W after DC-DC conversion, 1.49 W after AC-DC conversion, 1.53 W after power distribution, 1.67 W after the UPS, 2.74 W after cooling and 2.84 W after the building switchgear and medium voltage transformer.
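The cascade arithmetic can be written out in a few lines. The sketch below (Python, illustrative only) walks the cumulative savings shown in Figure 2 from the server component out to the building entrance; the per-stage increments are read off the figure's cumulative values.

# Illustrative sketch of the Cascade Effect in Figure 2: 1 W saved at a server
# component avoids additional losses in every upstream system.

incremental_savings_w = {
    "DC-DC conversion": 0.18,
    "AC-DC conversion (server power supply)": 0.31,
    "power distribution": 0.04,
    "UPS": 0.14,
    "cooling": 1.07,
    "building switchgear / transformer": 0.10,
}

saved_w = 1.0  # 1 W saved at the server component level
for stage, extra in incremental_savings_w.items():
    saved_w += extra
    print(f"after {stage:<40s} cumulative saving = {saved_w:.2f} W")

# Total facility saving per Watt saved at the component: 2.84 W (1 W + 1.84 W upstream).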
The Energy Logic Approach

Energy Logic takes a sequential approach to reducing energy costs, applying the 10 technologies and best practices that exhibited the most potential in the order in which they have the greatest impact.

While the sequence is important, Energy Logic is not intended to be a step-by-step approach in the sense that each step can only be undertaken after the previous one is complete. The energy saving measures included in Energy Logic should be considered a guide. Many organizations will already have undertaken some measures at the end of the sequence or will have to deploy some technologies out of sequence to remove existing constraints to growth.

The first step in the Energy Logic approach is to establish an IT equipment procurement policy that exploits the energy efficiency benefits of low power processors and high-efficiency power supplies. As these technologies are specified in new equipment, inefficient servers will be phased out and replaced with higher-efficiency units, creating a solid foundation for an energy-optimized data center.

Power management software has great potential to reduce energy costs and should be considered as part of an energy optimization strategy, particularly for data centers that have large differences between peak and average utilization rates. Other facilities may choose not to employ power management because of concerns about response times. A significant opportunity exists within the industry to enhance the sophistication of power management to make it an even more powerful tool in managing energy use.
The next step involves IT projects that may not be driven by efficiency considerations, but have an impact on energy consumption. They include:
• Blade servers
• Server virtualization

These technologies have emerged as "best practice" approaches to data center management and play a role in optimizing a data center for efficiency, performance and manageability.

Once policies and plans have been put in place to optimize IT systems, the focus shifts to supply-side systems. The most effective approaches to infrastructure optimization include:
• Cooling best practices
• 415V AC power distribution
• Variable capacity cooling
• Supplemental cooling
• Monitoring and optimization

Energy Logic is suitable for every type of data center; however, the sequence may be affected by facility type. Facilities operating at high utilization rates throughout a 24-hour day will want to focus initial efforts on sourcing IT equipment with low power processors and high-efficiency power supplies. Facilities that experience predictable peaks in activity may achieve the greatest benefit from power management technology. Figure 6 shows how compute load and type of operation influence priorities.

Reducing Energy Consumption and Eliminating Constraints to Growth

Employing the Energy Logic approach to the model data center reduced energy use by 52 percent without compromising performance or availability.

In its unoptimized state, the 5,000-square-foot data center model used to develop the Energy Logic approach supported a total compute load of 588 kW and a total facility load of 1127 kW. Through the optimization strategies presented here, this facility has been transformed to enable the same level of performance using significantly less power and space. Total compute load was reduced to 367 kW, while rack density was increased from 2.8 kW per rack to 6.1 kW per rack.

This has reduced the number of racks required to support the compute load from 210 to 60 and eliminated the power, cooling and space limitations constraining growth (Figure 4). Total energy consumption was reduced to 542 kW and the total floor space required for IT equipment was reduced by 65 percent (Figure 5).

Emerson Network Power has quantified the savings that can be achieved through each of these actions individually and as part of the Energy Logic sequence (Figure 3). Note that savings for supply-side systems look smaller when taken as part of Energy Logic because those systems are now supporting a smaller load.
Energy Saving Action | Savings Independent of Other Actions | Energy Logic Savings with the Cascade Effect | Cumulative Savings (kW) | ROI
Lower power processors | 111 kW (10%) | 111 kW (10%) | 111 | 12 to 18 mo.
High-efficiency power supplies | 141 kW (12%) | 124 kW (11%) | 235 | 5 to 7 mo.
Power management features | 125 kW (11%) | 86 kW (8%) | 321 | Immediate
Blade servers | 8 kW (1%) | 7 kW (1%) | 328 | TCO reduced 38%*
Server virtualization | 156 kW (14%) | 86 kW (8%) | 414 | TCO reduced 63%**
415V AC power distribution | 34 kW (3%) | 20 kW (2%) | 434 | 2 to 3 mo.
Cooling best practices | 24 kW (2%) | 15 kW (1%) | 449 | 4 to 6 mo.
Variable capacity cooling: variable speed fan drives | 79 kW (7%) | 49 kW (4%) | 498 | 4 to 10 mo.
Supplemental cooling | 200 kW (18%) | 72 kW (6%) | 570 | 10 to 12 mo.
Monitoring and optimization: cooling units work as a team | 25 kW (2%) | 15 kW (1%) | 585 | 3 to 6 mo.

* Source for blade impact on TCO: IDC. ** Source for virtualization impact on TCO: VMware.

Figure 3. Using the model of a 5,000-square-foot data center consuming 1127 kW of power, the actions included in the Energy Logic approach work together to produce a 585 kW reduction in energy use.
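A simple way to read Figure 3 is to apply the cascade-effect savings in sequence to the 1127 kW baseline. The sketch below does exactly that; all values come from the table, and the end point matches the 542 kW optimized load reported earlier.

# Cumulative effect of the ten Energy Logic actions on the 1127 kW model
# (kW values from the "with the Cascade Effect" column of Figure 3).

baseline_kw = 1127
actions_kw = [
    ("Lower power processors", 111),
    ("High-efficiency power supplies", 124),
    ("Power management features", 86),
    ("Blade servers", 7),
    ("Server virtualization", 86),
    ("415V AC power distribution", 20),
    ("Cooling best practices", 15),
    ("Variable capacity cooling", 49),
    ("Supplemental cooling", 72),
    ("Monitoring and optimization", 15),
]

remaining_kw = baseline_kw
for name, saving in actions_kw:
    remaining_kw -= saving
    print(f"{name:<32s} saves {saving:>3d} kW, facility load now {remaining_kw} kW")

total_saving = baseline_kw - remaining_kw
print(f"Total saving: {total_saving} kW ({total_saving / baseline_kw:.0%})")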
Constraint | Unoptimized | Optimized | Capacity Freed Up
Data center space (sq ft) | 4,988 | 1,768 | 3,220 (65%)
UPS capacity (kVA) | 2 x 750 | 2 x 500 | 2 x 250 (33%)
Cooling plant capacity (tons) | 350 | 200 | 150 (43%)
Building entrance switchgear and genset (kW) | 1,169 | 620 | 549 (47%)

Figure 4. Energy Logic removes constraints to growth in addition to reducing energy consumption.
Figure 5. The top diagram shows the unoptimized data center layout. The lower diagram shows the data center after the Energy Logic actions were applied. Space required to support data center equipment was reduced from 5,000 square feet to 1,768 square feet (65 percent).

Key to the layout diagrams: P1 = stage one distribution, side A; P2 = stage one distribution, side B; D1 = stage two distribution, side A; D2 = stage two distribution, side B; CW CRAC = chilled water computer room air conditioner; R = rack; XDV = supplemental cooling module.
Compute intensive, 24-hour operations: 1. Lowest power processor; 2. High-efficiency power supplies; 3. Blade servers; 4. Power management.
Compute intensive, daytime operations: 1. Power management; 2. Low power processor; 3. High-efficiency power supplies; 4. Blade servers.
I/O intensive, 24-hour operations: 1. Virtualization; 2. Lowest power processor; 3. High-efficiency power supply; 4. Power management; 5. Consider mainframe architecture.
I/O intensive, daytime operations: 1. Virtualization; 2. Power management; 3. Low power processor; 4. High-efficiency power supplies.
All facility types: cooling best practices, variable capacity cooling, supplemental cooling, 415V AC distribution, monitoring and optimization.

Figure 6. The Energy Logic approach can be tailored to the compute load and type of operation.
The Ten Energy Logic Actions
1. Processor Efficiency

In the absence of a true standard measure of processor efficiency comparable to the U.S. Department of Transportation fuel efficiency standard for automobiles, Thermal Design Power (TDP) serves as a proxy for server power consumption.

The typical TDP of processors in use today is between 80 and 103 Watts (91 W average). For a price premium, processor manufacturers provide lower voltage versions of their processors that consume on average 30 Watts less than standard processors (Figure 7). Independent research studies show these lower power processors deliver the same performance as higher power models (Figure 8).

Sockets | Speed (GHz) | Standard | Low power | Saving
1 | 1.8-2.6 | 103 W | 65 W | 38 W
2 | 1.8-2.6 | 95 W | 68 W | 27 W
2 | 1.8-2.6 | 80 W | 50 W | 30 W

Figure 7. Intel and AMD offer a variety of low power processors that deliver average savings between 27 W and 38 W.

Figure 8. Performance results for standard- and low-power processor systems using the American National Standards Institute AS3AP benchmark (transactions per second, higher is better, across five load levels) for the Opteron 2218 HE 2.6 GHz, Woodcrest LV 5148 2.3 GHz, Opteron 2218 2.6 GHz and Woodcrest 5140 2.3 GHz. Source: Anandtech.

In the 5,000-square-foot data center modeled for this paper, low power processors create a 10 percent reduction in overall data center power consumption.

2. Power Supplies

As with processors, many of the server power supplies in use today are operating at efficiencies below what is currently available. The U.S. EPA estimated the average efficiency of installed server power supplies at 72 percent in 2005. In the model, we assume the unoptimized data center uses power supplies that average 79 percent efficiency across a mix of servers that range from four years old to new.
Best-in-class power supplies are available today that deliver efficiency of 90 percent. Use of these power supplies reduces power draw within the data center by 124 kW, or 11 percent of the 1127 kW total.

As with other data center systems, server power supply efficiency varies depending on load. Some power supplies perform better at partial loads than others, and this is particularly important in dual-corded devices, where power supply utilization can average less than 30 percent. Figure 9 shows power supply efficiencies at different loads for two power supply models. At 20 percent load, model A has an efficiency of approximately 88 percent while model B has an efficiency closer to 82 percent.

Figure 9 also highlights another opportunity to increase efficiency: sizing power supplies closer to actual load. Notice that the maximum configuration is about 80 percent of the nameplate rating and the typical configuration is 67 percent of the nameplate rating. Server manufacturers should allow purchasers to choose power supplies sized for a typical or maximum configuration.

Figure 9. Power supply efficiency can vary significantly depending on load, and power supplies are often sized for a load that exceeds the maximum server configuration. [The chart plots efficiency from 70 to 90 percent against percent of full load for two power supply models, with markers for the typical PSU load in a redundant configuration, the typical configuration at 100 percent CPU utilization, the maximum configuration and the nameplate rating.]
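To make the partial-load point concrete, the sketch below compares the AC input power two supplies would draw for the same DC load, using the approximate 20 percent-load efficiencies quoted for Figure 9. The 250 W DC load is a hypothetical server, not a value from the model.

# Illustrative comparison of two server power supplies at partial load
# (model A ~88% and model B ~82% efficiency, as cited for Figure 9).

def input_power_w(dc_load_w: float, efficiency: float) -> float:
    """AC input power required to deliver a given DC load at a given efficiency."""
    return dc_load_w / efficiency

dc_load_w = 250.0  # hypothetical per-server DC load at low utilization
model_a_w = input_power_w(dc_load_w, 0.88)
model_b_w = input_power_w(dc_load_w, 0.82)

print(f"Model A input: {model_a_w:.0f} W, model B input: {model_b_w:.0f} W")
print(f"Difference per server: {model_b_w - model_a_w:.0f} W at the plug")
# Per the Cascade Effect, each Watt avoided at the server also avoids additional
# losses in power distribution, UPS and cooling.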
3. Power Management Software

Data centers are sized for peak conditions that may rarely exist. In a typical business data center, daily demand progressively increases from about 5 a.m. to 11 a.m. and then begins to drop again at 5 p.m. (Figure 10).

Server power consumption remains relatively high as server load decreases (Figure 11). In idle mode, most servers consume between 70 and 85 percent of full operational power. Consequently, a facility operating at just 20 percent capacity may use 80 percent of the energy of the same facility operating at 100 percent capacity.

Figure 10. Daily utilization for a typical business data center. [The chart plots percent CPU utilization over a 24-hour period.]
Figure 11. Low processor activity does not translate into low power consumption. [The chart plots AC power input in Watts against percent CPU time for a Dell PowerEdge 2400 web/SQL server.]

Server processors have built-in power management features that can reduce power when the processor is idle. Too often these features are disabled because of concerns regarding response time; however, this decision may need to be reevaluated in light of the significant savings this technology can enable.

In the model, we assume that the idle power draw is 80 percent of the peak power draw without power management, and reduces to 45 percent of peak power draw when power management is enabled. With this scenario, power management can save an additional 86 kW, or eight percent of the unoptimized data center load.
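The sketch below illustrates the idle-draw assumption for a single server. The 400 W peak figure is invented for illustration; the 80 percent and 45 percent factors are the model assumptions stated above.

# Sketch of the power management assumption in the model: idle draw is 80 percent
# of peak without power management and 45 percent of peak with it enabled.
# The 400 W peak figure is a hypothetical server, not a value from the model.

peak_w = 400.0
idle_without_pm_w = 0.80 * peak_w   # 320 W
idle_with_pm_w = 0.45 * peak_w      # 180 W

saving_per_idle_server_w = idle_without_pm_w - idle_with_pm_w
print(f"Idle draw without power management: {idle_without_pm_w:.0f} W")
print(f"Idle draw with power management:    {idle_with_pm_w:.0f} W")
print(f"Saving per idle server: {saving_per_idle_server_w:.0f} W "
      f"({saving_per_idle_server_w / peak_w:.0%} of peak)")
# Applied across the idle hours of the model's server population, this assumption
# yields the additional 86 kW (eight percent) saving cited above.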
4. Blade Servers

Many organizations have implemented blade servers to meet processing requirements and improve server management. While the move to blade servers is typically not driven by energy considerations, blade servers can play a role in reducing energy consumption.

Blade servers consume about 10 percent less power than equivalent rack mount servers because multiple servers share common power supplies, cooling fans and other components.

In the model, we see a one percent reduction in total energy consumption when 20 percent of rack-based servers are replaced with blade servers. More importantly, blades facilitate the move to a high-density data center architecture, which can significantly reduce energy consumption.

5. Server Virtualization

As server technologies are optimized, virtualization is increasingly being deployed to increase server utilization and reduce the number of servers required.

In our model, we assume that 25 percent of servers are virtualized, with eight non-virtualized physical servers being replaced by one virtualized physical server. We also assume that the applications being virtualized were residing in single-processor and two-processor servers and that the virtualized applications are hosted on servers with at least two processors.

Implementing virtualization provides an incremental eight percent reduction in total data center power draw for the 5,000-square-foot facility.
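As a rough illustration of the consolidation assumption, the sketch below applies the 25 percent share and the 8:1 ratio to the 1,064-server population defined in Appendix A. The resulting counts are illustrative arithmetic, not figures from the model.

# Sketch of the virtualization assumption: 25 percent of installed servers are
# virtualized, with eight non-virtualized physical servers replaced by one host.

total_servers = 1064          # unoptimized server population (Figure 16, Appendix A)
virtualized_share = 0.25
consolidation_ratio = 8       # physical servers per virtualized host

candidates = round(total_servers * virtualized_share)   # servers to virtualize
hosts_needed = -(-candidates // consolidation_ratio)    # ceiling division
servers_removed = candidates - hosts_needed

print(f"Servers virtualized: {candidates}")
print(f"Virtualization hosts required: {hosts_needed}")
print(f"Physical servers eliminated: {servers_removed}")
print(f"New server count: {total_servers - servers_removed}")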
6. Cooling Best Practices

Most data centers have implemented some best practices, such as the hot-aisle/cold-aisle rack arrangement. Potential exists in sealing gaps in floors, using blanking panels in open spaces in racks, and avoiding mixing of hot and cold air. ASHRAE has published several excellent papers on these best practices.

Computational fluid dynamics (CFD) can be used to identify inefficiencies and optimize data center airflow (Figure 12). Many organizations, including Emerson Network Power, offer CFD imaging as part of data center assessment services focused on improving cooling efficiency.

Figure 12. CFD imaging can be used to evaluate cooling efficiency and optimize airflow. This image shows hot air being recirculated as it is pulled back toward the CRAC, which is poorly positioned.

Additionally, temperatures in the cold aisle may be able to be raised if current temperatures are below 68° F. Chilled water temperatures can often be raised from 45° F to 50° F.

In the model, cooling system efficiency is improved five percent simply by implementing best practices. This reduces overall facility energy costs by one percent with virtually no investment in new technology.

7. 415V AC Power Distribution

The critical power system represents another opportunity to reduce energy consumption; however, even more than with other systems, care must be taken to ensure reductions in energy consumption are not achieved at the cost of reduced equipment availability.

Most data centers use a type of UPS called a double-conversion system. These systems convert incoming power to DC and then back to AC within the UPS. This enables the UPS to generate a clean, consistent waveform for IT equipment and effectively isolates IT equipment from the power source.

UPS systems that don't convert the incoming power—line interactive or passive standby systems—can operate at higher efficiencies because they avoid the losses associated with the conversion process. However, these systems may compromise equipment protection because they do not fully condition incoming power.

A bigger opportunity exists downstream from the UPS.
In most data centers, the UPS provides power at 480V, which is then stepped down via a transformer, with accompanying losses, to 208V in the power distribution system. These stepdown losses can be eliminated by converting UPS output power to 415V.

The 415V three-phase input provides 240V single-phase, line-to-neutral input directly to the server (Figure 13). This higher voltage not only eliminates stepdown losses but also enables an increase in server power supply efficiency. Servers and other IT equipment can handle 240V AC input without any issues.

In the model, an incremental two percent reduction in facility energy use is achieved by using 415V AC power distribution.

Figure 13. 415V power distribution provides a more efficient alternative to using 208V power. [In the traditional approach, the 480V UPS output feeds a PDU transformer that steps power down to 208V for the server, storage and communication equipment power supplies. With 415V distribution, the UPS output is delivered as 415V three-phase and the PDU panelboard feeds the power supplies at 240V phase-to-neutral, eliminating the stepdown transformer.]

8. Variable Capacity Cooling

Data center systems are sized to handle peak loads, which rarely exist. Consequently, operating efficiency at full load is often not a good indication of actual operating efficiency. Newer technologies, such as Digital Scroll compressors and variable frequency drives in computer room air conditioners (CRACs), allow high efficiencies to be maintained at partial loads.

Digital Scroll compressors allow the capacity of room air conditioners to be matched exactly to room conditions without turning compressors on and off.

Typically, CRAC fans run at a constant speed and deliver a constant volume of air flow. Converting these fans to variable frequency drive fans allows fan speed and power draw to be reduced as load decreases. Fan power is directly proportional to the cube of fan rpm, and a 20 percent reduction in fan speed provides almost 50 percent savings in fan power consumption. These drives are available in retrofit kits that make it easy to upgrade existing CRACs, with a payback of less than one year.

In the chilled water-based air conditioning system used in this analysis, the use of variable frequency drives provides an incremental saving of four percent in data center power consumption.
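The fan affinity relationship cited above is easy to verify. The sketch below assumes the ideal cube-law relationship between fan speed and fan power; real CRAC fans will deviate somewhat from it.

# Fan affinity law sketch: fan power scales with the cube of fan speed, so a modest
# reduction in CRAC fan speed yields a large reduction in fan power consumption.

def fan_power_fraction(speed_fraction: float) -> float:
    """Fraction of full-speed fan power required at a given fraction of full speed."""
    return speed_fraction ** 3

for reduction in (0.10, 0.20, 0.30):
    speed = 1.0 - reduction
    saving = 1.0 - fan_power_fraction(speed)
    print(f"{reduction:.0%} slower fan -> {saving:.0%} less fan power")

# A 20 percent speed reduction cuts fan power by 1 - 0.8**3 = 48.8 percent,
# which matches the "almost 50 percent" figure cited above.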
9. High Density Supplemental Cooling

Traditional room-cooling systems have proven very effective at maintaining a safe, controlled environment for IT equipment. However, optimizing data center energy efficiency requires moving from traditional data center densities (2 to 3 kW per rack) to an environment that can support much higher densities (in excess of 30 kW).

This requires implementing an approach to cooling that shifts some of the cooling load from traditional CRAC units to supplemental cooling units. Supplemental cooling units are mounted above or alongside equipment racks (Figure 14), and pull hot air directly from the hot aisle and deliver cold air to the cold aisle.

Supplemental cooling units can reduce cooling costs by 30 percent compared to traditional approaches to cooling. These savings are achieved because supplemental cooling brings cooling closer to the source of heat, reducing the fan power required to move air. They also use more efficient heat exchangers and deliver only sensible cooling, which is ideal for the dry heat generated by electronic equipment.

Refrigerant is delivered to the supplemental cooling modules through an overhead piping system which, once installed, allows cooling modules to be easily added or relocated as the environment changes.

Figure 14. Supplemental cooling enables higher rack densities and improved cooling efficiency.

In the model, 20 racks at 12 kW density per rack use high density supplemental cooling while the remaining 40 racks (at 3.2 kW density) are supported by the traditional room cooling system. This creates an incremental six percent reduction in overall data center energy costs. As the facility evolves and more racks move to high density, the savings will increase.
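As a quick consistency check, the sketch below totals the optimized rack layout described above; it closely matches the roughly 367 kW compute load and the 6.1 kW per rack average density cited earlier in this paper.

# Quick check of the optimized rack layout: 20 high-density racks on supplemental
# cooling plus 40 racks on traditional room cooling.

high_density_racks, high_density_kw = 20, 12.0
room_cooled_racks, room_cooled_kw = 40, 3.2

total_racks = high_density_racks + room_cooled_racks
compute_load_kw = high_density_racks * high_density_kw + room_cooled_racks * room_cooled_kw

print(f"Racks: {total_racks}")                                          # 60, down from 210
print(f"Compute load: {compute_load_kw:.0f} kW")                        # ~368 kW
print(f"Average density: {compute_load_kw / total_racks:.1f} kW/rack")  # ~6.1 kW/rack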
10. Monitoring and Optimization
One of the consequences of rising equipment
densities has been increased diversity within
the data center. Rack densities are rarely uniform across a facility and this can create cooling inefficiencies if monitoring and optimization is not implemented. Room cooling units
on one side of a facility may be humidifying
the environment based on local conditions
while units on the opposite side of the facility
are dehumidifying.
Cooling control systems can monitor conditions across the data center and coordinate
the activities of multiple units to prevent conflicts and increase teamwork (Figure 15).
Figure 15. System-level control reduces conflict between room air conditioning units operating in different zones in the data center. [The diagram contrasts six cooling units operating independently under unbalanced and no-load conditions, with some units running humidifiers while others run compressors and dehumidify, against teamwork mode, in which units coordinated over a switch or hub have heaters disabled in the "cold zone" and avoid conflicting operation.]

In the model, an incremental saving of one percent is achieved as a result of system-level monitoring and control.
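A minimal sketch of what such coordination might look like is shown below. The humidity readings, setpoint and deadband are hypothetical, and real control systems are considerably more sophisticated; the point is simply that acting on a shared room-level condition prevents units from working against each other.

# Hypothetical sketch of "teamwork" control: instead of each cooling unit humidifying
# or dehumidifying on its own local reading, a supervisory controller acts on the
# average condition across the room. All values below are invented for illustration.

unit_humidity_readings = {"unit_1": 38.0, "unit_2": 41.0, "unit_3": 52.0, "unit_4": 55.0}
setpoint, deadband = 45.0, 5.0   # percent relative humidity

# Uncoordinated behavior: units 1-2 would humidify while units 3-4 dehumidify.
room_average = sum(unit_humidity_readings.values()) / len(unit_humidity_readings)

if room_average < setpoint - deadband:
    command = "humidify"
elif room_average > setpoint + deadband:
    command = "dehumidify"
else:
    command = "hold"   # within the deadband: no unit fights another

print(f"Room average humidity: {room_average:.1f}% RH -> command to all units: {command}")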
Other Opportunities for Savings

Energy Logic prioritizes the most effective energy reduction strategies, but it is not intended to be a comprehensive list of energy reducing measures. In addition to the actions in the Energy Logic strategy, data center managers should consider the feasibility of the following:

• Consolidate data storage from direct attached storage to network attached storage. Also, faster disks consume more power, so consider reorganizing data so that less frequently used data resides on slower archival drives.

• Use economizers where appropriate to allow outside air to be used to support data center cooling during colder months, creating opportunities for energy-free cooling. With today's high-density computing environments, economizers can be cost effective in many more locations than might be expected.

• Monitor and reduce parasitic losses from generators, exterior lighting and perimeter access control. For a 1 MW load, generator losses of 20 kW to 50 kW have been measured.

What the Industry Can Learn from the Energy Logic Model

The Energy Logic model not only prioritizes actions for data center managers. It also provides a roadmap for the industry to deliver the information and technologies that data center managers can use to optimize their facilities. Here are specific actions the industry can take to support increased data center efficiency:

1. Define universally accepted metrics for processor, server and data center efficiency

There have been tremendous technology advances in server processors in the last decade. Until 2005, higher processor performance was linked with higher clock speeds and hotter chips consuming more power. Recent advances in multi-core technology have driven performance increases by using more computing cores operating at relatively lower clock speeds, which reduces power consumption.
Today processor manufacturers offer a range of server processors from which a customer needs to select the right processor for a given application. What is lacking is an easy-to-understand and easy-to-use measure, such as the miles-per-gallon automotive fuel efficiency ratings developed by the U.S. Department of Transportation, that can help buyers select the ideal processor for a given load. The performance-per-Watt metric is evolving gradually, with the SPEC score being used as the server performance measure, but more work is needed.

This same philosophy could be applied at the facility level. An industry standard of data center efficiency that measures performance per Watt of energy used would be extremely beneficial in measuring the progress of data center optimization efforts. The PUE ratio developed by the Green Grid provides a measure of infrastructure efficiency, but not total facility efficiency. IT management needs to work with IT equipment and infrastructure manufacturers to develop the miles-per-gallon equivalent for both systems and facilities.
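As a point of reference, PUE can be computed directly from the model figures used in this paper; the sketch below does so for the unoptimized and optimized states. As noted above, PUE reflects infrastructure overhead rather than useful work per Watt.

# PUE (total facility power / IT equipment power) for the model data center,
# before and after the Energy Logic actions, using figures cited in this paper.

def pue(total_facility_kw: float, it_load_kw: float) -> float:
    return total_facility_kw / it_load_kw

print(f"Unoptimized: PUE = {pue(1127, 588):.2f}")   # 1127 kW facility, 588 kW IT load
print(f"Optimized:   PUE = {pue(542, 367):.2f}")    # 542 kW facility, 367 kW IT load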
2. More sophisticated power management

While enabling power management features provides tremendous savings, IT management often prefers to stay away from this technology as the impact on availability is not clearly established. As more tools become available to manage power management features, and data is available to ensure that availability is not impacted, we should see this technology gain market acceptance. More sophisticated controls that would allow these features to be enabled only during periods of low utilization, or turned off when critical applications are being processed, would eliminate much of the resistance to using power management.

3. Matching power supply capacity to server configuration

Server manufacturers tend to oversize power supplies to accommodate the maximum configuration of a particular server. Some users may be willing to pay an efficiency penalty for the flexibility to more easily upgrade, but many would prefer a choice between a power supply sized for a standard configuration and one sized for the maximum configuration. Server manufacturers should consider making these options available, and users need to be educated about the impact power supply size has on energy consumption.

4. Designing for high density

A perception persists that high-density environments are more expensive than simply spreading the load over a larger space. High-density environments employing blade and virtualized servers are actually economical, as they drive down energy costs and remove constraints to growth, often delaying or eliminating the need to build new facilities.

5. High-voltage distribution

415V power distribution is used commonly in Europe, but UPS systems that easily support this architecture are not readily available in the United States. Manufacturers of critical power equipment should provide 415V output as an option on UPS systems and can do more to educate their customers regarding high-voltage power distribution.

6. Integrated measurement and control

Data that can be easily collected from IT systems and the racks that support them has yet to be effectively integrated with support system controls. This level of integration would allow IT systems, applications and support systems to be more effectively managed based on actual conditions at the IT equipment level.
Conclusion

For data center managers, there are a number of actions that can be taken today that can significantly drive down energy consumption while freeing physical space and power and cooling capacity to support growth.

Energy reduction initiatives should begin with policies that encourage the use of efficient IT technologies, specifically low power processors and high-efficiency power supplies. This will allow more efficient technologies to be introduced into the data center as part of the normal equipment replacement cycle.

Power management software should also be considered in applications where it is appropriate, as it may provide greater savings than any other single technology, depending on data center utilization.

IT consolidation projects also play an important role in data center optimization. Both blade servers and virtualization contribute to energy savings and support a high-density environment that facilitates true optimization.

The final steps in the Energy Logic optimization strategy are to focus on infrastructure systems, employing a combination of best practices and efficient technologies to increase the efficiency of power and cooling systems.

Together these strategies created a 52 percent reduction in energy use in the 5,000-square-foot data center model developed by Emerson Network Power while removing constraints to growth. Appendix B shows exactly how these savings are achieved over time as legacy technologies are phased out and savings cascade across systems.

Data center managers and designers, IT equipment manufacturers and infrastructure providers must all collaborate to truly optimize data center efficiency.
Appendix A: Data Center Assumptions Used to Model Energy Use
To quantify the results of the efficiency improvement options presented in this paper, a hypothetical data center that has not been optimized for energy efficiency was created. The ten efficiency actions presented in this paper were then applied to this facility sequentially to quantify results.

This 5,000-square-foot hypothetical data center has 210 racks with an average heat density of 2.8 kW/rack. The racks are arranged in a hot-aisle/cold-aisle configuration. Cold aisles are four feet wide, and hot aisles are three feet wide. Based on this configuration and operating parameters, average facility power draw was calculated to be 1127 kW. Following are additional details used in the analysis.

Servers
• Age is based on an average server replacement cycle of 4-5 years.
• Processor Thermal Design Power averages 91 W per processor.
• All servers have dual redundant power supplies. The average DC-DC conversion efficiency is assumed at 85 percent and the average AC-DC conversion efficiency is assumed at 79 percent for the mix of servers from four years old to new.
• Daytime power draw is assumed to exist for 14 hours on weekdays and 4 hours on weekends. Nighttime power draw is 80 percent of daytime power draw.
• See Figure 16 for more details on server configuration and operating parameters.
Parameter | Single socket | Two sockets | Four sockets | More than four | Total
Number of servers | 157 | 812 | 84 | 11 | 1064
Daytime power draw (Watts/server) | 277 | 446 | 893 | 4387 | –
Nighttime power draw (Watts/server) | 247 | 388 | 775 | 3605 | –
Total daytime power draw (kW) | 44 | 362 | 75 | 47 | 528
Total nighttime power draw (kW) | 39 | 315 | 65 | 38 | 457
Average server power draw (kW) | 41 | 337 | 70 | 42 | 490

Figure 16. Server operating parameters used in the Energy Logic model.
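The average power draw row can be approximated from the stated operating schedule. The sketch below assumes daytime draw applies 14 hours on each weekday and 4 hours on each weekend day, with nighttime draw the rest of the week; the result lands very close to the 490 kW shown in Figure 16.

# Sketch reproducing the "average server power draw" in Figure 16 from the stated
# schedule. The hour-weighting interpretation is an assumption for illustration.

daytime_kw, nighttime_kw = 528, 457          # totals from Figure 16
daytime_hours = 5 * 14 + 2 * 4               # 78 hours per week
nighttime_hours = 7 * 24 - daytime_hours     # 90 hours per week

average_kw = (daytime_kw * daytime_hours + nighttime_kw * nighttime_hours) / (7 * 24)
print(f"Weighted average server power draw: {average_kw:.0f} kW")  # ~490 kW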
Storage
• Storage type: network attached storage.
• Capacity is 120 Terabytes.
• Average power draw is 49 kW.

Communication Equipment
• Routers, switches and hubs required to interconnect the servers, storage and access points through the Local Area Network and provide secure access to public networks.
• Average power draw is 49 kW.

Power Distribution Units (PDU)
• Provide output of 208V, 3-phase through whips and rack power strips to power servers, storage, communication equipment and lighting (average load is 539 kW).
• Input from the UPS is 480V, 3-phase.
• Efficiency of power distribution is 97.5 percent.

UPS System
• Two double-conversion 750 kVA UPS modules arranged in a dual redundant (1+1) configuration with input filters for power factor correction (power factor = 91 percent).
• The UPS receives 480V input power from the distribution board and provides 480V, 3-phase power to the power distribution units on the data center floor.
• UPS efficiency at part load: 91.5 percent.

Cooling System
• The cooling system is chilled water based.
• Total sensible heat load on the precision cooling system includes heat generated by the IT equipment, UPS and PDUs, building egress and human load.
• Cooling system components:
  - Eight 128.5 kW chilled water precision cooling units placed at the end of each hot aisle, including one redundant unit.
  - The chilled water source is a chiller plant consisting of three 200-ton chillers (n+1) with matching condensers for heat rejection and four chilled water pumps (n+2).
  - The chillers, pumps and air conditioners are powered from the building distribution board (480V, 3-phase).
  - Total cooling system power draw is 429 kW.

Building Substation
• The building substation provides 480V, 3-phase power to the UPS and cooling systems.
• Average load on the building substation is 1,099 kW.
• Utility input is a 13.5 kV, 3-phase connection.
• The system consists of a transformer with isolation switchgear on the incoming line, and switchgear, circuit breakers and a distribution panel on the low-voltage line.
• Substation, transformer and building entrance switchgear composite efficiency is 97.5 percent.
Appendix B: Timing Benefits from Efficiency Improvement Actions
The ten actions fall into four groups in the model: IT policies, IT projects, best practices and infrastructure projects. Estimated cumulative yearly savings (kW) by efficiency improvement area:

Efficiency Improvement Area | Savings (kW) | Year 1 | Year 2 | Year 3 | Year 4 | Year 5
Low power processors | 111 | 6 | 22 | 45 | 78 | 111
Higher efficiency power supplies | 124 | 12 | 43 | 68 | 99 | 124
Server power management | 86 | 9 | 26 | 43 | 65 | 86
Blade servers | 7 | 1 | 7 | 7 | 7 | 7
Virtualization | 86 | 9 | 65 | 69 | 86 | 86
415V AC power distribution | 20 | 0 | 0 | 20 | 20 | 20
Cooling best practices | 15 | 15 | 15 | 15 | 15 | 15
Variable capacity cooling | 49 | 49 | 49 | 49 | 49 | 49
High density supplemental cooling | 72 | 0 | 57 | 72 | 72 | 72
Monitoring and optimization | 15 | 0 | 15 | 15 | 15 | 15
Total | 585 | 100 | 299 | 402 | 505 | 585
Note: Energy Logic is an approach developed by Emerson Network Power to provide organizations with the information they need to more effectively reduce data center energy consumption. It is not directly tied to a particular Emerson Network Power product or service. We encourage use of the Energy Logic approach in industry discussions on energy efficiency and will permit use of the Energy Logic graphics with the following attribution:

The Energy Logic approach developed by Emerson Network Power.

To request a graphic, send an email to energylogic@emerson.com.

Emerson Network Power
1050 Dearborn Drive
P.O. Box 29186
Columbus, Ohio 43229
800.877.9222 (U.S. & Canada Only)
614.888.0246 (Outside U.S.)
Fax: 614.841.6022
EmersonNetworkPower.com
Liebert.com

While every precaution has been taken to ensure accuracy and completeness in this literature, Liebert Corporation assumes no responsibility, and disclaims all liability for damages resulting from use of this information or for any errors or omissions.

Specifications subject to change without notice.

© 2007 Liebert Corporation. All rights reserved throughout the world.

Trademarks or registered trademarks are property of their respective owners.

® Liebert and the Liebert logo are registered trademarks of the Liebert Corporation. Business-Critical Continuity, Emerson Network Power and the Emerson Network Power logo are trademarks and service marks of the Emerson Electric Co.

WP154-158-117
SL-24621
Emerson Network Power.
The global leader in enabling Business-Critical Continuity™.
EmersonNetworkPower.com
AC Power
Embedded Computing
Outside Plant
Racks & Integrated Cabinets
Connectivity
Embedded Power
Power Switching & Controls
Services
DC Power
Monitoring
Precision Cooling