Saving Costs with Thermal Management Units


Table of Contents

Executive Summary
1. Thermal Management vs. Comfort Cooling
2. Cooling Optimized for Electronic System Requirements
3. Systems Sized for Higher Density Data Center Environments
4. Precise Humidity Control
5. Lower Operating Costs
Conclusion
Sources


Executive Summary

Ensuring efficient operation without compromising reliability is essential to a company's success in achieving business continuity. Unfortunately - whether due to rapid, unanticipated growth or a general lack of awareness of the cooling requirements of sensitive electronic equipment - some data center professionals, particularly managers of small and/or remote facilities, do not pay adequate attention to their data centers' infrastructure system design.

Many data center managers end up using comfort cooling solutions to satisfy their environmental control needs. At first glance, these systems may seem simple and economically beneficial up front.

However, because they are not designed to meet the unique needs of critical IT equipment, reliance on comfort cooling systems can result in an increased risk for premature system failures as well as grossly inflated operating costs. In contrast, Thermal Management systems are specifically designed to meet the needs of dense electrical heat loads.

The decision to accept the increased risk of comfort cooling systems is usually based on lower initial costs. However, when total cost of ownership is evaluated, Thermal Management systems actually represent the most cost-effective solution. Finally, data centers that incorporate high density cooling design into their expansion or renovation plans will benefit from scalable IT capacity and a reduced risk of downtime.

1. Thermal Management vs. Comfort Cooling

In small or remote facilities, equipment that is essential to reliable operation is frequently overlooked in favor of other aspects of the planning process. As a result, areas such as environmental control, power continuity, and monitoring of critical computer support systems often receive far less attention than decisions about servers, operating systems, and other network-level components that make up a critical IT infrastructure. However, achieving critical continuity depends as much on power and environmental support systems as on advanced networking equipment.

The high sensitivity of electronic components in data center and networking environments means that temperature, humidity, air movement, and air cleanliness must be kept consistent and within specific limits to prevent premature equipment failures and costly downtime. Comfort cooling systems - including room air conditioners, residential central air conditioners, and air conditioning systems for office and commercial buildings - are occasionally used in these applications. This is because the differences between comfort and Thermal Management systems can be ambiguous.

Comfort cooling systems are primarily engineered for the intermittent use required to maintain a comfortable environment in facilities with a moderate amount of traffic. However, while these systems are capable of effectively maintaining acceptable conditions for human occupants, they are not designed to regulate humidity and temperature within precise margins.

Thermal Management systems, on the other hand, are specifically engineered for facilities that require year-round constant cooling, precise humidity control, and a higher cooling capacity per square foot. This type of environmental conditioning can utilize a wide range of cooling mediums, including air, chilled water, and pumped refrigerants.


Thermal Management systems can be equipped with a free-cooling function to maximize energy savings. The idea of free-cooling is to use outdoor air to cool the data center when outside temperatures are low, instead of running the compressor. The Thermal Management unit automatically switches between compressor and free-cooling operation.
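As a rough sketch, this automatic switchover can be modeled as threshold logic with a dead band to avoid rapid toggling. The 12°C/16°C limits below are hypothetical illustration values, not figures from any particular unit:

```python
# Illustrative sketch of automatic free-cooling switchover.
# The thresholds and dead band are assumptions for illustration only.

FREECOOLING_ON_BELOW = 12.0   # °C: outdoor air cold enough to cool directly
COMPRESSOR_ON_ABOVE = 16.0    # °C: too warm outside, run the compressor

def select_mode(outdoor_temp_c: float, current_mode: str) -> str:
    """Pick 'freecooling' or 'compressor'; the dead band between the two
    thresholds prevents rapid mode changes when the temperature hovers
    near a limit."""
    if outdoor_temp_c <= FREECOOLING_ON_BELOW:
        return "freecooling"
    if outdoor_temp_c >= COMPRESSOR_ON_ABOVE:
        return "compressor"
    return current_mode  # inside the dead band: keep the current mode

print(select_mode(5.0, "compressor"))    # freecooling
print(select_mode(25.0, "freecooling"))  # compressor
print(select_mode(14.0, "freecooling"))  # freecooling (dead band)
```

The dead band is the key design choice: without it, a temperature oscillating around a single threshold would cycle the compressor on and off repeatedly.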

2. Cooling Optimized for Electronic System Requirements

In recent years, there have been many changes in data center cooling requirements and capabilities that have affected operation. Primary among these is an increase in the density of new IT equipment and the consequent increase in heat loads. Higher heat densities created by increased equipment loads generate sensible heat (a drier heat than that found in traditional facility environments), significantly changing the demands faced by the cooling system. To better understand these demands, it is important to remember that there are two types of cooling: latent and sensible.

Latent cooling, in short, is the ability to remove moisture from the surrounding environment. This is an important component in typical comfort-cooling applications - such as office buildings, retail stores, and other facilities with high human occupancy - which are designed to maintain a comfortable balance of temperature and humidity.

Sensible cooling, on the other hand, is the ability to remove heat that can be measured directly by a thermometer.

Comfort cooling systems have a sensible heat ratio (SHR) of 0.60 to 0.70. This means that they are 60 to 70 percent dedicated to lowering temperature (cooling), and 30 to 40 percent dedicated to lowering humidity (dehumidification). These systems are typically found in facilities with considerable occupant traffic. People in a closed environment also produce moisture (through perspiration, breathing, etc.) which must be removed. Therefore, cooling systems with a lower SHR are suitable for such humid conditions.

Unlike facilities that need to take occupant comfort into consideration, data center environments are typically not occupied by people. In most cases, they have limited access and no direct means of egress except for seldom used emergency exits. And because these environments are primarily composed of dry heat, data centers require minimal moisture removal.

Conversely, Thermal Management systems are designed to achieve a sensible heat ratio of 0.80 to 1.0, with 80 to 100 percent of their effort devoted to lowering temperature (cooling) and only zero to 20 percent to lowering humidity (dehumidification).

In light of their unique environmental attributes, most data center environments require a 0.80 to 0.90 sensible heat ratio for effective and efficient cooling. This is because, with so little moisture generated in the space, the need for latent heat removal (dehumidification) is minimal.

Essentially, this means that more nominal 20 kW comfort cooling units than nominal 20 kW Thermal Management units would be required to handle the same sensible load.
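To make the sizing point concrete, a unit's nominal capacity can be split into sensible and latent parts by its SHR. This minimal sketch assumes SHR values of 0.65 (comfort) and 0.90 (Thermal Management), picked from the ranges quoted above:

```python
# Split a unit's nominal capacity into the sensible part it actually
# delivers for dry IT heat loads. SHR values are taken from the ranges
# in the text (comfort: 0.60-0.70, Thermal Management: 0.80-1.0).

def sensible_capacity(nominal_kw: float, shr: float) -> float:
    """Sensible cooling (kW) delivered by a unit of the given nominal
    capacity and sensible heat ratio."""
    return round(nominal_kw * shr, 2)

print(sensible_capacity(20, 0.65))  # 13.0 kW sensible from a 20 kW comfort unit
print(sensible_capacity(20, 0.90))  # 18.0 kW sensible from a 20 kW Thermal Management unit
```

For a purely sensible IT load, the remaining capacity of the comfort unit is spent on dehumidification the data center does not need.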


Figure 1: General office environments have loads that are 60-70 percent sensible (0.6-0.7 SHR), while computer rooms have 90-100 percent sensible loads (0.85-1.0 SHR).

3. Systems Sized for Higher Density Data Center Environments

In a typical data center environment, heat densities can be 20 to 50 times higher than in typical office settings. With high density rack configurations becoming more popular due to efficient virtualization, data center temperatures are rising faster than ever before.

To illustrate, 1 kW of comfort cooling capacity is usually required per 6.5-8 m² of office space. This translates into 0.12 to 0.15 kW per square meter. In contrast, data center rooms require 1 to 3 kW of cooling capacity per square meter, and some sites can have heat load densities as high as 7 to 10 kW/m².

From an airflow standpoint, Thermal Management systems are designed to manage larger load densities than most comfort cooling systems.

Figure 2: Heat densities are typically 6 to 20 times higher in computer room environments (around 3 kW/m²) than in office environments (around 0.15 kW/m²).


In addition, the use of blade servers and other newer rack-mounted equipment has created unprecedented data center cooling requirements. New servers and communication switches generate as much as ten times the heat per square foot of systems manufactured just 10 years ago. Comfort cooling systems are simply ill-equipped to deal effectively with the cooling requirements these new approaches call for.

Data center designers and engineers should consider the need for specialized cooling units to work in conjunction with raised floor Thermal Management systems.

Furthermore, data centers using traditional Thermal Management configurations with large computer room air conditioning (CRAC) units feeding air underneath raised floors can run into problems - such as hot spots - as high-density rack configurations are implemented.

Even though there may be a high level of cooling capacity in the room, the way in which the air is distributed can lead to problems if a cooling unit fails. To mitigate these issues, critical cooling systems are being integrated with new techniques - such as row-based cooling, cold-aisle containment, and other targeted cooling strategies. Further, emerging technologies in environmental control are allowing individual room-based CRAC units to be networked together and monitored by comprehensive management programs.

4. Precise Humidity Control

Since most data centers operate 24/7, precise humidity conditions must be maintained all year long. Ignoring the impact of humidity can result in critical long-term problems, including damage to equipment, facility infrastructure, and other valuable resources.

The optimal dew point range for data centers is 5.5°C to 15°C, as recommended by the ASHRAE TC9.9 2011 guidelines.1 For example, moisture levels higher than the above-mentioned range can corrode switching circuitry, causing malfunctions and equipment failures. Conversely, low humidity can cause static discharges that interfere with normal equipment operation.

The latter scenario is more likely to occur in a data center, because cooling takes place continually, driving relative humidity down. Typically, comfort cooling systems do not have built-in humidity controls, making it difficult to maintain stable dew point levels over extended periods of time. If the required controls and components are added, they must be specially configured to operate as a complete system. Thermal Management systems, by contrast, feature multimode operation to provide a proper ratio of cooling, humidification, and dehumidification, making them much more suitable for data center applications.
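A monitoring system might check measured dew points against the recommended envelope along these lines. The thresholds come from the ASHRAE range quoted above; the status messages are illustrative assumptions:

```python
# Simple range check against the ASHRAE TC9.9 (2011) recommended dew point
# window cited in the text (5.5°C to 15°C). The status strings are
# illustrative, not from any particular monitoring product.

DEW_POINT_MIN_C = 5.5
DEW_POINT_MAX_C = 15.0

def dew_point_status(dew_point_c: float) -> str:
    """Classify a measured dew point against the recommended envelope."""
    if dew_point_c > DEW_POINT_MAX_C:
        return "too humid: corrosion risk for switching circuitry"
    if dew_point_c < DEW_POINT_MIN_C:
        return "too dry: risk of static discharge"
    return "within recommended range"

print(dew_point_status(18.0))  # too humid: corrosion risk for switching circuitry
print(dew_point_status(3.0))   # too dry: risk of static discharge
print(dew_point_status(10.0))  # within recommended range
```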

5. Lower Operating Costs

Due to basic engineering, design, and equipment differences, a purchase price comparison of comfort versus precision air conditioning systems is insufficient. A more accurate comparison considers the difference in operating costs between the two techniques. Following is a basic example of an operating cost comparison between the two approaches, using the assumptions outlined below:

• Room conditions are 24°C, 50 percent relative humidity
• Each kW of cooling requires 0.23 kW of electrical energy
• The compressor motors and fans are 90 percent efficient
• Electricity costs 0.12 EUR/kWh
• Humidification is required from November through March (3,650 hours)
• The Thermal Management system has an SHR of 0.90; the comfort system has an SHR of 0.60

1 Thermal Guidelines for Data Processing Environments


First, calculate the cost per kW of cooling for a year:

(Electrical power input for 1 kW of cooling × running hours per year × cost per kWh) / (efficiency of compressor and fans)

= (0.23 kW × 8,760 hrs/yr × 0.12 EUR/kWh) / 0.90

This results in a yearly energy cost of 269 EUR for 1 kW of cooling.

The cost per sensible kW of cooling must then be determined by dividing the total cost by the SHR of each system. For the Thermal Management system, the yearly cost per sensible kW is 269 EUR / 0.90 ≈ 299 EUR. For the comfort cooling system, the yearly cost per sensible kW is 269 EUR / 0.60 ≈ 448 EUR.

As illustrated by the above example, the operating cost to run a comfort cooling system for one year exceeds the cost of running a Thermal Management system by 149 EUR /kW of sensible load. This is consistent with the generally accepted principle stating that ‘it takes three kW of comfort cooling capacity to equal two kW of precision capacity’.

If an application with a 30 kW sensible heat load is assumed, we will spend:

Thermal Management Air Conditioner: 30 x 299 EUR = 8,970 EUR/Year

Comfort Air Conditioner: 30 x 448 EUR = 13,440 EUR/Year

The saving in operating costs of a Thermal Management unit compared to a comfort unit is, in this case, 4,470 EUR.
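The arithmetic above can be reproduced in a few lines. This sketch follows the text's rounding convention (the 269 EUR base cost is rounded before dividing by the SHR):

```python
# Reproduces the whitepaper's operating-cost comparison. All figures are
# taken from the stated assumptions: 0.23 kW electrical input per kW of
# cooling, 8,760 running hours/year, 0.12 EUR/kWh, 90% efficiency.

ELEC_PER_KW_COOLING = 0.23   # kW electrical per kW of cooling
HOURS_PER_YEAR = 8760
PRICE_EUR_PER_KWH = 0.12
EFFICIENCY = 0.90

def cost_per_sensible_kw(shr: float) -> int:
    """Yearly energy cost (EUR) per kW of *sensible* cooling for a unit
    with the given sensible heat ratio."""
    # Base cost per kW of total cooling, rounded as in the text (269 EUR)
    cost_per_kw = round(ELEC_PER_KW_COOLING * HOURS_PER_YEAR
                        * PRICE_EUR_PER_KWH / EFFICIENCY)
    return round(cost_per_kw / shr)

thermal = cost_per_sensible_kw(0.90)   # Thermal Management unit
comfort = cost_per_sensible_kw(0.60)   # comfort cooling unit

print(thermal, comfort)            # 299 448 (EUR per sensible kW per year)
print(30 * thermal, 30 * comfort)  # 8970 13440 (EUR/year for a 30 kW load)
```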

Figure 3: To deliver 30 kW of sensible cooling capacity, we would need a 33 kW Thermal Management unit or a 50 kW comfort unit. The 50 kW comfort unit will have a larger compressor than the 33 kW Thermal Management unit, which essentially means higher power consumption.


A second point of comparison is the cost of humidification. Here, let us assume that 1 kW of electric energy is needed to re-humidify 1 kW of latent heat.

Latent heat of the Thermal Management unit with a sensible cooling capacity of 30 kW: 30 kW / 0.90 − 30 kW = 3.33 kW.

Latent heat of the comfort cooling unit with a sensible cooling capacity of 30 kW: 30 kW / 0.60 − 30 kW = 20 kW.

Therefore, with a comfort cooling unit we would need to re-humidify 16.67 kW more latent heat than with a Thermal Management unit.

Assuming that humidification will be required for 3,650 hours in a year, we can calculate the yearly running costs of the comfort and Thermal Management systems for re-humidification of the air, again using the assumption that 1 kW of electricity is required to re-humidify 1 kW of latent heat.

Thermal Management Unit:

3.33 kW x 3650 hours x 0.12 EUR/kWh = 1,459 EUR

Comfort Cooling Unit:

20 kW x 3650 hours x 0.12 EUR/kWh = 8,760 EUR

                                        Thermal Management   Comfort Cooling   Difference
Yearly Costs for Cooling (EUR)                   8,970            13,440          4,470
Yearly Costs for Humidification (EUR)            1,459             8,760          7,301
Total Yearly Operating Costs (EUR)              10,429            22,200         11,771

Figure 4: In this example, the total yearly operating costs of a comfort unit are 11,771 EUR higher than the operating costs of a Thermal Management unit.

From this example, the yearly operating costs of a Thermal Management unit are significantly lower than those of a comfort cooling unit. Even though the purchase price of a Thermal Management unit is higher, the savings on operating costs are considerable. The return on investment of a Thermal Management unit is usually achieved within one year through these savings.
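The humidification step and the combined totals can be checked the same way. This sketch uses the text's assumptions (1 kW of electricity re-humidifies 1 kW of latent heat; 3,650 humidification hours; 0.12 EUR/kWh) and the cooling costs from the earlier comparison:

```python
# Reproduces the whitepaper's humidification costs and total yearly
# operating costs for a 30 kW sensible load.

HUMID_HOURS = 3650            # November through March
PRICE_EUR_PER_KWH = 0.12

def latent_kw(sensible_kw: float, shr: float) -> float:
    """Latent load (kW) removed by a unit delivering sensible_kw at the
    given sensible heat ratio."""
    return round(sensible_kw / shr - sensible_kw, 2)

def humidification_cost(sensible_kw: float, shr: float) -> int:
    """Yearly cost (EUR) to re-humidify the moisture the unit removes,
    assuming 1 kW electricity per kW of latent heat."""
    return round(latent_kw(sensible_kw, shr) * HUMID_HOURS * PRICE_EUR_PER_KWH)

thermal_humid = humidification_cost(30, 0.90)  # 3.33 kW latent -> 1,459 EUR
comfort_humid = humidification_cost(30, 0.60)  # 20 kW latent   -> 8,760 EUR

thermal_total = 8970 + thermal_humid           # 10,429 EUR/year
comfort_total = 13440 + comfort_humid          # 22,200 EUR/year
print(comfort_total - thermal_total)           # 11771
```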


Conclusion

Business-critical data centers have unique environmental requirements and, as a result, necessitate cooling systems that match these requirements.

While comfort cooling systems are appropriate for “comfort” environments - such as facilities occupied by people or that house routine equipment and supplies - they are not well-suited for environments that require precise temperature, humidity and air quality regulation. Furthermore, although initial equipment costs may favour comfort cooling unit installations, the overall cooling quality and energy costs over the life of the equipment make Thermal Management solutions the better long-term investment. These systems, in fact, are specifically designed to address the unique demands of today’s complex IT environments. When maintained properly, they can meet the increasing demands for heat rejection, humidity control, filtration, and other requirements in data centers and high availability computer facilities while providing additional efficiency, reliability and flexibility benefits.


Sources

ASHRAE, The American Society of Heating, Refrigerating and Air-Conditioning Engineers, which establishes guidelines relating to HVAC systems, including Thermal Guidelines for Data Processing Environments (TC 9.9, 2011).


Ensuring The High Availability Of Mission-Critical Data And Applications.

About Emerson Network Power

Emerson Network Power, a business of Emerson (NYSE:EMR), delivers software, hardware and services that maximize availability, capacity and efficiency for data centers, healthcare and industrial facilities. A trusted industry leader in smart infrastructure technologies, Emerson Network Power provides innovative data center infrastructure management solutions that bridge the gap between IT and facility management and deliver efficiency and uncompromised availability regardless of capacity demands. Our solutions are supported globally by local Emerson Network Power service technicians.

Learn more about Emerson Network Power products and services at www.EmersonnetworkPower.eu

Locations

Emerson Network Power
Global Headquarters
1050 Dearborn Drive
P.O. Box 29186
Columbus, OH 43229, USA
Tel: +1 614 8880246

Emerson Network Power
Thermal Management EMEA
Via Leonardo Da Vinci, 16/18
Zona Industriale Tognana
35028 Piove di Sacco (PD) Italy
Tel: +39 049 9719 111
Fax: +39 049 5841 257
ThermalManagement.NetworkPower.Eu@Emerson.com

Emerson Network Power
United Kingdom
George Curl Way
Southampton
SO18 2RY, UK
Tel: +44 (0)23 8061 0311
Fax: +44 (0)23 8061 0852

While every precaution has been taken to ensure accuracy and completeness herein, Emerson assumes no responsibility, and disclaims all liability, for damages resulting from use of this information or for any errors or omissions.

Specifications subject to change without notice.

Globe Park
Fourth Avenue
Marlow, Bucks
SL7 1YG
Tel: +44 1628403200
Fax: +44 1628403203
Uk.Enquiries@Emerson.com

EmersonnetworkPower.eu

Emerson. Consider it Solved. and Emerson Network Power are trademarks of Emerson Electric Co. or one of its affiliated companies. ©2015 Emerson Electric Co.
