Server Primer

Understanding the Current State of the Industry

Golisano Institute for Sustainability
Rochester Institute of Technology
111 Lomb Memorial Drive
Rochester, NY 14623
www.sustainability.rit.edu

October 10, 2012

Acknowledgements
The primary authors of this report are Brian Hilton, Senior Research Engineer, and Michael Welch, Master's Student, Golisano Institute for Sustainability (GIS) at Rochester Institute of Technology (RIT). Questions, comments, and feedback on this report should be directed to:

Brian Hilton, Sr. Research Engineer
Golisano Institute for Sustainability
Rochester Institute of Technology
133 Lomb Memorial Drive, Building 78, Room 1220
Rochester, New York 14623-5608
Tel: 585-475-5379
Email: Brian.Hilton@rit.edu

We gratefully acknowledge and thank the primary sponsor, the International Sustainable Development Foundation, for providing the resources and support to make this report possible. We would also like to acknowledge the faculty and staff at the Golisano Institute for Sustainability for providing research data and advice on the report focus and content. Additionally, we acknowledge the U.S. Environmental Protection Agency for providing the initial ecolabel comparison document "Server ecolabel comparison 8 15 2011.xls", which was used as the foundation for the companion document for this report, "Master list of server standards June 2012.xlsx". We also send a special thank-you to Pamela Brody-Heine and Patty Dillon, who served as advisors and reviewers of this report.

We believe this report provides useful data, information, findings, and recommendations for positioning the industry for the future, including key considerations on energy, environment, and sustainability.

© Rochester Institute of Technology, October 10, 2012

Table of Contents
Acknowledgements
1. Introduction
2. Background and Purpose of the Study
3. Server Introduction
   3.1. Server Hardware
      3.1.1. Blade Server Hardware
   3.2. Server Price and Performance
   3.3. Server Market & Sales
   3.4. ENERGY STAR and Exclusion of Servers with More than Four Processor Sockets
4. Server Industry Trends
   4.1. High Density Computing
   4.2. Server Internal Waste Heat Management
   4.3. Server Utilization and Connectivity
      4.3.1. Server Virtualization
      4.3.2. Server Consolidation
      4.3.3. Cloud Computing
5. Server Impact on Overall Data Center Energy Use
6. Server Environmental Assessments
   6.1. Carbon Footprint of a Typical Dell Rack Server
   6.2. Carbon Footprint of Fujitsu Primergy RX and TX 300 S5 Servers
   6.3. Case Study of an IBM Rack-mounted Server
7. Server Standard Scope Topics
   7.1. Server Material Selection
      7.1.1. Server Demanufacturing: GIS
      7.1.2. Server Demanufacturing: Cascade
   7.2. Environmentally Sensitive Materials
      7.2.1. RoHS Directive
   7.3. Product Longevity
   7.4. Design for End of Life
   7.5. End of Life Management
      7.5.1. Server End-of-Life
      7.5.2. End of Life Management
   7.6. Energy Conservation
      7.6.1. PSU Efficiency Standards
      7.6.2. Processor Energy Use
   7.7. Packaging
8. Server Environmental Standards and Labels
   8.1. Key Acronyms

1. Introduction
The computer server industry is in the midst of major change, stimulated by increasing demand for data processing and storage as a result of our economy's shift from paper-based to digital information management. The Golisano Institute for Sustainability (GIS) at Rochester Institute of Technology (RIT) was commissioned by the International Sustainable Development Foundation (ISDF) to better understand the state of the computer server industry and to what extent the industry has faced or is facing challenges associated with energy, environment, and sustainability. A three-month research effort was conducted to collect, identify, assess, and understand the industry trends and environmental impacts associated with computer servers. Research conducted by RIT sought to balance the acquisition of data and information through quantitative and qualitative research methods to support the server standard development work by:

• Assessing and understanding environmental impacts on a life-cycle basis
• Assessing and understanding energy use in the computer server industry
• Reviewing current environmental purchasing standards for computer servers and other computer equipment
• Broadly understanding the business, technology, regulatory, and market challenges of the computer server industry
• Distilling the comments and data provided by the Technical Committee

The remainder of this report presents data and information known at the time of publication on the environmental impact of the server industry. The purpose is to document the current state of the industry to inform the Technical Committee charged with drafting a framework of environmental performance criteria for the development of a product standard for servers. According to the IEEE Project Authorization Request,1 the product standard is intended to "define a measure of environmental leadership in: the design and manufacture of servers; the delivery of specified services that are associated with the sale of the product; and associated corporate performance characteristics. This standard is defined with the intention that the criteria are technically feasible to achieve, but that only products demonstrating the leading environmental performance currently available in the marketplace would meet them at the time of their adoption."
1 P1680.4 Standard for Environmental Assessment of Servers, Project Authorization Request (PAR), https://development.standards.ieee.org/get-file/P1680.4.pdf?t=11051900003

2. Background and Purpose of the Study
The International Sustainable Development Foundation (ISDF) requested the Golisano Institute for Sustainability (GIS) to conduct background research on the technical and sustainability issues surrounding the development of computer server hardware ("server"). Research results are intended to inform a Technical Committee established by the ISDF, which will help draft a framework for the development of a product standard for servers. This study provides a literature review of technical and scientific studies, as well as publicly available life-cycle assessments performed on servers. This work is compiled and summarized and is intended to provide a common foundation and reference materials for participants in the Technical Committee and Working Group.

[Sidebar: The Golisano Institute for Sustainability's new 75,000 sq-ft academic research building, including a research data center for sustainable computing, is opening in fall of 2012.]

This report includes background information on servers, such as a description of servers, server functions, server components, typical server performance characteristics, analysis of the market, its size, and its key players, and key market and performance trends. The report also highlights data within the environmental performance categories of material selection, environmentally sensitive materials, product longevity, design for end of life, end of life management, energy conservation, corporate performance, and packaging. This study was conducted with financial support from ISDF and data support from GIS.

About the Golisano Institute for Sustainability

The Golisano Institute for Sustainability is a multidisciplinary academic and applied research unit of Rochester Institute of Technology, Rochester, NY, USA. The mission of GIS is to undertake world-class education and research missions in sustainability. GIS academic and research programs focus on sustainable production, sustainable energy, sustainable mobility, and ecologically friendly information technology systems. These programs are led by a multidisciplinary team of faculty and researchers who collaborate with organizations locally, nationally, and internationally to create implementable solutions to complex sustainability problems. The academic component of GIS was founded in 2007 with a $10M grant from B. Thomas Golisano. The GIS Ph.D. program started in 2008, offering the world's first doctorate in sustainable production. An M.S. program in Sustainable Systems was approved and begun in 2010. The first GIS graduates received their diplomas in 2011. This academic program is built, in part, upon the strong track record of the five (5) applied research centers within GIS that address problems facing industry, government, and nongovernmental partners as they regulate, design, deploy, maintain, and recycle products. The Center for Remanufacturing, Re-use, and Resource Recovery (C3R), established in 1992, has played a major role in this regard, as will the New York State Pollution Prevention Institute (NYSP2I). The applied research centers' missions are accomplished through a dynamic collaboration of nearly 100 full-time in-house technical experts, support professionals, faculty, and students. The Center's 170,000 square-foot facility supports research and development through applied technology laboratories and a state-of-the-art education center. Additional information on GIS can be found at: http://www.rit.edu/gis/about/

3. Server Introduction
A computer server is a hardware device connected to a network whose purpose is to manage networked resources. The term "server" can also refer to the software used to manage networked resources; however, this report only addresses the environmental impact of server hardware.

[Sidebar: Servers and data centers consumed an estimated 238 billion kWh worldwide in 2010, or 1.3% of worldwide electricity consumption.]

Computer server hardware has historically been dedicated to managing a single functional purpose; therefore, server hardware can range widely in size, performance, cost, capability, and environmental impact. Dedicated server functions include application servers, file servers, game servers, mail servers, print servers, database servers, and many more. Because of this dedicated nature, several servers are typically required to enable a computer to interact properly with other network clients. A collection of servers is referred to as a server farm or server cluster, and the facility used to house the server farm and associated components is referred to as a data center. Data centers have increased in popularity over the past decade as the number of servers required by businesses has grown to keep pace with increased internet traffic in all facets of life. To support this growth in server space, the traditional data center has evolved to include cooling equipment, network equipment, and storage equipment. The following subsections discuss both the server hardware and the server market.

3.1. Server Hardware
A server, as referenced in this document, is computer hardware that provides services and manages networked resources for client devices. Servers range widely in size and performance; generally, however, they contain similar hardware components. One server model, the IBM System x3650 M4 (X3650), was therefore chosen to illustrate the hardware components found in a server. The X3650's performance (Table 1) is within the likely target market for the proposed purchasing standard, as its scope falls within what is described by ENERGY STAR (discussed in more detail in Section 3.4).

Figure 1: IBM System X3650 M4 Server. Source: [IBM 2012]

IBM has published images and specifications for the IBM System x3650 M4 server.2 These images and specifications are reproduced here to describe server hardware components in general. The X3650's suggested uses are: database, virtualization, enterprise applications, collaboration/email, streaming media, web, and cloud applications. The IBM System X3650 M4 server supports two processors in a scalable "2U" package. Rack servers such as the X3650 are designed to mount in steel racks that are 19 inches wide. Rack servers are therefore described with a form factor that indicates the server height in multiples of rack units (U), where one rack unit is a height of 1.75 inches. The IBM X3650 is 3.4 inches high, thus 2U. Note that a standard server rack is 42U high. Typical server components include: an external enclosure, central processing unit (CPU), 1-4 CPU sockets, main motherboard, memory, storage (hard drives, solid state drives (SSDs)), input/output adapters, fans, power supplies, and possibly a small screen.3 Figure 2 and Figure 3 show the X3650 server front and back views, respectively, with the many available connections; Figure 4 shows the internal components. Note that many of the components, including fans, disks, and power supplies, are redundant and hot-swappable,4 making it easy to replace failed parts without taking the system down.
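The rack-unit arithmetic described above is simple enough to sketch in a few lines of code. This is an illustrative sketch only; the 1U = 1.75 in and 42U figures come from the text, while the function and constant names are ours:

```python
import math

RACK_UNIT_IN = 1.75   # height of one rack unit (U), in inches
RACK_HEIGHT_U = 42    # a standard full-height server rack

def form_factor_u(chassis_height_in: float) -> int:
    """Smallest whole number of rack units that covers a chassis height."""
    return math.ceil(chassis_height_in / RACK_UNIT_IN)

# The X3650 M4 is 3.4 inches high, so it is a 2U server...
x3650_u = form_factor_u(3.4)
print(x3650_u)                      # -> 2
# ...and a 42U rack holds at most 21 of them (ignoring switches, PDUs, etc.)
print(RACK_HEIGHT_U // x3650_u)     # -> 21
```

The ceiling is what makes a 3.4-inch chassis "2U": any height above one rack unit must round up to the next whole unit, since racks are slotted in whole-U increments.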
2 [IBM 2012] IBM System x3650 M4, IBM Redbooks Product Guide, http://www.redbooks.ibm.com/technotes/tips0850.pdf
3 Source: Server Technical Committee meeting, Houston, Texas, July 31, 2012.
4 Components are hot-swappable if they can be installed or removed without powering down the system.

Figure 2: IBM System X3650 Front View. Source: [IBM 2012]
Figure 3: IBM System X3650 M4 Back View. Source: [IBM 2012]
Figure 4: IBM System X3650 M4 Internal Components. Source: [IBM 2012]

Table 1: IBM System X3650 M4 Product Specifications. Source: [IBM 2012]

Form factor: 2U rack.
Processor: Up to two Intel Xeon processor E5-2600 product family CPUs with eight cores (up to 2.9 GHz), six cores (up to 2.9 GHz), or quad cores (up to 3.3 GHz). Two QPI links, up to 8.0 GT/s each. Up to 1600 MHz memory speed. Up to 20 MB L3 cache.
Chipset: Intel C602J.
Memory: Up to 24 DIMM sockets (12 DIMMs per processor). RDIMMs, UDIMMs, HyperCloud DIMMs, and LRDIMMs (Load Reduced DIMMs) supported, but memory types cannot be intermixed. Memory speed up to 1600 MHz.
Memory maximums: With RDIMMs: up to 384 GB with 24x 16 GB RDIMMs and two processors. With UDIMMs: up to 64 GB with 16x 4 GB UDIMMs and two processors. With HyperCloud DIMMs: up to 768 GB with 24x 32 GB HyperCloud DIMMs and two processors. With LRDIMMs: up to 768 GB with 24x 32 GB LRDIMMs and two processors.
Memory protection: ECC, Chipkill, memory mirroring, and memory rank sparing.
Disk drive bays: Up to 32 1.8" SSD bays, or 16 2.5" hot-swap SAS/SATA bays, or up to six 3.5" hot-swap SAS/SATA bays, or up to eight 2.5" Simple Swap SATA bays, or up to six 3.5" Simple Swap SATA bays.
Maximum internal storage: Up to 14.4 TB with 900 GB 2.5" SAS HDDs, up to 16 TB with 1 TB 2.5" NL SAS/SATA HDDs, or up to 18 TB with 3 TB 3.5" NL SAS/SATA HDDs. Intermix of SAS/SATA is supported.
RAID support: RAID 0, 1, 10 with integrated ServeRAID M5110e; optional upgrades to RAID 5, 50 are available (zero-cache; 512 MB battery-backed cache; 512 MB or 1 GB flash-backed cache). Optional upgrade to RAID 6, 60 is available for 512 MB or 1 GB cache.
Optical drive bays: One bay for optional DVD-ROM or Multiburner drive.
Tape drive bays: Optional Tape Enablement Kit is available to support one DDS5, DDS6, or RDX internal USB tape drive.
Network interfaces: Four integrated Gigabit Ethernet 1000BASE-T ports (RJ-45); two embedded 10 Gb Ethernet ports (10GBASE-T RJ-45 or 10GBASE-SR SFP+ based) on optional 10 Gb Ethernet mezzanine card (does not consume a PCIe slot).
PCI expansion slots: Up to six slots, depending on the riser cards installed. Optional riser cards available with PCIe x8, PCIe x16, or PCI-X slots.
Ports: Two USB 2.0 and one DB-15 video on front. Four USB 2.0, one DB-15 video, one DB-9 serial, one RJ-45 systems management, four RJ-45 GbE network ports, and two optional RJ-45 or SFP+ 10 GbE network ports on rear. Two internal USB ports.
Cooling: IBM Calibrated Vectored Cooling with up to four redundant hot-swap fans (three standard, additional fan with second processor); two fan zones with N+1 fan design; each fan has two motors.
Power supply: Up to two redundant hot-swap 550 W, 750 W, or 900 W AC power supplies (all 80 PLUS Platinum certified).
Video: Matrox G200eR2 with 16 MB memory integrated into the IMM2. Maximum resolution is 1600x1200 at 75 Hz with 16M colors.
Hot-swap parts: Hard drives, power supplies, and fans.
Limited warranty: Three-year customer-replaceable unit and on-site limited warranty with 9x5 next-business-day (NBD) response.
Dimensions: Height 86 mm (3.4 in), width 445 mm (17.5 in), depth 746 mm (29.4 in).
Weight: Minimum configuration 25 kg (55 lb), maximum 30 kg (65 lb).

3.1.1. Blade Server Hardware
A blade server can be considered a stripped-down rack-mounted server. Blade servers have a modular design and omit many components typically found in traditional rack servers in order to save space and minimize power consumption. The removed components reside in the blade enclosure, which can hold multiple blade servers and provides shared services such as power, cooling, various interconnects, and management. Different blade server manufacturers include different components in the blade itself. Blade servers are discussed further in Section 4.1. Typical blade server components include: an external enclosure, CPU, 1-4 CPU sockets, main motherboard, memory, and storage (hard drives, solid state drives (SSDs)). Other components, such as input/output adapters, fans, and power supplies, can be shared resources.5 IBM has published images and specifications for the IBM BladeCenter HS22 server.6 An image is reproduced here to demonstrate a general blade server hardware configuration (see Figure 5).

5 Source: Server Technical Committee meeting, Houston, Texas, July 31, 2012.
6 [IBM 2011] IBM BladeCenter HS22 Technical Introduction, http://www.redbooks.ibm.com/redpapers/pdfs/redp4538.pdf, REDP-4538-03, created or updated May 12, 2011.

Figure 5: IBM BladeCenter HS22 Server (service cover removed). Source: [IBM 2011]

3.2. Server Price and Performance
International Data Corporation (IDC) is a provider of market intelligence for the information technology (IT), telecommunications, and consumer technology markets. IDC has mapped 11 price bands within the server market into three (3) price ranges: volume servers, midrange servers, and high-end servers. By IDC's definition, volume servers cost less than $25,000 per server, midrange servers cost $25,000-$250,000, and high-end servers cost more than $250,000. These three price ranges are commonly used to define market trends and are used throughout this report.

3.3. Server Market & Sales
Volume servers are currently the most common type of server, with 4Q 2011 factory revenue of $8.8 billion. For the same timeframe, midrange servers had factory revenue of $1.8 billion, and high-end servers had factory revenue of $3.7 billion.7 IDC reported that the server industry generated $52.3 billion in revenue and shipped 8.3 million servers worldwide during 2011. Despite these strong sales, market growth was reported to be decelerating in 3Q11 as demand stabilized for many system categories.8 This prediction was accurate, as all three price bands showed a decrease in revenue during 4Q11. The trend continued into 1Q12 for midrange and high-end servers, as both experienced over 10% year-over-year revenue declines; volume servers, however, experienced 2% year-over-year growth. Matt Eastwood, an IDC analyst, states that "The server market worked through a transitional period in the first quarter of 2012 as suppliers prepared to introduce numerous critically important x86 server offerings," and that lower revenue in the Asia/Pacific region critically affects the market because "China is one of only three countries that regularly spend more than $1 billion quarterly on servers."9 Publicly available data from IDC press releases was collected to generate the following revenue history for the past decade. Note that the numbers listed include revenue from server peripherals such as the "frame or cabinet and all cables, processors, memory, communications boards, operating system software, other bundled software and initial internal and external disk shipments," and so are not purely indicative of the server market itself.

7 Morgan, T. P. "Where Did the Midrange Go?" IT Jungle, 12 Mar 2012. Web. 12 Jun 2012. http://www.itjungle.com/tfh/tfh031212-story03.html
8 "IDC – Press Release." Worldwide Server Market Revenues Increase 4.2% in Third Quarter as Market Stabilizes, According to IDC. Nov 2011. Web. 12 Jun 2012. http://www.idc.com/getdoc.jsp?containerId=prUS23179011
9
50
8
7
40
30
6
5
20
4
3
10
0
Revenue ($B)
Year
2004 2005 2006 2007 2008 2009 2010 2011
Shipmnets (M)
Revenue Estimate ($B)
60
2
1
0
49.5 51.8 52.8 55.1 53.3 43.22 48.77 52.27
Shipments (M) 6.712 7.565 8.233 8.84 9.07 7.56 8.89 9.52
Figure 6 ‐ Annual Server Market Revenue (IDC & Gartner Estimates) Information from this section was combined from a number of different sources.10,11,12,13,14,15,16,17,18,19 9
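The year-over-year swings in the series above can be checked directly. This is a sketch only; the figures are the IDC/Gartner estimates reproduced in Figure 6, and the helper name is ours:

```python
# Annual server market revenue estimates in $B (from Figure 6)
revenue = {2004: 49.5, 2005: 51.8, 2006: 52.8, 2007: 55.1,
           2008: 53.3, 2009: 43.22, 2010: 48.77, 2011: 52.27}

def yoy_change(series: dict, year: int) -> float:
    """Year-over-year percentage change for the given year."""
    return (series[year] - series[year - 1]) / series[year - 1] * 100

print(f"2009: {yoy_change(revenue, 2009):+.1f}%")  # the 2009 downturn
print(f"2011: {yoy_change(revenue, 2011):+.1f}%")  # the recovery through 2011
```

The computation makes the 2009 recession dip (roughly a one-fifth revenue drop) and the subsequent recovery easy to see at a glance; note that because these estimates include peripherals, the percentages describe the broader server-system market rather than servers alone.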
9 "IDC – Press Release." Worldwide Server Market Revenues Decline 2.4% in First Quarter as Market Growth Slows in Face of Market Transitions, According to IDC. 30 May 2012. http://www.idc.com/getdoc.jsp?containerId=prUS23513412
10 Koomey, J. "Estimating Total Power Consumption by Servers in the U.S. and the World." 2007. http://sites.amd.com/us/Documents/svrpwrusecompletefinal.pdf
11 "2006 Press Releases." Gartner Says Worldwide Server Shipments Experience Double-Digit Growth, While Industry Revenue Posts Single-Digit Increase in 2005. Gartner, Feb 2006. Web. 12 Jun 2012. http://www.gartner.com/it/page.jsp?id=492245
12 "Gartner Newsroom." Gartner Says Worldwide Server Shipments Experience 9 Percent Growth, While Industry Revenue Posted a 2 Percent Increase in 2006. Gartner, Feb 2007. Web. 12 Jun 2012. http://www.gartner.com/it/page.jsp?id=501405

It is expected that server shipments will continue to increase in the near future as the world becomes more dependent on the IT sector. In 10 years, the number of Internet users has more than quadrupled, from 0.5 billion in 2001 to 2.0 billion in 2010, and this trend is expected to continue. Hewlett Packard (HP) held the number one position in the worldwide server market with 29.3% factory revenue market share for the first quarter of 2012. Additional worldwide sales leaders are listed in the table below.

Table 2: Worldwide Server Factory Revenue (in Millions of US Dollars)20

Vendor | 1Q12 Revenue | 1Q12 Market Share | 1Q11 Revenue | 1Q11 Market Share | 1Q12/1Q11 Revenue Growth
1. HP | $3,460 | 29.3% | $3,838 | 31.7% | -9.8%
2. IBM | $3,223 | 27.3% | $3,477 | 28.8% | -7.3%
3. Dell | $1,842 | 15.6% | $1,879 | 15.5% | -2.0%
4. Oracle | $718 | 6.1% | $775 | 6.4% | -7.3%
5. Fujitsu | $614 | 5.2% | $573 | 4.7% | 7.3%
Others | $1,950 | 16.5% | $1,551 | 12.8% | 25.8%
All Vendors | $11,808 | 100% | $12,093 | 100% | -2.4%
13 "Gartner Newsroom." Gartner Says Worldwide Server Shipments Experienced 7 Percent Growth, While Industry Revenue Posted a 4 Percent Increase in 2007. Gartner, Feb 2008. Web. 12 Jun 2012. http://www.gartner.com/it/page.jsp?id=608710
14 "Gartner Newsroom." Gartner Says Worldwide Server Shipments and Revenue Experience Double-Digit Declines in Fourth Quarter of 2008. Gartner, Mar 2009. http://www.gartner.com/it/page.jsp?id=905914
15 "Gartner Newsroom." Gartner Says 2010 Worldwide Server Market Returned to Growth with Shipments Up 17 Percent and Revenue 13 Percent. Gartner, Feb 2011. Web. 4 Jun 2012. http://www.gartner.com/it/page.jsp?id=1561014
16 "Gartner Newsroom." Gartner Says Worldwide Server Revenue Grew 7.9 Percent and Shipments Increased 7 Percent in 2011. Gartner, Feb 2012. Web. 4 Jun 2012. http://www.gartner.com/it/page.jsp?id=1935717
17 "IDC – Press Release." Worldwide Server Market Accelerates Sharply in Fourth Quarter as Demand for Heterogeneous Platforms Leads the Way, According to IDC. IDC, Feb 2011. Web. 4 Jun 2012. http://www.idc.com/about/viewpressrelease.jsp?containerId=prUS22716111
18 "IDC – Press Release." Despite a 7.2% Decline in Fourth Quarter Revenue, Worldwide Server Market Revenues Increase 5.8% in 2011, According to IDC. IDC, Feb 2012. Web. 4 Jun 2012. http://www.idc.com/getdoc.jsp?containerId=prUS23347812
19 Short, J., Bohn, R., Chaitanya, B. "How Much Information? 2010 Report on Enterprise Server Information." 2011. http://hmi.ucsd.edu/pdf/HMI_2010_EnterpriseReport_Jan_2011.pdf
20 "IDC – Press Release." Worldwide Server Market Revenues Decline 2.4% in First Quarter as Market Growth Slows in Face of Market Transitions, According to IDC. 30 May 2012. http://www.idc.com/getdoc.jsp?containerId=prUS23513412

HP was also the number one manufacturer of blade servers, with 47.4% market share. Additional sales leaders were IBM (21.5%), Cisco (11.0%), and Dell (8.7%).21

3.4. ENERGY STAR and Exclusion of Servers with More than Four Processor Sockets
In 1992, the EPA introduced ENERGY STAR® as a voluntary labeling program to identify and promote energy-efficient products and thereby reduce greenhouse gas emissions.22 Now a joint program of the U.S. Environmental Protection Agency and the U.S. Department of Energy, the ENERGY STAR label appears on major appliances, office equipment, lighting, home electronics, computer servers, and more. As seen in the market data, servers range widely in size and performance. The U.S. Environmental Protection Agency (EPA) has therefore created a limiting definition of servers to bound the energy specification. While the latest ENERGY STAR standard revision for servers is currently under review, its current definition (Draft 3, Version 2.0) of a computer server is reproduced below:

A computer that provides services and manages networked resources for client devices (e.g., desktop computers, notebook computers, thin clients, wireless devices, PDAs, IP telephones, other computer servers, or other network devices). A computer server is sold through enterprise channels for use in data centers and office/corporate environments. A computer server is primarily accessed via network connections, versus directly-connected user input devices such as a keyboard or mouse. For purposes of this specification, a computer server must meet all of the following criteria:

1) Is marketed and sold as a computer server;
2) Is designed for and listed as supporting one or more computer server operating systems (OS) and/or hypervisors, and is targeted to run user-installed enterprise applications;
3) Provides support for error-correcting code (ECC) and/or buffered memory (including both buffered DIMMs and buffered on board (BOB) configurations);
4) Is packaged and sold with one or more AC-DC or DC-DC power supplies; and
5) Is designed such that all processors have access to shared system memory and are independently visible to a single OS or hypervisor.23
21 Ibid.
22 History of ENERGY STAR, http://www.energystar.gov/index.cfm?c=about.ab_history
23 [ENERGY STAR 2012] "ENERGY STAR Program Requirements Product Specifications for Computer Servers, Eligibility Criteria Draft 3 Version 2.0." US EPA. 2012. http://www.energystar.gov/ia/partners/prod_development/revisions/downloads/computer_servers/Servers_V2_Draft_3_Specification.pdf

Additionally, the ENERGY STAR scope states that a "product must meet the definition of a Computer Server provided in Section 1 of this document [as reproduced above] to be eligible for ENERGY STAR qualification under this specification. Eligibility under Draft 3 Version 2.0, is limited to blade-, rack-mounted, or pedestal form factor computer servers with no more than four processor sockets."24 This scope restricts the servers that are covered by the ENERGY STAR standard by their ability to support additional processors; this in turn limits the server's energy use as well as other environmental criteria. According to many server manufacturers, 98% of server units sold have 4 sockets or fewer. The remainder of the market is high-end servers, which are typically custom builds/configurations.25

4. Server Industry Trends
Over the past decade, server manufacturers and others within the industry have developed programs to create faster and better servers. Server development is being driven, in part, by "Moore's Law," a principle named after Intel co-founder Gordon E. Moore and based on his 1965 observation that the number of transistors that can be placed inexpensively on an integrated circuit doubles roughly every two years, thus enhancing the performance of succeeding circuit generations. After nearly half a century, the trend toward progressively higher performance still continues. The following subsections discuss some of these performance-improving trends and associated issues.

4.1. High Density Computing
A major technology trend in the server industry is toward smaller form factors to accommodate IT expansion within confined floor spaces. Modular form factors drove the server market in the first quarter of 2012, with blade servers increasing 7.3% annually and density optimized servers increasing 38.8% annually. Density optimized servers, as defined by IDC,26 are servers designed for large-scale data centers, with streamlined system designs that focus on performance, energy efficiency, and density. Blade servers now account for 16.6% of all server revenue, while density optimized servers account for 4.5%. In the first quarter of 2012, several vendors announced converged solutions for blade platforms; IDC expects these to enter the market starting in the second quarter of 2012, delivering an integrated system for server, storage, and network.27

http://www.energystar.gov/ia/partners/prod_development/revisions/downloads/computer_servers/Servers_V2_Draft_3_Specification.pdf
24 Ibid.
25 Source: Server Technical Committee meeting, Houston, Texas, July 31, 2012.
26 "IDC Press Release: Despite a 7.2% Decline in Fourth Quarter Revenue, Worldwide Server Market Revenues Increase 5.8% in 2011, According to IDC." IDC, Feb 2012. Web. 4 Jun 2012. http://www.idc.com/getdoc.jsp?containerId=prUS23347812

IDC estimates that server system density has increased by 15% annually over the last 10 years as companies shifted from pedestal servers to rack-optimized systems and mainstream adoption of blade servers began.28 In 1996, companies deployed an average of 7 servers per rack. By 2006, the average had increased to 14 servers per rack. In 2008, HP revealed the potential to house up to 256 half-height blade servers in a single 42U rack, with support for up to 1,024 processors.29

The ENERGY STAR Program Requirements for Computer Servers draft 2 of Version 2.0 defines a blade server as "a high-density device that functions as an independent computer server and includes at least one processor and system memory, but is dependent upon shared blade chassis resources (e.g., power supplies, cooling) for operation."30 In order to be considered equivalent to a traditional rack server, a blade server must be installed within a blade chassis with access to blade storage. The ENERGY STAR Computer Server Version 2 draft defines these two systems as:

Blade Chassis: An enclosure that contains shared resources for the operation of blade servers, blade storage, and other blade form-factor devices. Shared resources provided by a chassis may include power supplies, data storage, and hardware for DC power distribution, thermal management, system management, and network services.

Blade Storage: A storage device that is designed for use in a blade chassis. A blade storage device is dependent upon shared blade chassis resources (e.g., power supplies, cooling) for operation.
The blade server, blade chassis, and blade storage combined form a "Blade System."

The industry move to high-density computing may provide significant life cycle financial and environmental benefits. Scaramella and Perry studied eight companies that had replaced 19-100% of their server infrastructure with blade servers and reported several benefits, including:31

- Power costs were reduced by $17 per user per year
- IT infrastructure costs were reduced by $55 per user per year; an additional 17.1% savings was reported by companies that utilized virtualization (refer to Section 4.3.1 for a discussion of virtualization)
- An estimated return on investment of 250% over a three-year period

The move to high-density computing is also likely to increase the pressure on system-level power and cooling management. Additionally, increased power draw and hotspots are likely to decrease server reliability, thus increasing failure rates. The power and cooling challenges caused by densification are therefore likely to require novel cooling solutions, both in the cooling systems themselves and in the mating server hardware.

27 "IDC Press Release: Worldwide Server Market Revenues Decline 2.4% in First Quarter as Market Growth Slows in Face of Market Transitions, According to IDC." IDC, May 2012. Web. 12 Jun 2012. http://www.idc.com/getdoc.jsp?containerId=prUS23513412
28 Scaramella, J. "Worldwide Server Power and Cooling Expense 2006-2010 Forecast." IDC. 2006. http://www.mm4m.net/library/IDCPowerCoolingForecast.pdf
29 Branscombe, M. "HP Puts 1000 Cores in a Single Rack." Tom's Hardware, Jun 2008. Web. 11 Jun 2012. http://www.tomshardware.com/reviews/hp-server-web,1943.html
30 [ENERGY STAR 2012]
31 Scaramella, J., Perry, R. "Business Value of Blade." HP. 2011. http://h17007.www1.hp.com/docs/proliantgen8/IDC-White-Paper-Business-Value-of-Blades.pdf

4.2. Server Internal Waste Heat Management
Note that the issue of waste heat management is addressed both internal to the server, through design, and externally, in the data center. The following section focuses on the issues internal to the server; the data center is discussed further in Section 5.

According to APC White Paper #57, typically more than 99% of the electricity used to power a server is converted into heat.32 This heat energy increases the internal temperature of components, which will eventually lead to equipment failure. Servers are therefore designed to remove the heat energy, usually through forced convection cooling by directing cool air over the hot components. Server cooling is becoming a more difficult challenge, however, as the amount of heat generated by a server grows with the energy use associated with increasing server performance.

Traditional rack servers have internal fans that move cool room air into the server and across the components, and expel the generated heat back into the room. For blade server systems, fans provide similar functionality; however, they are resident in the blade server chassis and are therefore not server components. Computer room air conditioners (CRACs) provide recurrent heat exchange, accepting the heat energy expelled by the server and other equipment, cooling it, and returning the cooled air back to the room. The cooled air is typically controlled within a specified temperature range to satisfy the cooling demands of IT equipment; the current ASHRAE specification is 64.4˚F-80.6˚F.33 The cooling effectiveness is therefore limited by the incoming air temperature, the maximum operating temperature of the components, and the speed of the air moving over the component surfaces.34

32 Evans, T. "APC White Paper #57: Fundamental Principles of Air Conditioners for Information Technology." APC. Rev 2004-2. http://www.apcdistributors.com/white-papers/Cooling/WP-57 Fundamental Principles of Air Conditioners for Information Technology.pdf
33 "2008 ASHRAE Environmental Guidelines for Datacom Equipment - Expanding the Recommended Environmental Envelope." American Society of Heating, Refrigerating and Air-Conditioning Engineers, 2008. http://tc99.ashraetcs.org/documents/ASHRAE_Extended_Environmental_Envelope_Final_Aug_1_2008.pdf
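The airflow limits on forced convection cooling described above follow the standard sensible-heat relation Q = ṁ·cp·ΔT. The sketch below illustrates how the required airflow scales with heat load and allowable air temperature rise; the 400 W heat load and 10 K rise are hypothetical illustrative values, not figures from this report.

```python
# Sensible-heat sizing sketch: the heat a moving air stream can absorb
# is Q = m_dot * c_p * dT, so the airflow a server's fans must supply
# grows with heat load and shrinks with the allowable temperature rise.

AIR_DENSITY = 1.2   # kg/m^3, room-temperature air (assumed)
AIR_CP = 1005.0     # J/(kg*K), specific heat of air (assumed)

def airflow_for_heat_load(watts: float, delta_t_k: float) -> float:
    """Volumetric airflow (m^3/s) needed to carry away `watts` of heat
    with an air-side temperature rise of `delta_t_k` kelvin."""
    mass_flow = watts / (AIR_CP * delta_t_k)   # kg/s of air
    return mass_flow / AIR_DENSITY             # convert to m^3/s

# Example: a hypothetical 400 W server cooled with a 10 K air-side rise
flow_m3s = airflow_for_heat_load(400.0, 10.0)
flow_cfm = flow_m3s * 2118.88                  # 1 m^3/s ~ 2118.88 CFM
print(round(flow_cfm))  # roughly 70 CFM
```

The same relation explains why hotter inlet air (a smaller allowable ΔT before components reach their limits) forces fans to move more air, raising fan energy use.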
Figure 7 - Diagram of Internal Server Components

The amount of heat being released to the room environment is compounded by the increasing density of servers. The APC White Paper notes that a single blade server chassis can release four kilowatts of heat energy into the IT room or data center, with approximately 50% of the heat energy released by servers originating in the microprocessor itself. Hewlett Packard offers additional insight, stating that a traditional rack-type server setup with 14 servers will require 8 kW of heat exchange, 26 servers will require 15 kW, and 42 servers will require 24.2 kW.35

Data centers are having difficulty adjusting to the effect of high-density racks on power and cooling resources, and alternate cooling technologies are being developed. Several companies are considering liquid cooling as an alternative to traditional air cooling as a means to promote energy and cost efficiency. A common method of liquid cooling is to use water as the cooling medium, since water has 3,500 times the thermal capacity of air.36 In order to utilize water cooling, a water block must be fixed to the heat-generating components in place of the traditional air-cooling heat sink and fan. As the processors generate heat, it is transferred to the water, which is run through a cooling system that dissipates the generated heat and chills the water. A benefit of water-cooled systems is their modularity; they can operate on a server-by-server basis or for an entire rack while effectively dissipating heat. This offers several advantages over traditional air cooling, since energy use is substantially reduced. Water-cooled systems can also be overclocked, a process that increases the processor speed and allows for increased performance in exchange for increased heat generation.

IBM's Aquasar supercomputer, built in 2010, uses water cooling to maintain the system's temperature. Due to water's thermal capacity, the water carries a majority of the generated heat away from the system at over 60°C; the water is then used as a heat source for nearby buildings. This has resulted in an energy savings of 40% and a reduction of CO2 emissions by up to 85%.37

Other liquid cooling strategies exist. For example, Green Revolution Cooling, a small Texas-based company, uses a modified mineral oil called GreenDef as a dielectric medium to cool servers. Because of the dielectric properties of this solution, servers can be submerged in the liquid after waterproofing, which involves removing the fans and encapsulating the hard drives. GreenDef has 1,200 times the thermal capacity of air, allowing the custom server rack to be densely packed; this property also enables server processors to be overclocked successfully, creating even higher output. The system is attached to a pump and heat exchanger, and some setups export the hot water as a heat source to nearby facilities. In a typical 100 kW installation, the installation cost and annual energy requirements were half those of a same-sized air-cooled system.38

34 "The Problem of Power Consumption in Servers." Intel, 2009. http://www.intel.com/intelpress/articles/The_Problem_of_Power_Consumption_in_Servers.pdf
35 Miller, R. "Too Hot for Humans, but Google Servers Keep Humming." Data Center Knowledge, March 2012. Web. 4 Jun 2012. http://www.datacenterknowledge.com/archives/2012/03/23/too-hot-for-humans-but-google-servers-keep-humming/
36 "HP Modular Cooling System: water cooling technology for high-density server installations." HP. 2007. http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00600082/c00600082.pdf

4.3. Server Utilization and Connectivity
As previously stated, a server is typically dedicated to a single function, and therefore the amount of time an average server is actually being used, or the server utilization, is only around 10-22%.39 This means that a data center's processing capacity as a whole is significantly underutilized. Some estimates state that 15% of the servers in data centers are never utilized.40 The following subsections discuss some of the trends to boost utilization, reduce energy costs, and save equipment and space.

37 "Zero-emission data center." IBM Research - Zurich. IBM, Jul 2010. Web. 4 Jun 2012. http://www.zurich.ibm.com/st/server/zeroemission.html
38 "The CarnotJet System." Green Revolution Cooling. http://www.grcooling.com/docs/Green-Revolution-Cooling-CarnotJet-System-Pamphlet.pdf
39 Koomey, J., Belady, C., Wong, H., Snevely, R., Nordman, B., Hunter, E., Lange, K., Tipley, R., Darnell, G., Accapadi, M., Rumsey, P., Kelley, B., Tschudi, B., Moss, D., Greco, R., Brill, K. "Server Energy Measurement Protocol." 2006. http://www.energystar.gov/ia/products/downloads/Finalserverenergyprotocol-v1.pdf
40 [Microsoft 2011] Aggar, M. "The IT Energy Efficiency Imperative." Microsoft. 2011. http://download.microsoft.com/download/7/5/A/75AB83E8-2487-409F-AC6C-4C3D22B72139/ITEI_Paper_5.27.11.pdf

4.3.1. Server Virtualization
Virtualization is a software-based solution to server underutilization. By using specially designed software, one physical server, or host, can be converted into multiple virtual machines, or guests. Each virtual server acts like a unique physical device, capable of running its own operating system (OS). This allows the one-application-per-server motif to be reworked into one application per virtual machine. Using virtualization, a typical small data center with one domain name server, one mail server, and one web server could be compacted onto a single physical machine hosting the base system and two virtual machines. Following a survey of the IT industry, Healy, Humphreys, and Anderson suggested that virtualization can reduce hardware costs by 20% and generate a savings of 23%.41 Despite these potential benefits, two-thirds of organizations have virtualization enabled on less than half of their servers.42

Virtualization not only provides hardware reduction benefits, but it also saves energy. Figure 8 shows a typical server energy profile, where at low utilization the power consumed is about half of the power necessary at full utilization. Two identical servers operating at 20% utilization each would therefore require more energy than a single server operating at 40% utilization.

Figure 8 - Relationship between Server Utilization and Power Consumption43
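The energy argument above can be made concrete with a simple linear power model. The sketch below assumes, consistent with the utilization/power profile described in the text, that a server idles at half its full-load power and that power rises linearly with utilization; the wattages themselves are hypothetical.

```python
# Linear server power model: P(u) = P_idle + (P_max - P_idle) * u,
# with idle power assumed to be half of full-load power (per the
# utilization/power profile in the text; wattages are hypothetical).

P_MAX = 300.0            # W at 100% utilization (assumed)
P_IDLE = 0.5 * P_MAX     # W at idle

def power(utilization: float) -> float:
    """Power draw in watts at a given utilization (0.0 - 1.0)."""
    return P_IDLE + (P_MAX - P_IDLE) * utilization

two_servers = 2 * power(0.20)   # two lightly loaded physical hosts
one_server = power(0.40)        # same total work, consolidated

print(two_servers, one_server)  # 360.0 W vs 210.0 W
```

Under this model, consolidating the same total workload onto one host saves roughly 40% of the power, because the second server's large idle draw is eliminated.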
41 Healy, M., Humphreys, J., Anderson, C. "IBM Virtualization Services." IBM. 2008. http://www-935.ibm.com/services/us/its/pdf/idc_white_paper_for_ibm_on_virtualization_srvcs-v2.pdf
42 [Microsoft 2011]
43 Ibid.

4.3.2. Server Consolidation
Like virtualization, consolidation is a method to reduce the number of servers within a data center. However, unlike software-based virtualization, consolidation is hardware based. Instead of grouping different applications or functions onto one server, consolidation replaces multiple servers, each with low utilization and serving the same function, with a single higher-utilized server.44 As with virtualization, this method can help lower costs and save space by eliminating excess equipment. Carr stated in 2005 that "large data centers are becoming increasingly common as smaller data centers consolidate."45 As noted in Figure 8, servers have high power requirements at low utilization; thus consolidating two servers into one is less energy intensive than running two independent servers. Figure 9 depicts six firms operating individual mail servers and a second scenario where they share a cloud-based service instead, enabling a net reduction of two servers. A cloud computing center can be considered to be a large data center consolidated from several smaller ones. This highlights a basic economy of scale: the larger the data center, the more efficient it is compared to a set of smaller data centers serving the same purpose.46

44 Iams, T. "Consolidation and virtualization: The same, but different." http://searchdatacenter.techtarget.com/tip/Consolidation-and-virtualization-The-same-but-different
45 Carr, Nicholas G. "The End of Corporate Computing." MIT Sloan Management Review, vol. 46, no. 3, pp. 67-73. 2005. http://sloanreview.mit.edu/the-magazine/2005-spring/46313/the-end-of-corporate-computing/
46 "Google's Green Computing: Efficiency at Scale." Google. 2011. http://static.googleusercontent.com/external_content/untrusted_dlcp/www.google.com/en/us/green/pdfs/google-green-computing.pdf

Figure 9 - Effects of Consolidation / Cloud Computing

In February 2010, the U.S. government launched the Federal Data Center Consolidation Initiative (FDCCI) and issued guidance for Federal Chief Information Officers (CIO) Council agencies. The guidance called for agencies to inventory their data center assets, develop consolidation plans throughout fiscal year 2010, and integrate those plans into agency fiscal year 2012 budget submissions. The Consolidation Initiative is intended to reduce the number of data centers across the government and assist agencies in applying best practices from the public and private sectors, with goals to: reduce the overall energy and real estate footprint of government data centers; reduce the cost of data center hardware, software, and operations; increase the overall IT security posture of the government; and shift IT investments to more efficient computing platforms and technologies. The Consolidation Initiative plan is to shut down at least 1,200 of the 3,133 data centers the government owns and operates. To date, 250 data centers have been shut down, and there are plans to close a total of 479 by the end of fiscal year 2012.47

4.3.3. Cloud Computing
The National Institute of Standards and Technology defines cloud computing as "a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction." Figure 9 illustrates how consolidation using a cloud-computing data center is more efficient.

A Google case study analyzed the effects of a locally hosted email service compared to a cloud-hosted email service. The study methodology was based on businesses with 50 (small business), 500 (medium business), and 10,000+ (large business) employees, and compared the data centers required by these businesses to a cloud computing data center operated by Google. The results indicate that as the number of users increases, per-user requirements for power and the corresponding emissions decrease dramatically, following a basic economy-of-scale argument. Some results from this study are outlined below:48

- Small business: a single, mid-range, multi-core server with local disk that can serve 300 users and draws 200 watts. Annual energy per user: 175 kWh; annual CO2 emissions per user: 103 kg.
- Medium business: a single, large, many-core server with combinations of local and network storage, which can host 1,000 users and draws 450 watts. Annual energy per user: 28.4 kWh; annual CO2 emissions per user: 16.7 kg.
- Large business: several large, many-core servers with combinations of local and network storage, which can host 1,000 users and draw 450 watts. Annual energy per user: 7.6 kWh; annual CO2 emissions per user: 4.1 kg.
- Google cloud-based service: annual energy per user: < 2.2 kWh; annual CO2 emissions per user: < 1.23 kg.

Accenture, a research firm, derived similar results using company sizes of 100, 1,000, and 10,000, comparing individual data center emissions to those from a single Microsoft cloud data center.49 The analysis suggests that typical carbon emission reductions by deployment size are more than 90 percent for small deployments of about 100 users, 60 to 90 percent for medium-sized deployments of about 1,000 users, and 30 to 60 percent for large deployments of about 10,000 users.

47 Federal Chief Information Officers Council. "Maximizing ROI: Consolidating Federal IT Infrastructure." https://cio.gov/maximizing-roi/, accessed 10/8/12.
48 [Google 2011]

5. Server Impact on Overall Data Center Energy Use
A significant portion of the energy required to use a server is directed into the infrastructure used to support the server. Server design can influence not only the energy used by the server itself, but also the energy used by the infrastructure. Power Usage Effectiveness (PUE) is a frequently used measure of the effectiveness of data center infrastructure and operation. PUE is the ratio of the total data center power input to the power directly consumed by the IT equipment. Assuming that all power sources have been properly accounted for, the theoretically ideal PUE for a data center is 1.0, meaning that no additional energy is required to operate the data center beyond the power directly consumed by the equipment. While 1.0 is not practically achievable, a number of companies and organizations are reporting PUE numbers for their new state-of-the-art data centers which approach 1.0, with a number of facilities reporting approximately 1.10.50 A study conducted in 2009 by the U.S. EPA ENERGY STAR program looked at PUE for a broad range of 100 data centers; this study showed a range of PUE values between 1.25 and 3.75, with an average value of 1.91. An average PUE of 1.91 means that over 47% of the power used in a typical data center goes to supporting the infrastructure, including cooling.51

Electrical power management, equipment utilization levels, and HVAC are major areas of energy consumption within data centers. In conventionally cooled data centers, the air conditioning loads are one of the largest drivers of energy consumption after the IT equipment. Heat recovery and reuse (for example, in absorptive cooling systems), water- or refrigerant-based cooling, and free air cooling52 are all strategies for reducing the energy cost of data center cooling. However, these can be very difficult or expensive to implement as a retrofit to existing designs.

Expanding the allowable environmental operating range (temperature and humidity) of IT equipment can result in lower HVAC-related energy consumption. In 2008, ASHRAE expanded its classes for data center equipment environmental specifications; four classes (1-4) are defined, with each higher-numbered class having a wider environmental range.53 The classes define recommended and allowable (wider) operational ranges for dry-bulb temperature and humidity (RH and wet bulb), as well as ranges for non-operating equipment. In the referenced 2011 ASHRAE whitepaper, the recommended and allowable ranges are refined relative to the 2008 standard, and classes A1-A4 are defined; the operational ranges for A3 and A4 are expanded relative to the 2008 standard. In the model R720 server technical documentation, Dell provides environmental specifications that allow for continuous operation at the A2 level, and transient operation at A3 and A4 (less than 10% and less than 1% of annual operating hours, respectively).54 This type of product information can be helpful to the data center designer/operator in setting environmental controls criteria that minimize HVAC-related energy consumption, and could be added to the "ENERGY STAR® Power and Performance Data Sheet."55

49 "Cloud Computing and Sustainability." Accenture. 2010. http://download.microsoft.com/download/A/F/F/AFFEB671-FA27-45CF-9373-0655247751CF/Cloud%20Computing%20and%20Sustainability%20-%20Whitepaper%20-%20Nov%202010.pdf
50 Patterson, M. K. "Metrics Overview and Update." Presentation at the 2011 Workshop on Energy Efficiency: HPC System and Datacenters, Seattle, Washington, USA, 2011. http://dl.acm.org/citation.cfm?id=2159350&CFID=170830100&CFTOKEN=87344492
51 Sullivan, A. "ENERGY STAR® for Data Centers." US EPA, ENERGY STAR presentation, Feb 4, 2010. http://www.energystar.gov/ia/partners/prod_development/new_specs/downloads/uninterruptible_power_supplies/ENERGY_STAR_Buildings_Team_Metering_Presentation.pdf, last accessed October 8, 2012.
52 Pendelberry, S., Thurston, M., et al. "Case study — The making of a Green Data Center." Proceedings of the 2012 IEEE International Symposium on Sustainable Systems and Technology, Boston, MA, May 16-18, 2012. http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6228001
53 ASHRAE TC 9.9. "2011 Thermal Guidelines for Data Processing Environments – Expanded Data Center Classes and Usage Guidance." ASHRAE, 2011. http://www.eni.com/green-data-center/it_IT/static/pdf/ASHRAE_1.pdf, last accessed October 8, 2012.
54 "PowerEdge R720 and R720XD Technical Guide, Rev 1.1." Dell, March 2012. http://i.dell.com/sites/content/shared-content/data-sheets/en/Documents/dell-poweredge-r720-r720xd-technical-guide.pdf, last accessed October 8, 2012.
55 "ENERGY STAR Power and Performance Data Sheet: Dell PowerEdge R720 XD featuring the Dell Smart 1100W PSU and Intel E5 2640." http://www.dell.com/downloads/global/products/pedge/en/Dell-PowerEdge-R720-XD-1100W-E5-2640-Family-Data-Sheet.pdf, last accessed October 8, 2012.

6. Server Environmental Assessments
Environmental impacts of a product occur throughout the product life cycle. Some examples of the ways in which a product impacts the environment are through the depletion of natural resources (fuel/energy, material, water), impacts on ecosystem health (terrestrial and aquatic ecotoxicity, acidification, eutrophication, land use), and impacts on human health (human toxicity, stratospheric ozone depletion, ionizing radiation, and climate change).

Life cycle assessment (LCA) is a tool used to quantify the environmental impacts of a product holistically, throughout the entire life cycle: material extraction, manufacturing, transportation, use, and end of life. The impacts associated with the product are assessed by compiling an inventory of relevant energy and material inputs and environmental releases, evaluating the potential environmental impacts associated with the identified inputs and releases, and interpreting the results to help make more informed decisions. These studies are also very useful in identifying whether environmental burdens are shifted from one product life cycle phase (for example, material extraction) to another (for example, product end of life).

Effort was exerted to identify full life cycle studies for computer servers. In addition to literature searches and inquiries through RIT industry contacts, the manufacturers on the Technical Committee were asked to identify any known full LCAs on servers. At the time of this writing, no full LCA studies using multiple environmental impacts have been identified. Carbon footprinting studies of computer servers, however, were identified, and the following subsections highlight some of these studies, which have investigated the life cycle global warming potential of a server. It should be noted that carbon footprinting is a simplified form of LCA focused on only one environmental impact, and that computer servers have additional known environmental impacts, such as resource depletion, human toxicity, and environmental toxicity, that are not reported by these studies. Complementing the carbon footprinting studies with full life cycle assessments would avoid burden shifting from GHG emissions to other relevant environmental areas of concern.

6.1. Carbon Footprint of a Typical Dell Rack Server
Dell conducted a study in 2011 to determine the carbon footprint (greenhouse gas (GHG) emissions contribution to global warming potential (GWP), in kg of carbon dioxide equivalents (CO2e)) of the Dell PowerEdge R710 server.56 The analysis was performed following the ISO 14040 and ISO 14044 standards framework57,58 on a PowerEdge R710 server with two Intel Xeon processors, 12 GB of RAM, 4x146 GB hard drives (HDD), two high-output power supplies, one DVD drive, and four fans.

The Dell paper states that the total carbon footprint of a Dell PowerEdge R710 is approximately 6360 kg CO2e. This was calculated over a 4-year lifetime running 24 hours a day, 7 days a week, assuming operation 50% of the time at a 148 W idle workload and 50% of the time at a 285 W full workload. The average US grid mix was used for this calculation. Results show that over 90 percent of the total life cycle GHG emissions came from the use phase (5960 kg CO2e); see Figure 10. Only 7 percent of the GHG emissions came from manufacturing, which included raw material extraction, subassembly manufacturing, transportation of subassemblies, and final assembly.

56 Stutz, M., O'Connell, S., & Pfluefer, J. "Carbon Footprint of a Typical Dell Rack Server." International Symposium on Sustainable Systems and Technology. May 2012. Boston, MA.
57 ISO 14040:2006 Environmental management - Life cycle assessment - Principles and framework
58 ISO 14044:2006 Environmental management - Life cycle assessment - Requirements and guidelines

Figure 10 - Total Product Carbon Footprint of the Dell PowerEdge R710 in the US (Use: 5960 kg CO2e; Manufacturing: 471 kg CO2e; Transport: 15 kg CO2e; Recycling: -86 kg CO2e)

Dell ran two additional model scenarios: the first modeled the server at 100 percent utilization, and the second ran the server at 100 percent idle. At full utilization the unit produced 8240 kg CO2e, or 30% more carbon emissions, and at idle it produced 4470 kg CO2e, or 30% less (see Figure 11). Dell stated that these results were a powerful message for eliminating underutilized servers through virtualization. Using the above report numbers, one can see that two servers running at 50 percent utilization (nominal case, 2 x 6360 = 12720 kg CO2e) would produce 54 percent more carbon emissions than one server running at 100 percent utilization (8240 kg CO2e), reinforcing Dell's support for virtualization.
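The study's headline numbers can be cross-checked from the inputs reported above. The sketch below (using only figures from the Dell study) recovers the lifetime use-phase energy, the grid emission factor that the 5960 kg CO2e use-phase result implies, and the 54 percent virtualization comparison; the implied factor bundles whatever upstream US grid-mix assumptions Dell made.

```python
# Cross-check of the Dell PowerEdge R710 carbon-footprint arithmetic,
# using only values reported in the study summarized above.

HOURS = 4 * 365 * 24                 # 4-year lifetime, 24x7 operation
avg_watts = 0.5 * 148 + 0.5 * 285    # 50% idle (148 W), 50% full (285 W)

use_kwh = avg_watts * HOURS / 1000   # lifetime use-phase energy
implied_factor = 5960 / use_kwh      # implied kg CO2e per kWh of grid power

# Virtualization comparison: two half-utilized servers vs one fully used one
extra = (2 * 6360) / 8240 - 1

print(round(use_kwh))            # ~ 7586 kWh over the lifetime
print(round(implied_factor, 2))  # ~ 0.79 kg CO2e/kWh implied
print(round(extra * 100))        # ~ 54 percent more emissions
```

The fact that the derived 54 percent figure matches the report's own comparison suggests the scenario numbers are internally consistent.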
Figure 11 - Total Carbon Footprint (kg CO2e) of the Dell PowerEdge R710 server, broken down by use, manufacturing, transport, and recycling at full utilization, nominal utilization, and full idle

6.2. Carbon Footprint of Fujitsu PRIMERGY RX and TX 300 S5 Servers
Fujitsu published a study in 2010 called "Life Cycle Assessment and Product Carbon Footprint – PRIMERGY TX 300 S5 and PRIMERGY RX 300 S5 Server."59 Though the title implies that a full life cycle assessment was completed, the white paper only published results for the carbon footprint. The paper states, however, that greenhouse effect, cumulative energy demand, acidification, terrestrial and aquatic eutrophication, photochemical oxidant formation, human toxicity, and eco-toxicity were studied. The servers included one Intel Xeon 2.26 GHz 8 MB processor, one 4 GB DDR3-1066 PC3-8500 ECC memory module, one 146 GB hard drive, a RAID controller, a DVD-RW drive, and a rack mount kit.

The Fujitsu paper states that the total carbon footprint of the PRIMERGY TX300 S5 is approximately 3750 kg CO2e. This was calculated over a 5-year lifetime operating at a 30% workload. The average German grid mix was used for this calculation. Results show that over 85 percent of the total life cycle GHG emissions came from the use phase (see Figure 12). The impact of the use phase on carbon footprint was very similar to the Dell results.

Though limited data is contained in the white paper, a few other interesting results were presented. One result highlighted how source power generation impacts the carbon footprint: the same analysis run in France, where there is a high level of nuclear power, instead of Germany, where coal use is high, reduced the carbon footprint from 3750 kg CO2e to 980 kg CO2e. Additionally, one of the report's "lessons learned" was to avoid focusing solely on the energy efficiency of servers: though the use phase plays a big role in the greenhouse effect, raw materials are key factors for several other impact categories. This is an important statement, though no supporting data was provided.

59 "White Paper: Life Cycle Assessment and Product Carbon Footprint – Server PRIMERGY TX/RX 300 S5." Fujitsu. 2010. http://fujitsu.fleishmaneurope.de/wp-content/uploads/2010/12/LCA_PCF-Whitepaper-PRIMERGY-TX-RX-300-S5.pdf

Figure 12 - Respective Share of the Total Product Carbon Footprint (Fujitsu)

6.3. Case Study of an IBM Rack-mounted Server
Weber looked at the uncertainty and variability in carbon footprinting methodology using an IBM rack-mounted server. The specific server model number and components are not identified; however, the server is identified as an IBM model circa 2008. The server life was modeled as a triangular distribution with a most likely value of 6 years and minimum and maximum values of 3 and 10 years. The use-phase mean was 6238 kg CO2e, representing around 94 percent of the server's total carbon footprint (88%-97% with uncertainty). The analysis also examined the contribution of the various components without including the dominant use phase, and showed that the manufacture of the integrated circuits (ICs) and printed wiring boards (PWBs) is responsible for a combined 45% of the remaining carbon emissions once the use phase is excluded. The breakdown of individual product carbon footprint contributions from the components is shown in the figure below.60
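The reported shares imply the rough magnitudes below. This is a back-of-the-envelope check derived from the percentages in the study; the derived totals are implied by those percentages, not figures quoted directly by Weber.

```python
# Back-of-the-envelope check of the Weber case-study percentages.
# Only the 6238 kg CO2e use-phase mean and the quoted shares come from
# the study; the derived totals are implied, not directly reported.

use_phase = 6238.0   # kg CO2e, mean use-phase footprint
use_share = 0.94     # use phase is ~94% of the total footprint

total = use_phase / use_share    # implied total footprint
non_use = total - use_phase      # everything except the use phase
ic_and_pwb = 0.45 * non_use      # ICs + PWBs ~45% of the non-use part

print(round(total))       # ~ 6636 kg CO2e
print(round(non_use))     # ~ 398 kg CO2e
print(round(ic_and_pwb))  # ~ 179 kg CO2e
```

The implied non-use total of roughly 400 kg CO2e is consistent with the scale of the component breakdown shown in Figure 13.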
Weber, C. L. “Uncertainty and Variability in Product Carbon Footprinting.” Journal of Industrial Ecology, 16(2), 203‐211. 2012. doi: 10.1111/j.1530‐9290.2011.00407.x http://onlinelibrary.wiley.com/doi/10.1111/j.1530‐9290.2011.00407.x/full
[Figure 13 is a bar chart of product carbon footprint without the use phase (kg CO2e), broken down by subgroup: Logistics, Packaging, Bulk Materials, Power Supplies, DVD‐ROM, Hard Drive, Components, Raw PWB, and IC.]
Figure 13 ‐ Mean Results for Server Carbon Footprint by Subgroup without Use Phase61
7. Server Standard Scope Topics
The following sub‐sections highlight current industry practices and current requirements in various environmental impact areas of concern. The sub‐section topics are aligned with the topics in the IEEE 1680 family of standards.
7.1. Server Material Selection
All materials used in products impact the environment in some manner, whether through their production, their use in products, or the disposal of those products. Minimizing the impact that a product has on the environment requires selecting materials that are, in general, less toxic, less energy intensive to make (which may include containing recycled content), from renewable sources, and easier to reuse or recycle. The materials currently used in servers are estimated in the following subsections. These materials were determined through a disassembly analysis performed at GIS and a material analysis provided by Cascade Asset Management. Note that a significant percentage of the materials used in servers are steel and aluminum, with a low percentage of plastic content.
7.1.1. Server Demanufacturing: GIS
In 2010, graduate students led by GIS faculty, Dr. Callie Babbit and Dr. Michael Thurston, researched the effects that electronic waste has on specific material flows. This study62 included various end‐of‐life scenarios, including reuse and recycling, for components of common IT products. The study included 61
ibid 62
RIT internal report, “Analysis of E‐Waste Material Flows, and Opportunities for Improved Material Recovery,” March 2010. Confidential; some data reproduced here with permission.
completing a full disassembly analysis down to the individual material level, with materials identified using various laboratory analysis techniques. One of the products studied was a Dell PowerEdge R710 server, a model considered representative of volume servers. Though the main study remains confidential, some of its general findings are reproduced below with permission. A major focus of the study applied material flow analysis (MFA) methodology to servers. The MFA investigated each material's total volume, value, and percent of total waste; this data was then used to estimate the current breakdown and volumes in which products and components are recovered for refurbishment and reuse, remanufactured, recycled, or disposed. The Dell PowerEdge components and assemblies were separated into individual material types. In some cases, for simple geometries, material breakdown was determined by means of simple volume/density calculations. Plastic components were identified by their material codes; plastic components without material codes were assigned to an “Undefined Plastic” category. A variety of methods were used to assign metal components to a material category, such as inspection (observed density and stiffness) and level of magnetism; Energy Dispersive X‐ray Spectroscopy was also used on some components that were not obvious by inspection. Finally, for lithium‐ion batteries and printed circuit boards, previous compositional studies from the literature were used to estimate the material composition by weight percentage.
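The weight fractions reported in the composition results are straightforward to reproduce from the measured component weights. The sketch below uses a handful of the study's reported values (in grams) for the Dell PowerEdge R710; only a subset of materials is shown, so the fractions do not sum to 100%.

```python
# Weight-fraction arithmetic behind a material composition table.
# Weights (grams) are a subset of the GIS study's reported R710 values.
weights_g = {
    "Steel / Ferrous": 15480,
    "Aluminum": 3934,
    "Halogenated Epoxy + Glass Reinf": 2359.4,
    "Total Plastic": 1179,
    "Copper": 863,
    "Ferrites / Magnets": 456,
}
total_g = 24680  # total server weight from the study

# Print each material's share of total server weight, heaviest first.
for material, grams in sorted(weights_g.items(), key=lambda kv: -kv[1]):
    print(f"{material:32s} {grams:8.1f} g  {grams / total_g:6.1%}")
```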
The material analysis results indicate that the majority of a Dell PowerEdge server's weight is composed of ferrous steel (62.7%), aluminum (15.9%), halogenated epoxy (9.6%), and plastics (4.8%). Detailed estimates of material composition can be found in the table below.

Table 3 ‐ Total Material Composition of the Dell PowerEdge R710 by weight

Material                                                   Weight (grams)   Percentage
Total Weight                                               24680            ‐
Steel / Ferrous                                            15480            62.70%
Ferrites / Magnets                                         456              1.80%
Aluminum                                                   3934             15.90%
Copper                                                     863              3.50%
Tin                                                        123              0.50%
Brass                                                      1.8              0.00%
Mercury                                                    0                0.00%
Carbon                                                     0.36             0.00%
Lithium                                                    0.21             0.00%
Cobalt                                                     0.6              0.00%
Nickel                                                     32.1             0.10%
Silver                                                     10.6             0.00%
Gold                                                       0.68             0.00%
Palladium                                                  0.17             0.00%
Total Plastic                                              1179             4.80%
Plastic (various)                                          701              2.80%
PC+ABS FR                                                  23.8             0.10%
PC+ABS FR(40)                                              299.7            1.20%
PBT‐GF30‐FR(17)                                            154.9            0.60%
PVC                                                        0                0.00%
Rubber (including foams)                                   5.3              0.00%
Paper                                                      15.3             0.10%
Epoxy                                                      11.5             0.00%
Capacitor Electrolyte (Ethylene Glycol or Butyrolactone)   30               0.10%
HD Glass/Ceramic Disk                                      176.3            0.70%
Halogenated Epoxy+Glass Reinf                              2359.4           9.60%
Li‐Ion Electrolyte Non‐Aqueous                             0.03             0.00%
Li‐Ion Solvent (propylene carbonate, 1,3 dioxolane, Dimethoxyethane)   0.14   0.00%

7.1.2. Server Demanufacturing: Cascade
Neil Peters‐Michaud, Owner and CEO of Cascade Asset Management (Cascade), provided a rack server demanufacturing study performed in August 2012. Cascade analyzed the material fractions in 1972 lbs of servers that were being processed at end of life. The study results are reproduced in Figure 14 with permission.
Demanufactured Fractions ‐ Servers
Total Weight ‐ 1972 lbs

Fraction                                   Weight (lbs)
Scrap Ferrous Metal                        1251
Power Supplies                             249
Precious Circuit Boards                    202
Shredded Hard Drives                       104
Copper Heatsinks w/ Aluminum               58
Mixed Plastic                              25
Aluminum Breakage                          24
Copper                                     18
Computer Cables                            10
Low Value Boards                           3
Universal Waste: Ni‐Cad/Ni‐MH Batteries    1
Universal Waste: Lithium Batteries         1
Figure 14 – Cascade Demanufactured Material Fractions of Servers (August 2012)
7.2. Environmentally Sensitive Materials
No information was found in the literature search concerning the specific chemical makeup of substances used in servers. However, servers contain components similar to those in other electronic devices, and that data is reported here as a proxy. The report Information on Chemicals in Electronic Products63 states that analyzing the chemicals present in electronic products is not easy; computers and mobile phones, for example, can contain over one thousand different substances. The report also states that the main hazardous substances found in electronic products are: lead, mercury, cadmium, zinc, yttrium, chromium, beryllium, nickel, brominated flame retardants, antimony trioxide, halogenated flame retardants, polyvinyl chloride (PVC), and phthalates. The report also provided some examples of material use in electronic equipment. Batteries can contain heavy metals such as lead, mercury, and cadmium. Solder can contain lead, tin, and other metals. Internal and external wiring is often coated with PVC, which can contain additional substances such as phthalates. Semiconductors can be encapsulated by plastics containing brominated flame retardants. Finally, printed circuit boards can contain 63
Nimpuno, N. & C. Scruggs (2011). Information on Chemicals in Electronic Products. Copenhagen: Nordic Council of Ministers. ISBN 978‐92‐893‐2218‐8 http://www.norden.org/en/publications/publikationer/2011‐524
brominated flame retardants, antimony trioxide, and other hazardous materials such as chromium, lead, mercury, beryllium, zinc and nickel. Limited data was found on the active use of alternative materials. Server operating conditions and performance demands (high reliability, high energy use, and high‐temperature operation, to name a few) require performance materials that are not easily replaced with “green” alternatives. Some information was found on the use of lead‐free solders. Dell64 advertises that since late 2007 it has been launching lead‐free servers such as the Dell R900 and R905. In early 2008, Dell launched its first lead‐free blade servers, the PowerEdge™ M600 and M605. Since then, Dell claims that all new basic‐configuration PowerEdge servers have been lead‐free.
7.2.1. RoHS Directive
A few hazardous materials in electronic equipment are governed by the European Directive 2002/95/EC on the Restriction of the Use of Certain Hazardous Substances in Electrical and Electronic Equipment (commonly referred to as the RoHS Directive). The Directive was adopted in February 2003 by the European Union (EU), took effect in July 2006, and is required to become law in each EU member state. The directive restricts the use of six hazardous materials (lead, mercury, cadmium, hexavalent chromium, polybrominated biphenyls (PBB), and polybrominated diphenyl ethers (PBDE)) in the manufacture of electrical equipment sold in the EU. This Directive has been adopted by many server manufacturers worldwide due to the global nature of IT equipment sales. Additionally, some states in the U.S., such as California65, have adopted RoHS legislation based on the EU directive. On May 14, 2009, H.R. 2420, the Environmental Design of Electrical Equipment (EDEE) Act, was introduced as a bill in the US House of Representatives with requirements similar to the EU RoHS; however, the bill died in committee. The EU Directive has an exemption specific to servers for “lead in solders for servers, storage and storage array systems, network infrastructure equipment for switching, signaling, transmission as well as network management for telecommunications.” The primary reason for this exemption is that solder joints are subjected to significant stress due to thermal cycling, and solders with lead have historically been more tolerant and more reliable than lead‐free solders. 64
Design. Smarter material choices: what's inside our products and what's not. Dell, 2012. Web. 12 Jun 2012. http://content.dell.com/us/en/corp/d/corp‐comm/earth‐greener‐products‐materials 65
“California Department of Toxic Substances Control.” Restrictions on the use of Certain Hazardous Substances (RoHS) in Electronic Devices. State of California, 2010. Web. 12 Jun 2012. http://dtsc.ca.gov/HazardousWaste/rohs.cfm
7.3. Product Longevity
Server equipment has historically been replaced when it no longer meets the performance needs of the market, not necessarily at the end of the functional life of the equipment itself. A survey of the IT market from IDC notes that the optimal time to replace a server is after three years of operation, at which point the return on investment (ROI) of purchasing new equipment, compared to continuing to operate the current equipment, will be less than one year66. These product refreshes have the benefit of increased efficiency and better power utilization, as noted by Dell67. Data from Dell supports this replacement timeframe, noting that its PowerEdge servers usually operate for about 4 years before they are removed from the market. Hewlett Packard suggests a typical lifetime is approximately 3‐4 years. One of the main reasons given for replacing servers before complete failure is that server reliability decreases with age, which increases operating costs. The survey of the IT market mentioned above questioned over 50 participants in the server market to discover the effect that aging had on server equipment. Note the increase in failure rates and downtime as the equipment ages (Figure 15).
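The replacement argument above is, at bottom, a payback calculation: a new server pays for itself once its annual savings (lower energy, maintenance, and downtime costs) cover its purchase price. A minimal sketch follows; every number in it is a hypothetical placeholder, not data from the IDC survey.

```python
# Payback sketch of the "replace after three years" argument.
# All cost figures are hypothetical placeholders.

def payback_years(new_server_cost, old_annual_cost, new_annual_cost):
    """Years until the new server's annual savings cover its purchase price."""
    annual_savings = old_annual_cost - new_annual_cost
    if annual_savings <= 0:
        return float("inf")  # the replacement never pays for itself
    return new_server_cost / annual_savings

# Hypothetical aging server: energy + maintenance + downtime costs rise
# with failure rate; a new, more efficient server runs cheaper.
old_cost = 4200.0   # assumed annual cost of keeping the old server
new_cost = 1800.0   # assumed annual cost of the replacement
price = 2000.0      # assumed purchase price of the replacement

print(f"payback: {payback_years(price, old_cost, new_cost):.2f} years")
```

With these illustrative numbers the payback period is under a year, which is the shape of the ROI claim the survey makes for three-year-old equipment.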
Figure 15 ‐ Effect of Time on Server Reliability 66
Perry, R., Pucciarelli, J., Bozman, J., Scaramella, J. “The Cost of Retaining Aging IT Infrastructure.” HP. 2012. http://h18006.www1.hp.com/storage/pdfs/4AA3‐9351ENW.pdf 67
Stutz, M., O'Connell, S., & Pfluefer, J. “Carbon footprinting of a typical Dell rack server.” International Symposium on Sustainable Systems and Technology. May 2012. Boston, MA.
7.4. Design for End of Life
A significant mass of electronic equipment reaches the end of life and is discarded every year. A 2005 paper estimated that global electronic waste generation was on the scale of 20‐50 million tons per year68, with approximately 40 thousand tons of this waste from end‐of‐life servers. This value is projected to continue to increase as more servers reach the end of life through both failure and obsolescence driven by rapid technology advancements. Servers have traditionally been designed for rapid repair and easy upgrade to ensure minimal downtime. Many of the components are therefore hot swappable, or able to be removed and changed while the server continues to run. The traditional repairable and modular design provides the secondary benefit of simple separation at the server's end of life. Servers are therefore easily separated into recyclable material streams, or easily upgraded to extend the product life. The challenges associated with upgrades are further detailed in section 7.5.2. This ability to disassemble the server to the component level was also seen in the GIS demanufacturing analysis covered in section 7.1.1. The Dell server studied in that analysis had a very modular design, with many of the major components retained by quick‐release attachments. This design allows for quick and cost‐effective servicing of components with little or no downtime during the use phase, and complete removal of all major components at end‐of‐life. In the study, the total disassembly time was only 8.2 minutes. The quick‐release latches had either light blue or orange colored tabs, which made them easy to locate and identify. Wire harnesses were also clearly labeled for easy identification, with quick‐release connectors that did not require tools to remove. The motherboard was mounted to a large steel frame which could be removed with a few T15 Torx screws. The modular design of components and the use of quick‐release clips instead of threaded fasteners facilitate manual disassembly as an option prior to mechanical separation. This separation of components containing high‐value materials can potentially maximize the value recovered.
7.5. End of Life Management
7.5.1. Server End‐of‐Life
The European Union (EU) has implemented legislation (the WEEE – Waste Electrical and Electronic Equipment directive) to control the disposition of end‐of‐life electronics. The WEEE directive requires manufacturers to report the material content of products and to support environmentally sound end‐of‐life processing. Within the United States, there is also a move towards legislation to 68
“Environmental Alert Bulletin: E‐waste, the hidden side of IT equipment’s manufacturing and use.” United Nations Environment Programme. Jan, 2005. http://www.grid.unep.ch/products/3_Reports/ew_ewaste.en.pdf
prevent the dumping of end‐of‐life electronics into municipal waste streams, and many companies are taking proactive steps to provide for collection and processing of end‐of‐life products. In the recent past, some end‐of‐life electronics have been shipped from developed countries to developing countries that do not have stringent environmental requirements; publicity around these practices has raised worldwide concern, resulting in increased monitoring by NGOs and increased oversight by western governments. The increased legislation, oversight, and public and corporate sensitivities are resulting in improvements in the environmental impact of end‐of‐life electronics; however, there are technical, logistic, and economic limitations on the effectiveness of end‐of‐life processing practices. There are a variety of sources of “best practice” information on electronics design to decrease end‐of‐life environmental impacts through remanufacturing, recycling, and recovery. There is a significant body of literature in the area of product, and more specifically electronic product, recycling and material recovery; this research can be grouped into several areas. One area describes the state of the art in recycling processes; this includes broad reviews as well as detailed evaluations of particular processes.69,70,71,72 A second group of literature looks at the economic and/or environmental aspects of recycling; this is of particular interest, as free‐market recycling will not survive if it is not economically viable.73,74,75,76,77,78 A 69
Cui, J., Forssberg, E., “Mechanical recycling of waste electric and electronic equipment: a review,” Journal of Hazardous Materials, Vol. 99, No. 3, pp 243‐263, May 2003. http://www.sciencedirect.com/science/article/pii/S030438940300061X 70
Kang, H.‐Y., and Schoenung, J.M., “Electronic waste recycling: A review of U.S. infrastructure and technology options,” Vol. 45, No. 4, Dec 2005, pp. 368‐400. http://aix.meng.auth.gr/pruwe/dhmosieuseis/weee_usa.pdf 71
Hageluken, C., “Recycling of electronic scrap at Umicore’s integrated metals smelter and refinery,” Proceedings of EMC, Vol. 1, p. 307, 2005. http://www.preciousmetals.umicore.com/PMR/Media/e‐
scrap/show_recyclingOfEscrapAtUPMR.pdf 72
Li, J., et. al., “Printed Circuit Board Recycling: A State‐of‐the‐Art Survey,” IEEE Transactions on Electronics Packaging Manufacturing, Vol. 27, No. 1, pp 33‐42, Jan. 2004. http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=1331573&userType=&tag=1 73
Boon, J.E., Isaacs, J.E., and Gupta, S.M., “Economics of PC Recycling,” Proceedings of the SPIE International Conference on Environmentally Conscious Manufacturing, Boston, MA, Nov. 5‐8, pp. 29‐35, 2000. http://proceedings.spiedigitallibrary.org/proceeding.aspx?articleid=927163 74
Sodhi, M.S., and Reimer, B., “Models for recycling electronics end‐of‐life products,” OR Spectrum, Vol 23, No. 1, Feb, 2001. http://aix.meng.auth.gr/helcare/ScareEng/Papers/%D7%D1%C7%D3%C9%CC%CF%20‐
%20Models%20for%20recycling%20electronics.pdf 75
Kang, H.‐Y., and Schoenung, J.M., “Estimation of future outflows and infrastructure needed to recycle personal computer systems in California,” Journal of Hazardous Materials, Vol. 137, Issue 2, pp. 1165‐1174, Sept 2006. http://www.sciencedirect.com/science/article/pii/S0304389406003360
third group of literature attempts to evaluate the suitability of a particular design for recycling.79,80 These references from Villalba et al. provide a metric for material recyclability that takes into account the value of the post‐recycled material as compared to the original material value; a second index that takes into account the cost of disassembly provides an overall recyclability metric. This approach does not provide visibility into the design factors that affect disassembly cost. In general, most of the components of e‐waste have some economic value as part of the recovery or recycling process; however, the costs associated with transportation, disassembly, and separation can very quickly exceed the potential material recovery value.81 In addition to proper material selection, design for disassembly is critical to cost‐effective recycling of electronics. There is robust literature in the area of design for disassembly and disassembly planning for remanufacturing that is also applicable to recycling. Bras and McIntosh (1999)82 provide an overview of the early research in this field. The work includes models that can be used to optimize assembly or disassembly processes for a particular design; these models can also be used for design optimization. This continues to be an area of significant research interest, and recent work has begun to apply some of these techniques to recycling applications.83,84,85,86 76
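The two Villalba-style indices described above can be sketched in a few lines: material recyclability as the ratio of recovered material value to original material value, and an overall index that nets out the cost of disassembly. This is a simplified reading for illustration; the exact formulations in the cited papers differ, and all dollar values below are hypothetical.

```python
# Simplified sketch of recyclability indices in the spirit of Villalba et
# al.; not the papers' exact formulas. All values are hypothetical.

def material_recyclability(recycled_value, original_value):
    """Recovered material value relative to the original material value."""
    return recycled_value / original_value

def overall_recyclability(recycled_value, original_value, disassembly_cost):
    """Net recovered value, after disassembly cost, relative to original value."""
    return (recycled_value - disassembly_cost) / original_value

# Hypothetical component: $5.00 of original material value, $2.00
# recoverable after recycling, $0.60 of disassembly labor.
print(material_recyclability(2.00, 5.00))
print(round(overall_recyclability(2.00, 5.00, 0.60), 2))

# A negative overall index flags the case noted in the text: disassembly
# and separation costs exceeding the recoverable material value.
print(round(overall_recyclability(0.50, 5.00, 0.60), 2))
```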
Gregory, J.R., and Kirchain, R.E., “A Framework for Evaluating the Economic Performance of Recycling Systems: A Case Study of North American Electronics Recycling Systems,” Environmental Science and Technology, 42 (18), pp. 6800‐6808, 2008. http://pubs.acs.org/doi/abs/10.1021/es702666v 77
Choi, B.‐C., et. al., “Life Cycle Assessment of a Personal Computer and its Effective Recycling Rate,” The International Journal of Life Cycle Assessment, Vol. 11, No. 2, March, 2006. http://psp.sisa.my/elibrary/attachments/441_11LifecycleAssessment.pdf 78
Schmidt, M., “A production‐theory‐based framework for analysing recycling systems in the e‐
waste sector,” Environmental Impact Assessment Review, Vol. 25, Issue 5, pp. 505‐524, July 2005. http://www.sciencedirect.com/science/article/pii/S0195925505000545 79
Villalba, G., et. al., “A proposal for quantifying the recyclability of materials,” Resources, Conservation, and Recycling, Vol. 37, no. 1, pp. 39‐53, 2002. http://www.sciencedirect.com/science/article/pii/S0921344902000563 80
Villalba, G., et. al., “Using the recyclability index of materials as a tool for design for disassembly,” Ecological Economics, Vol. 50, Issues 3‐4, pp 195‐200, Oct. 2004. http://www.sciencedirect.com/science/article/B6VDY‐4DBSX38‐
1/2/9e10b1f37aa9be4a8dede9eb647374f2 81
Personal communications with Mike Whyte, President of Regional Computer Recycling and Recovery, Rochester, NY, Aug 28, 2009. 82
Bras, B., and McIntosh, M.W., “Product, process, and organizational design for remanufacture – an overview of research,” Robotics and Computer‐Integrated Manufacturing, Vol. 15, Issue 3, pp. 167‐178, June 1999. http://www.sciencedirect.com/science/article/pii/S0736584599000216 83
Tang, Y., et. al., “Disassembly Modeling, Planning, and Application,” Journal of Manufacturing Systems, Vol. 21, Issue 3, pp. 200‐217, 2002. http://www.sciencedirect.com/science/article/pii/S0278612502801625
Product or component reuse or remanufacturing recovers the greatest amount of the energy embedded in the original product. However, for electronic products, the rapid change in performance and features, coupled with a high level of integration and changes in product architectures, limits third‐party reuse and remanufacturing. Material selection, tight coupling of dissimilar materials, the presence of hazardous materials, and the cost of disassembly create technical and economic challenges that limit material recovery and recycling options.
7.5.2. End of Life Management
In order to combat the e‐waste stream from servers and other electronic products, manufacturers have created both recycling and remanufacturing programs. Dell87, HP88, and IBM89 have set up recycling programs for their equipment. Little operational data is publicly available; however, Hewlett Packard detailed its e‐waste handling process in 2008 testimony before a House Committee90, and that testimony is reproduced below as an example program. “The equipment returned to HP is managed through a network of partners and service providers who perform the recycling of the equipment. HP formerly partnered with a large electronics recycling company to operate two recycling centers in the US; our partner now operates these facilities with the assistance of HP. HP invested in the development of those recycling centers in order to directly participate and lead the development of the types of technology and processes necessary to recycle used electronics to the environmental and data 84
Veerakamolmal, P., and Gupta, S., “A case‐based reasoning approach for automating disassembly process planning,” Journal of Intelligent Manufacturing, Vol. 13, No. 1, pp. 47‐60, Feb, 2002. DOI: 10.1023/A:1013629013031. http://www.springerlink.com/content/u173h5g4604h1836/ 85
Campbell, M.I., and Hasan, A., “Design Evaluation Method for the Disassembly of Electronic Equipment,” Proceedings of the International Conference on Engineering Design, Stockholm, Aug 19‐21, 2003. http://www.me.utexas.edu/~campbell/pubs/conf/ICED2003abstract_dfd.htm 86
Rios, P.J., and Stuart, J.A., “Scheduling Selective Disassembly for Plastics Recovery in an Electronics Recycling Center,” IEEE Transactions on Electronics Packaging Manufacturing, Vol. 27, Issue 3, pp 187‐197, July, 2004. http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=01393074 87
Design: Design for Recyclability. Dell, 2012. Web. 12 Jun 2012. http://content.dell.com/us/en/corp/d/corp‐comm/designing‐green‐recycling.aspx 88
“HP Renew Program – North America.” HP, 2012. Web. 12 Jun 2012. http://www.hp.com/united‐states/renew/ 89
“IBM Product Take Back for Recycling.” IBM, 2012. Web. 12 Jun 2012. http://www.ibm.com/ibm/recycle/us/index.shtml 90
Testimony of Hewlett‐Packard Company Before the House Committee on Science and Technology for the Hearing on “Electronic Waste: Can the Nation Manage Modern Refuse in the Digital Age?”, April 30, 2008. http://www.hp.com/hpinfo/abouthp/government/us/pdf/Sciencecommitteetestimony.pdf
security standards we require. Over time, an infrastructure has started to emerge which has created an ability for HP to reduce our focus on the actual recycling operation and to renew our focus on the design of products which are easier to recycle and can include recyclable commodities in their manufacture as well as the development of recycling services for our customers. Any reusable equipment is segregated. From there, any customer data is destroyed and the equipment is then reused either in whole or in part. Equipment without a reuse channel is sent for removal of any hazardous components (typically CRT glass, batteries or other elements). After removal of any hazardous components the equipment is either manually or mechanically separated into a variety of basic commodities: various types of precious and base metals, plastics and other constituent materials. These materials are processed in the separation process to create valuable commodity streams which are then sold for reuse into a variety of industrial processes. These include the manufacture of new parts and products for a number of industries, including, in some cases, the electronics industry.” One major concern is that end‐of‐life materials are handled and recycled responsibly. A few standards and initiatives exist to help guide and direct responsible recycling. These standards relate to material recyclers and as such are outside the scope of this report, but they are briefly mentioned here for informational purposes.
Currently two accredited certification standards are widely recognized: the Responsible Recycling Practices (R2)91 and e‐Stewards®92 standards. These standards assess an electronics recycler's environmental, worker health and safety, and security practices. Many organizations encourage the use of certified recyclers to assure that a product is handled responsibly at its end of life. HP93 and IBM both have active remanufacturing operations to modernize old equipment. The following is from a recent IBM press release94: “ARMONK, N.Y. ‐ 29 Feb 2012: IBM (NYSE: IBM) today announced the opening of the first‐ever server remanufacturing center in China. The new center, located in Shenzhen, will help reduce the impact of e‐waste on the environment by extending the life of older IT equipment that otherwise would go into landfills. IBM will also buy back select IBM Power Systems from clients as they upgrade to new IBM equipment. 91
http://www.r2solutions.org/ 92
http://e‐stewards.org/ 93
“HP Renew Program – North America.” HP, 2012. Web. 12 Jun 2012. http://www.hp.com/united‐states/renew/ 94
“IBM News Room.” IBM Opens the First Server Remanufacturing Center in China. IBM, Feb 2012. Web. 4 Jun 2012. http://www‐03.ibm.com/press/us/en/pressrelease/36976.wss
The new facility expands IBM's global remanufacturing and refurbishment operations in Australia, Singapore, Japan, Brazil, Canada, France, Germany and the United States. The Shenzhen facility will initially remanufacture hundreds of mid‐range IBM Power Systems, which are reconditioned, tested and certified using rigorous processes and original manufacturing standards, or rebuilt to meet specific customer requirements. The facility will rapidly expand to remanufacture 100,000 PCs and low‐end and mid‐range IBM and non‐IBM servers per year by 2014.”
Brill claims that such remanufacturing, or ‘tuning up,’ is very low‐risk and can promote a 20% improvement in efficiency for relatively little capital investment.95 Some next‐generation processor and component improvements are incompatible with prior equipment, limiting the potential for remanufacturing. Though the carbon footprint improvement due to improved energy efficiency is well documented (see section 6), there is limited life cycle data on the benefits of material savings and end‐of‐life material recovery due to remanufacturing. Further study of this topic could provide useful information concerning multiple environmental tradeoffs.
7.6. Energy Conservation
Energy consumption by the world's servers and data centers is significant. The EPA reported to Congress that servers and data centers consumed an estimated 61 billion kilowatt‐hours (kWh) in 2006, or 1.5% of total U.S. electricity consumption. This was more than double the energy consumption of servers and data centers just six years prior.96 The economic downturn in 2008‐2009, the effort by industry to improve efficiency, and the increased prevalence of virtualization have slowed the rate of increase of data center energy consumption. However, according to Koomey 201197, U.S. and world data center electricity use grew by about 36% and 56%, respectively, from 2005 to 2010. It is therefore estimated that data centers consumed about 1.3% of world electricity use (238 billion kWh) and 2% of U.S. electricity use (76.3 billion kWh) in 2010. One caveat to the increased power consumption, however, is that servers are also increasing in performance. A modern server, when compared to an older model, was observed to have 45% better performance at the cost of only 9% greater power consumption. The combination of increased performance and 95
Brill, Kenneth G. 2007. Data Center Energy Efficiency and Productivity. Santa Fe, NM: The Uptime Institute. www.uptimeinstitute.org/symp_pdf/(TUI3004C)DataCenterEnergyEfficiency.pdf. 96
“Report to Congress on Server and Data Center Efficiency, Public Law 109‐431.” US EPA. 2007. http://www.energystar.gov/index.cfm?c=prod_development.server_efficiency_study 97
Jonathan Koomey. 2011. Growth in data center electricity use 2005 to 2010. Oakland, CA: Analytics Press. July. http://www.analyticspress.com/datacenters.html The midpoint data between the Low and High cases shown in Tables 2 and 3 is presented here.
cost yields an annual 36% increase in a server's performance per watt.98 In addition to the large gains in computational performance per watt, other trends have emerged in recent years that are helping reduce server energy usage. A typical breakdown of energy usage is shown in Figure 16.
Figure 16 ‐ Server Power Consumption. Source: “The Problem of Power Consumption in Servers.” Intel, 2009.99
7.6.1. PSU Efficiency Standards
A significant contributor to server power consumption is the server power supply unit, or PSU. A server uses a PSU to convert a data center's AC or DC electricity input to a low‐voltage output so the server's internal components may operate at their rated voltage, usually 5VDC.100 In this sense, a PSU acts as both a miniature transformer (stepping the voltage from one level to another) and a regulator (taking an AC input and regulating it to a DC output using switches). Both transformers and regulators are inherently inefficient and shed energy in the form of heat. The magnitude of PSU efficiency loss is also illustrated in Figure 16. In order to better understand the inefficiency of server PSUs, ECOS Consulting and the Electric Power Research Institute (EPRI) tested a variety of common servers at an input of 230VAC and extrapolated the individual results to the data center level for comparison purposes. The results of their study 98
Koomey, J., Belady, C., Patterson, M., Santos, A., & Lange, K. “Assessing Trends Over Time In Performance, Costs, And Energy Use For Servers.” Microsoft Corp. and Intel Corp. 2009. http://www.intel.com/assets/pdf/general/servertrendsreleasecomplete‐v25.pdf 99
“The Problem of Power Consumption in Servers.” Intel, 2009. http://www.intel.com/intelpress/articles/The_Problem_of_Power_Consumption_in_Servers.pdf 100
Meisner, D., Gold, B., Wenisch, T. "PowerNap: Eliminating Server Idle Power." ASPLOS. March 2009. Washington DC. http://web.eecs.umich.edu/~meisner/David_Max_Meisner/Home_files/asplos09.pdf

detailed the inherent inefficiencies present in the server market. The two organizations used these results to establish the 80 PLUS standard, which outlines the basic efficiency and power factor requirements a PSU must meet to qualify. The baseline qualification is 80% efficiency at all tested load levels; higher ratings (Bronze, Silver, Gold, etc.) require correspondingly higher efficiencies. The standard also sets a minimum expected power factor, the ratio of real power to apparent power drawn by the PSU. At the time of the study's release, only three of the sampled PSUs met the Bronze rating; today a much larger number of servers qualify for it, and some models qualify for the newly established Platinum and Titanium ratings.101 The rating levels for the 80 PLUS standard are outlined below:102

                      Efficiency at load level       Power factor at load level
                      10%    20%    50%    100%      10%    20%    50%    100%
  80 PLUS             -      80%    80%    80%       -      -      -      0.9
  80 PLUS Bronze      -      81%    85%    81%       -      -      0.9    0.9
  80 PLUS Silver      -      85%    89%    85%       -      -      0.9    0.9
  80 PLUS Gold        -      88%    92%    88%       -      -      0.9    0.9
  80 PLUS Platinum    -      90%    94%    91%       -      -      0.95   0.95
  80 PLUS Titanium    90%    94%    96%    91%       -      0.95   0.95   0.95

7.6.2. Processor Energy Use
After electricity enters the server, over half of it is used by the processor(s) and attached onboard memory (SDRAM). The processor steps through varying levels of utilization, a metric describing how much of the processor's capacity is in use at any point in time. At 0% utilization, the processor is considered idle; at 100% utilization, it is being fully used. A processor's power draw scales from its idle power draw to its maximum power draw as utilization increases. Most processors average 10-22% utilization during regular use.103 The energy use of the latest Intel processors was provided by Intel during the Technical Committee meeting (Figure 17). 101
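The utilization-dependent power profile described above can be combined with a PSU efficiency figure, such as those in the 80 PLUS tiers, in a simple model. The sketch below is illustrative only: the idle and peak wattages and the fixed efficiency values are assumptions for demonstration, not measurements of any particular server.

```python
# A minimal linear server power model (a common approximation, not a
# measured profile): DC power scales linearly between idle and peak
# draw with utilization, and dividing by PSU efficiency gives the AC
# power drawn at the wall. All numbers are illustrative assumptions.

def dc_power(utilization, p_idle=120.0, p_peak=300.0):
    """DC power (watts) drawn by the server at a utilization in [0, 1]."""
    return p_idle + (p_peak - p_idle) * utilization

def wall_power(utilization, psu_efficiency):
    """AC power at the wall, accounting for PSU conversion losses."""
    return dc_power(utilization) / psu_efficiency

# At the typical 10-22% utilization, the server sits near its idle draw,
# so PSU efficiency at low-to-mid load matters most.
for u in (0.10, 0.22, 1.00):
    base = wall_power(u, 0.80)  # 80 PLUS baseline efficiency
    gold = wall_power(u, 0.92)  # 80 PLUS Gold efficiency at 50% load
    print(f"utilization {u:4.0%}: {base:6.1f} W at 80% eff, "
          f"{gold:6.1f} W at 92% eff, saving {base - gold:5.1f} W")
```

In practice a PSU's efficiency itself varies with load, as the 80 PLUS tiers show, so a fuller model would interpolate the efficiency curve rather than hold it fixed.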
"Efficient Power Supplies for Data Center and Enterprise Servers." ECOS and EPRI. Feb 2008. http://www.etcc-ca.com/images/stories/pdf/ETCC_Report_467.pdf 102
"80 PLUS Certified Power Supplies and Manufacturers." Plug Load Solutions, May 2012. Web. 29 May 2012. http://www.plugloadsolutions.com/80PlusPowerSupplies.aspx 103
Koomey, J., Belady, C., Wong, H., Snevely, R., Nordman, B., Hunter, E., Lange, K., Tipley, R., Darnell, G., Accapadi, M., Rumsey, P., Kelley, B., Tschudi, B., Moss, D., Greco, R., Brill, K. "Server Energy Measurement Protocol." 2006. http://www.energystar.gov/ia/products/downloads/Finalserverenergyprotocol-v1.pdf

Figure 17 - Intel Processor Performance. Provided by Henry Wong, Server Technical Committee Meeting, Houston, Texas, July 31, 2012.

There has been increasing interest in dynamic voltage and frequency scaling (DVFS), a technology that allows a processor's clock speed to decrease in proportion to its load level. A study by Fan, Weber, and Barroso suggests that such technology could offer substantial energy savings at both low and high loads. Their study evaluated a DVFS protocol that scaled the CPU's power output from idle to the 5%, 20%, and 50% load levels, representing a passive, active, and aggressive DVFS setup, respectively. Using the passive 5% cutoff for scaling, a 10% reduction in peak power can be realized, while the aggressive 50% cutoff could realize a reduction of 18%. These values correspond to energy savings of greater than 12% and 22%, respectively.104 The study also investigated the effects of DVFS on an idle server and assumed that idle power draw would be reduced to about 10% of peak power. This would create a total reduction of 30% in the realized peak power and a reduction of 50% in the required energy. At the time of writing, several server 104
Fan, X., Weber, W., Barroso, L. "Power Provisioning for a Warehouse-sized Computer." ISCA. June 2007. San Diego, CA. doi:10.1145/1273440.1250665. http://reference.kfupm.edu.sa/content/p/o/power_provisioning_for_a_warehouse_sized_83189.pdf

manufacturers use processors with this technology; in contrast, in 2007, only about 10% of servers had the ability to use DVFS.105

7.7. Packaging
Publicly available information concerning the packaging of servers is very sparse; packaging varies from product to product and with the number and configuration of servers ordered by the client. Many servers are also shipped to value-added resellers (VARs), which, after providing some service, may repackage the servers and ship them to the client. This topic was discussed at the Server Technical Committee meeting (Houston, Texas, July 31, 2012), and general information was provided by members of the committee. Members generally agreed that single server units are shipped in corrugated boxes with padding, which may be foam or corner bracing. These boxes are then generally stacked on reusable pallets for shipment to the customer. An alternate configuration is to pre-install the servers into racks and then build custom crates to protect the rack-and-server assembly. Typically, packaging is not returned to the manufacturer. One manufacturer noted that many servers are built in China, and it would be uneconomical to return the packaging to the manufacturing site. Given the travel distance, the question was asked whether it would be better to reuse the packaging or to recycle it. No committee member knew the answer, and further study would be required. 105
"Report to Congress on Server and Data Center Efficiency, Public Law 109-431." US EPA. 2007. http://www.energystar.gov/ia/partners/prod_development/downloads/EPA_Datacenter_Report_Congress_Final1.pdf

8. Server Environmental Standards and Labels
Environmental labels are often used by manufacturers to communicate to customers that their products meet certain environmental standards. These standards can be developed by private entities, by public agencies, or jointly by stakeholders and experts from the public and private sectors. They vary in the number of required criteria, the level of data detail, and reporting requirements. Very few environmental standards address servers directly, and therefore the search for environmental standards was expanded to include other similar IT equipment. The following list contains environmental standards that apply to computers, servers, and data centers. It is a summary drawn from this report's companion document, "Master list of server standards June 2012.xlsx." The Master List was built upon the foundational document "Server ecolabel comparison 8 15 2011.xls" provided by Holly Elwood of the U.S. EPA. The Master List also details, for each environmental standard, its coverage of criteria such as sensitive materials, material selection, design for end of life, longevity, energy conservation, end-of-life management, corporate responsibility, packaging, life cycle assessment (LCA), and noise.

- Good Environmental Choice Australia (GECA): GECA 24-2008, The Australian Ecolabel Program, Good Environmental Choice Australia Standard, Computers. http://www.geca.org.au/
- EU Ecolabel: Commission Decision (2011/337/EU), June 2011, establishing the ecological criteria for the award of the EU Ecolabel for personal computers; Application Form and Guidance Document for Notebook Computers, Version 1.0, 2012. http://ec.europa.eu/environment/ecolabel/
- Blue Angel: RAL-UZ 161, Energy-Conscious Data Centers, July 2011. http://www.blauer-engel.de/
- Blue Angel: RAL-UZ 78a, Personal Computers (Desktop Computers, Integrated Desktop Computers, Workstations, Thin Clients), January 2012. http://www.blauer-engel.de/
- Hong Kong Green Label: GL-006-001, Hong Kong Green Label Scheme Product Environmental Criteria for Personal Computers (excluding monitor). http://www.greencouncil.org/eng/index.asp
- Japan Eco Mark: Eco Mark Product Category No. 119, Personal Computers, Version 2.7, October 2011. http://www.ecomark.jp/english/
- Environmental Choice New Zealand: The New Zealand Ecolabelling Trust Licence Criteria for Personal Computers, EC-27-05. http://www.enviro-choice.org.nz/
- Nordic Swan: Nordic Ecolabelling of Computers, Version 6.1, 8 June 2009 – 30 June 2012. http://www.svanen.se/en/Nordic-Ecolabel/
- Green Choice Philippines: NELP-GCP-20080022, Desktop Computer. http://www.pcepsdi.org.ph/
- TCO Development: TCO Certified Desktops 3.0, March 2010. http://www.tcodevelopment.com/
- Global Environmental Standard: GES/CO/2011, International Sustainability and Environmental Product Standard, Computers. http://www.globalenvstandard.org
- EPEAT: IEEE Std 1680.1-2009, IEEE Standard for Environmental Assessment of Personal Computer Products, 9 December 2009. http://www.epeat.net/resources/criteria-verification/
- The Eco Declaration (TED): Standard ECMA-370, 4th Edition, June 2009. www.ecma-international.org/publications/standards/Ecma-370.htm
- ENERGY STAR: ENERGY STAR Program Requirements for Computers, Version 5.0 (effective July 1, 2009). http://www.energystar.gov/ia/partners/prod_development/revisions/downloads/computer/Version5.0_Computer_Spec.pdf
- ENERGY STAR: ENERGY STAR Program Requirements for Computer Servers, Draft 2, Version 2.0 (2010-04-12). http://www.energystar.gov/ia/partners/prod_development/revisions/downloads/computer_servers/Servers_Draft_2_v2_Specification.pdf?04f6-df81
- UL 2640: PAR4 (servers). http://www.ul.com/global/eng/pages/corporate/aboutul/publications/newsletters/hightech/vol2issue3/4par/
- EICC Code of Conduct: EICC Code of Conduct, Version 4.0 (2012). http://www.eicc.info/documents/EICCCodeofConductEnglish.pdf
- Greenpeace Guide to Greener Electronics: Guide to Greener Electronics, Ranking Criteria Explained, August 2011, v. 17 onwards. http://www.greenpeace.org/international/Global/international/publications/climate/2011/Cool%20IT/greener-guide-nov-2011/Guide%20Ranking%20Criteria.pdf?id=

8.1. Key Acronyms
AC - Alternating current
ASHRAE - American Society of Heating, Refrigerating and Air-Conditioning Engineers
BFRs - Brominated flame retardants
CRAC - Computer room air conditioner
CPU - Central processing unit
DC - Direct current
DIMM - Dual in-line memory module
DVFS - Dynamic voltage and frequency scaling
ECC - Error correcting code
Gb - Gigabyte
GHG - Greenhouse gas
GRI - Global Reporting Initiative
HDD - Hard disk drive
IEEE - Institute of Electrical and Electronics Engineers, Inc.
IDC - International Data Corporation
ISO - International Organization for Standardization
IT - Information technology
LCA - Life cycle assessment
LED - Light emitting diode
OS - Operating system
PCF - Product carbon footprint
PCI - Peripheral component interconnect
PSU - Power supply unit
REACH - European Union Regulation (EC) No 1907/2006: Registration, Evaluation, Authorization and Restriction of Chemicals
RoHS - European Union, European Council former Directive 2002/95/EC, as amended by 2005/618/EC and 2011/65/EU of the European Parliament and of the Council, on the restriction of the use of certain hazardous substances in electrical and electronic equipment
ROI - Return on investment
SDRAM - Synchronous dynamic random-access memory
SSD - Solid state drive
SVHC - Substances of very high concern
TRI - U.S. EPA Toxics Release Inventory
U - Rack unit height (1.75 inches)
USB - Universal serial bus
U.S. EPA - United States Environmental Protection Agency
WEEE - Waste electrical and electronic equipment