Intel Corporation's High Density Data Center
An Operational Review
Paul Vaccaro / Intel – Data Center Architect
David Seger / IDC Architects – Principal Mechanical Technologist
Legal Notices
This presentation is for informational purposes only.
INTEL MAKES NO WARRANTIES, EXPRESS OR IMPLIED, IN THIS SUMMARY.
Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.
* Other names and brands may be claimed as the property of others.
Copyright © 2013, Intel Corporation. All rights reserved.
JFS1 High Density Data Center
HPC for Silicon Design
Global Design Computing Environment
[Diagram: three site tiers connected by latency/bandwidth-constrained links.
 Large Site Datacenter (Hub): user systems, CCC systems, interactive compute servers, large memory compute servers (interactive), batch compute servers, infrastructure servers, storage & backup; supplies interactive & batch compute cycles.
 Medium Site Datacenter: user systems, CCC systems, interactive compute servers, large memory compute servers, infrastructure servers, storage & backup; receives batch cycles from the Hub.
 Small Site Closet: user systems, CCC systems, interactive compute; storage and infrastructure (backup) servers; receives batch cycles from the Hub.]
HPC for Silicon Design
Growth of Design

Intel Design Computing Capacity      2006       2012
EDA-MIPS                           75,392    533,554
Linux Compute Servers              38,927     62,137
Number of Cores                   132,282    451,990
Compute Batch Utilization            ~58%       ~86%

This growth is specific to the Intel silicon design engineering environment and does not include overall corporate IT demand.

[Chart: Silicon Design Compute and Storage Demand vs. Utilization; series: Compute Servers (K), EDA-MIPS (10K), Storage (PB)]
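As an arithmetic cross-check of the capacity table, the 2006-to-2012 growth multiples show compute demand far outpacing server count. A minimal sketch (figures are taken from the slide; the variable names are ours):

```python
# 2006 vs. 2012 silicon-design capacity figures from the table above.
capacity = {
    "EDA-MIPS":              (75_392, 533_554),
    "Linux compute servers": (38_927, 62_137),
    "Cores":                 (132_282, 451_990),
}

for metric, (y2006, y2012) in capacity.items():
    print(f"{metric}: {y2012 / y2006:.1f}x growth")
# EDA-MIPS grows ~7.1x while the server count grows only ~1.6x:
# compute density per server rose sharply over the same period.
```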
Data Center Compute Dashboard
Site and Facility Configuration
[Site plan: five data center modules (A, B, C, D, E) arranged along a circulation spine with office space; supporting infrastructure includes generators, chilled water storage tanks, a low-temperature chiller plant, and a high-temperature chiller plant.]
Chilled Water Plant: Waterside Economizer
[Schematic: cooling tower, condenser water pump, chiller (condenser and evaporator), waterside-economizer heat exchanger, and chilled water pump; chilled water returns from the data center and is supplied back to the data center.]
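A waterside economizer of this kind switches between free cooling and mechanical cooling based on outside-air wetbulb temperature. A minimal control sketch, assuming illustrative setpoints (the actual JFS1 thresholds and approach temperature are not stated in the deck):

```python
def cooling_mode(wetbulb_f: float,
                 chw_supply_setpoint_f: float = 65.0,
                 approach_f: float = 7.0) -> str:
    """Pick a plant mode from outside-air wetbulb (deg F).

    Illustrative logic only: a cooling tower can produce water a few
    degrees (the 'approach') above wetbulb, so when wetbulb + approach
    is at or below the chilled-water setpoint, the heat exchanger can
    carry the whole load and the chiller stays off.
    """
    tower_water_f = wetbulb_f + approach_f
    if tower_water_f <= chw_supply_setpoint_f:
        return "economizer"          # 100% free cooling via heat exchanger
    elif tower_water_f <= chw_supply_setpoint_f + 10.0:
        return "partial economizer"  # pre-cool; chiller trims the rest
    return "mechanical"              # chiller carries the load

print(cooling_mode(45.0))  # economizer
print(cooling_mode(75.0))  # mechanical
```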
Chilled Water Plant Economization
2,000,000 1,800,000 1 600 000
1,600,000 1,200,000 1,000,000 ,
,
800,000 600,000 400,000 200,000 ‐
23 25 27 29 31 33 35 37 39 41 43 45 47 49 51 53 55 57 59 61 63 65 67 69 71 73 75 77 79
Outside Air Wetbulb (Deg F)
Mechanical Cooling
Economizer Cooling
(6,415,500 Ton-Hours)
(14,608,500 Ton-Hours)
Ton‐‐Hours
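From the annual totals on the economization chart, the fraction of cooling delivered without running a chiller is straightforward arithmetic:

```python
economizer_ton_hours = 14_608_500   # free cooling, from the chart
mechanical_ton_hours = 6_415_500    # chiller cooling, from the chart

total = economizer_ton_hours + mechanical_ton_hours
free_fraction = economizer_ton_hours / total
print(f"{free_fraction:.1%} of annual ton-hours via the economizer")
# -> 69.5% of annual ton-hours via the economizer
```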
Data Center Modules
[Section view: recirculation air handlers, cooling coils & filtration, electrical distribution, and the air pathway to the IT space.]
Hot Aisle Isolation
[Diagram: isolated hot aisle and cold aisle; non-isolated low-power IT.]
Airflow Model – White Space
[CFD temperature map of the white space; scale from <65 to >100 deg F; network racks indicated.]
Airflow Model – Utility Level
[CFD air speed map of the utility level; scale from <0 to >600 ft/min.]
Temperature & Pressure Control Zones
Cascading Energy Efficiency
 Hot aisle containment
 Increased supply air temperature
 High-temperature chilled water system
 Variable speed infrastructure fans
 Infrastructure fan output tracks IT airflow demand
 Water side economizer
 Variable speed chilled water pumps
 Variable speed cooling tower fans
 Adiabatic humidification
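The variable-speed items in the list above pay off because of the fan affinity laws: shaft power scales roughly with the cube of speed, so tracking IT airflow demand down even modestly cuts fan energy sharply. A quick illustration (the 80% operating point is an example, not a JFS1 figure):

```python
def fan_power_fraction(speed_fraction: float) -> float:
    """Fan affinity law: power scales roughly with the cube of speed."""
    return speed_fraction ** 3

# Slowing a recirculation fan to 80% of full speed
# drops its power draw to about half.
print(f"{fan_power_fraction(0.80):.1%} of full power")  # 51.2% of full power
```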
Energy Efficiency - PUE

Operational PUE Range (calculated):
1.35 without economizer
1.21 with 100% free cooling

JFS1 Current PUE (April 12, 2013):
Total Load: 6,217 kW
IT Load: 5,114 kW
Facility Load: 1,103 kW
Current PUE: 1.22
Current DCiE: 0.812

[Gauge chart, scale 1.00 to 2.00]
Source: US Energy Information Administration, http://www.eia.gov/consumption/commercial/census-maps.cfm
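The headline metrics follow directly from the load measurements: PUE is total facility load divided by IT load, and DCiE is its reciprocal. A quick check against the April 2013 snapshot (note the deck reports DCiE as 0.812, slightly below the 1/PUE value, possibly from a different measurement interval):

```python
total_load_kw = 6_217   # total facility load, April 12, 2013
it_load_kw = 5_114      # IT load

pue = total_load_kw / it_load_kw     # Power Usage Effectiveness
dcie = it_load_kw / total_load_kw    # Data Center infrastructure Efficiency

print(f"PUE  = {pue:.2f}")   # PUE  = 1.22, matching the slide
print(f"DCiE = {dcie:.2f}")
```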
JFS1 Power Demand Forecast
[Chart: demand (MW, 0 to 14) from Dec-05 through Dec-11: customer load projections, 75% of customer projections, 50% of customer projections, PGE demand requirements, and actual peak load (total).]
Paul Vaccaro
Data Center Architect
Intel
David Seger, PE, LEED AP
Principal Mechanical Technologist
IDC Architects / CH2M Hill
503.872.4492
Thank You