RESEARCH AND DEVELOPMENT JOURNAL VOLUME 24 NO.4, 2013
Mobile and Modular Data Center
Montri Wiboonrat
College of Graduate Study in Management, Khon Kaen University, Bangkok Campus
25 Bangkok Insurance Building, 25th Floor, South Sathon Rd., Bangkok 10210, Thailand
Tel. 081-1738200 Fax 02-6774145
E-mail: montwi@kku.ac.th, mwiboonrat@gmail.com
Received 2 January 2013; accepted 28 June 2013

Abstract
The evolution of technologies in the 21st century is driving the design of the next generation data center. The new design must answer business needs such as shorter deployment time, high efficiency and effectiveness, and lower investment. This research paper concentrates on adding mobility and modularity to the data center without adversely impacting its availability, capacity, efficiency, performance, and security. A case study of consolidating 21 data centers is investigated and formulated into the container concept of the mobile and modular data center (M2DC). The M2DC concept helps organizations reduce total cost of ownership (TCO), project deployment time, and business risks, while increasing equipment and facility utilization and energy and system efficiency.
1. Introduction
The society in which we live is facing an "intelligence" era. With the continuing development and evolution of ICT comes a rapidly escalating volume of transactions and data, and as a result the data centers responsible for creating, processing, storing, exchanging, and distributing these data have become a vital part of business and social infrastructure.
At the same time, with the increasing amount of data, power consumption has also continued to grow, and energy conservation has become a new business and social priority from the perspective of environmental considerations. Operating the data center will become more and more complicated in the future, as it must handle unpredictable fluctuations in the amount of data while remaining available, flexible, reliable, and secure.
CIOs need smarter solutions for the next generation data center, for instance consolidation and virtualization. The challenge for the CIO is how to minimize the total cost of ownership (TCO) of the data center infrastructure without unfavorably impacting the availability, capacity, efficiency, performance, and security of critical systems [7]. Data center consolidation not only saves on present investment, by sharing basic facility infrastructure and utilizing resources to reach the maximum efficiency of the equipment design, but also saves future costs of electrical billing, operations management, and maintenance.
The research question came up after December 2011, when the World Bank estimated 1,425 billion Baht in economic damages and losses due to flooding, as of 1 December 2011. Many businesses, especially in the banking, telco, and retail segments, suffered from this crisis. Huge amounts of data could not be recovered or were destroyed during the flooding, and legacy data centers had to shut down operations and transfer to disaster recovery sites. What happens if an organization has only one data center? It must transfer data or servers to another site and restart operations there. Without facility infrastructure already in place, recovering the business can take a few weeks or a few months, and this does not even count the business losses during data center downtime.
The researcher, encouraged by the smart community, developed the mobile and modular data center (M2DC), which addresses various needs such as mobility, availability, security, short implementation time, reduced investment, space utilization, and increased energy efficiency and system reliability. The case study of consolidating 21 data centers deployed the concepts of mobility, modularity, and consolidation to create an M2DC prototype. The research results propose comprehensive solutions for M2DC that meet these business needs.
Table 1 Correlation between availability and downtime per year

Level of availability    Downtime per year
90%                      36.5 days
95%                      18.25 days
99%                      3.65 days
99.9%                    8.76 hours
99.99%                   52.6 minutes
99.999%                  5.26 minutes
1.1 Downtime
Resiliency is among the most common justifications for data center investments: obsolete, low-quality data centers are often an unacceptable business risk. Nevertheless, upgrading facilities isn't always the most direct or even the most effective means of making applications more available. Availability is a critical facet of the data center environment: it is the measure of how often your data and applications are ready for you to access when you need them. At the margin, investments to improve system designs and operations may yield better returns than investments in physical facilities [12]. Downtime overwhelmingly stems from application and system failures, not facility outages. Table 1 shows the amount of downtime expected for different levels of availability.
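The mapping in Table 1 is a one-line computation; here is a minimal sketch (ours, not the paper's) that reproduces it:

```python
# Minimal sketch (not part of the original paper): reproduce Table 1 by
# converting an availability level into expected downtime per year.

HOURS_PER_YEAR = 365 * 24  # 8,760 hours

def downtime_hours_per_year(availability_pct: float) -> float:
    """Expected annual downtime in hours for a given availability level."""
    return (1.0 - availability_pct / 100.0) * HOURS_PER_YEAR

if __name__ == "__main__":
    for level in (90.0, 95.0, 99.0, 99.9, 99.99, 99.999):
        hours = downtime_hours_per_year(level)
        print(f"{level}% -> {hours:8.2f} hours/year ({hours * 60:.1f} minutes)")
```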
1.2 Investment
Investments in data center capacity are a fact of business
life. Businesses require new applications to interact with
customers, manage supply chains, process transactions, and
analyze market trends. Those applications and the data they
use must be hosted in secure, mission-critical facilities. To
date, the largest enterprises have needed their own facilities
for their most important applications and data. How much
data center capacity you need and when you need it,
however, depends not only on the underlying growth of the
business but also on a range of decisions about business
projects, application architectures, and system designs
spread out across many organizations that don’t always take
data center capital into account. As a result, it’s easy to
build too much or to build the wrong type of capacity.
Data centers are also suitable long-term investments for institutional investors, as a data center is a commercial real estate facility used to host computer systems and networking components. Data centers differ from typical business office and industrial buildings because they are designed for a mission-critical role, providing reliable facility infrastructure such as power, cooling, and network communication equipment.
2. Background
Most data center designs are based on TIA 942, the Telecommunications Infrastructure Standard for Data Centers (15 April 2005), and the Uptime Institute standards, with their common references to Tier 1, Tier 2, Tier 3, and Tier 4 [1]. There is also a new standard for data center best practices, BICSI, announced on June 21, 2010 [13].
These standards cover the requirements, recommendations, and additional information that should be considered when planning and building a data center, e.g. site selection, layout analysis, thermal systems, power systems, and security. (This research discusses only Tier 2 and Tier 3.)
Each Tier is categorized by a system reliability standard, as seen in Table 2. Reliability is accomplished by creating redundancy, to prevent a single point of failure, in power supplies, backup power supplies, fiber optic communication connections, environmental controls, and security devices. The single line diagrams of the Tier 2 and Tier 3 power distribution systems are shown in Figures 1 and 2, respectively [2].

Figure 1 Tier 2 data center
Figure 2 Tier 3 data center

Table 2 Comparison of Tier system standards

                                 Tier 1         Tier 2         Tier 3                Tier 4
Performance availability (%)     99.671         99.749         99.982                99.995
Annual downtime (hours)          28.8           22             1.6                   0.4
Months to implement              3              3-6            15-20                 15-20
Architecture                     Single system  Single system  Dual system           Dual system
Power source                     N+1            N+1            N+1                   N+1
Component redundancy             N              N+1            N+1                   2(N+1)
Power distribution paths         1              1              1 active + 1 standby  2 active
Compartmentalisation             No             No             No                    Yes
Concurrently maintainable        No             No             Yes                   Yes
Fault tolerance to a single
point of failure                 No             No             No                    Yes

A new model for managing data centers amalgamates factory-style productivity, to keep costs down and implementation times short, with a more agile, innovation-focused approach to adapt to rapid change. Data center infrastructure managers believe that constant innovation represents their only chance of meeting user expectations and business demands within budgetary constraints. Data center innovation is a critical part of the direction for enterprise infrastructure functions.

2.1 Power usage effectiveness (PUE)
Power Usage Effectiveness (PUE) is a measurement metric that allows data centers to benchmark and compare their energy consumption and to baseline the consumption of the facility infrastructure against the IT (information technology) load, as shown in Figure 3, so that improvements in energy efficiency may be measured [3].
Figure 3 How to measure PUE
Most data centers operate at a PUE ≥ 2 and some at a PUE > 3. A PUE of 3 means that for every watt consumed by the computers, 2 watts are consumed by the computer room air-conditioning (CRAC), lighting, and power distribution system. The goal is thus to get PUE to approach 1.
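As a hedged illustration of the metric in Figure 3 (the numeric values below are made up, not measurements from the paper), PUE reduces to a single ratio:

```python
# Illustrative sketch of the PUE metric: total facility power divided by
# IT equipment power. PUE = 3 means 2 W of facility overhead (CRAC,
# lighting, power distribution losses) per 1 W delivered to the IT load.

def pue(total_facility_kw: float, it_load_kw: float) -> float:
    if it_load_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_load_kw

# Example with assumed values: 100 kW IT load, 200 kW of facility overhead.
print(pue(total_facility_kw=300.0, it_load_kw=100.0))  # -> 3.0
```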
2.2 New technology vs. space limitation
The challenge of data center design has changed since 2010. The evolution of power distribution, computer room air conditioning (CRAC), and network communication systems has brought higher efficiency, smaller footprints, simpler installation, and higher reliability. The new challenge in designing a next generation data center inside a 20 foot or 40 foot container is the space limitation; how to optimize and utilize space is the next challenge of M2DC. Table 3 shows the general dimensions and load limits of a 20 foot container, and the interior conceptual design layout of M2DC is demonstrated in Figure 4.

Table 3 Limitations of a 20 foot container
Figure 4 Conceptual design of M2DC

2.3 Legacy power design problems
In a legacy design, the initial power capacity equals the ultimate installed power capacity, which is 100% built from the beginning. The actual load starts at the estimated startup power requirement of around 30-40% and rises toward the expected power requirement, which is classically equal to the design power capacity. Nevertheless, the actual startup power requirement is characteristically lower than the estimated startup power requirement, and the power requirement may take a few years to reach the expected level, or may never reach it, remaining
considerably less than the design power capacity, as depicted in Figure 5.
Figure 5 Legacy power capacity design

The surplus capacity design translates directly into surplus investment: additional investment correlated with the power distribution system, electrical billing, maintenance costs, and system efficiency. The average legacy data center is eventually oversized by three times its design capacity, and by over four times its nameplate (manufacturer-listed) capacity [6].

2.4 Legacy cooling design problems
Thailand has a tropical monsoon climate and the highest average temperature of any country in the world. Temperatures in Thailand regularly stay well above 30°C throughout the year. Therefore, data center designs in Thailand need more energy for heat rejection systems.
Room-based cooling is the legacy method for accomplishing data center cooling. Information and experience demonstrate that this traditional cooling system is effective when the average power density in the data center is less than 10 kW per rack [5]. The layout design is presented in Figure 6; the legacy design considers capacity for each rack, row, and room.

Figure 6 Layout of room, row, and rack design

Rack-oriented cooling shows consistently low electrical costs because the CRAC units are closely coupled to the load and right-sized to the load [4]. However, this efficiency is lost when the room exceeds the designed capacity of the CRAC units and/or racks with different cooling loads are mixed, e.g. placing 10 kW racks next to 5 kW racks, which distorts the thermal distribution flow; unnecessary airflow travel distance should be avoided.
The effect of power density on the annual electrical costs of the three cooling architectures (room, row, and rack oriented) is demonstrated in Figure 7.

Figure 7 Room, row, and rack oriented cooling
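To make the 10 kW per rack rule of thumb concrete, here is a small sketch (our illustration, with assumed rack loads, not survey data) that checks whether room-based cooling is still appropriate for a given layout:

```python
# Sketch (assumed example loads): room-based cooling is considered
# effective below ~10 kW average per rack [5]; above that, or with
# strongly mixed rack loads, row/rack-oriented cooling is preferred [4].

ROOM_COOLING_LIMIT_KW = 10.0

def suggest_cooling(rack_loads_kw: list[float]) -> str:
    avg = sum(rack_loads_kw) / len(rack_loads_kw)
    mixed = max(rack_loads_kw) >= 2 * min(rack_loads_kw)  # e.g. 10 kW beside 5 kW
    if avg < ROOM_COOLING_LIMIT_KW and not mixed:
        return f"room-based cooling (avg {avg:.1f} kW/rack)"
    return f"row/rack-oriented cooling (avg {avg:.1f} kW/rack, mixed={mixed})"

print(suggest_cooling([5.0, 5.0, 10.0]))  # mixed loads distort airflow
print(suggest_cooling([4.0, 4.0, 4.0]))   # room-based is adequate
```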
3. Research methodology
The research separates the investigation and assessment of the existing data centers into three parts. First, 34 data centers were surveyed with respect to: number of applications; number of servers, storage, and network equipment; data center size; and loaded power consumption. Second, the researcher conducted direct interviews with senior or project managers. Last, product
and technology assessments were performed to define the ideas, assess the development potential, and establish the individuality of each solution.
The exploratory research aims to find out more about the utilization of space, loaded power, and cooling sizing. How to optimize and utilize these by applying the latest technologies to improve energy efficiency and effectiveness is the expected result of this research.
4. Analysis and solution
The survey results from the 34 agencies, as shown in Table 4, were analyzed for each data center with respect to space, server, and application utilization. Only 21 agencies have data centers, and the total data center space is 1,563 sqm. Duplication in data center investment, operations, and management is costly. Data center consolidation and optimization seem to be the best option to deliver the economy and efficiency the business expects.
Table 4 34 data centers survey
[Table 4 lists, for each of the 34 agencies: data center size (sqm.), number of servers, number of network devices, number of applications, and utilization level (high/medium/low). Twenty-one agencies operate their own data centers, totaling 1,563 sqm. and 363 servers; the remaining agencies have no data center, host outside, share with another agency, or did not respond.]
When we interviewed senior data center infrastructure executives about their innovation priorities and capabilities, many emphasized that constant innovation represented their only chance of meeting user expectations while supporting, within budgetary constraints, ever-increasing business demands for computation, data storage, and connectivity.
In most organizations, the data center began as a support function, leading to a multi-dimensional management approach. However, technology-enabled products, interactive communications, and 24x7 online information environments have thrust the data center to the forefront, with critical implications for business growth and customer engagement. In addition, established practices, such as lean-management techniques, have highlighted the value of the data center in reducing waste and increasing
52
วิศวกรรมสารฉฉบับวิจยั และพัฒนา
น ปี ที่ 24 ฉบับที่ 4 พ.ศ. 2556
RESEARCHH AND DEVELO
OPMENT JOURN
RNAL VOLUME 24
2 NO.4, 2013
productivity and efficiency. This deeper recognition of the data center's potential has given rise to a new management model category, the de facto data center. The de facto data center comprises the bulk of an organization's data center activities, applying lessons from production, from rack to room, along with standardization and simplification, to drive efficiency, optimize delivery, and lower unit costs.
4.1 Data center consolidation
The survey results in Table 4 show that the 34 agencies are under the same roof and that more than 85% of the data centers are underutilized. Data center consolidation is therefore the best option to improve economy and efficiency. The summary of equipment and power requirements for data center consolidation is illustrated in Table 5.
4.2 Flexible layout, expansion, and transportation
The mobile data center offers flexible expansion and mobilization. Depending on the operating situation, an adjustable layout is possible. Since the shape and size follow the same standard as a shipping container, transportation is very easy. This enables a quick response to a sudden transfer in case of any unexpected situation. Moreover, it is easy to expand as requirements grow within a short time.

Figure 8 Mobile data center
Figure 9 Modular data center

4.3 High density cooling systems
The ability to deliver redundancy in high density circumstances with fewer additional CRAC units is a strategic benefit of the in-row cooling oriented architecture (closed-loop cooling that can handle power densities of up to 40 kW per rack, as shown in Figure 10), and gives it a significant total cost of ownership (TCO) and energy efficiency (PUE) advantage, as depicted in Table 6.

Figure 10 Cooling strategies based on server density/power per rack
Table 6 Cooling strategies: design for PUE impact
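As a rough sketch of how the 40 kW in-row rating translates into unit counts (the N+1 redundancy policy here is our assumption for illustration, not a design rule stated in the paper):

```python
# Sketch: number of 40 kW in-row cooling units needed for a given heat
# load, with one extra unit for N+1 redundancy (illustrative policy).
import math

IN_ROW_UNIT_KW = 40.0  # closed-loop in-row unit rating from Figure 10

def in_row_units_needed(heat_load_kw: float, redundant: bool = True) -> int:
    n = math.ceil(heat_load_kw / IN_ROW_UNIT_KW)
    return n + 1 if redundant else n

print(in_row_units_needed(113.0))                   # 4 units (3 + 1 spare)
print(in_row_units_needed(113.0, redundant=False))  # 3 units
```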
4.4 Reliable power distribution systems
The strategic approaches to M2DC infrastructure design are based on the best practices of TIA 942 [8], the Uptime Institute [9], and BICSI [13], and on an effective balance among efficiency, availability and reliability, flexibility, and economy. The M2DC modification concept rests on the observation that added system complexity increases the unavailability level [10], as demonstrated in Figure 11. As a result, the modified Tier III M2DC power distribution system eliminates connections and equipment, as illustrated in Figure 13. In particular, the M2DC design removes redundant equipment to fit the limited space of a 20 foot container: the generator design is N sets and the UPS design is 2N.

Figure 11 M2DC design concerns: unavailability vs. reliability (reliability vs. investment and unavailability vs. reliability curves, with Tier I through Tier IV/IV+ and M2DC plotted across the under-reliability, optimal reliability, and inverse reliability ranges around an optimal availability point)
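The intuition behind choosing 2N for the UPS can be sketched with elementary reliability algebra (our illustration; the per-path availability below is an assumed value, not a figure from the paper):

```python
# Sketch: a 2N (fully duplicated) path fails only if both sides fail,
# so its availability is 1 - (1 - a)^2 for per-side availability a.
# The value of a below is an assumption for illustration only.

def availability_2n(a_single: float) -> float:
    return 1.0 - (1.0 - a_single) ** 2

a_ups = 0.999  # assumed single-UPS-path availability
print(f"single UPS path: {a_ups:.6f}")
print(f"2N UPS paths:    {availability_2n(a_ups):.6f}")  # -> 0.999999
```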
4.5 M2DC 20' construction
Space utilization is the key to M2DC design. Since M2DC deploys the concept of server consolidation, a heavy power draw density and the heat generated by the consolidated servers are unavoidable. The most common problem of a data center is heat rejection, and in-row cooling is the best option in terms of PUE impact (Table 6). The case study from Table 4 requires 99,225 Watts of power and 338,560 BTU/hour (112,853 Watts) of cooling. Designing for 113 kW in a 20 foot container is mission possible: three racks of 40 kW in-row cooling systems are deployed to handle the 113 kW of heat rejection. Creating a closed-loop heat trap increases the efficiency of the return airflow system, as illustrated in Figure 12.
A unique 45U rack system, Figure 14, is required to protect the equipment and data: it absorbs vibration, provides moveable space for balancing the center of gravity during lifting, and allows maintenance and installation from both the back and front sides.

Figure 14 45U rack system
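These figures can be cross-checked with the standard watts-to-BTU/hour conversion (1 W ≈ 3.412 BTU/h); a small sketch using the totals from Table 5:

```python
# Sketch: convert the consolidated IT power draw (Table 5) into a heat
# rejection requirement and size the in-row cooling for the 20 ft container.
import math

WATTS_TO_BTU_PER_HOUR = 3.412  # standard conversion factor

it_power_w = 99_225                      # total power draw from Table 5
heat_btu_h = it_power_w * WATTS_TO_BTU_PER_HOUR
units = math.ceil(113.0 / 40.0)          # 113 kW design load, 40 kW units

print(f"heat load: {heat_btu_h:,.0f} BTU/hour")   # ~338,556, matching Table 5
print(f"in-row 40 kW units for 113 kW: {units}")  # 3 units, as in Figure 12
```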
4.6 Project management
M2DC deployment takes less than half the time of constructing a legacy data center, as shown in Figure 15.
Figure 15 Project management efficiency
The report from a Schneider whitepaper [11] estimates around 54 weeks of deployment time saved with a containerized data center. Those 54 weeks also save other costs, such as labor, materials, operations, and opportunity costs, and allow more M2DCs to be built. The estimated saving in infrastructure investment (building construction) between a legacy data center and M2DC is about four times per square meter, since building construction costs around 200,000 baht per sqm. while M2DC costs around 50,000 baht per sqm. Moreover, M2DC saves on the electrical bill in long-term operations, because the difference in PUE ratio between a legacy data center and M2DC is 0.8 (PUE 2 - 1.2 = 0.8) [14].
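Putting the quoted figures together (a sketch using the paper's per-sqm costs and PUE gap; the floor area and any derived totals below are assumptions for illustration):

```python
# Sketch of the construction-cost comparison quoted above (paper's per-sqm
# figures; the facility size is an illustrative assumption).

LEGACY_BAHT_PER_SQM = 200_000
M2DC_BAHT_PER_SQM = 50_000

area_sqm = 100  # assumed facility size for illustration
legacy_cost = LEGACY_BAHT_PER_SQM * area_sqm
m2dc_cost = M2DC_BAHT_PER_SQM * area_sqm

print(f"legacy build: {legacy_cost:,} baht")
print(f"M2DC build:   {m2dc_cost:,} baht")
print(f"ratio:        {legacy_cost / m2dc_cost:.0f}x")  # ~4x, as stated

# Operating side: a PUE gap of 0.8 (2.0 vs. 1.2) means 0.8 W of facility
# overhead saved per watt of IT load [14].
it_load_kw = 99.225  # from Table 5
print(f"overhead power saved: {0.8 * it_load_kw:.1f} kW")
```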
5. Conclusion
The mobile and modular data center (M2DC) design can be used to identify cost-effective saving opportunities in operating facility infrastructure. Even though no design guide can offer a "silver bullet" for designing a data center, M2DC offers efficient design suggestions that provide benefits in a wide variety of situations: space-limited data center designs; mobility in case of any natural or man-made disaster; and modular, use-as-needed expansion. The result from consolidating 21 data centers reveals the benefits of M2DC in reducing facility infrastructure investment, occupied space, electrical billing, and operations, while increasing space and IT equipment utilization and energy efficiency. M2DC design is a relatively new field that addresses a dynamic and evolving technology. The most efficient and effective M2DC designs use relatively new design fundamentals to create the required high energy density (consolidation), high reliability, mobility, and modularity. The best practices above capture many of the basic standard approaches used as a starting point by successful and efficient M2DC designs.

References
[1] W. Pitt Turner IV et al., "Tier Classifications Define Site Infrastructure Performance", Uptime Institute, Whitepaper, 2008, pp. 1-19.
[2] W. Pitt Turner IV et al., "Industry Standard Tier Classifications Define Site Infrastructure Performance", Uptime Institute, Whitepaper, 2005, pp. 1-4.
[3] A. Rawson et al., "Green Grid Data Center Power Efficiency Metrics: PUE and DCiE", The Green Grid, Whitepaper, 2008, pp. 1-9.
[4] K. Dunlap and N. Rasmussen, "The Advantages of Row and Rack-Oriented Cooling Architectures for Data Centers", Whitepaper, APC, 2006, pp. 1-21.
[5] HP, "Cooling Strategies for IT Equipment", www.hp.com/go/getconnected, September 2010.
[6] N. Rasmussen, "Avoiding Costs from Oversizing Data Center and Network Room Infrastructure", Whitepaper 37, Schneider Electric, 2011, pp. 1-9.
[7] Emerson, "Maximizing Data Center Efficiency, Capacity and Availability through Integrated Infrastructure", Whitepaper, Emerson, 2011, pp. 1-16.
[8] TIA 942, "Telecommunication Infrastructure Standard for Data Center", ANSI/TIA-942-2005, April 12, 2005.
[9] Uptime Institute, "Data Center Site Infrastructure Tier Standard: Topology", Uptime Institute, LLC, 2010.
[10] M. Wiboonrat, "An Optimal Data Center Availability and Investment Trade-Offs", Ninth ACIS International Conference on Software Engineering, Artificial Intelligence, Networking, and Parallel/Distributed Computing (SNPD '08), 2008, pp. 712-719.
[11] D. Bouley and W. Torell, "Containerized Power and Cooling Modules for Data Centers", Whitepaper 163, Schneider Electric, 2012, pp. 1-15.
[12] M. Wiboonrat, "Data Center Design of Optimal Reliable Systems", 2011 IEEE International Conference on Quality and Reliability (ICQR), pp. 350-354.
[13] BICSI, Data Center Design and Implementation Best Practices, BICSI 002-2010, June 21, 2010.
[14] M. Wiboonrat, "Controversial Substantiation between Disaster Risk Reduction and Hedging Investment Risk", The 2nd International Conference on Informatics & Applications, Poland, September 23-25, 2013 (accepted for publication).
Table 5 Total data center consolidation power requirements

Per-unit requirements (per set):
No.  Name                  Model                                          RU  Weight (kg)  Power (W)  Heat (BTU/h)
1    WAN Router            Cisco ASR 1002                                  2       16.75      1,120         3,821
2    Core Switch           Cisco Nexus 7009                               14      136.00     12,000        40,945
3    Service Node Switch   Cisco Catalyst 6506                            12       69.60     12,000        40,945
4    Fabric Interconnect   Cisco UCS 6248 Fabric Interconnect              1       15.88        700         2,388
5    SAN Switch            Cisco MDS 9124 24-Port Multilayer Fabric Switch 1        8.40        600         2,047
6    Blade Server Chassis  Cisco UCS 5108 Blade Server Chassis             6      115.66     10,000        34,121
7    Storage               NetApp FAS 3240
     - Storage Controller Enclosure                                        3       67.20        956         3,262
     - Disk Shelf Enclosure                                                4       49.90        639         2,180
     Total                                                                43      479.39     38,015       129,709

Total requirements (sum):
No.  Name                             Qty  Weight (kg)  Power (W)  Heat (BTU/h)
1    WAN Router                         2          34       2,240         7,642
2    Core Switch                        2         272      24,000        81,890
3    Service Node Switch                2         139      24,000        81,890
4    Fabric Interconnect                2          32       1,400         4,776
5    SAN Switch                         2          17       1,200         4,094
6    Blade Server Chassis               4         463      40,000       136,484
7    - Storage Controller Enclosure     2         134       1,912         6,524
     - Disk Shelf Enclosure             7         349       4,473        15,260
     Total                             23       1,440      99,225       338,560

Total heat load: 338,560 BTU/hour, equivalent to a cooling requirement of 112,853 Watts.
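The Table 5 totals can be re-derived from the per-unit figures; a short verification sketch (quantities and per-unit power draws as listed above):

```python
# Sketch: re-derive the Table 5 power total from the per-unit figures.

equipment = {  # name: (qty, watts_per_unit)
    "WAN Router":           (2, 1_120),
    "Core Switch":          (2, 12_000),
    "Service Node Switch":  (2, 12_000),
    "Fabric Interconnect":  (2, 700),
    "SAN Switch":           (2, 600),
    "Blade Server Chassis": (4, 10_000),
    "Storage Controller":   (2, 956),
    "Disk Shelf":           (7, 639),
}

total_w = sum(qty * w for qty, w in equipment.values())
print(f"total power: {total_w:,} W")                 # 99,225 W
print(f"total heat:  {total_w * 3.412:,.0f} BTU/h")  # ~338,556 BTU/h
```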
Figure 12 20 feet container design for M2DC (MDB, UPS, and battery supplying three 40 kW in-row cooling units, each paired with network, server, and storage racks; hot air is captured in a closed-loop hot trap and returned as cool air)
Figure 13 Modified Tier III M2DC power distribution system