Evaluation and Analysis of GreenHDFS: A Self-Adaptive, Energy-Conserving Variant of the Hadoop Distributed File System

Rini T. Kaushik, University of Illinois, Urbana-Champaign, kaushik1@illinois.edu
Milind Bhandarkar, Yahoo! Inc., milindb@yahoo-inc.com
Klara Nahrstedt, University of Illinois, Urbana-Champaign, klara@cs.uiuc.edu

Abstract

We present a detailed evaluation and sensitivity analysis of an energy-conserving, highly scalable variant of the Hadoop Distributed File System (HDFS) called GreenHDFS. GreenHDFS logically divides the servers in a Hadoop cluster into Hot and Cold zones and relies on insightful, data-classification-driven, energy-conserving data placement to realize guaranteed, substantially long periods (several days) of idleness in a significant subset of servers in the Cold zone. A detailed lifespan analysis of the files in a large-scale production Hadoop cluster at Yahoo! points to the viability of GreenHDFS. Simulation results with real-world Yahoo! HDFS traces show that GreenHDFS can achieve a 24% reduction in energy costs by doing power management in only one top-level tenant directory in the cluster, and that it meets all the scale-down mandates in spite of the unique scale-down challenges present in a Hadoop cluster. If the GreenHDFS technique were applied to all the Hadoop clusters at Yahoo! (amounting to 38,000 servers), $2.1 million could be saved in energy costs per annum. Sensitivity analysis shows that energy conservation is minimally sensitive to the thresholds used in GreenHDFS. The lifespan analysis also shows that one-size-fits-all energy-management policies do not suffice in a multi-tenant Hadoop cluster.

1 Introduction

Cloud computing is gaining rapid popularity.
An increasing number of companies and academic institutions have started to rely on Hadoop [1], an open-source implementation of Google's Map-Reduce framework [15], for their data-intensive computing needs [2]. These needs range from advertising optimization, user-interest prediction, mail anti-spam, and data analytics to deriving search rankings.

With the increase in the sheer volume of data that needs to be processed [37], the storage and server demands of computing workloads are increasing rapidly. Yahoo!'s compute infrastructure already hosts 170 petabytes of data and deploys over 38,000 servers [5]. Over the lifetime of IT equipment, the operating energy cost is comparable to the initial equipment acquisition cost [8] and constitutes a significant part of the total cost of ownership (TCO) of a datacenter. Hence, energy conservation in these extremely large-scale, commodity server farms has become a priority.

Scale-down (i.e., transitioning servers to an inactive, low-power-consuming sleep/standby state) is an attractive technique to conserve energy, as it allows energy proportionality even with non-energy-proportional components such as disks [18] and significantly reduces power consumption (idle power draw is 132.46 W vs. a sleep power draw of 13.16 W in a typical server, as shown in Table 1). However, scale-down cannot be done naively, as discussed in Section 3.2. One technique is to scale down servers by manufacturing idleness, i.e., by migrating workloads and their corresponding state to fewer machines during periods of low activity [7, 10, 12, 13, 26, 30, 34, 36]. This can be relatively easy to accomplish when servers are stateless (i.e., serving data that resides on a shared NAS or SAN storage system). However, servers in a Hadoop cluster are not stateless. Hadoop's data-intensive computing framework is built on a large-scale, highly resilient, object-based cluster storage managed by the Hadoop Distributed File System (HDFS) [25]. HDFS distributes data chunks and replicas across servers for resiliency, performance, load-balancing, and data-locality reasons. With data distributed across all nodes, any node may be participating in the reading, writing, or computation of a data block at any time. Such data placement makes it hard to generate significant periods of idleness in Hadoop clusters and renders the use of inactive power modes infeasible [27].

Recent research on scale-down in GFS- and HDFS-managed clusters [4, 27] proposes maintaining a primary replica of the data on a small covering subset of nodes that are guaranteed to be on. However, these solutions suffer from degraded write performance, as they rely on a write-offloading technique [31] to avoid server wakeups at the time of writes. Write performance is an important consideration in Hadoop, and even more so in a production Hadoop cluster, as discussed in Section 3.1.

We took a different approach and proposed GreenHDFS, an energy-conserving, self-adaptive, hybrid, logically multi-zoned variant of HDFS, in our earlier paper [24]. Instead of an energy-efficient placement of computations, or the use of a small covering set for primary replicas as done in earlier research, GreenHDFS focuses on data-classification techniques to extract energy savings through energy-aware placement of data. GreenHDFS trades off cost, performance, and power by separating the cluster into logical zones of servers. Each zone has a different temperature characteristic, where temperature is measured by the power consumption and the performance requirements of the zone. GreenHDFS relies on the inherent heterogeneity in the access patterns of the data stored in HDFS to differentiate the data and to arrive at an energy-conserving data layout and placement onto the zones. Since computations exhibit high data locality in the Hadoop framework, the computations then flow naturally to the data in the right temperature zones.

The contribution of this paper lies in showing that the energy-aware, data-differentiation-based data placement in GreenHDFS is able to meet all the effective scale-down mandates (i.e., it generates significant idleness, results in few power state transitions, and does not degrade write performance) despite the significant challenges a Hadoop cluster poses to scale-down. We perform a detailed evaluation and sensitivity analysis of the policy thresholds used in GreenHDFS with a trace-driven simulator and real-world HDFS traces from a production Hadoop cluster at Yahoo!. While some aspects of GreenHDFS are sensitive to the policy thresholds, we found that energy conservation is minimally sensitive to them.

The remainder of the paper is structured as follows. In Section 2, we list some of the key observations from our analysis of the production Hadoop cluster at Yahoo!. In Section 3, we provide background on HDFS and discuss scale-down mandates. In Section 4, we give an overview of the energy-management policies of GreenHDFS. In Section 5, we present an analysis of the Yahoo! cluster. In Section 6, we include experimental results demonstrating the effectiveness and robustness of our design and algorithms in a simulation environment. In Section 7, we discuss related work and conclude.

2 Key observations

We did a detailed analysis of the evolution and lifespan of the files in a production Yahoo! Hadoop cluster using one-month-long HDFS traces and namespace metadata checkpoints.
We analyzed each top-level directory in the production multi-tenant Yahoo! Hadoop cluster separately, as each top-level directory in the namespace exhibited different access patterns and lifespan distributions. The key observations from the analysis are:

• There is significant heterogeneity in the access patterns and the lifespan distributions across the various top-level directories in the production Hadoop cluster, and one-size-fits-all energy-management policies do not suffice across all directories.

• A significant amount of data, amounting to 60% of used capacity, is cold (i.e., lying dormant in the system without getting accessed) in the production Hadoop cluster. A majority of this cold data needs to exist for regulatory and historical trend-analysis purposes.

• 95-98% of the files in the majority of the top-level directories had a very short hotness lifespan of less than 3 days. Only one directory had files with a longer hotness lifespan, and even in that directory 80% of the files were hot for less than 8 days.

• 90% of the files, amounting to 80.1% of the total used capacity, in the most storage-heavy top-level directory were dormant, and hence cold, for more than 18 days. Dormancy periods were much shorter in the rest of the directories, where only 20% of the files were dormant beyond 1 day.

• The majority of the data in the production Hadoop cluster has a news-server-like access pattern, whereby most of the computations on the data happen soon after the data's creation.

3 Background

Map-Reduce is a programming model designed to simplify data processing [15]. Google, Yahoo!, Facebook, Twitter, etc. use Map-Reduce to process massive amounts of data on large-scale commodity clusters. Hadoop is an open-source, cluster-based Map-Reduce implementation written in Java [1]. It is logically separated into two subsystems: a highly resilient and scalable Hadoop Distributed File System (HDFS), and a Map-Reduce task execution framework. HDFS runs on clusters of commodity hardware and is an object-based distributed file system. The namespace and the metadata (modification and access times, permissions, and quotas) are stored on a dedicated server called the NameNode and are decoupled from the actual data, which is stored on servers called DataNodes. Each file in HDFS is replicated for resiliency and split into blocks of typically 128 MB, and the individual blocks and replicas are placed on the DataNodes for fine-grained load balancing.

3.1 Importance of Write-Performance in a Production Hadoop Cluster

The Reduce phase of a Map-Reduce task writes intermediate computation results back to the Hadoop cluster and relies on high write performance for the overall performance of the task. Furthermore, we observed that the majority of the data in a production Hadoop cluster has a news-server-like access pattern: the predominant number of computations happen on newly created data, thereby mandating good read and write performance for newly created data.

3.2 Scale-down Mandates

Scale-down, in which server components such as the CPU, disks, and DRAM are transitioned to an inactive, low-power-consuming mode, is a popular energy-conservation technique. However, scale-down cannot be applied naively. Energy is expended and a transition-time penalty is incurred when the components are transitioned back to an active power mode; for example, the transition time of components such as disks can be as high as 10 seconds. Hence, an effective scale-down technique mandates the following:

• Sufficient idleness, to ensure that the energy savings are higher than the energy spent in the transitions.

• Few power state transitions, as some components (e.g., disks) have a limited number of start/stop cycles, and too-frequent transitions may adversely impact the lifetime of the disks.

• No performance degradation. Steps need to be taken to amortize the performance penalty of power state transitions and to ensure that load concentration on the remaining active-state servers does not adversely impact the overall performance of the system.
4 GreenHDFS Design

GreenHDFS is a variant of the Hadoop Distributed File System (HDFS) that logically organizes the servers in the datacenter into multiple, dynamically provisioned Hot and Cold zones. Each zone has a distinct performance, cost, and power characteristic, and each zone is managed by the power and data placement policies most conducive to the class of data residing in that zone. Differentiating the zones in terms of power is crucial to attaining our energy-conservation goal.

The Hot zone consists of the files that are being accessed currently and the newly created files. This zone has strict SLA (Service Level Agreement) requirements and hence performance is of the greatest importance; we trade off energy savings in the interest of very high performance in this zone. In this paper, GreenHDFS employs data chunking, placement, and replication policies in the Hot zone that are similar to the policies in baseline HDFS or GFS.

The Cold zone consists of files with low to rare accesses. Files are moved by the File Migration Policy from the Hot zone to the Cold zone as their temperature decreases beyond a certain threshold. Performance and SLA requirements are not as critical for this zone, and GreenHDFS employs aggressive energy-management schemes and policies here to transition servers to a low-power inactive state. Hence, GreenHDFS trades off performance for high energy savings in the Cold zone. For optimal energy savings, it is important to increase the idle times of the servers and to limit the wakeups of servers that have already transitioned to the power-saving mode. Keeping this rationale in mind, and recognizing the low performance needs and the infrequency of data accesses in the Cold zone, this zone does not chunk the data. This ensures that, upon a future access, only the server containing the data needs to be woken up. By default, the servers in the Cold zone are in a sleeping mode. A server is woken up either when new data needs to be placed on it or when data already residing on the server is accessed. GreenHDFS tries to avoid powering on a server in the Cold zone and maximizes the use of the existing powered-on servers in its server allocation decisions, in the interest of maximizing energy savings: one server is woken up and filled completely to its capacity before the next server, chosen from an ordered list of Cold zone servers, is transitioned to an active power state.

The goal of GreenHDFS is to maximize the allocation of servers to the Hot zone, to minimize the performance impact of zoning, and to minimize the number of servers allocated to the Cold zone. We introduced a hybrid, storage-heavy cluster model in [24], whereby servers in the Cold zone are storage-heavy and have 12 1 TB disks per server. We argue that zoning in GreenHDFS will not adversely affect the Hot zone's performance, and that the computational workload can be consolidated on the servers in the Hot zone without pushing CPU utilization above the provisioning guidelines. A study of 5,000 Google compute servers showed that most of the time is spent within the 10%-50% CPU utilization range [6]; hence, significant opportunities exist for workload consolidation. And the compute capacity of the Cold zone can always be harnessed under peak-load scenarios.
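The fill-one-server-at-a-time allocation described above can be summarized in a short sketch. This is a minimal illustration under assumed data structures (the ColdZoneAllocator and Server classes below are not part of GreenHDFS or HDFS); it captures only the ordering logic, not replication, metadata updates, or the actual wake-up mechanism.

import java.util.ArrayDeque;
import java.util.Deque;

// Minimal sketch of the Cold-zone allocation described above: keep filling the
// currently powered-on server and wake the next server from an ordered list
// only when the current one is full. Names and sizes are illustrative only.
class ColdZoneAllocator {
    static class Server {
        final int id;
        long freeBytes;
        boolean active = false;            // Cold-zone servers start asleep
        Server(int id, long capacityBytes) { this.id = id; this.freeBytes = capacityBytes; }
    }

    private final Deque<Server> sleeping = new ArrayDeque<>(); // ordered list of sleeping servers
    private Server current;                                    // the one server currently being filled

    ColdZoneAllocator(int numServers, long capacityBytes) {
        for (int i = 0; i < numServers; i++) sleeping.add(new Server(i, capacityBytes));
    }

    /** Place a (non-chunked) file of the given size and return the chosen server. */
    Server place(long fileBytes) {
        if (current == null || current.freeBytes < fileBytes) {
            current = sleeping.poll();     // wake the next server only when needed
            if (current == null) throw new IllegalStateException("Cold zone is full");
            current.active = true;         // in a real system, e.g., via Wake-on-LAN
        }
        current.freeBytes -= fileBytes;
        return current;
    }
}

Because only one Cold-zone server is "open" at a time, a new placement never wakes more servers than strictly necessary, which is exactly the rationale stated above.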
4.1 Energy-management Policies

Files are moved from the Hot zone to the Cold zone as their temperature changes over time, as shown in Figure 1. In this paper, we use the dormancy of a file, defined as the time elapsed since the last access to the file, as the measure of the file's temperature: the higher the dormancy, the lower the temperature and hence the colder the file; the lower the dormancy, the hotter the file. GreenHDFS uses the existing mechanism in baseline HDFS to record and update the last access time of a file upon every file read.

Figure 1. State diagram of a file's zone allocation based on the migration policies: a file moves from the Hot zone to the Cold zone when its Coldness exceeds ThresholdFMP, and back to the Hot zone when its Hotness exceeds ThresholdFRP.

4.1.1 File Migration Policy

The File Migration Policy runs in the Hot zone, monitors the dormancy of the files as shown in Algorithm 1, and moves dormant (i.e., cold) files to the Cold zone. The advantages of this policy are two-fold: 1) it leads to higher space-efficiency, as space is freed up in the Hot zone for files with higher SLA requirements by moving rarely accessed files out of the servers in that zone, and 2) it allows significant energy conservation. Data locality is an important consideration in the Map-Reduce framework and computations are co-located with data; thus, computations naturally happen on the data residing in the Hot zone. This results in significant idleness in all the components of the servers in the Cold zone (i.e., CPU, DRAM, and disks), allowing effective scale-down of these servers.

Algorithm 1. File Migration Policy: classifies cold data in the Hot zone and migrates it to the Cold zone.
  for every file f_i in the Hot zone do
    dormancy_i ⇐ current_time − last_access_time_i
    if dormancy_i ≥ ThresholdFMP then
      {Cold zone} ⇐ {Cold zone} ∪ {f_i}
      {Hot zone} ⇐ {Hot zone} \ {f_i}   // file system metadata structures are changed to the Cold zone
    end if
  end for

4.1.2 Server Power Conserver Policy

The Server Power Conserver Policy runs in the Cold zone and determines the servers that can be transitioned into a power-saving standby/sleep mode, as shown in Algorithm 2. The current trend in internet-scale data warehouses and Hadoop clusters is to use commodity servers with 4-6 directly attached disks instead of expensive RAID controllers. In such systems, the disks constitute just 10% of the entire power usage, as illustrated in a study performed at Google [22], while the CPU and DRAM constitute 63% of the total power usage. Hence, power management of any one component is not sufficient, and we leverage energy cost savings at the granularity of the entire server (CPU, disks, and DRAM) in the Cold zone. GreenHDFS uses hardware techniques similar to [28] to transition the processors, disks, and DRAM into a low-power state: the disk Sleep mode (in which the drive buffer is disabled, the heads are parked, and the spindle is at rest), the CPU's ACPI S3 sleep state, which consumes minimal power and requires only 30 us to transition from sleep back to active execution, and the DRAM's self-refresh operating mode, in which transitions into and out of self-refresh complete in less than a microsecond.

The servers are transitioned back to an active power mode under three conditions: 1) data residing on the server is accessed, 2) additional data needs to be placed on the server, or 3) the block scanner needs to run on the server to ensure the integrity of the data residing on the Cold zone servers. GreenHDFS relies on Wake-on-LAN support in the NICs to send a magic packet that transitions a server back to an active power state.

Algorithm 2. Server Power Conserver Policy: transitions sufficiently cold servers in the Cold zone to an inactive power state.
  for every server S_i in the Cold zone do
    coldness_i ⇐ current_time − max_{0≤j≤m} last_access_time_j   // most recent access to any file on S_i
    if coldness_i ≥ ThresholdSPC then
      S_i ⇐ INACTIVE_STATE
    end if
  end for

Figure 2. Triggering events leading to power state transitions in the Cold zone: a server goes from Active to Inactive when the Server Power Conserver Policy finds Coldness > ThresholdSPC; the wake-up events are file access, the bit-rot integrity checker, file placement, and file deletion.
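As a concrete illustration of the wake-up path, the snippet below builds and sends a standard Wake-on-LAN magic packet (six 0xFF bytes followed by the target MAC address repeated sixteen times, sent as a UDP broadcast, conventionally to port 9). This is a generic sketch of the mechanism the section relies on, not GreenHDFS code; the addresses in main are placeholders.

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

// Generic Wake-on-LAN sender: 6 bytes of 0xFF followed by the target MAC
// repeated 16 times, sent as a UDP broadcast.
public class WakeOnLan {
    public static void wake(String broadcastIp, String mac) throws Exception {
        String[] hex = mac.split("[:\\-]");
        byte[] macBytes = new byte[6];
        for (int i = 0; i < 6; i++) macBytes[i] = (byte) Integer.parseInt(hex[i], 16);

        byte[] payload = new byte[6 + 16 * 6];
        for (int i = 0; i < 6; i++) payload[i] = (byte) 0xFF;
        for (int i = 6; i < payload.length; i += 6) System.arraycopy(macBytes, 0, payload, i, 6);

        try (DatagramSocket socket = new DatagramSocket()) {
            socket.setBroadcast(true);
            socket.send(new DatagramPacket(payload, payload.length,
                    InetAddress.getByName(broadcastIp), 9));
        }
    }

    public static void main(String[] args) throws Exception {
        wake("192.168.1.255", "00:11:22:33:44:55"); // illustrative addresses only
    }
}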
4.1.3 File Reversal Policy

The File Reversal Policy runs in the Cold zone and ensures that the QoS, bandwidth, and response time of files that become popular again after a period of dormancy are not impacted. If the number of accesses to a file residing in the Cold zone becomes higher than the threshold ThresholdFRP, the file is moved back to the Hot zone, as shown in Algorithm 3. The file is then chunked and placed onto the servers in the Hot zone in congruence with the policies of the Hot zone.

Algorithm 3. File Reversal Policy: monitors the temperature of the cold files in the Cold zone and moves files back to the Hot zone if their temperature changes.
  for every file f_i in the Cold zone do
    if num_accesses_i ≥ ThresholdFRP then
      {Hot zone} ⇐ {Hot zone} ∪ {f_i}
      {Cold zone} ⇐ {Cold zone} \ {f_i}   // file system metadata structures are changed to the Hot zone
    end if
  end for

4.1.4 Policy Thresholds Discussion

A good data migration scheme should result in maximal energy savings, minimal data oscillation between the GreenHDFS zones, and minimal performance degradation. Minimizing the accesses to Cold zone files yields maximal energy savings and minimal performance impact. For this, the policy thresholds should be chosen so as to minimize the number of accesses to the files residing in the Cold zone while maximizing the movement of dormant data to the Cold zone. Results from our detailed sensitivity analysis of the thresholds used in GreenHDFS are covered in Section 6.3.5.

ThresholdFMP: A low (i.e., aggressive) value of ThresholdFMP results in an ultra-greedy selection of files as potential candidates for migration to the Cold zone. While an aggressive ThresholdFMP has several advantages, such as higher space savings in the Hot zone, it has disadvantages as well. If files have intermittent periods of dormancy, they may incorrectly get labeled as cold and be moved to the Cold zone. There is a high probability that such files will be accessed in the near future; such accesses may suffer performance degradation, as they may be subject to a power-transition penalty, and they may trigger data oscillations because of file reversals back to the Hot zone. A higher value of ThresholdFMP results in higher accuracy in determining the really cold files; hence, the number of reversals, server wakeups, and the associated performance degradation decrease as the threshold is increased. On the other hand, a higher value of ThresholdFMP means that files are chosen as candidates for migration only after they have been dormant in the system for a longer period of time. This would be overkill for files with a very short FileLifeSpanCLR (hotness lifespan), as such files would unnecessarily lie dormant in the system, occupying precious Hot zone capacity for a longer period of time.

ThresholdSPC: A high ThresholdSPC increases the number of days the servers in the Cold zone remain in an active power state and hence lowers the energy savings. On the other hand, it results in a reduction in power state transitions, which improves the performance of accesses to the Cold zone. Thus, a trade-off needs to be made between energy conservation and data-access performance in selecting the value of ThresholdSPC.

ThresholdFRP: A relatively high value of ThresholdFRP ensures that files are accurately classified as hot-again files before they are moved back to the Hot zone from the Cold zone. This reduces data oscillations in the system and reduces unnecessary file reversals.
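To make the interplay of ThresholdFMP, ThresholdSPC, and ThresholdFRP concrete, the sketch below strings the three policies of Sections 4.1.1-4.1.3 into a single periodic pass over assumed NameNode-side metadata. The FileMeta and ColdServer classes are illustrative stand-ins, not the actual GreenHDFS implementation, and the real policies run on their own intervals (Table 2) rather than in one loop.

import java.util.List;

// Illustrative single pass combining the File Migration, Server Power Conserver,
// and File Reversal policies with their thresholds as parameters.
class EnergyPolicyPass {
    static class FileMeta { long lastAccessMs; int accessesSinceMigration; boolean inColdZone; }
    static class ColdServer { long newestAccessMs; boolean active = true; }

    final long thresholdFmpMs;   // dormancy before a Hot-zone file is migrated (ThresholdFMP)
    final long thresholdSpcMs;   // server-level coldness before power-down (ThresholdSPC)
    final int  thresholdFrp;     // accesses before a Cold-zone file is reversed (ThresholdFRP)

    EnergyPolicyPass(long fmpDays, long spcDays, int frpAccesses) {
        thresholdFmpMs = fmpDays * 86_400_000L;
        thresholdSpcMs = spcDays * 86_400_000L;
        thresholdFrp   = frpAccesses;
    }

    void run(List<FileMeta> files, List<ColdServer> coldServers, long nowMs) {
        for (FileMeta f : files) {
            if (!f.inColdZone && nowMs - f.lastAccessMs >= thresholdFmpMs) {
                f.inColdZone = true;                    // File Migration Policy (Hot -> Cold)
                f.accessesSinceMigration = 0;
            } else if (f.inColdZone && f.accessesSinceMigration >= thresholdFrp) {
                f.inColdZone = false;                   // File Reversal Policy (Cold -> Hot)
            }
        }
        for (ColdServer s : coldServers) {
            if (s.active && nowMs - s.newestAccessMs >= thresholdSpcMs) {
                s.active = false;                       // Server Power Conserver Policy: sleep
            }
        }
    }
}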
5 Analysis of a production Hadoop cluster at Yahoo!

We analyzed one month of HDFS audit logs and namespace checkpoints from a multi-tenant cluster at Yahoo!. (The inode data and the list of blocks belonging to each file comprise the metadata of the name system, called the image; the persistent record of the image is called a checkpoint. HDFS can log all file system access requests, as required for auditing purposes in enterprises; audit logging is implemented using log4j and, once enabled, logs every HDFS event in the NameNode's log [37]. We used these checkpoints and HDFS audit logs for our analysis.) The cluster had 2,600 servers, hosted 34 million files in the namespace, and its data set size was 6 petabytes. There were 425 million entries in the HDFS logs, and each namespace checkpoint contained 30-40 million files.

The cluster namespace was divided into six main top-level directories, whereby each directory addresses different workloads and access patterns. We considered only the 4 main directories and refer to them as d, p, u, and m in our analysis instead of referring to them by their real names. The total number of unique files seen in the HDFS logs over the one-month duration was 70 million (d: 1.8 million, p: 30 million, u: 23 million, and m: 2 million). The logs and the metadata checkpoints were huge, and we used a large-scale research Hadoop cluster at Yahoo! extensively for our analysis; we wrote the analysis scripts in Pig. We considered several cases in our analysis, as shown below:

• Files created before the analysis period that were not read or deleted subsequently at all. We classify these files as long-living cold files.
• Files created before the analysis period that were read during the analysis period.
• Files created before the analysis period that were both read and deleted during the analysis period.
• Files created during the analysis period that were neither read nor deleted during the analysis period.
• Files created during the analysis period that were not read during the analysis period, but were deleted.
• Files created during the analysis period that were both read and deleted during the analysis period.

To accurately account for file lifespan and lifetime, we handled the following cases: (a) filename reuse — we appended a timestamp to each file create to accurately track the audit-log entries following the file-create entry; (b) file renames — we used a unique id per file to accurately track its lifetime across create, rename, and delete; (c) renames and deletes at a higher level in the path hierarchy, which had to be translated into leaf-level renames and deletes for our analysis; and (d) missing file sizes — HDFS logs do not carry file size information, so we joined the dataset found in the HDFS logs with the namespace checkpoint to obtain the file sizes.
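The bookkeeping in (a)-(d) amounts to keying every audit-log event by a per-file unique id and accumulating create, first-read, last-read, and delete timestamps. The sketch below shows one way such bookkeeping can be organized; the event model is an assumption for illustration and is not the actual Pig analysis code.

import java.util.HashMap;
import java.util.Map;

// Sketch of the lifespan bookkeeping described above: audit-log events are keyed
// by a per-file unique id (so filename reuse and renames do not conflate files),
// and create/first-read/last-read/delete timestamps are accumulated per file.
class LifespanTracker {
    static class Record { long create = -1, firstRead = -1, lastRead = -1, delete = -1; }
    private final Map<String, Record> byFileId = new HashMap<>();

    void onEvent(String fileId, String op, long ts) {
        Record r = byFileId.computeIfAbsent(fileId, k -> new Record());
        switch (op) {
            case "create": r.create = ts; break;
            case "read":   if (r.firstRead < 0) r.firstRead = ts; r.lastRead = ts; break;
            case "delete": r.delete = ts; break;
        }
    }

    // FileLifeSpanCFR = firstRead - create; FileLifeSpanCLR = lastRead - create;
    // FileLifeSpanLRD = delete - lastRead;  FileLifetime    = delete - create.
    long lifeSpanCLR(String fileId) {
        Record r = byFileId.get(fileId);
        return (r == null || r.create < 0 || r.lastRead < 0) ? -1 : r.lastRead - r.create;
    }
}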
5.1 File Lifespan Analysis of the Yahoo! Hadoop Cluster

A file goes through several stages in its lifetime: 1) file creation, 2) a hot period during which the file is frequently accessed, 3) a dormant period during which the file is not accessed, and 4) deletion. We introduced and considered various lifespan metrics in our analysis to characterize a file's evolution. A study of the various lifespan distributions helps in deciding the energy-management policy thresholds that need to be in place in GreenHDFS.

• FileLifeSpanCFR is defined as the file lifespan between file creation and the first read access. This metric is used to find the clustering of the read accesses around file creation.
• FileLifeSpanCLR is defined as the file lifespan between creation and the last read access. This metric is used to determine the hotness profile of the files.
• FileLifeSpanLRD is defined as the file lifespan between the last read access and file deletion. This metric helps determine the coldness profile of the files, as this is the period for which files are dormant in the system.
• FileLifeSpanFLR is defined as the file lifespan between the first read access and the last read access. This metric helps in determining another dimension of the hotness profile of the files.
• FileLifetime is defined as the lifetime of a file between its creation and its deletion.

5.1.1 FileLifeSpanCFR

The FileLifeSpanCFR distribution throws light on the clustering of the file reads around file creation. As shown in Figure 3, 99% of the files have a FileLifeSpanCFR of less than 2 days.

Figure 3. FileLifeSpanCFR distribution. 99% of the files in directory d and 98% of the files in directory p were accessed for the first time less than 2 days after creation.

5.1.2 FileLifeSpanCLR

Figure 4 shows the distribution of FileLifeSpanCLR in the cluster. In directory d, 80% of the files are hot for less than 8 days, and 90% of the files, amounting to 94.62% of the storage, are hot for less than 24 days. The FileLifeSpanCLR of 95% of the files, amounting to 96.51% of the storage, in directory p is less than 3 days, and the FileLifeSpanCLR of 100% of the files in directory m and 98% of the files in directory a is as small as 2 days. In directory u, 98% of the files have a FileLifeSpanCLR of less than 1 day. Thus, the majority of the files in the cluster have a short hotness lifespan.

Figure 4. FileLifeSpanCLR distribution in the four main top-level directories in the Yahoo! production cluster. FileLifeSpanCLR characterizes the lifespan for which files are hot. In directory d, 80% of the files were hot for less than 8 days and 90% of the files, amounting to 94.62% of the storage, were hot for less than 24 days. The hotness lifespan of 95% of the files, amounting to 96.51% of the storage, in directory p is less than 3 days; 100% of the files in directory m and 98% of the files in directory u are hot for less than 1 day.
5.1.3 FileLifeSpanLRD

FileLifeSpanLRD indicates the time for which a file stays in a dormant state in the system: the longer the dormancy period, the higher the coldness of the file and hence the higher the file's suitability for migration to the Cold zone. Figure 5 shows the distribution of FileLifeSpanLRD in the cluster. In directory d, 90% of the files are dormant beyond 1 day, and 80% of the files, amounting to 80.1% of the storage, remain in a dormant state past 20 days. In directory p, only 25% of the files are dormant beyond 1 day and only 20% remain dormant beyond 10 days. In directory m, only 0.02% of the files are dormant for more than 1 day, and in directory u, 20% of the files are dormant beyond 10 days.

FileLifeSpanLRD needs to be considered to find the true migration suitability of a file. For example, given the extremely short dormancy period of the files in directory m, there is no point in exercising the File Migration Policy on that directory. For directories p and u, a ThresholdFMP of less than 5 days would result in unnecessary movement of files to the Cold zone, as these files are due for deletion in any case. On the other hand, given the short FileLifeSpanCLR in these directories, a high value of ThresholdFMP would not do justice to space-efficiency, as discussed in Section 4.1.4.

Figure 5. FileLifeSpanLRD distribution of the top-level directories in the Yahoo! production cluster. FileLifeSpanLRD characterizes the coldness in the cluster and is indicative of the time a file stays in a dormant state in the system. 80% of the files, amounting to 80.1% of the storage, in directory d have a dormancy period higher than 20 days. 20% of the files, amounting to 28.6% of the storage, in directory p are dormant beyond 10 days. 0.02% of the files in directory m are dormant beyond 1 day.

5.1.4 File Lifetime Analysis

Knowledge of the FileLifetime further assists in the selection of migration candidates and needs to be accounted for in addition to the FileLifeSpanLRD and FileLifeSpanCLR metrics covered earlier. As shown in Figure 6, directory p has only 23% of its files living beyond 20 days. On the other hand, 80% of the files in directory d live for more than 30 days while 80% of its files have a hotness lifespan of less than 8 days. Thus, directory d is a very good candidate for invoking the File Migration Policy.

Figure 6. FileLifetime distribution. 67% of the files in directory p are deleted within one day of their creation, and only 23% of the files live beyond 20 days. On the other hand, in directory d, 80% of the files have a FileLifetime of more than 30 days.
5.2 Coldness Characterization of the Files

In this section, we show the file count and the storage capacity used by the long-living cold files. The long-living cold files are defined as the files that were created prior to the start of the observation period and were not accessed at all during the one-month period of observation. As shown in Figure 7, 63.16% of the files, amounting to 56.23% of the total used capacity, are cold in the system. Such long-living cold files present a significant opportunity to conserve energy in GreenHDFS.

Figure 7. File size and file count percentage of long-living cold files, i.e., files that were created prior to the start of the one-month observation period and were not accessed during the period of observation at all. In directory d, 13% of the total file count in the cluster, amounting to 33% of the total used capacity, is cold. In directory p, 37% of the total file count in the cluster, amounting to 16% of the total used capacity, is cold. Overall, 63.16% of the total file count and 56.23% of the total used capacity in the system is cold.

5.3 Dormancy Characterization of the Files

The HDFS trace analysis gives information only about the files that were accessed during the one-month duration. To get a better picture, we analyzed the namespace checkpoints for historical data on file temperatures and periods of dormancy. The namespace checkpoints contain the last access time of each file, and we used this information to calculate the dormancy of the files. The Dormancy metric is defined as the elapsed time between the last recorded access time of a file and the day of observation. Figure 8 contains the frequency histograms and distributions of the dormancy. 34% of the files, amounting to 37% of the storage, in directory p present in the namespace checkpoint were not accessed in the last 40 days, and 58% of the files, amounting to 53% of the storage, in directory d were not accessed in the last 40 days. The extent of dormancy exhibited in the system again shows the viability of the GreenHDFS solution. (The number of files present in the namespace checkpoints was less than half the number of files seen in the one-month trace.)

Figure 8. Dormant-period analysis of the file count distribution and histogram in one namespace checkpoint. The dormancy of a file is defined as the elapsed time between the last access time recorded in the checkpoint and the day of observation. 34% of the files in directory p and 58% of the files in directory d were not accessed in the last 40 days.
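The coldness and dormancy characterizations of Sections 5.2 and 5.3 reduce to a single scan over the checkpoint's last-access times. The sketch below shows such a scan under an assumed CheckpointEntry record; it is an illustration, not the analysis code used for the paper.

import java.util.List;

// Dormancy = elapsed time between a file's last access time in the namespace
// checkpoint and the observation day; files dormant beyond a cutoff are counted
// as cold, by file count and by used capacity. Field names are illustrative.
class DormancyReport {
    static class CheckpointEntry { long lastAccessDay; long sizeBytes; }

    static void report(List<CheckpointEntry> checkpoint, long observationDay, long cutoffDays) {
        long coldFiles = 0, coldBytes = 0, totalBytes = 0;
        for (CheckpointEntry e : checkpoint) {
            totalBytes += e.sizeBytes;
            if (observationDay - e.lastAccessDay > cutoffDays) {
                coldFiles++;
                coldBytes += e.sizeBytes;
            }
        }
        System.out.printf("files dormant > %d days: %d (%.1f%% of used capacity)%n",
                cutoffDays, coldFiles, totalBytes == 0 ? 0.0 : 100.0 * coldBytes / totalBytes);
    }
}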
6 Evaluation

In this section, we first present our experimental platform and methodology, followed by a description of the workloads used, and then we give our experimental results. Our goal is to answer five high-level sets of questions:

• How much energy is GreenHDFS able to conserve compared to a baseline HDFS with no energy management?
• What is the penalty of energy management on the average response time?
• What is the sensitivity of the energy-savings results to the various policy thresholds used in GreenHDFS?
• How many power state transitions does a server go through, on average, in the Cold zone?
• What is the number of accesses to the files in the Cold zone, the number of days servers are in an active power state, and the number of migrations and reversals observed in the system?

The following evaluation sections answer these questions, beginning with a description of our methodology and the trace workloads we use as inputs to the experiments.

6.1 Evaluation methodology

We evaluated GreenHDFS using a trace-driven simulator driven by real-world HDFS traces generated by a production Hadoop cluster at Yahoo!. The cluster had 2,600 servers, hosted 34 million files in the namespace, and its data set size was 6 petabytes. We focused our analysis on directory d, as this directory constituted 60% of the used storage capacity in the cluster (4 PB out of the 6 PB total used capacity). Focusing only on directory d cut down our simulation time significantly and reduced our analysis time, an important consideration given the massive scale of the traces. We used 60% of the total cluster nodes in our analysis to make the results realistic for a directory-d-only analysis. The total number of unique files seen in the HDFS traces for directory d over the one-month duration was 0.9 million. In our experiments, we compare GreenHDFS to the baseline case (HDFS without energy management). The baseline results give us the upper bound for energy consumption and the lower bound for average response time.

Simulation Platform: We used a trace-driven simulator for GreenHDFS to perform our experiments, with models for the power levels, power-state transition times, and access times of the disks, processors, and DRAM. The GreenHDFS simulator was implemented in Java and MySQL distribution 5.1.41 and executed using the Java 2 SDK, version 1.6.0-17. Table 1 lists the power draws, latencies, and transition times used in the simulator; both performance and energy statistics were calculated from information extracted from the datasheets of the Seagate Barracuda ES.2 1 TB SATA hard drive and the quad-core Intel Xeon X5400 processor. The simulator was run on 10 nodes in a development cluster at Yahoo!.

Table 1. Power draws and power-on penalties used in the simulator.
Component                                        | Active Power (W) | Idle Power (W) | Sleep Power (W) | Power-up time
CPU (quad-core Intel Xeon X5400 [23])            | 80-150           | 12.0-20.0      | 3.4             | 30 us
DRAM DIMM [29]                                   | 3.5-5            | 1.8-2.5        | 0.2             | 1 us
NIC [35]                                         | 0.7              | 0.3            | 0.3             | NA
SATA HDD (Seagate Barracuda ES.2 1TB [17])       | 11.16            | 9.29           | 0.99            | 10 sec
PSU [3]                                          | 50-60            | 25-35          | 0.5             | 300 us
Hot server (2 CPUs, 8 DRAM DIMMs, 4 1TB HDDs)    | 445.34           | 132.46         | 13.16           | -
Cold server (2 CPUs, 8 DRAM DIMMs, 12 1TB HDDs)  | 534.62           | 206.78         | 21.08           | -

6.2 Simulator Parameters

The default simulation parameters used in this paper are shown in Table 2.

Table 2. Simulator parameters.
Parameter          | Value
NumServer          | 1560
NumZones           | 2
IntervalFMP        | 1 day
ThresholdFMP       | 5, 10, 15, 20 days
IntervalSPC        | 1 day
ThresholdSPC       | 2, 4, 6, 8 days
IntervalFRP        | 1 day
ThresholdFRP       | 1, 5, 10 accesses
NumServersPerZone  | Hot: 1170, Cold: 390
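The simulator charges each server for the time it spends in each power state using the Table 1 figures. The sketch below shows the kind of per-server energy and cost accounting this implies; it is a simplified illustration (transition energy and access latencies are omitted), not the actual GreenHDFS simulator.

// Per-server energy accounting: integrate power over the time spent in each state
// and convert to cost at the assumed electricity price. Values in main come from
// the Table 1 hot-server row; the busy/idle split is an arbitrary example.
class ServerEnergyModel {
    enum State { ACTIVE, IDLE, SLEEP }

    final double activeW, idleW, sleepW;
    double energyWh = 0.0;

    ServerEnergyModel(double activeW, double idleW, double sleepW) {
        this.activeW = activeW; this.idleW = idleW; this.sleepW = sleepW;
    }

    void account(State s, double hours) {
        double watts = (s == State.ACTIVE) ? activeW : (s == State.IDLE) ? idleW : sleepW;
        energyWh += watts * hours;
    }

    double costUsd(double usdPerKWh) { return energyWh / 1000.0 * usdPerKWh; }

    public static void main(String[] args) {
        ServerEnergyModel hot = new ServerEnergyModel(445.34, 132.46, 13.16);
        hot.account(State.ACTIVE, 8 * 30);   // e.g., 8 busy hours/day for a month
        hot.account(State.IDLE,  16 * 30);
        System.out.printf("monthly energy cost: $%.2f%n", hot.costUsd(0.063));
    }
}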
6.3 Simulation results

6.3.1 Energy-Conservation

In this section, we show the energy savings made possible by GreenHDFS, compared to the baseline, in a one-month simulation, simply by doing power management in one of the main tenant directories of the Hadoop cluster. The cost of electricity was assumed to be $0.063/KWh. Figure 9 (Left) shows a 24% reduction in the energy consumption of a 1,560-server datacenter with 80% capacity utilization. Extrapolating, $2.1 million can be saved in energy costs if the GreenHDFS technique is applied to all the Hadoop clusters at Yahoo! (upwards of 38,000 servers). The energy savings from powered-off servers will be further compounded in the cooling system of a real datacenter: for every Watt of power consumed by the compute infrastructure, a modern datacenter expends another one-half to one Watt to power the cooling infrastructure [32]. These energy-saving results underscore the importance of supporting access-time recording in Hadoop compute clusters.

Figure 9. (Left) Energy savings with GreenHDFS and (Middle) days servers in the Cold zone were ON, compared to the baseline. Energy cost savings are minimally sensitive to the policy threshold values; GreenHDFS achieves 24% savings in energy costs in one month simply by doing power management in one of the main tenant directories of the Hadoop cluster. (Right) Number of migrations and reversals in GreenHDFS with different values of the ThresholdFMP threshold.
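As a rough consistency check of the extrapolated annual savings, assume every server draws roughly the Table 1 hot-server active power (the simulator's model is, of course, more detailed and accounts for idle and sleeping servers):

\[
38{,}000 \times 445.34\,\mathrm{W} \approx 16.9\,\mathrm{MW},\qquad
16.9\,\mathrm{MW} \times 8760\,\mathrm{h} \approx 1.48\times 10^{8}\,\mathrm{kWh/yr},
\]
\[
1.48\times 10^{8}\,\mathrm{kWh} \times \$0.063/\mathrm{kWh} \approx \$9.3\,\mathrm{M/yr},\qquad
0.24 \times \$9.3\,\mathrm{M} \approx \$2.2\,\mathrm{M/yr},
\]

which is in line with the reported figure of $2.1 million per annum.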
The Figure 9 (right-most) shows the total number of migrations of the files which were deemed cold by the file migration policy and the reversals of the moved files in case they were later accessed by a client in the one-month simulation run. There were more instances (40,170, i.e., 4% of overall file count) of file reversals with the most aggressive T hresholdF M P of 5 days. With less aggressive T hresholdF M P of 15 days, the number of reversals in the system went down to 6,548 (i.e., 0.7% of file count). The experiments were done with a T hresholdF RP value of 1. The number of file reversals are substantially reduced by increasing the T hresholdF RP value. With a T hresholdF RP value of 10, zero reversals happen in the system. The storage-efficiency is sensitive to the value of the T hresholdF M P threshold as shown in Figure 10[Left]. An increase in the T hresholdF M P value results in less efficient capacity utilization of the Hot Zones. Higher value of T hresholdF M P threshold signifies that files will be chosen as candidates for migration only after they have been dormant in the system for a longer period of time. This would be an overkill for files with very short F ileLif espanCLR as they will unnecessarily lie dormant in the system, occupying precious Hot zone capacity for a longer period of time. T hresholdSCP : As Figure 12(Right) illustrates, increasing the T hresholdSCP value, minimally increases the number of the days the servers in the Cold Zone remain ON and hence, minimally lowers the energy savings. On the other hand, increasing the T hresholdSCP value results in a reduction in the power state transitions which improves the performance of the accesses to the Cold Zone. Thus, a trade-off needs to be made between energy-conservation and data access performance. Summary on Sensitivity Analysis: From the above evaluation, it is clear that a trade-off needs to be made in choosing the right thresholds in GreenHDFS based on an enterprise’s needs. If Hot zone space is at a premium, more other one-half to one Watt to power the cooling infrastructure [32]. Energy-saving results underscore the importance of supporting access time recording in the Hadoop compute clusters. 6.3.2 Storage-Efficiency In this section, we show the increased storage efficiency of the Hot Zones compared to baseline. Figure 10 shows that in the baseline case, the average capacity utilization of the 1560 servers is higher than that of GreenHDFS which just has 1170 servers out of the 1560 servers provisioned to the Hot second Zone. GreenHDFS has much higher amount of free space available in the Hot zone which tremendously increases the potential for better data placement techniques on the Hot zone. More aggressive the policy threshold, more space is available in the Hot zone for truly hot data as more data is migrated out to the Cold zone. 6.3.3 File Migrations and Reversals The Figure 10 (right-most) shows the number and total size of the files which were migrated to the Cold zone daily with a T hresholdF M P value of 10 Days. Every day, on average 6.38TB worth of data and 28.9 thousand files are migrated to the Cold zone. Since, we have assumed storage-heavy servers in the Cold zone where each server has 12, 1TB disks, assuming 80MB/sec of disk bandwidth, 6.38TB data can be absorbed in less than 2hrs by one server. The migration policy can be run during off-peak hours to minimize any performance impact. 
6.3.4 Impact of Power Management on Response Time

We examined the impact of server power management on the response time of a file that was moved to the Cold zone following a period of dormancy and was then accessed again. Files residing in the Cold zone may suffer performance degradation in two ways: 1) if a file resides on a server that is not currently powered on, the access incurs a server wakeup-time penalty, and 2) transfer time degrades because there is no striping in the lower zone. Such a file is moved back to the Hot zone and chunked again by the File Reversal Policy. Figure 11 shows the impact on the average response time: 97.8% of the total read requests are not impacted by the power management, and an impact is seen only by 2.1% of the reads. With a less aggressive ThresholdFMP (15 or 20 days), the impact on response time is reduced much further.

Figure 11. Performance analysis: impact on response time because of power management with a ThresholdFMP of 10 days. 97.8% of the total read requests are not impacted by the power management; an impact is seen only by 2.1% of the reads. With a less aggressive ThresholdFMP (15, 20 days), the impact on response time reduces much further.

6.3.5 Sensitivity Analysis

We tried different values of the thresholds for the File Migration Policy and the Server Power Conserver Policy to understand the sensitivity of storage-efficiency, energy-conservation, and the number of power state transitions to these thresholds. A discussion of the impact of the various thresholds appears in Section 4.1.4.

ThresholdFMP: We found that the energy costs are minimally sensitive to the ThresholdFMP value. As shown in Figure 9 (Left), the energy cost savings varied minimally when ThresholdFMP was changed to 5, 10, 15, and 20 days. The performance impact and the number of file reversals are minimally sensitive to the ThresholdFMP value as well. This behavior can be explained by the observation that the majority of the data in the production Hadoop cluster at Yahoo! has a news-server-like access pattern: once data is deemed cold, there is a low probability of it being accessed again.

Figure 9 (right) shows the total number of migrations of files deemed cold by the File Migration Policy, and the reversals of the moved files that were later accessed by a client, in the one-month simulation run. There were more instances of file reversals (40,170, i.e., 4% of the overall file count) with the most aggressive ThresholdFMP of 5 days. With a less aggressive ThresholdFMP of 15 days, the number of reversals in the system went down to 6,548 (i.e., 0.7% of the file count). These experiments were done with a ThresholdFRP value of 1; the number of file reversals is substantially reduced by increasing the ThresholdFRP value, and with a ThresholdFRP value of 10, zero reversals happen in the system.

The storage-efficiency is sensitive to the value of ThresholdFMP, as shown in Figure 10 (Left). An increase in the ThresholdFMP value results in less efficient capacity utilization of the Hot zone: a higher ThresholdFMP means that files are chosen as candidates for migration only after they have been dormant in the system for a longer period of time, which is overkill for files with a very short FileLifeSpanCLR, as they unnecessarily lie dormant in the system, occupying precious Hot zone capacity for a longer period of time.

ThresholdSPC: As Figure 12 (Right) illustrates, increasing the ThresholdSPC value minimally increases the number of days the servers in the Cold zone remain ON and hence minimally lowers the energy savings. On the other hand, increasing the ThresholdSPC value results in a reduction in power state transitions, which improves the performance of accesses to the Cold zone. Thus, a trade-off needs to be made between energy conservation and data-access performance.

Summary of the Sensitivity Analysis: From the above evaluation, it is clear that a trade-off needs to be made in choosing the right thresholds in GreenHDFS based on an enterprise's needs. If Hot zone space is at a premium, a more aggressive ThresholdFMP needs to be used; this can be done without impacting the energy conservation that can be derived from GreenHDFS.

Figure 12. Sensitivity analysis: sensitivity of the number of servers used in the Cold zone, the number of power state transitions, and the capacity per zone to the File Migration Policy's age threshold and the Server Power Conserver Policy's access threshold.
6.3.6 Number of Server Power Transitions

Figure 13 shows the number of power transitions incurred by the servers in the Cold zone. Frequently starting and stopping disks is suspected to affect disk longevity, and the number of start/stop cycles a disk can tolerate during its service lifetime is still limited; making power transitions infrequently reduces the risk of running into this limit. The maximum number of power state transitions incurred by a server in a one-month simulation run is just 11, and only 1 server out of the 390 servers provisioned in the Cold zone exhibited this behavior. Most disks are designed for a maximum service lifetime of 5 years and can tolerate 500,000 start/stop cycles. Given the very small number of transitions incurred by a Cold zone server in a year, GreenHDFS runs no risk of exceeding the start/stop cycle limit during the service lifetime of the disks.

Figure 13. Cold zone behavior: number of times servers transitioned power state with a ThresholdFMP of 10 days. We only show those servers in the Cold zone that either received newly cold data or had data accesses targeted to them in the one-month simulation run.

7 Related Work

Management of the energy, peak power, and temperature of data centers and warehouses is becoming the target of an increasing number of research studies. However, to the best of our knowledge, none of the existing systems exploit data-classification-driven data placement to derive energy efficiency, nor do they have a file-system-managed, multi-zoned, hybrid data center layout.

Most of the prior work focuses on workload placement and migration to cut energy costs in a data center [7, 10, 12, 13, 30, 34, 36]. As discussed in the Introduction, workload-migration-based techniques are not feasible in stateful, data-intensive compute clusters such as a Hadoop cluster or the Google cluster; these clusters present significant challenges to prior scale-down techniques. Chase et al. [11] do energy-conscious provisioning, which configures switches to concentrate request load on a minimal active set of servers for the current aggregate load level. Bash et al. [7] allocate heavy computational, long-running workloads onto servers that are in more thermally efficient places. Le et al. [26] focus on a multi-datacenter internet service and exploit the inherent heterogeneity of the datacenters in electricity pricing, time-zone differences, and collocation with renewable energy sources to reduce energy consumption without impacting the SLA requirements of the applications. Chun et al. [14] propose a hybrid datacenter comprising low-power Atom processors and high-power, high-performance Xeon processors; however, they do not specify any zoning in the system and focus more on task migration than on data migration. Narayanan et al. [31] offload the write workload of one volume to other storage elsewhere in the data center. Meisner et al. [28] reduce power costs by transitioning servers to a "PowerNap" state whenever there is a period of low utilization. In addition, there is research on hardware-level techniques such as dynamic voltage scaling as a mechanism to reduce peak power consumption in datacenters [9, 16], and Raghavendra et al. [33] coordinate hardware-level power capping with virtual-machine dispatching mechanisms. Managing temperature is the subject of the systems proposed in [21]. These techniques can be incorporated in the Hot zone to derive additional energy cost savings in GreenHDFS.

Recent research on increasing energy efficiency in GFS- and HDFS-managed clusters [4, 27] proposes maintaining a primary replica of the data on a small covering subset of nodes that are guaranteed to be on and that represent the lowest power setting; the remaining replicas are stored on a larger set of secondary nodes, and performance is scaled up by increasing the number of secondary nodes. However, these solutions suffer from degraded write performance and increased DFS code complexity, and they do not do any data differentiation, treating all the data in the system alike. Existing highly scalable file systems such as the Google File System [20] and HDFS [37] do not do energy management.
Recently, an energy-efficient log-structured file system was proposed by Ganesh et al. [19]. However, their solution aims to concentrate load on one disk at a time and hence may have availability and performance implications. We take an almost opposite approach: we chunk and replicate the hot data across the nodes in the Hot zone to ensure high performance, and only the cold data is concentrated on a few servers in the Cold zone.

8 Conclusion and Future Work

We presented a detailed evaluation and sensitivity analysis of GreenHDFS, a policy-driven, self-adaptive variant of the Hadoop Distributed File System. GreenHDFS relies on data-classification-driven data placement to realize guaranteed, substantially long periods of idleness in a significant subset of servers in the datacenter. Detailed experimental results with real-world traces from a production Yahoo! Hadoop cluster show that GreenHDFS is capable of achieving a 24% savings in the energy costs of a Hadoop cluster by doing power management in only one of the main top-level tenant directories in the cluster; these savings will be further compounded by savings in cooling costs. A detailed lifespan analysis of the files in a large-scale production Hadoop cluster at Yahoo! points to the viability of GreenHDFS. The evaluation results show that GreenHDFS is able to meet all the scale-down mandates (i.e., it generates significant idleness in the cluster, results in very few power state transitions, and does not degrade write performance) in spite of the unique scale-down challenges present in a Hadoop cluster.

9 Acknowledgement

This work was supported by NSF grant CNS 05-51665 and an internship at Yahoo!. The views and conclusions contained in this paper are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of NSF or the U.S. government.

References

[1] http://hadoop.apache.org/.
[2] http://wiki.apache.org/hadoop/PoweredBy.
[3] Introduction to power supplies. National Semiconductor, 2002.
[4] H. Amur, J. Cipar, V. Gupta, G. R. Ganger, M. A. Kozuch, and K. Schwan. Robust and flexible power-proportional storage. In Proceedings of the 1st ACM Symposium on Cloud Computing, SoCC '10, pages 217-228, New York, NY, USA, 2010. ACM.
[5] E. Baldeschwieler. Hadoop Summit, 2010.
[6] L. A. Barroso and U. Hölzle. The case for energy-proportional computing. Computer, 40(12), 2007.
[7] C. Bash and G. Forman. Cool job allocation: Measuring the power savings of placing jobs at cooling-efficient locations in the data center. In USENIX Annual Technical Conference, ATC '07, pages 363-368, 2007.
[8] C. Belady. In the data center, power and cooling costs more than the IT equipment it supports. Electronics Cooling, February 2010.
[9] D. Brooks and M. Martonosi. Dynamic thermal management for high-performance microprocessors. In Proceedings of the 7th International Symposium on High-Performance Computer Architecture, HPCA '01, pages 171-, Washington, DC, USA, 2001. IEEE Computer Society.
[10] J. S. Chase, D. C. Anderson, P. N. Thakar, A. M. Vahdat, and R. P. Doyle. Managing energy and server resources in hosting centers. In Proceedings of the Eighteenth ACM Symposium on Operating Systems Principles, SOSP '01, pages 103-116, New York, NY, USA, 2001. ACM.
[11] J. S. Chase and R. P. Doyle. Balance of power: Energy management for server clusters. In HotOS '01: Workshop on Hot Topics in Operating Systems.
[12] G. Chen, W. He, J. Liu, S. Nath, L. Rigas, L. Xiao, and F. Zhao. Energy-aware server provisioning and load dispatching for connection-intensive internet services. In Proceedings of the 5th USENIX Symposium on Networked Systems Design and Implementation, NSDI '08, pages 337-350, Berkeley, CA, USA, 2008. USENIX Association.
[13] Y. Chen, A. Das, W. Qin, A. Sivasubramaniam, Q. Wang, and N. Gautam. Managing server energy and operational costs in hosting centers. In Proceedings of the 2005 ACM SIGMETRICS International Conference on Measurement and Modeling of Computer Systems, SIGMETRICS '05, pages 303-314, New York, NY, USA, 2005. ACM.
[14] B.-G. Chun, G. Iannaccone, G. Iannaccone, R. Katz, G. Lee, and L. Niccolini. An energy case for hybrid datacenters. SIGOPS Oper. Syst. Rev., 44:76-80, March 2010.
[15] J. Dean and S. Ghemawat. MapReduce: Simplified data processing on large clusters. In Proceedings of the 6th Conference on Symposium on Operating Systems Design and Implementation, OSDI '04. USENIX Association, 2004.
[16] M. E. Femal and V. W. Freeh. Boosting data center performance through non-uniform power allocation. In ICAC '05: Proceedings of the Second International Conference on Automatic Computing, Washington, DC, USA, 2005. IEEE Computer Society.
[17] Seagate Barracuda ES.2 datasheet. http://www.seagate.com/staticfiles/support/disc/manuals/nl35 series & bc es series/barracuda es.2 series/100468393e.pdf, 2008.
[18] X. Fan, W.-D. Weber, and L. A. Barroso. Power provisioning for a warehouse-sized computer. In ISCA '07: Proceedings of the 34th Annual International Symposium on Computer Architecture, pages 13-23, New York, NY, USA, 2007. ACM.
[19] L. Ganesh, H. Weatherspoon, M. Balakrishnan, and K. Birman. Optimizing power consumption in large scale storage systems. In Proceedings of the 11th USENIX Workshop on Hot Topics in Operating Systems, pages 9:1-9:6, Berkeley, CA, USA, 2007. USENIX Association.
[20] S. Ghemawat, H. Gobioff, and S.-T. Leung. The Google file system. In Proceedings of the Nineteenth ACM Symposium on Operating Systems Principles, SOSP '03, pages 29-43, New York, NY, USA, 2003. ACM.
[21] T. Heath, A. P. Centeno, P. George, L. Ramos, Y. Jaluria, and R. Bianchini. Mercury and Freon: Temperature emulation and management for server systems. In Proceedings of the 12th International Conference on Architectural Support for Programming Languages and Operating Systems, ASPLOS-XII, pages 106-116, New York, NY, USA, 2006. ACM.
[22] U. Hölzle and L. Barroso. The Datacenter as a Computer: An Introduction to the Design of Warehouse-Scale Machines. Morgan and Claypool Publishers, May 29, 2009.
[23] Intel. Quad-core Intel Xeon processor 5400 series. 2008.
[24] R. T. Kaushik and M. Bhandarkar. GreenHDFS: Towards an energy-conserving, storage-efficient, hybrid Hadoop compute cluster. In Proceedings of the 2010 International Conference on Power Aware Computing and Systems, HotPower '10, pages 1-9, Berkeley, CA, USA, 2010. USENIX Association.
[25] K. Shvachko, H. Kuang, S. Radia, and R. Chansler. The Hadoop distributed file system. MSST, 2010.
[26] K. Le, R. Bianchini, M. Martonosi, and T. Nguyen. Cost- and energy-aware load distribution across data centers. In HotPower, 2009.
[27] J. Leverich and C. Kozyrakis. On the energy (in)efficiency of Hadoop clusters. SIGOPS Oper. Syst. Rev., 44:61-65, March 2010.
[28] D. Meisner, B. T. Gold, and T. F. Wenisch. PowerNap: Eliminating server idle power. In Proceedings of the 14th International Conference on Architectural Support for Programming Languages and Operating Systems, ASPLOS '09, pages 205-216, New York, NY, USA, 2009. ACM.
[29] Micron. DDR2 SDRAM SODIMM. 2004.
[30] J. Moore, J. Chase, P. Ranganathan, and R. Sharma. Making scheduling "cool": Temperature-aware workload placement in data centers. In Proceedings of the Annual Conference on USENIX Annual Technical Conference, ATEC '05, pages 5-5, Berkeley, CA, USA, 2005. USENIX Association.
[31] D. Narayanan, A. Donnelly, and A. Rowstron. Write off-loading: Practical power management for enterprise storage. ACM Transactions on Storage, 4(3):1-23, 2008.
[32] C. Patel, E. Bash, R. Sharma, and M. Beitelmal. Smart cooling of data centers. In Proceedings of the Pacific Rim/ASME International Electronics Packaging Technical Conference and Exhibition, IPACK '03, 2003.
[33] R. Raghavendra, P. Ranganathan, V. Talwar, Z. Wang, and X. Zhu. No "power" struggles: Coordinated multi-level power management for the data center. In ASPLOS XIII: Proceedings of the 13th International Conference on Architectural Support for Programming Languages and Operating Systems, pages 48-59, New York, NY, USA, 2008. ACM.
[34] R. K. Sharma, C. E. Bash, C. D. Patel, R. J. Friedrich, and J. S. Chase. Balance of power: Dynamic thermal management for internet data centers. IEEE Internet Computing, 9:42-49, 2005.
[35] SMSC. LAN9420/LAN9420i single-chip Ethernet controller with HP Auto-MDIX support and PCI interface. 2008.
[36] N. Tolia, Z. Wang, M. Marwah, C. Bash, P. Ranganathan, and X. Zhu. Delivering energy proportionality with non energy-proportional systems: Optimizing the ensemble. In Proceedings of the 2008 Conference on Power Aware Computing and Systems, HotPower '08, pages 2-2, Berkeley, CA, USA, 2008. USENIX Association.
[37] T. White. Hadoop: The Definitive Guide. O'Reilly Media, May 2009.