WattDB

Daniel Schall, Volker Höfner, Prof. Dr. Theo Härder
TU Kaiserslautern
Outline
 Energy efficiency in database systems
 Multi-Core vs. Cluster
 WattDB
 Recent Work
 Current Work
 Future Work
Motivation
 More and more data
 Bigger servers
 In-memory technology
 Electricity Cost
Power Breakdown
 Load between 0 and 50 %
 Energy consumption: 50 – 90 % of peak!
"Analyzing the Energy Efficiency of a Database Server", D. Tsirogiannis, S. Harizopoulos, and M. A. Shah, SIGMOD 2010
"Distributed Computing at Multi-dimensional Scale", Alfred Z. Spector, keynote at MIDDLEWARE 2008
Growth of Main Memory makes it worse
[Figure: power (Watt) vs. system utilization (%): measured power@utilization stays high even at low utilization, far above energy-proportional behavior]
 In-memory data management assumes continuous peak loads!
 Energy consumption of memory grows linearly with size and dominates all other components across all levels of system utilization
Mission: Energy-Efficiency!
 Energy cost > HW and SW cost
 Energy Efficiency = Work / Energy Consumption (sketch below)
 "Green IT"
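As a concrete reading of this metric, a minimal Python sketch; measuring work in tuples and energy in joules is our assumption, not the deck's:

    # Minimal sketch of the slide's metric: efficiency = work / energy consumed.
    # Treating "work" as tuples processed and energy in joules is an assumption.
    def energy_efficiency(work_done, energy_joules):
        """Work accomplished per joule of energy consumed."""
        return work_done / energy_joules

    # Example: 1,000,000 tuples processed while drawing 200 W for 60 s (= 12,000 J).
    print(energy_efficiency(1_000_000, 200 * 60))  # ~83.3 tuples per joule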
Average Server Utilization
 Google Servers: load at about 30 %
 SPH AG: load between 5 and 30 %
Energy Efficiency - Related Work
 Software
 Delaying queries (see the sketch after this list)
 Optimize external storage access patterns
 Force sleep states
 "Intelligent" data placement
 Hardware
 Sleep states
 Optimize energy consumption when idle
 Select energy-efficient hardware
 Dynamic Voltage Scaling
 All are narrow approaches that yield only small improvements
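To make the "delaying queries" idea concrete, a purely illustrative Python sketch (not from any of the cited systems): hold incoming queries and run them in batches so disks and CPUs can enter sleep states in between:

    import time

    # Illustrative sketch of query delaying: defer queries and execute them
    # in batches so hardware can sleep between batches. The interval and the
    # print() stand-in for real execution are assumptions.
    BATCH_INTERVAL_S = 5.0
    pending = []

    def submit(query):
        pending.append(query)  # defer instead of executing immediately

    def drain():
        batch, pending[:] = pending[:], []
        for q in batch:
            print("executing", q)  # stand-in for real query execution

    submit("SELECT ...")
    submit("SELECT ...")
    time.sleep(BATCH_INTERVAL_S)  # queries wait; disks may spin down meanwhile
    drain()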
Goal: Energy-Proportionality
[Figure: power (Watt) vs. system utilization (%): the power@utilization curve with (1) marking idle power consumption and (2) marking disproportional growth, contrasted with energy-proportional behavior]
1) reduce idle power consumption
2) eliminate disproportional energy consumption
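A hypothetical way to quantify both goals, assuming we can sample power at a few utilization levels (the sample curve below is invented):

    # Hypothetical sketch: score a measured power curve against the ideal
    # energy-proportional line (power rising linearly from 0 W to peak).
    def proportionality_gap(curve, peak_w):
        """Average deviation from the ideal linear curve, as a fraction of peak."""
        gaps = [(watts - util * peak_w) / peak_w for util, watts in curve]
        return sum(gaps) / len(gaps)

    # (utilization, watts) samples; high idle power (goal 1) dominates the gap.
    measured = [(0.0, 60), (0.25, 70), (0.5, 80), (0.75, 90), (1.0, 100)]
    print(proportionality_gap(measured, 100))  # 0.3: far from proportional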
From Multi-Core to Multi-Node
[Figure: a multi-core server (CPU cores with L1/L2/L3 caches and main memory) decomposed into several single-CPU nodes connected by a 1Gb Ethernet switch; inset: power (Watt) vs. system utilization (%) curve]
A dynamic cluster of wimpy nodes
 energy-proportional DBMS
[Figure: load varying over time, which the dynamic cluster follows]
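A minimal sketch of the idea, assuming a per-node capacity and a simple sizing rule (both invented, not WattDB's actual policy):

    import math

    NODE_CAPACITY = 1.0  # work units one wimpy node can sustain (assumed)

    def target_node_count(load, headroom=0.2):
        """Nodes needed to serve the current load with some spare capacity."""
        return max(1, math.ceil(load * (1 + headroom) / NODE_CAPACITY))

    # As the load curve rises and falls, the active node count follows it.
    for load in [0.3, 1.5, 3.8, 2.0, 0.5]:
        print(load, "->", target_node_count(load), "nodes")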
Cluster Overview
 Lightweight nodes, low-power hardware
 Each node
 Intel Atom D510 CPU
 2 GB DRAM
 80plus Gold power supply
 1Gbit Ethernet interconnect
 23 W (idle) - 26 W (100% CPU)
 41 W (100% CPU + disks)
 Considered Amdahl-balanced
 Scale the CPUs down to match the disks and network!
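A back-of-the-envelope estimate using the per-node power figures above (node counts and the active/idle split are invented for illustration):

    IDLE_W, FULL_W = 23, 41  # per-node power figures from this slide

    def cluster_power(active, idle):
        """Estimated draw: `active` fully loaded nodes plus `idle` nodes kept on."""
        return active * FULL_W + idle * IDLE_W

    print(cluster_power(2, 8))   # light load: 2*41 + 8*23 = 266 W
    print(cluster_power(10, 0))  # peak load: 10*41 = 410 W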
Shared Disk AND Shared Nothing
 Physical hardware layout: Shared Disk
 every node can access every page
 local vs. remote latency
 Logical implementation: Shared Nothing
 data is mapped to nodes n:1 (each partition has exactly one owner)
 exclusive access
 transfer of control (see sketch below)
 Combine the benefits of both worlds!
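A hypothetical sketch of this hybrid: any node could physically reach every page, but each partition has exactly one logical owner at a time, and control (not data) moves between nodes:

    # Hypothetical sketch; names and structure are assumptions, not WattDB code.
    owner = {"part_a": "node1", "part_b": "node1", "part_c": "node2"}

    def access(partition, node):
        """Shared-nothing discipline: only the owner may touch a partition."""
        if owner[partition] != node:
            raise PermissionError(f"{partition} is owned by {owner[partition]}")
        # ... node reads/writes the partition's pages on the shared disks ...

    def transfer_control(partition, new_node):
        """Hand a partition over (e.g., before powering its owner down)."""
        owner[partition] = new_node  # only ownership moves; pages stay on disk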
Recent Work
 SIGMOD 2010 Programming Contest
 First prototype
 distributed DBMS
 BTW 2011 Demo Track
 Master node powering the cluster up/down according to load
 SIGMOD 2011 Demo Track
 Energy-proportional query processing
Current Work
 Incorporate GPU operators
 improved energy-efficiency?
 more tuples/Watt?
 Monitoring & load forecasting (sketch after this list)
 For management decisions
 act instead of react
 Energy-Proportional Storage
 storage needs vs. processing needs
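For the forecasting part, a simple exponential-smoothing sketch; the model, alpha, and sample values are assumptions, and the real monitoring component may use a different model:

    # Assumed model: simple exponential smoothing over recent load samples,
    # so the master can power nodes up before the load actually arrives.
    def forecast(samples, alpha=0.3):
        """One-step-ahead load forecast from a series of samples."""
        value = samples[0]
        for s in samples[1:]:
            value = alpha * s + (1 - alpha) * value
        return value

    history = [0.4, 0.5, 0.7, 0.9, 1.2]  # invented load samples
    print(forecast(history))  # ~0.80: act now, before load exceeds capacity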
Future Work
 Policies for powering up / down nodes
 Load distribution and balancing among nodes
 Which use cases fit the proposed architecture, and which don't?
 Alternative hardware configurations
 Heterogeneous HW environment
 SSDs, other CPUs
 Energy-efficient self-tuning
Current Work
[Figure: a table split into three partitions, assigned to Node1, Node2, and Node3]
Future Work
[Figure: the same table and partitions across Node1, Node2, and Node3]
Conclusion
 Energy consumption matters!
 Current HW is not energy-proportional
 Systems run at 20 – 50 % utilization most of the time
 WattDB as a prototype for an energy-proportional DBMS
 Several challenges ahead
Thank You!
Energy Proportionality on a Cluster Scale