Power Management in Cloud Computing using Green Algorithm

-Kushal Mehta
COP 6087
University of Central Florida
Motivation
• Global warming, driven largely by carbon emissions, is one of the greatest
environmental challenges today.
• A large number of cloud computing systems waste a tremendous
amount of energy and emit a considerable amount of carbon dioxide.
• Modern data centers operating under the cloud model host a variety of
applications.
• Green cloud computing aims not only to allocate resources efficiently but
also to minimize energy consumption, ensuring that the future growth of
cloud computing is sustainable.
What is green computing?
• Green computing is defined as the study and practice of designing, manufacturing, using,
and disposing of computers, servers, and associated subsystems efficiently and
effectively, with minimal or no impact on the environment.
• Research continues into key areas such as making the use of computers as energy
efficient as possible and designing algorithms and systems for efficiency-related
computer technologies.
• The different approaches to green computing are as follows:
-Product longevity
-Algorithmic Efficiency
-Resource allocation
-Virtualization
-Power management
Architecture of a Green Cloud Computing
Platform
• Consumers/Brokers: submit service requests to the cloud.
• Green service allocator: acts as an interface between the cloud and the consumers
-Green negotiator: Negotiates with consumers/brokers to finalize the SLA.
-Service analyzer: Interprets and analyzes the service requirements of a
submitted request before deciding to accept or reject it.
-Consumer profiler: Gathers specific characteristics of the consumers.
-Pricing: Decides how service requests are charged.
-Energy monitor: Determines which physical machine to power on/off.
-Service scheduler: Assigns requests to VMs and decides when VMs need to be
added or removed.
-VM manager: Keeps track of the available VMs and their resource entitlements.
-Accounting: Maintains actual usage of resources to compute user costs.
• Virtual machines: VMs can be dynamically started or stopped on a physical machine to meet
accepted service requests.
• Physical machines: The underlying physical computing servers provide the hardware
infrastructure for creating virtual resources to meet service demands.
Approaches to make cloud computing more green
• Dynamic voltage and frequency scaling (DVFS): The clock frequency is scaled
down so that the supply voltage can also be reduced. However, this method
depends heavily on the hardware, cannot be adjusted to varying needs, and its
power savings are low compared to other methods.
• Resource allocation or virtual machine migration techniques: This method
involves the transferring of VMs in such a way that the power increase is the
least. The most power efficient nodes are selected and the VMs are transferred
across them.
• Algorithmic approach: It has been experimentally determined that an idle server
consumes about 70% of the power drawn by a fully utilized server.
• The author has devised an energy model on the basis that the processor
utilization has a linear relationship with the energy consumption. That is, for a
particular task, the information on its processing time and processor utilization is
sufficient to measure the energy consumption for that task.
• For a resource ri at any given time, the utilization Ui is defined as:
   Ui = Σ (j = 1 .. n) ui,j
where n is the number of tasks running at that time and ui,j is the usage of
resource ri by task tj.
• The energy consumption Ei of a resource ri at any given time is defined as:
   Ei = (pmax − pmin) × Ui + pmin
where pmax is the power consumption at peak load and pmin is the power
consumption at minimum (idle) load.
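This linear utilization/energy model can be sketched as follows (the pmax/pmin
values below are illustrative, not from the paper):

```python
# Linear energy model from the slides: E_i = (p_max - p_min) * U_i + p_min,
# where U_i is the summed usage u_{i,j} of the tasks on resource r_i.

def utilization(task_usages):
    """U_i: sum of the per-task usages u_{i,j} currently on resource r_i."""
    return sum(task_usages)

def energy(p_max, p_min, u_i):
    """E_i: power at utilization u_i, linear between p_min (idle) and p_max."""
    return (p_max - p_min) * u_i + p_min

# Illustrative numbers: a server peaking at 100 W and drawing 70 W at idle
# matches the "idle servers consume ~70% of peak" observation above.
print(energy(p_max=100.0, p_min=70.0, u_i=utilization([])))          # 70.0
print(energy(p_max=100.0, p_min=70.0, u_i=utilization([0.5, 0.5])))  # 100.0
```

With this model, knowing a task's processing time and processor utilization is
enough to attribute an energy cost to it, as the slides state.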
• The scheduling and resource allocation approach is primarily enabled using ‘slack reclamation’
with the support of dynamic voltage/frequency scaling incorporated into many recent
commodity processors. This technique temporarily decreases the supply voltage at the
expense of lower processing speed.
• Slack reclamation is made possible primarily by recent DVFS enabled processors and the
parallel nature of the deployed tasks. For example, when the execution of a task is dependent
on two predecessor tasks and these two tasks have different completion times, the
predecessor task with an earlier completion time can afford additional run time (slack).
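The slack idea above can be sketched numerically (illustrative values only): a
successor starts only after all its predecessors finish, so an early-finishing
predecessor can be slowed via DVFS by up to its slack without delaying anything.

```python
# Toy illustration of slack in a task graph: the predecessor that finishes
# early has "slack" that DVFS can reclaim by running it at a lower
# voltage/frequency, saving energy with no impact on overall completion time.

def slack(finish_times):
    """Slack of each predecessor = latest finish time - its own finish time."""
    latest = max(finish_times)
    return [latest - f for f in finish_times]

# Predecessor A finishes at t=4, B at t=10: A could stretch its execution by
# up to 6 time units with no overall delay.
print(slack([4, 10]))  # [6, 0]
```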
• Since most DVFS based energy aware scheduling and resource allocation
techniques are static (offline) algorithms that assume tight coupling
between tasks and resources (i.e., local tasks and dedicated resources), their
application to the cloud scenario is not straightforward, if possible at all.
• The task consolidating algorithm proposed has the main performance goal of
maximization of resource utilization and reduction of energy consumption.
• Unlike other task consolidation techniques, the proposed technique uses two
strategies: memory compression and request discrimination.
• The former converts spare CPU power into extra memory capacity so that more
(memory intensive) tasks can be consolidated, whereas the latter blocks
useless or unfavorable requests (e.g., from Web crawlers) to eliminate
unnecessary resource usage.
Task consolidation Algorithm
• Task consolidation is the process of allocating tasks to resources without violating
time constraints aiming to maximize resource utilization.
• Two energy-conscious task consolidation heuristics are proposed by the author
in the paper: 1) ECTC (Energy-Conscious Task Consolidation) and 2) MaxUtil
(Maximum Utilization).
• Both ECTC and MaxUtil follow similar steps in algorithm description with the
main difference being their cost functions.
• For a given task, both heuristics check every resource and identify the most
energy efficient one for that task. Which resource is judged most energy
efficient depends on the heuristic used, or more specifically on the cost
function it employs.
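The outer loop shared by the two heuristics can be sketched as follows
(function and variable names here are assumptions, not the paper's code):

```python
# Shared skeleton of ECTC/MaxUtil: score every resource for the incoming task
# with the heuristic's cost function and assign the task to the best scorer.

def best_resource(resources, task, cost_fn):
    """Return the resource maximizing cost_fn(resource, task)."""
    return max(resources, key=lambda r: cost_fn(r, task))

# Toy usage with a cost function that favors the most-utilized resource,
# mimicking MaxUtil's tendency to densify a small number of resources.
resources = [{"name": "r1", "util": 0.2}, {"name": "r2", "util": 0.6}]
chosen = best_resource(resources, task="t1", cost_fn=lambda r, t: r["util"])
print(chosen["name"])  # r2
```

Only the cost function changes between ECTC and MaxUtil; the selection loop is
the same.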
• The cost function of ECTC computes the actual energy consumption of the
current task minus the minimum energy (pmin) required to run it, accounting
for other tasks running in parallel with that task.
• The value fi,j of a task tj on a resource ri obtained using the cost function of ECTC
is defined as:
   fi,j = ((pmax − pmin) × uj + pmin) × τ0
          − (((pmax − pmin) × uj + pmin) × τ1 + (pmax − pmin) × uj × τ2)
where (pmax − pmin) is the difference between the peak-load and minimum-load
power, uj is the utilization rate of tj, τ0 is the total processing time of tj,
and τ1 and τ2 are the portions of that time during which tj runs alone and in
parallel with other tasks, respectively (τ0 = τ1 + τ2).
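A minimal sketch of the ECTC cost function as reconstructed above (names and
values are illustrative; the τ0 = τ1 + τ2 split into alone/parallel time is
taken from the paper's description). Algebraically the expression reduces to
pmin × τ2, i.e., the idle-power energy saved by overlapping with other tasks:

```python
# Sketch of the ECTC cost function: energy the task would use running alone
# for its whole processing time t0, minus the energy actually charged to it
# when it runs alone for t1 and overlapped with other tasks for t2.

def ectc_cost(p_max, p_min, u_j, t0, t1, t2):
    dp = p_max - p_min                       # power swing between idle and peak
    alone = (dp * u_j + p_min) * t0          # energy if t_j shared with nobody
    actual = (dp * u_j + p_min) * t1 + dp * u_j * t2  # p_min amortized over t2
    return alone - actual                    # = p_min * t2 when t0 = t1 + t2

# With p_min = 70 W and 6 s of overlap, the value is 70 * 6 = 420 J.
print(ectc_cost(p_max=100.0, p_min=70.0, u_j=0.5, t0=10.0, t1=4.0, t2=6.0))
```

A larger fi,j therefore means more idle power is shared, which is exactly what
the heuristic maximizes.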
• The cost function of MaxUtil is devised with the average utilization during the
processing time of the current task as its core component. This function aims to
increase consolidation density, and its advantage is twofold. The first and
obvious advantage is reduced energy consumption. The second is that MaxUtil's
cost function implicitly decreases the number of active resources, since it
tends to intensify the utilization of a small number of resources compared
with ECTC's cost function.
• The value fi,j of a task tj on a resource ri using the cost function of MaxUtil
is defined as:
   fi,j = ( Σ (τ = 1 .. τ0) Ui ) / τ0
i.e., the average utilization of ri over the processing time τ0 of tj.
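A sketch of MaxUtil's cost function under this reconstruction: average the
resource's utilization over the time units the task would occupy it (names and
sample values are illustrative):

```python
def maxutil_cost(utilizations):
    """MaxUtil value f_{i,j}: average utilization of resource r_i over the
    t0 time units task t_j would run there (the task's own usage included)."""
    return sum(utilizations) / len(utilizations)

# Utilization samples of 0.5, 0.7, 0.6 over three time units average to 0.6;
# a resource already carrying load scores higher and is densified first.
print(maxutil_cost([0.5, 0.7, 0.6]))
```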
Experimental evaluation
• The performance of ECTC and MaxUtil was thoroughly evaluated with a large
number of experiments using a diverse set of tasks. In addition to task
characteristics, three algorithms (random, ECTC and MaxUtil) were used. Variants
of these three algorithms were further implemented incorporating task
migration.
• The experiments covered 50 different task counts (100 to 5,000 at intervals
of 100), 10 mean inter-arrival times (between 10 and 100, drawn from a uniform
random distribution), and three kinds of resource usage patterns: random, low,
and high.
Results
Conclusion
• We can see from the graphs that the two proposed algorithms have energy
saving capabilities.
• ECTC and MaxUtil outperform the random algorithm by 18% and 13%,
respectively.
• Tasks with lower resource usage are more suitable for task consolidation, as
shown in the graphs.
• The algorithms proposed by the author successfully reduce power consumption
in cloud infrastructures, and this is experimentally verified.
• With better resource provisioning, they can also yield savings in other
operational costs.
• They also reduce the carbon footprint of cloud systems.
QUESTIONS??