International Research Journal of Emerging Trends in Multidisciplinary
ISSN 2395 - 4434
Volume 1, Issue 8 October 2015
www.irjetm.com
A Study on Energy Efficiency and Cost Efficiency
in Cloud Computing: A Survey
P. Manivelpandian, PG Scholar, Department of CSE, PSNA College of Engineering and Technology, Dindigul - 624619, India.
A. Sathya Sofia, Assistant Professor, Department of CSE, PSNA College of Engineering and Technology, Dindigul - 624619, India.
Abstract: Cloud computing is on-demand provisioning of virtual resources aggregated together so that, by specific contracts, users can lease access to their combined power. Cloud computing is a new service model for sharing a pool of computing resources that can be rapidly accessed based on converged infrastructure. The cloud offers benefits in terms of elasticity, maintenance cost, economies of scale, and virtualization flexibility. The issues that arise in cloud computing include cost optimization, energy consumption, and privacy. If energy consumption is reduced, cost is also reduced. To address these issues, a survey on energy consumption and cost optimization is presented.
I. INTRODUCTION

Cloud computing is defined as a type of computing that relies on sharing computing resources rather than having local servers or personal devices handle applications. Cloud computing is comparable to grid computing, a type of computing in which the unused processing cycles of all computers in a network are harnessed to solve problems too intensive for any stand-alone machine. Data centers are becoming increasingly popular for the provisioning of computing resources, and their cost and operational expenses have skyrocketed with the increase in computing capacity. The need for reducing energy consumption can be explained by the following facts [1]: a typical 5,000 ft² data center demands 1.127 MW of electrical power; in 2013, data centers in the US consumed approximately 91 billion kWh of electricity, equivalent to the annual output of 34 large (500 MW) coal-fired power plants; and data center electricity consumption [2] is projected to increase to roughly 140 billion kWh annually by 2020,
equivalent to the annual output of 50 power plants, costing American businesses 13 billion US dollars annually in electricity bills and emitting nearly 100 million metric tons of carbon pollution per year. Electricity cost has become an increasingly significant fraction of the total cost of ownership of current and future data centers. Another fact is that servers are busy only 10-30 percent of the time on average. As cloud computing is predicted to grow, substantial power consumption will result not only in huge operational cost but also in a tremendous amount of carbon dioxide (CO2) emissions. Therefore, energy and cost efficiency in cloud computing has become a vital concern.
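A quick back-of-the-envelope check of the figures above can be sketched as follows; the electricity price of $0.10 per kWh is an assumption for illustration, not a figure from this survey.

```python
# Illustrative arithmetic for the data center figures cited above.
# The $0.10/kWh electricity price is an assumed value.
HOURS_PER_YEAR = 24 * 365  # 8760

def annual_energy_kwh(power_mw: float) -> float:
    """Annual energy draw of a facility running at constant power."""
    return power_mw * 1000 * HOURS_PER_YEAR  # MW -> kW, times hours -> kWh

def annual_cost_usd(energy_kwh: float, price_per_kwh: float = 0.10) -> float:
    """Annual electricity bill at a flat (assumed) tariff."""
    return energy_kwh * price_per_kwh

# The 5,000 ft^2 data center drawing 1.127 MW:
energy = annual_energy_kwh(1.127)  # ~9.87 million kWh/year
cost = annual_cost_usd(energy)     # ~ $987,000/year at the assumed tariff
print(energy, cost)
```

Run continuously, even a single mid-size facility consumes millions of kWh per year, which is why idle-time waste (discussed later in this survey) matters so much.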
II. CLOUD COMPUTING AND ITS
SERVICES
Cloud computing is on-demand provisioning of virtual resources aggregated together so that, by specific contracts, users can lease access to their combined power. The cloud offers benefits in terms of elasticity, maintenance cost, economies of scale, and virtualization flexibility. Furthermore, many studies have been conducted to identify the kinds of HPC applications suitable for execution on cloud platforms.
These services are classified into three main service delivery models: Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS). IaaS refers to the practice of delivering on-demand IT infrastructure as a commodity to customers. PaaS provides a development platform on which customers can create and execute their own applications. SaaS endows the user with an integrated service comprising hardware, development platforms and applications.
Typically, a cloud service provider signs contracts with its customers in the form of service-level agreements (SLAs), which can cover many aspects of the cloud computing service. The contract defines the agreed-upon service fees for the total virtual resources negotiated by the client, as well as the associated service credit if the provider fails to deliver the agreed level of service.
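The service-credit mechanism described above can be sketched as a simple calculation; the availability target and the tiered credit schedule below are hypothetical, since actual SLA terms vary by provider.

```python
# Sketch of an SLA service credit: if delivered availability falls below
# the agreed level, the provider owes the customer a credit. The target
# (99.9%) and the credit schedule are hypothetical assumptions.

def service_credit(monthly_fee, delivered, agreed=0.999):
    """Return the credit owed for one billing period."""
    if delivered >= agreed:
        return 0.0
    shortfall = agreed - delivered
    # hypothetical tiered schedule: 10% credit per 0.1% of shortfall,
    # capped at the full monthly fee
    return min(monthly_fee, monthly_fee * 0.10 * (shortfall / 0.001))

print(service_credit(100.0, 0.9975))  # 0.15% shortfall -> 15% credit
```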
III. NEED FOR REDUCING ENERGY CONSUMPTION

Data centers are becoming increasingly popular for the provisioning of computing resources. The cost and operational expenses of data centers have skyrocketed with the increase in computing capacity. This creates a pressing need to address energy consumption in the cloud, especially in data centers.
IV. SURVEY REGARDING ENERGY EFFICIENCY
A. Algorithms For Cost- And Deadline-Constrained Provisioning For Scientific Workflow Ensembles In IaaS Clouds [3]
IaaS clouds are characterized by on-demand resource provisioning capabilities and a pay-per-use model. This paper discusses the problem of efficiently managing ensembles of scientific workflows under budget and deadline constraints on Infrastructure as a Service (IaaS) clouds. To solve this problem, novel algorithms based on static and dynamic strategies for both task scheduling and resource provisioning are developed. The evaluation is done via simulation, using a set of scientific workflow ensembles with a broad range of budget and deadline parameters and taking into account factors such as provisioning delays, failures, uncertainties in task runtime estimations, and task granularity. The most important characteristic of these algorithms is that their performance is determined by their ability to decide which workflows in an ensemble to admit or reject for execution. An example of an application that uses scientific workflows is CyberShake: each workflow in a CyberShake ensemble generates a hazard curve for a particular geographic location, and several hazard curves are combined to create a hazard map. In a 2013 study, CyberShake was used to generate a set of hazard maps over 286 sites, which required an ensemble of 288 workflows. The goal of this work is to maximize the number of user-prioritized workflows that can be completed given budget and deadline constraints. The authors developed three algorithms to solve this problem: two dynamic algorithms, DPDS and WA-DPDS, and one static algorithm, SPSS. The algorithms were evaluated via simulation on ensembles of synthetic workflows generated from statistics of real scientific applications. Results show that an admission procedure based on workflow structure and estimates of task runtimes can significantly improve the quality of solutions.
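A minimal sketch of the admission idea, assuming workflows arrive with priorities and rough cost and runtime estimates; this is a simplification for illustration, not the authors' DPDS, WA-DPDS, or SPSS algorithms.

```python
# Simplified budget/deadline admission for a workflow ensemble: admit
# user-prioritized workflows while their estimated cost still fits the
# remaining budget and their estimated runtime fits the deadline.

def admit_workflows(workflows, budget, deadline):
    """workflows: list of (priority, est_cost, est_runtime).
    Returns the priorities of admitted workflows, best first."""
    admitted, spent = [], 0.0
    for prio, cost, runtime in sorted(workflows, key=lambda w: -w[0]):
        if spent + cost <= budget and runtime <= deadline:
            admitted.append(prio)
            spent += cost  # commit this workflow's estimated cost
    return admitted

# Three workflows, budget 10, deadline 5:
print(admit_workflows([(3, 6, 4), (2, 5, 3), (1, 2, 5)],
                      budget=10, deadline=5))  # → [3, 1]
```

The key point the paper makes is exactly this admit/reject decision: an ensemble scheduler is judged by which workflows it chooses to run, not just by how it schedules the admitted ones.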
B. Cost-Aware Challenges For Workflow Scheduling
Approaches In Cloud Computing Environments:
Taxonomy And Opportunities [4]
The main objective of this paper is to help researchers select appropriate cost-aware Workflow Scheduling (WFS) approaches from the available pool of alternatives. To achieve this objective, the authors conducted an extensive review to investigate and analyze the underlying concepts of the relevant approaches. The cost-aware challenges of WFS in cloud computing are classified based on Quality of Service (QoS) performance, system functionality and system architecture, which ultimately results in a set of taxonomies. WFS mainly focuses on task allocation to achieve the desired workload balancing by pursuing optimal utilization of available resources. The challenges affecting WFS execution cost when providing different services to cloud users on a pay-as-you-go, on-demand basis are discussed; prior work did not consider such challenges collectively. The main benefits of migrating workflows to cloud computing are: (i) enabling the utilization of various cloud services to facilitate the automation of distributed large-scale workflow execution; (ii) significant reduction of hardware expenditure for workflow execution by sharing and providing resources in cloud systems; and (iii) increased user satisfaction, along with reduced execution cost and time, by adopting the pay-as-you-go business model.
WFS provides the ability to access other cloud services and facilitates Service Level Agreements (SLAs). For cost-aware WFS challenges, the paper first presents a sub-taxonomy of cost-aware challenges; it then depicts the correlation of these challenges with key aspects of cloud workflow systems; finally, it groups the reviewed approaches based on profitability by extracting their association with the cost-aware challenges. Cost-aware WFS has remained an active area of research since the emergence of cloud and grid computing for workflow applications. To classify the current state-of-the-art cost-aware WFS approaches, the authors devised three taxonomies covering various aspects of WFS, including cost-aware challenges, the cloud workflow system, and cost-aware profitability. Consideration of the aforementioned aspects can further improve the robustness and flexibility of WFS approaches toward designing a cost-effective solution. Furthermore, multi-criteria cost optimization can help in providing optimal WFS solutions, and hence the presented work can help improve the body of evidence in the field of WFS. The findings of this review provide a roadmap for developing cost-aware models and will motivate researchers to propose better cost-aware approaches for service consumers and/or utility providers in cloud computing.
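The three-way classification can be captured as a simple taxonomy structure; the dimension entries and tags below are illustrative placeholders, not the exact taxonomy of [4].

```python
# Illustrative taxonomy structure for classifying cost-aware WFS
# approaches. The dimensions follow the survey's three axes; the tag
# lists under each dimension are hypothetical examples.

TAXONOMY = {
    "QoS performance": ["execution cost", "execution time", "reliability"],
    "system functionality": ["scheduling model", "resource provisioning"],
    "system architecture": ["centralized", "hierarchical", "decentralized"],
}

def classify(approach_tags):
    """Map an approach's tags onto the taxonomy dimensions it touches."""
    return {dim: [t for t in approach_tags if t in tags]
            for dim, tags in TAXONOMY.items()
            if set(approach_tags) & set(tags)}

print(classify(["execution cost", "centralized"]))
```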
C. Optimizing Energy Consumption With Task Consolidation In Clouds [5]
Task consolidation is one of the best ways to make the most of cloud computing resources. Maximizing resource utilization provides various benefits, such as the rationalization of maintenance, IT service customization, and reliable QoS services. However, maximizing resource utilization does not imply efficient energy use. Much of the literature shows that energy consumption and resource utilization in clouds are highly coupled. Some works aim to decrease resource utilization in order to save energy, while others try to reach a balance between resource utilization and energy consumption. To address this problem, an energy-aware task consolidation (ETC) technique that minimizes energy consumption is proposed. ETC achieves this by restricting CPU use below a specified peak threshold, consolidating tasks amongst virtual clusters. In addition, the energy cost model considers network latency when a task migrates to another virtual cluster. To evaluate the performance of ETC, it is compared against MaxUtil, a recently developed greedy algorithm that aims to maximize the utilization of cloud computing resources. Energy consumption varies with CPU utilization: higher CPU utilization usually implies greater energy consumption, but higher CPU utilization does not equate to energy efficiency. The task consolidation strategy uses a best-fit approach to optimize resource utilization: it migrates tasks to whichever VM will most closely approach the target CPU utilization threshold. The CPU utilization threshold depends on the hardware architecture and may differ across cloud systems; considering the architecture of most cloud systems, a default CPU utilization threshold of 70% is used to demonstrate task consolidation management amongst virtual clusters. The idle state of virtual machines and network transmission are assumed to consume a constant ratio of the base energy; these values can be adjusted on different cloud systems to get better performance from the ETC method. ETC is designed to work in a data center for virtual clusters and VMs that reside on the same rack, or on racks where network bandwidth is relatively constant. The simulation results show that ETC can significantly reduce power consumption when managing task consolidation for cloud systems, with up to 17% improvement over a recent approach that reduces energy consumption by maximizing resource utilization.
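A minimal sketch of the best-fit placement rule described above, using the paper's default 70% threshold; the migration cost model (network latency between virtual clusters) is omitted here.

```python
# Best-fit task placement toward a CPU utilization threshold: choose the
# VM whose utilization, after taking the task, comes closest to the peak
# threshold without exceeding it. 70% is the paper's default threshold.

def best_fit_vm(vm_loads, task_load, threshold=0.70):
    """Return the index of the best-fit VM, or None if no VM fits."""
    best, best_gap = None, None
    for i, load in enumerate(vm_loads):
        new_load = load + task_load
        if new_load <= threshold:           # never exceed the peak threshold
            gap = threshold - new_load
            if best_gap is None or gap < best_gap:
                best, best_gap = i, gap     # closest to threshold so far
    return best

# VM 0 would reach 0.65, VM 1 would reach 0.45, VM 2 would exceed 0.70:
print(best_fit_vm([0.50, 0.30, 0.65], 0.15))
```

Keeping each VM just under the threshold packs work onto fewer machines (saving energy on the rest) while avoiding the high-utilization region where energy per unit of work degrades.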
D. Analyzing Hadoop Power Consumption And
Impact On Application Qos [6]
Energy efficiency is often identified as one of the key reasons for migrating to Cloud environments: it is assumed that a data center in the Cloud achieves greater energy efficiency at a reduced cost compared to a local operation. In this work the authors inspect and measure the energy consumption of a number of virtual machines running the Hadoop system, with the aim of understanding the trade-offs between energy efficiency and performance for such a workload. From the results they generalize and speculate on how such an analysis could be used as a basis for establishing a Service Level Agreement (SLA) with a Cloud provider, in particular where there is likely to be high variability both in performance and in energy use. Quality of service (QoS) related metrics, especially latency, are among the most challenging to support. This work makes an effort to find a close relationship between power consumption and QoS-related metrics, describing how a combined consideration of these two metrics could be supported for a particular workload. It is also useful to note that the business case for migrating to Cloud computing systems has often centered on the cost savings that would arise from reduced energy use at a client site. It is often stated that, due to economies of scale, the ability to negotiate cheaper energy tariffs and the use of renewable energy sources, data centre operators are able to offer systems that are both cost and energy efficient. The purpose of this work has been to measure and characterize power consumption for high-throughput workloads using Hadoop. Such measurement can be used as the basis for developing a workload power utilization model for analyzing social media data. The main conclusion is that there is a non-linear relationship between the number of virtual machines, the workloads that these VMs execute, and the power utilization seen on the physical machine. Identifying how many VMs are needed to achieve a particular throughput at a given power usage profile can be undertaken based on the results reported in this work. Variability in power consumption over multiple runs of the same workload (such as sudden drops or peaks in power usage that cannot be easily explained) is also considered. The experiments describe when it is desirable to increase the number of resources allocated to a particular application, and when such allocation is unlikely to lead to any significant performance improvement but still leads to high power usage. The power characterization described in this work can be applied to manage Cloud computing environments in an optimized way in terms of power saving and/or performance.
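The non-linear scaling conclusion can be illustrated with a small sketch that flags the point where adding VMs stops paying off; the sample measurements below are made up for demonstration and are not the paper's data.

```python
# Given measured (vm_count, throughput) samples for the same workload,
# find the VM count after which the relative throughput gain from adding
# more VMs falls below a chosen cutoff. Beyond that point, extra VMs
# mostly add power draw without meaningful performance improvement.

def diminishing_returns(samples, min_gain=0.10):
    """samples: list of (vms, throughput) sorted by vms.
    Returns the VM count where relative gain first drops below min_gain."""
    for (v0, t0), (v1, t1) in zip(samples, samples[1:]):
        if (t1 - t0) / t0 < min_gain:
            return v0
    return samples[-1][0]  # gains never flattened in the measured range

measured = [(2, 100), (4, 180), (6, 230), (8, 240)]  # hypothetical samples
print(diminishing_returns(measured))  # going past this count gains <10%
```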
E. Energy Efficient Scheduling Of Virtual Machines
In Cloud With Deadline Constraint [7]
Nowadays, virtualization is widely used in cloud computing, and an extremely large amount of electricity is consumed to maintain virtual machines. As a result, the profit of service providers is reduced and the environment is harmed. If the physical machines (PMs) are heterogeneous, the existing energy-efficient scheduling methods for virtual machines (VMs) in the cloud cannot work well, and they typically do not use hardware energy-saving technologies such as dynamic voltage and frequency scaling (DVFS). To avoid these hazards, an energy-efficient scheduling algorithm (EEVS) of VMs in the cloud is proposed. A novel conclusion is drawn that there exists an optimal frequency for a PM to process a certain VM, based on which the notion of optimal performance-power ratio is defined to weight the heterogeneous PMs. PMs with a higher optimal performance-power ratio are assigned VMs first to save energy. Each VM in the cloud is allocated to a proper PM, and each active core operates at its optimal frequency. After a specific time period, the cloud is reconfigured to consolidate the computation resources and further reduce power consumption. Virtualization is an important technology, typically adopted in the cloud to consolidate resources and support the pay-as-you-go service paradigm. The main challenges for energy-efficient scheduling of VMs in cloud computing, such as the heterogeneity of the PMs, the total power consumption of each PM, and the adoption of hardware energy-saving technologies such as DVFS, are overcome by EEVS, an energy-efficient scheduling algorithm for virtual machines that reduces the total energy consumed by the cloud and also supports DVFS well. Although EEVS consumes less energy and processes more VMs successfully than the existing methods in most cases, there are still some shortcomings: the PMs and workload are simulated, though some of the information is validated. The simulation results show that the proposed scheduling algorithm achieves over 20% reduction in energy and an 8% increase in processing capacity in the best cases.
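A sketch of the placement rule described above: rank the heterogeneous PMs by optimal performance-power ratio and fill the best first. The ratio and capacity values are hypothetical, and the DVFS frequency selection that EEVS performs is not modeled.

```python
# Greedy VM placement by optimal performance-power ratio: assign VMs to
# the highest-ratio PMs first, up to each PM's capacity. The ratios and
# capacities below are made-up illustrative values.

def place_vms(pms, vm_count):
    """pms: dict name -> (perf_power_ratio, capacity in VMs).
    Returns name -> number of VMs placed on that PM."""
    placement = {}
    ranked = sorted(pms.items(), key=lambda kv: -kv[1][0])  # best ratio first
    for name, (_ratio, capacity) in ranked:
        take = min(capacity, vm_count)
        if take:
            placement[name] = take
            vm_count -= take
    return placement

pms = {"pm1": (2.5, 4), "pm2": (3.1, 2), "pm3": (1.8, 8)}
print(place_vms(pms, 5))  # → {'pm2': 2, 'pm1': 3}
```

The lowest-ratio machine (pm3) receives no VMs and can stay idle or be powered down, which is where the energy saving comes from.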
F. Data Center Energy-Efficient Network-Aware
Scheduling [8]
Data centers are becoming increasingly popular for the provisioning of computing resources, and their cost and operational expenses have skyrocketed with the increase in computing capacity. Energy consumption is a growing concern for data center operators, becoming one of the main entries on a data center's operational expenses (OPEX) bill. Existing work on energy optimization in data centers focuses only on job distribution between computing servers based on workload or thermal profiles. This work therefore introduces DENS, an approach that combines energy efficiency and network awareness and underlines the role of the communication fabric in data center energy consumption. Data center energy-efficient network-aware scheduling (DENS) works by balancing the energy consumption of a data center, individual job performance, and traffic demands. DENS optimizes the trade-off between job consolidation (to minimize the number of active computing servers) and distribution of traffic patterns (to avoid hotspots in the data center network). The DENS methodology was implemented and tested in realistic setups using test beds. It is particularly relevant in data centers running data-intensive jobs that require low computational load but produce heavy data streams directed to the end users. Simulation results obtained for a three-tier data center architecture show the details of DENS operation and its ability to maintain the required level of QoS for the end user at the expense of a minor increase in energy consumption.
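The trade-off DENS optimizes can be illustrated with a combined score that rewards server consolidation but penalizes network congestion; the weights and functional form here are assumptions for illustration, not the DENS formulas.

```python
# Illustrative consolidation-vs-congestion score: favor highly loaded
# servers (consolidation) but penalize servers behind congested switch
# queues (network hotspots). alpha and the linear form are assumptions.

def dens_score(server_load, queue_occupancy, alpha=0.7):
    """Higher is better; both inputs are normalized to [0, 1]."""
    return alpha * server_load - (1 - alpha) * queue_occupancy

def pick_server(candidates):
    """candidates: list of (name, load, queue_occupancy)."""
    return max(candidates, key=lambda c: dens_score(c[1], c[2]))[0]

# A heavily loaded server behind a congested link loses to a moderately
# loaded server on an idle link:
print(pick_server([("s1", 0.9, 0.9), ("s2", 0.6, 0.1)]))
```

This captures the DENS intuition for data-intensive jobs: pure consolidation would pick s1, but its congested uplink would throttle the heavy output streams.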
G. Towards High-Available And Energy-Efficient Virtual Computing Environments In Cloud [9]
Cloud infrastructures, empowered by virtualization technology, enable the construction of flexible and elastic computing environments and provide an opportunity for energy and resource cost optimization while enhancing system availability and achieving high performance. The basic requirement for effective consolidation is the ability to efficiently utilize system resources for high-availability computing and power-efficiency optimization, so as to reduce operational costs and the carbon footprint. The algorithms POFAME and POFARE are proposed in this work to dynamically construct and readjust virtual clusters that execute users' jobs. To detect and mitigate energy inefficiencies, the decision-making algorithms leverage virtualization tools to provide proactive fault tolerance and energy efficiency to virtual clusters. Simulations are conducted by injecting random synthetic jobs and jobs taken from the latest version of the Google cloud trace logs. The results indicate that the strategy improves the work per Joule ratio by approximately 12.9% and the working efficiency by almost 15.9% compared with other state-of-the-art algorithms. The objective is to maximize the useful work performed by the consumed energy in cases where the infrastructure nodes are subject to failure. Two dynamic VM allocation algorithms, POFAME and POFARE, which use two different methods to provide energy-efficient virtual clusters that execute tasks within their deadlines, achieve this objective. While POFAME tries to reserve the maximum required resources to execute tasks, POFARE leverages the cap parameter of the Xen credit scheduler to execute tasks with the minimum required resources. The simulation results show that the improvement in energy efficiency of POFARE over OBFIT is 23.6%, 16.9%, and 72.4% for average task length ratios of 0.01, 0.1, and 1, respectively, and the improvement in working efficiency of POFARE over OBFIT is 26.2%, 20.3%, and 219.7% for the same ratios. Another relevant problem consists of processing workflows in the cloud, where tasks may have precedence constraints; this is also considered.
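The work-per-Joule metric used in the comparison above can be stated directly as useful work divided by energy consumed; the numbers below are hypothetical and chosen only to show how a relative improvement is computed.

```python
# Work per Joule: useful work completed (here, task-seconds) divided by
# the energy consumed. The sample figures are hypothetical.

def work_per_joule(completed_task_seconds, energy_joules):
    return completed_task_seconds / energy_joules

baseline = work_per_joule(5000, 1000)  # 5.0 task-seconds per Joule
improved = work_per_joule(5420, 960)   # more work on less energy
gain = (improved - baseline) / baseline
print(f"{gain:.1%}")                   # relative improvement
```

A scheduler can raise this ratio two ways at once, as POFAME/POFARE aim to: complete more work (numerator) while consolidating onto fewer, better-utilized nodes (denominator).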
H. Energy-Efficient Deadline Scheduling For Heterogeneous Systems [10]
Power-aware scheduling algorithms with deadline constraints for heterogeneous systems are proposed in this paper by extending the traditional multiprocessor scheduling problem and designing approximation algorithms with analysis of worst-case performance. A pricing scheme for tasks, in which the price of a task varies with its energy usage and depends largely on the tightness of its deadline, is also presented. The extended online algorithm outperforms the EDF (Earliest Deadline First)-based algorithm, with on average up to 26% energy saving and 22% better deadline satisfaction. Both static and online Energy-Efficient Scheduling (EES) algorithms for independent tasks with deadline constraints in heterogeneous systems under a unit cost metric are introduced. Experimental results show that the EES algorithm has almost as good energy minimization and deadline satisfaction as the optimal solution, while the online EES algorithm performs much better than the EDF algorithm. Both user and provider can control their own parameters to maximize their respective interests, which gives the scheme good commercial applicability. The proposed algorithm achieves near-optimal energy efficiency, on average 16.4% better for synthetic workloads and 12.9% better for realistic workloads than the EDD (Earliest Due Date)-based algorithm. It is also experimentally shown that the pricing scheme provides a flexible trade-off between deadline tightness and price.
V. CONCLUSION

This survey paper underlines the need for energy and cost optimization in the cloud, especially in data centers and cloud servers. The energy consumption of different types of cloud servers was investigated, and it was shown that great portions of energy are wasted in idle time. The growing crisis in power shortages has brought up the need for optimizing energy and cost.

REFERENCES

[1] Keqin Li, "Improving Multicore Server Performance and Reducing Energy Consumption by Workload Dependent Dynamic Power Management," IEEE Transactions on Cloud Computing, vol. 2, April 2015.

[2] Ivanoe De Falco, Umberto Scafuri, Ernesto Tarantino, "Mapping of time-consuming multitask applications on a cloud system by multi-objective Differential Evolution," IEEE Transactions on Cloud Computing, vol. 4, no. 2, April 2015.

[3] Maciej Malawski, Gideon Juve, Ewa Deelman, Jarek Nabrzyski, "Algorithms for cost- and deadline-constrained provisioning for scientific workflow ensembles in IaaS clouds," IEEE Transactions on Cloud Computing, vol. 3, no. 2, October 2014.

[4] Ehab Nabiel Alkhanak, Sai Peck Lee, Saif Ur Rehman Khan, "Cost-aware challenges for workflow scheduling approaches in cloud computing environments: Taxonomy and opportunities," Faculty of Computer Science and Information Technology, University of Malaya, Kuala Lumpur, Malaysia (received 25 May 2014; revised 10 December 2014; accepted 19 January 2015; available online 2 February 2015).

[5] Ching-Hsien Hsu, Kenn D. Slagter, Shih-Chang Chen, "Optimizing energy consumption with task consolidation in clouds," December 2012.

[6] Javier Conejero, Omer Rana, Peter Burnap, Jeffrey Morgan, "Analyzing Hadoop power consumption and impact on application QoS," journal homepage: www.elsevier.com/locate/fgcs, 9 March 2015.
[7] Youwei Ding, Xiaolin Qin, Liang Liu, "Energy efficient scheduling of virtual machines in cloud with deadline constraint," journal homepage: www.elsevier.com/locate/fgcs, 11 February 2015.

[8] Dzmitry Kliazovich, Pascal Bouvry, Samee Ullah Khan, "Data center energy-efficient network-aware scheduling," Cluster Computing, DOI 10.1007/s10586-011-0177-4, April 2011.

[9] Altino M. Sampaio, Jorge G. Barbosa, "Towards high-available and energy-efficient virtual computing environments in the cloud," 7 July 2014.

[10] Luna Mingyi Zhang, Keqin Li, Yanqing Zhang, "Energy-efficient task scheduling algorithms on heterogeneous computers with continuous and discrete speeds," 28 January 2013.