Enhancing the Scalability of Virtual Machines
in Cloud
Chippy.A#1, Ashok Kumar.P#2, Deepak.S#3, Ananthi.S#4
#
Department of Computer Science and Engineering, SNS College of Technology
Coimbatore, Tamil Nadu, India
Abstract- In cloud computing, overloaded hosts are managed by migrating virtual machines from the overloaded host to another host, which improves resource utilization. The proposed model introduces a load-balanced model for the cloud based on dynamic resource allocation, with a switching technique that selects different methods for different scenarios. The algorithm applies game theory to implement the resource allocation strategy and improve the efficiency of the cloud environment.
Index Terms- Cloud computing, load balancing, green computing, overload detection.
I. INTRODUCTION
Cloud computing has become an increasingly popular model in which computing resources are made available to users on demand. The unique value of cloud computing creates new opportunities to align IT and business motives. Cloud computing relies on the internet to deliver IT-enabled capabilities 'as a service' to any user who needs them; through cloud computing we can access whatever we need from anywhere, on any computer, without worrying about storage, cost, management and so on. Clouds are large pools of easily usable and accessible virtualized resources. These resources can be dynamically scaled up or down to adjust to changing load, allowing maximum utilization of resources. It is a pay-per-use model in which the service provider offers a pool of computing resources governed by Service Level Agreements (SLAs). Organizations and individuals alike can benefit from these mass computing and storage centers, provided by large companies with stable and strong cloud facilities. The concept behind cloud computing is virtualization.
On-demand deployment, internet delivery of services, and open source software are important characteristics of cloud computing. From one point of view, cloud computing is nothing new, because the concepts it uses already exist. From another point of view, it is new because of its flexibility, updateability and deployment techniques. Applications and their data in the cloud are maintained and updated with the help of the internet and remote servers. To use an application from the cloud, users need not install it on their own devices; they can access their files from any device with internet access. Cloud computing provides more efficient computing through increased bandwidth, memory, storage and security for files [1]. The cloud has a great impact on businesses of all sizes, from small and midsized businesses to large enterprises, and it shows no signs of slowing down. There are three cloud service models. IaaS provides the entire infrastructure for computing, so users need not worry about hardware, power or the cooling systems that protect the hardware; computing resources can be provisioned on demand as a utility. PaaS takes us to the next level in the stack: it provides the operating system, database, application server, and programming language for developing software or applications. SaaS is the next level in the stack; it provides the application or service through an internet connection, and in this service model the consumer only needs to focus on administering users of the system.
Load balancing is an efficient method for distributing workloads across multiple computing resources. It aims to improve response time and throughput and to prevent any single resource from becoming overloaded [2]. More work can be executed in less time when load balancing is deployed. Load balancing divides the load among n computers so that operations are performed efficiently and all users are served faster. It can be implemented with hardware, software, or a combination of both, and it is typically the main reason for clustering computer servers. In this paper we propose a model that avoids overloading of servers through load balancing. Idle servers, that is, servers with no virtual machines running on them, can be turned off or put into sleep mode, thus saving energy.
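As a simple illustration of the general load balancing idea only (not of the algorithm proposed later in this paper), the following Python sketch assigns each incoming task to the currently least-loaded server. The Server class, the task costs and the server names are hypothetical.

    # Minimal least-loaded dispatch sketch (illustrative only).
    # 'Server' and the task cost values are hypothetical, not from the paper.
    class Server:
        def __init__(self, name):
            self.name = name
            self.load = 0.0  # current load, e.g. normalized CPU demand

        def assign(self, cost):
            self.load += cost

    def dispatch(servers, task_cost):
        # Pick the server with the smallest current load.
        target = min(servers, key=lambda s: s.load)
        target.assign(task_cost)
        return target

    if __name__ == "__main__":
        pool = [Server("pm1"), Server("pm2"), Server("pm3")]
        for cost in [0.2, 0.5, 0.1, 0.4]:
            chosen = dispatch(pool, cost)
            print(chosen.name, round(chosen.load, 2))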
II. RELATED WORKS
Sandpiper sorts the list of PMs by their volumes and the VMs on each PM by their volume-to-size ratio [7]. It abstracts away critical information needed when making the migration decision and considers the PMs and the VMs in this pre-sorted order. Another method uses VM and data migration to mitigate hot spots not only on the servers but also on network devices and storage nodes [8]. A skewness measure has also been used for dynamic resource allocation [9]; it quantifies the uneven utilization of the different resources on a server. A load prediction algorithm is used to identify hot spots and cold spots: a hot spot occurs when a server is overloaded, while a cold spot occurs when a server sits idle without performing any operation. The algorithm then migrates VMs from hot-spot servers to idle servers, enabling dynamic allocation of resources.
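To make the skewness idea concrete, the short sketch below computes, per our reading of [9], how far the individual resource utilizations of a server deviate from their average; the sample utilization vectors are illustrative only and not taken from [9].

    # Sketch of a skewness measure in the spirit of [9]: the value grows
    # as the utilizations of different resources on the same server diverge
    # from their average.
    import math

    def skewness(utilizations):
        # utilizations: per-resource usage fractions, e.g. [cpu, mem, net]
        avg = sum(utilizations) / len(utilizations)
        if avg == 0:
            return 0.0
        return math.sqrt(sum((u / avg - 1.0) ** 2 for u in utilizations))

    print(skewness([0.8, 0.2, 0.3]))  # uneven usage -> larger skewness
    print(skewness([0.4, 0.4, 0.4]))  # even usage -> 0.0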
Dynamic resource allocation of web-based applications has already been carried out, with the web applications scaled automatically; in MUSE, each server holds copies of all the web applications in the system [3]. Some resource allocation methods are based on network flow algorithms for allocating the load of an application [4]. Quincy adopts a min-cost flow model in task scheduling to maximize data locality while keeping fairness among different jobs [5]. Dynamic priorities have been assigned to jobs and users to achieve resource allocation [6]. Live migration of VMs is used for dynamic resource allocation.
III. PROPOSED MODEL
In the system architecture, each physical machine runs a Virtual Machine Monitor (VMM) such as the Xen hypervisor, and each virtual machine hosts a number of applications. The physical machines share backend storage, and the mapping of virtual machines onto physical machines is managed centrally. Every physical machine has a local node manager, which collects the resource utilization levels of all the virtual machines running on that physical machine. Memory, storage and bandwidth usage can be analyzed using the scheduling techniques of the Virtual Machine Monitor. Memory utilization is not directly visible to the hypervisor; it can be managed by identifying storage shortages within the virtual machines.
The information gathered at each physical machine is sent to the controller, which is responsible for scheduling the virtual machines. The local node manager invokes the scheduler with information about the demand history and the load of the physical machines.
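The paper does not define a concrete interface between the local node manager and the controller; the Python sketch below shows one plausible shape of that reporting path. All class, method and field names here are hypothetical.

    # Hypothetical reporting path from a local node manager to the controller.
    class LocalNodeManager:
        def __init__(self, pm_name):
            self.pm_name = pm_name
            self.vm_usage = {}  # vm name -> {"cpu": ..., "mem": ..., "net": ...}

        def record(self, vm, cpu, mem, net):
            self.vm_usage[vm] = {"cpu": cpu, "mem": mem, "net": net}

        def report(self):
            # Aggregate per-VM usage into a per-PM load summary.
            total = {"cpu": 0.0, "mem": 0.0, "net": 0.0}
            for usage in self.vm_usage.values():
                for k in total:
                    total[k] += usage[k]
            return {"pm": self.pm_name, "load": total, "vms": dict(self.vm_usage)}

    class Controller:
        def __init__(self):
            self.history = []  # demand history used by the scheduler/predictor

        def receive(self, report):
            self.history.append(report)

    manager = LocalNodeManager("pm1")
    manager.record("vm1", cpu=0.30, mem=0.25, net=0.10)
    manager.record("vm2", cpu=0.45, mem=0.40, net=0.05)
    controller = Controller()
    controller.receive(manager.report())
    print(controller.history[-1]["load"])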
A predictor is used to forecast the resource demands of the virtual machines and to identify the loads on the physical machines based on previous observations. The load of a physical machine is calculated by monitoring the resource utilization of its virtual machines. The local node manager tries to meet the demands by adjusting the allocation among the virtual machines that share the same Virtual Machine Monitor; the hypervisor can redistribute CPU allocation between virtual machines by altering the weights in its scheduler. The virtual machine scheduler has a hot spot predictor, which monitors whether the resource utilization of a physical machine has risen above the threshold. If it has, one or more virtual machines running on that physical machine are migrated away to reduce its load and improve its performance. The scheduler also has a cold spot identifier, which checks whether the average utilization of an active physical machine has fallen below the threshold. If it has, that physical machine can be shut down once all of its virtual machines have been moved away. The resulting migration list is then forwarded to the controller by the local node manager.
To identify the future resource requirements of the virtual machines, one could inspect their application-level usage, but this requires modifying the virtual machines, which is a tedious process. Another approach is to observe the previous activity of the virtual machines. The CPU loads on the physical machines are determined as discussed previously.
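The paper does not spell out the prediction formula used by the predictor. The sketch below assumes a simple exponentially weighted moving average (EWMA) over the observed load history as one plausible choice; the smoothing factor and the sample values are illustrative.

    # Assumed EWMA-style load predictor (the paper does not prescribe one).
    def predict_load(history, alpha=0.7):
        """history: list of past utilization samples, oldest first."""
        if not history:
            return 0.0
        estimate = history[0]
        for sample in history[1:]:
            # Weight recent samples more heavily than older ones.
            estimate = alpha * sample + (1.0 - alpha) * estimate
        return estimate

    print(round(predict_load([0.35, 0.40, 0.55, 0.60]), 3))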
A. MITIGATION SPOTS
Fig. 1. Handling hot spots and cold spots
The algorithm calculates the resource utilization of all the physical machines and evaluates the resource allocation against the predicted future resource demands of the virtual machines. A server, or physical machine, is described as a hot spot when its resource utilization rises above the threshold described above. This means that the server is overloaded and is running too many virtual machines, so some of those virtual machines should be migrated to another physical machine running the same hypervisor.
Similarly, a server or physical machine is described as a cold spot when its resource utilization falls below the threshold, as the name implies. This means that the server is not performing any operation; in simple terms, it is idle. Such nodes can be put into sleep mode. A node is said to be active if it has at least one virtual machine running. Finally, a server is said to be a warm spot when its resource utilization is high enough to keep it usefully busy but not so high that it turns into a hot spot, which would in turn affect the resource demands.
These thresholds vary across resource types. For instance, if the threshold for CPU usage is defined as 85%, a server becomes a hot spot when its CPU usage goes beyond 85%. When hot spots are identified in the system, virtual machines must be migrated away from them. Hot spots are handled in order of temperature, the hottest first. It is not always possible to remove every hot spot, but at least their temperature must be brought down. When migrating, the algorithm first decides which virtual machine to move; if there is a tie, the virtual machine that best reduces the uneven resource utilization is selected. All virtual machines chosen for migration are stored in a list, and a destination is sought for each of them. It must also be noted that migrating a virtual machine away from a hot-spot server must not turn the receiving server into a hot spot. Suitable destination servers are identified and updated, and destinations are found for all the virtual machines in the list. This resolves the overloaded state of the server or physical machine.
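The following Python sketch illustrates the hot-spot handling described above. The 85% CPU threshold follows the text; the data layout and the smallest-VM-first selection heuristic are simplifications for illustration, not the paper's exact tie-breaking rule.

    # Sketch of hot spot mitigation: drain VMs from overloaded servers onto
    # servers that will not themselves become hot spots.
    HOT_THRESHOLD = 0.85

    def load(pm):
        return sum(pm["vms"].values())

    def mitigate_hot_spots(pms):
        migrations = []
        hot = sorted([p for p in pms if load(p) > HOT_THRESHOLD],
                     key=load, reverse=True)  # hottest (highest load) first
        for pm in hot:
            # Consider the VMs of the hot server, smallest first (simplification).
            for vm, usage in sorted(pm["vms"].items(), key=lambda kv: kv[1]):
                # The receiving server must not itself turn into a hot spot.
                dest = next((d for d in pms if d is not pm
                             and load(d) + usage <= HOT_THRESHOLD), None)
                if dest is not None:
                    dest["vms"][vm] = pm["vms"].pop(vm)
                    migrations.append((vm, pm["name"], dest["name"]))
                    if load(pm) <= HOT_THRESHOLD:
                        break  # server is no longer a hot spot
        return migrations

    pms = [{"name": "pm1", "vms": {"vm1": 0.50, "vm2": 0.45}},
           {"name": "pm2", "vms": {"vm3": 0.30}}]
    print(mitigate_hot_spots(pms))  # vm2 moves from pm1 to pm2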
B. ENERGY CONSERVATION
When a server or physical machine is a cold spot, as described earlier, the virtual machines on that node are migrated to other active servers and the server is then switched off. This contributes significantly to energy conservation and thereby to green computing. The main goal of green computing here is to minimize the number of active servers that carry no load and will perform no operation now or in the near future. As with the hot-spot servers, a list of virtual machines is maintained, but sorted in the opposite order, starting from the lowest temperature. The virtual machines on cold-spot servers are assigned new destinations, chosen so that the destinations remain warm spots. In this way a worthwhile amount of energy is saved.
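A short sketch of this consolidation step follows: VMs are drained from cold servers onto warm ones, and emptied servers are reported as candidates for shutdown. The threshold values and the data layout are assumptions for illustration only.

    # Sketch of cold spot consolidation for energy conservation.
    COLD_THRESHOLD = 0.25
    WARM_LIMIT = 0.85

    def load(pm):
        return sum(pm["vms"].values())

    def consolidate(pms):
        powered_off = []
        cold = sorted([p for p in pms if 0 < load(p) <= COLD_THRESHOLD],
                      key=load)  # coldest first
        for pm in cold:
            for vm, usage in list(pm["vms"].items()):
                # The destination must remain a warm spot after receiving the VM.
                dest = next((d for d in pms if d is not pm
                             and COLD_THRESHOLD < load(d)
                             and load(d) + usage <= WARM_LIMIT), None)
                if dest is None:
                    break  # leave the remaining VMs where they are
                dest["vms"][vm] = pm["vms"].pop(vm)
            if not pm["vms"]:
                powered_off.append(pm["name"])  # empty server can be switched off
        return powered_off

    pms = [{"name": "pm1", "vms": {"vm1": 0.10}},
           {"name": "pm2", "vms": {"vm2": 0.50, "vm3": 0.20}}]
    print(consolidate(pms))  # pm1's VM moves to pm2; pm1 can be switched off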
C. GAME THEORY
As mentioned earlier, virtual machines are migrated when their server is a hot spot or a cold spot: from hot-spot servers to avoid or relieve overloading, and from cold-spot servers to save energy and achieve green computing by turning off idle servers. These migrations are carried out using game theory strategies. Each server is considered a player, and all players are given equal priority [10]. When one server takes an action, the other available servers react accordingly. If a server becomes a hot spot, it tries to manage the overload or checks the availability of nearby servers. Candidate servers are evaluated for virtual machine migration, and the nearest server that lies within the warm threshold is selected. The server migrating the virtual machine does not wait to observe the activities of other servers; it simply migrates its virtual machine to a server within the warm threshold.
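The sketch below illustrates this selection rule: a hot server, acting as an independent player, immediately picks the nearest server whose load lies within the warm band, without observing the other players. The warm-band bounds and the distance values are hypothetical.

    # Sketch of the player's greedy choice: nearest warm server wins.
    WARM_LOW, WARM_HIGH = 0.25, 0.85

    def choose_destination(hot_server, servers, distance):
        # Candidates must currently be warm spots.
        warm = [s for s in servers
                if s is not hot_server and WARM_LOW < s["load"] <= WARM_HIGH]
        if not warm:
            return None
        # The player does not wait to observe others; it takes the nearest
        # warm server immediately.
        return min(warm, key=lambda s: distance[(hot_server["name"], s["name"])])

    servers = [{"name": "pm1", "load": 0.95},
               {"name": "pm2", "load": 0.60},
               {"name": "pm3", "load": 0.40}]
    distance = {("pm1", "pm2"): 2, ("pm1", "pm3"): 1}
    print(choose_destination(servers[0], servers, distance)["name"])  # pm3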
IV. CONCLUSION
Overloading is one of the important issues in the cloud and must be managed efficiently to achieve good performance. All resource demands must be met at the required time, which is not possible when overloading occurs. Load balancing is therefore an important factor in meeting resource demands, and various techniques can be used for it. In the proposed model, the activities of all the servers are monitored continuously. Identifying hot spots, cold spots and warm spots is an important process, and analyzing these spots greatly helps in deciding which virtual machines should be migrated and to which destinations.
REFERENCES
[1] Pankaj Arora, Rubal Chaudhry Wadhawan, and Er. Satinder Pal Ahuja, "Cloud Computing Security Issues in Infrastructure as a Service," International Journal of Advanced Research in Computer Science and Software Engineering, Volume 2, Issue 1, January 2012.
[2] Anton Beloglazov and Rajkumar Buyya, "Managing Overloaded Hosts for Dynamic Consolidation of Virtual Machines in Cloud Data Centers under Quality of Service Constraints," IEEE Transactions on Parallel and Distributed Systems, Vol. 24, No. 7, July 2013.
[3] J. S. Chase, D. C. Anderson, P. N. Thakar, A. M. Vahdat, and R. P. Doyle, "Managing energy and server resources in hosting centers," in Proc. of the ACM Symposium on Operating System Principles (SOSP'01), Oct. 2001.
[4] C. Tang, M. Steinder, M. Spreitzer, and G. Pacifici, "A scalable application placement controller for enterprise data centers," in Proc. of the International World Wide Web Conference (WWW'07), May 2007.
[5] M. Isard, V. Prabhakaran, J. Currey, U. Wieder, K. Talwar, and A. Goldberg, "Quincy: Fair scheduling for distributed computing clusters," in Proc. of the ACM Symposium on Operating System Principles (SOSP'09), Oct. 2009.
[6] T. Sandholm and K. Lai, "MapReduce optimization using regulated dynamic prioritization," in Proc. of the International Joint Conference on Measurement and Modeling of Computer Systems (SIGMETRICS'09), 2009.
[7] T. Wood, P. Shenoy, A. Venkataramani, and M. Yousif, "Black-box and gray-box strategies for virtual machine migration," in Proc. of the Symposium on Networked Systems Design and Implementation (NSDI'07), Apr. 2007.
[8] A. Singh, M. Korupolu, and D. Mohapatra, "Server-storage virtualization: integration and load balancing in data centers," in Proc. of the ACM/IEEE Conference on Supercomputing, 2008.
[9] Zhen Xiao, Weijia Song, and Qi Chen, "Dynamic Resource Allocation Using Virtual Machines for Cloud Computing Environment," IEEE Transactions on Parallel and Distributed Systems, 2013.
[10] Nageswara S. V. Rao and Chris Y. T. Ma, "Cloud Computing Infrastructure Robustness: A Game Theory Approach," International Conference on Computing, Networking and Communications (ICNC), 2012.