Improved Live Migration Using Compressed
Log Files
Rakhi K Raj #1, Getzi Jeba Leelipushpam *2
#1 PG Scholar, Karunya University
*2 Assistant Professor, Karunya University
Abstract: Live virtual machine migration is the process of moving a virtual machine from one host to another without disturbing its users. Live migration is used for proactive maintenance, power management, load balancing, and energy saving. This paper presents and designs a novel approach that reduces the total migration time and downtime during live virtual machine migration by transferring compressed log files. To provide effective and fast migration of the virtual machine, the log compression method uses Huffman encoding, and to make the migration more effective we use a parallel multithreading technique. This method also consumes less bandwidth during migration and provides more security to the data.
Keywords: virtual machine, live migration, total migration time, down time
I. INTRODUCTION
A virtual machine (VM) is a software implementation of a computing environment in which an operating system or program can be installed and run [1]. Virtual machines are installed on a virtualization layer that runs on top of the host platform, and each VM runs its own guest operating system. Using virtualization technology we can install a number of VMs on one physical host, and each VM is strongly isolated from the other VMs.
Live VM migration is the process of moving a running virtual machine from one system to another without any disturbance to its users. Live migration is achieved by transferring the memory content, CPU state, storage, and network connections [2]. Because users keep accessing the VM, memory pages are dirtied during migration, so the memory pages are transferred iteratively from the source to the target machine. Migration of storage is similar to migration of memory, but since storage takes much longer to transfer, a common shared storage is used to store the data. Network connectivity is transferred by broadcasting an ARP request to all the VMs in the network, indicating that the device has moved to its new location. Figure 1 shows live VM migration: it consists of two hosts, Server1 and Server2. Server1 hosts two virtual machines named VM1 and VM2, and Server2 hosts one virtual machine, VMS1. During migration, the virtual machine VMS1 is transferred from Server2 to Server1 and Server2 goes offline; after the migration, Server1 hosts three virtual machines. The virtual machine provides isolation to all the programs running on the different machines.
Fig. 1: Live migration
Live VM migration is used in the following cases.
Proactive maintenance: if an imminent failure is suspected, the potential problem can be resolved before disruption of service occurs [3]. We can replace the system with a new one by transferring its contents to a new VM location, and the users remain unaware of the transfer.
Load balancing [11]: work is shared among computers in order to optimize the utilization of available CPU resources. Using live VM migration we can reduce the energy consumption and power usage across the systems.
Energy consumption [12]: using live VM migration we can reduce the energy usage of the system by reducing the number of resources used to perform the operations.
The two important performance metrics for live migration are total migration time and downtime [2].
Total migration time: the time taken to migrate all the data from one system to the other.
Downtime: the duration for which the services are not available to the users.
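For clarity, these two metrics can be written as follows. This is a minimal formalization added here for illustration; the event times (migration start, suspend at the source, resume at the target) are assumed notation and do not appear in the paper.

```latex
% Assumed notation (not from the paper):
%   t_start  -- migration begins at the source
%   t_susp   -- VM is suspended at the source (stop-and-copy begins)
%   t_resume -- VM resumes execution at the target
\[
  T_{\text{migration}} = t_{\text{resume}} - t_{\text{start}}, \qquad
  T_{\text{down}} = t_{\text{resume}} - t_{\text{susp}}, \qquad
  T_{\text{down}} \le T_{\text{migration}}.
\]
```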
In the area of live VM migration, researchers are trying to reduce the downtime to nearly zero and to minimize the total migration time. In addition to total migration time and downtime, the total number of pages transferred, the consumption of network bandwidth, the migration overhead, and the security during migration are other factors that influence the effectiveness of live VM migration.
The objective of this paper is to propose a novel approach to reduce the total migration time and downtime during live virtual machine migration. The paper also deals with reducing the bandwidth consumption and providing more security at the time of migration.
II. RELATED WORK
A number of techniques are used for migration. They are broadly divided into two categories: pre-copy migration and post-copy migration.
In pre-copy migration [4] the memory contents are transferred to the destination machine even as the source node continues executing the application. During migration the services are continuously accessed by the users, so memory pages are dirtied, and the dirtied memory pages are iteratively transferred from the source to the destination. The iterative transfer stops when the smallest writable working set (WWS) has been identified or a preset number of iterations is reached, whichever comes first. This constitutes the end of the memory transfer phase and the beginning of service downtime. The VM is then suspended and its processor state plus any remaining dirty pages are sent to the target node. Finally, the VM is restarted and the copy at the source is destroyed. The advantages of pre-copy are a reduced downtime and support for fault management, but it has a longer total migration time.
The second broad category of live migration techniques is post-copy migration [5]. In post-copy migration the process is suspended at the source and resumed at the destination, and only then are the memory contents transferred to the destination machine. Compared to the pre-copy approach, the post-copy approach has a shorter total migration time but a longer downtime. The post-copy approach also uses a dynamic self-ballooning mechanism, which helps to release free memory.
Besides these two categories, there are a number of migration control techniques that aim to reduce downtime and total migration time. Adaptive memory compression is one such migration control technique [6]. This approach optimizes live VM migration based on the pre-copy approach [5], using compression to make the transfer faster. The compression technique uses a zero-aware, characteristics-based compression algorithm for live VM migration. Before transmission, the data is compressed and transferred to the target host, where the compressed data is decompressed again. Compression requires one preliminary step: memory data characteristic analysis. In this analysis a dictionary of the 16 most recently used words is maintained, the word similarity of the pages is computed, and based on this word similarity the compression algorithm is chosen. The memory pages are classified into pages containing mostly zero bytes, pages with strong regularity, and pages with weak regularity. The compression algorithm helps to move the memory pages quickly.
Another approach for live VM migration uses an LRU and splay tree algorithm [7]. The LRU and splay tree structure consists of two stacks and counters, with the top of the stack containing the most recently used pages. Based on this algorithm the working set is predicted. The approach consists of three main steps: preprocessing, the push phase, and the stop-and-copy phase. During the preprocessing phase the working set prediction algorithm is run and the recently used memory pages are determined. In the first step of the iteration, the pages other than the most recently used ones are transferred. When a new process with a given process ID arrives, it is first checked whether it is in the LRU cache: if not, it is placed in the LRU cache and a splay tree is constructed for it; if it is already in the cache, it is moved to the top of the cache; and if the LRU cache is full, the last entry in the cache is replaced. In the next step the memory pages are transferred to the destination and the VM is re-executed at the destination host. The advantage is a reduced migration time, because fewer pages are transferred during the migration, but the performance depends on the number of modified pages.
Live migration using CPU scheduling tries to reduce the number of pages transferred to the target host by cutting down CPU performance [8].
By decreasing the speed of the CPU, the number of dirtied pages becomes very small. In the first step the whole VM memory is assigned and the round number is set to zero; the transfer time is then calculated and the CPU time for executing memory writes is scheduled. The advantage is a reduced number of transferred pages; the disadvantage is degraded service performance.
This paper presents a novel approach to reduce the total migration time and downtime during live virtual machine migration based on checkpoint recovery and log replay [9]. To migrate the VM we provide a fast compression technique: the log files generated at the source are first compressed and then transmitted to the destination, where the compressed files are decompressed. This approach reduces bandwidth consumption and provides more security than transferring the log files as they are. Compressed files take less time to traverse the network, hence the total migration time is reduced. The integrity of the transferred log files is also ensured.
The objective of this approach is to reduce the total migration time and downtime, and also to reduce bandwidth consumption and provide more security for the log files.
III. PROPOSED WORK
Among the existing approaches analysed above, checkpoint recovery with trace and replay has the shortest total migration time and downtime, but it uses comparatively more bandwidth. In order to reduce the bandwidth and provide security to the data, we apply a compression mechanism to the logs that are transferred from the source to the target machine. To make the compression effective we use Huffman encoding [10]. The following are the steps for the transfer of the logs.
A. Selection
The cloud data center contains a VM manager that monitors and stores the details of all the VMs present in the data center. The VM manager continuously monitors the status of the virtual machines by sending queries to the virtual machines in its list and updating their status in the list. All virtual machines respond to the request by sending their CPU utilization, memory, storage details, and so on. The virtual machine manager sets a minimum requirement for all the virtual machines; if any VM's status falls below the threshold, it initiates the VM migration. This is the phase in which the virtual machines are selected for migration: the migrating host and the target are selected, and the destination host must satisfy the requirements of the source host.
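As an illustration of this selection step, the following is a minimal Python sketch of such a monitoring check. The class and field names (VMStatus, CPU_MIN, and so on) are hypothetical and not taken from the paper; a real VM manager would query the hypervisor rather than in-memory records.

```python
from dataclasses import dataclass

@dataclass
class VMStatus:
    name: str
    cpu_util: float       # fraction of CPU actually available to the VM
    free_memory_mb: int
    free_storage_mb: int

# Hypothetical minimum requirements: a VM whose status falls below these
# thresholds becomes a candidate for migration.
CPU_MIN = 0.20
MEM_MIN_MB = 512
STORAGE_MIN_MB = 1024

def needs_migration(vm: VMStatus) -> bool:
    """Return True if the VM's reported status is below the thresholds."""
    return (vm.cpu_util < CPU_MIN
            or vm.free_memory_mb < MEM_MIN_MB
            or vm.free_storage_mb < STORAGE_MIN_MB)

def select_candidates(vms: list[VMStatus]) -> list[VMStatus]:
    """Selection phase: pick the VMs whose status is below the threshold."""
    return [vm for vm in vms if needs_migration(vm)]

if __name__ == "__main__":
    inventory = [
        VMStatus("VM1", cpu_util=0.55, free_memory_mb=2048, free_storage_mb=8192),
        VMStatus("VM2", cpu_util=0.10, free_memory_mb=256,  free_storage_mb=4096),
    ]
    print([vm.name for vm in select_candidates(inventory)])  # -> ['VM2']
```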
B. Reservation
Reservation is the process of checking whether the target host has the required amount of resources for the migration to take place. After the selection of the source and destination, all other virtual machines are informed that a migration is taking place from the source to the destination. This helps to prevent other virtual machines from being transferred to the same destination machine.
C. Preprocessing
DMTCP [13] has the ability to checkpoint the virtual machine and restart it at the target machine. To initiate the migration we first take a checkpoint; the checkpoint is then transferred to the destination, where the VM is reconstructed from it. While users continue to access the source system, log files are continuously generated at the source machine. Before transmitting the log files to the destination, the files are compressed. The compression is lossless, so the logs can be reconstructed exactly at the destination. Huffman encoding is used to compress the log files.
HUFFMAN ENCODING
Huffman coding [14] is an entropy encoding
algorithm used for lossless data compression.
Problem Description
Input: an alphabet A = {a1, a2, ..., an}, which is the symbol alphabet of size n, and a set W = {w1, w2, ..., wn} of (positive) symbol weights (usually proportional to probabilities), i.e. wi = weight(ai), 1 ≤ i ≤ n.
Output: a code C(A, W) = {c1, c2, ..., cn}, which is the set of (binary) code words, where ci is the code word for ai, 1 ≤ i ≤ n.
Goal: let L(C) = ∑ wi · length(ci) be the weighted path length of code C. The condition is L(C) ≤ L(T) for any code T(A, W). As defined by Shannon (1948), the information content h (in bits) of each symbol ai with non-null probability is h(ai) = log2(1/wi). The entropy H (in bits) is the weighted sum, across all symbols ai with non-zero probability wi, of the information content of each symbol: H(A) = ∑ wi · log2(1/wi).
As a consequence of Shannon's source coding theorem, the entropy is a measure of the smallest achievable L(C). Huffman coding works by building a binary tree, which can be stored in a regular array whose size depends on the number of symbols. Leaf nodes contain the symbol itself, the weight of the symbol and, optionally, a link to the parent node, which makes it easy to read the code starting from a leaf. Internal nodes contain the combined symbol weight, links to the two child nodes and an optional link to the parent node. In the tree, 0 represents the left child and 1 represents the right child. The tree is constructed as follows: create a priority queue with one node per symbol, weighted by the symbol's frequency; repeatedly remove the two lowest-priority nodes from the queue and create a new internal node with these two nodes as children and with probability equal to the sum of the two nodes' probabilities, inserting it back into the queue; repeat until only the root of the tree remains. Then, starting from the root and walking down to each leaf, the Huffman code of every symbol is obtained.
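The following Python sketch illustrates the tree construction and code assignment just described. It is a minimal illustration written for this discussion (function names such as build_codes and encode are our own), not the implementation used in the paper.

```python
import heapq
from collections import Counter

def build_codes(data: bytes) -> dict[int, str]:
    """Build a Huffman code table (symbol -> bit string) for the given data."""
    freq = Counter(data)
    # Priority queue of (weight, tie_breaker, tree); a tree is either a symbol
    # (leaf) or a (left, right) pair (internal node).
    heap = [(w, i, sym) for i, (sym, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    if len(heap) == 1:                       # degenerate case: one distinct symbol
        return {heap[0][2]: "0"}
    while len(heap) > 1:
        w1, _, left = heapq.heappop(heap)    # the two lowest-weight nodes become
        w2, _, right = heapq.heappop(heap)   # children of a new internal node
        heapq.heappush(heap, (w1 + w2, tie, (left, right)))
        tie += 1
    codes: dict[int, str] = {}
    def walk(node, prefix: str) -> None:     # 0 = left child, 1 = right child
        if isinstance(node, tuple):
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:
            codes[node] = prefix
    walk(heap[0][2], "")
    return codes

def encode(data: bytes, codes: dict[int, str]) -> str:
    """Concatenate the code words of all symbols into one bit string."""
    return "".join(codes[b] for b in data)

if __name__ == "__main__":
    sample = b"migration log migration log migration"
    table = build_codes(sample)
    bits = encode(sample, table)
    print(len(sample) * 8, "->", len(bits), "bits")  # frequent symbols get shorter codes
```

Because the resulting code is prefix-free, the concatenated bit string can be decoded unambiguously, which is what the decompression step at the destination relies on.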
Decompression
Decompression [14] is the phase in which the compressed log files are decompressed at the destination. If the data is compressed using canonical Huffman encoding, the compression model can be precisely reconstructed with just B·2^B bits of information (where B is the number of bits per symbol).
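A matching decoder can be written directly against the code table by exploiting the prefix-free property. Again, this is only an illustrative sketch with assumed names (decode, and the symbol-to-code table from the previous sketch), not the paper's implementation.

```python
def decode(bits: str, codes: dict[int, str]) -> bytes:
    """Decode a Huffman bit string using the symbol -> code table.

    Because no code word is a prefix of another, we can scan the bit string
    left to right and emit a symbol as soon as a full code word matches.
    """
    reverse = {code: sym for sym, code in codes.items()}
    out = bytearray()
    current = ""
    for bit in bits:
        current += bit
        if current in reverse:
            out.append(reverse[current])
            current = ""
    if current:
        raise ValueError("bit string ends in the middle of a code word")
    return bytes(out)

# Example, using build_codes/encode from the previous sketch:
#   table = build_codes(b"abracadabra")
#   assert decode(encode(b"abracadabra", table), table) == b"abracadabra"
```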
D. Stop and Copy
When the number of log files being generated becomes very small, or some predefined condition is reached, the stop-and-copy phase is executed. In the stop-and-copy phase the VM is suspended at the source and resumed at the target.
E. Service handling
After the successful migration of the VM, the services that were handled by the source machine are transferred to the target machine. The target machine continues the tasks without any disturbance to the users, and it advertises its new IP address to the remaining devices in the network.
IV. EVALUATION
In this section we evaluate the proposed concept of live migration using compressed log files under various workload schemes and measure the performance metrics for each scheme.
Experimental Environment
The experimental setup consists of one cluster composed of four servers, in which one system acts as the storage server and the remaining servers act as clients running different workloads. Each server is configured with an Intel Core i3 processor, and all the systems are connected by a Gigabit LAN. The VMM is KVM and the guest OS running on each physical host is Ubuntu 10.24. The design objective is to measure the total migration time and downtime, provide more security, and reduce the number of pages transferred.
Application scenarios
1) Daily use: an Ubuntu 10.24 system used for everyday tasks.
2) Static web application: Apache 2.0.63 is used to measure static-content web server performance.
3) Dynamic web workload: a Tomcat 5.5.23 web server acts as the workload of the migrated virtual machine.
4) Kernel compile: a complete Linux 2.6.18 kernel compilation, a system-call-intensive workload that is expensive under virtualization.
Total migration time
Here we compare the total migration time under the various workloads. To reduce the total migration time we use a parallel multithreaded compression technique: multithreading allows a number of log files to be compressed in parallel. The multithreading also helps to reduce the energy consumption and the CPU usage of the system. Compared to CR/TR-Motion, our approach has a shorter total migration time.
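As a rough sketch of the parallel multithreaded compression described above, the snippet below compresses several log files concurrently. It uses Python's zlib merely as a stand-in lossless compressor (the paper's method is Huffman encoding), and the directory name is hypothetical, so it illustrates only the parallel structure rather than the exact pipeline.

```python
import zlib
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def compress_log(path: Path) -> tuple[str, bytes]:
    """Read one log file and return its name with a losslessly compressed payload.

    zlib stands in for the Huffman coder here; zlib.compress releases the GIL
    in CPython, so threads provide real parallelism for this workload.
    """
    data = path.read_bytes()
    return path.name, zlib.compress(data)

def compress_logs_parallel(paths: list[Path], workers: int = 4) -> dict[str, bytes]:
    """Compress all pending log files in parallel before sending them to the target."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(compress_log, paths))

if __name__ == "__main__":
    log_dir = Path("/var/log/vm-migration")                        # hypothetical log directory
    pending = list(log_dir.glob("*.log")) if log_dir.is_dir() else []
    compressed = compress_logs_parallel(pending)
    for name, blob in compressed.items():
        print(name, len(blob), "bytes after compression")
```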
Figure 2 shows the comparison of total migration time for the various applications. The compressed log file approach has a lower migration time; for the above workloads it reduces the total migration time by 62.1%, 76.2%, 74.4% and 65.4% respectively.
Figure 2: Total migration time
Down time
Downtime is the metric we are most concerned about during live VM migration, and for the compressed log transfer it is negligible. Figure 3 shows the downtime, in milliseconds, for the various workloads; compared to CR/TR-Motion, the compressed log file approach has a lower downtime.
Figure 3: Down time
Network Bandwidth
Live migration is affected by the network bandwidth. The consumption of bandwidth is very low with the compressed log file approach, because the log files are compressed before they traverse the network. Additionally, the approach provides more security for the transferred data: the log files reach the destination exactly as they were sent, and the compression itself also provides a degree of security for the data.
V. CONCLUSION AND FUTURE WORK
In this paper we have presented the design, implementation, and evaluation of the compressed-log approach. Log compression is a new concept in the field of live VM migration. The compression applied to the log files is lossless, so the log files are easily reconstructed at the target machine, and the DMTCP checkpoint allows the virtual machine to be reconstructed from its initial state. Compared to CR/TR-Motion, the log compression technique has a shorter total migration time and downtime, and it consumes less bandwidth to transfer the data.
One problem associated with the data compression method is that it requires more processing time to compress the log files.
VI. REFERENCES
[1] Nelson, M., Lim, B.-H., and Hutchins, G., "Fast transparent migration for virtual machines," in USENIX Annual Technical Conference, Anaheim, CA (2005), pp. 25-25.
[2] Clark, C., Fraser, K., Hand, S., Hansen, J., Jul, E., Limpach, C., Pratt, I., and Warfield, A., "Live migration of virtual machines," in Networked Systems Design and Implementation (2005).
[3] Ferreto, T. C., Netto, M. A. S., and Calheiros, R. N., "Server consolidation with migration control for virtualized data centers," Future Generation Computer Systems 27 (2011) 1027-1034.
[4] Clark, C., Fraser, K., Hand, S., Hansen, J., Jul, E., Limpach, C., Pratt, I., and Warfield, A., "Live migration of virtual machines," in Networked Systems Design and Implementation (2005).
[5] Hines, M. R., and Gopalan, K., "Post-copy based live VM migration using adaptive pre-paging and dynamic self-ballooning," in Proceedings of the ACM/USENIX International Conference on Virtual Execution Environments (VEE '09), 2009, pp. 51-55.
[6] Jin, H., Deng, L., Wu, S., Shi, X., and Pan, X., "Live VM migration with adaptive memory compression," in Proceedings of the 2009 IEEE International Conference on Cluster Computing (Cluster 2009), 2009.
[7] Ei Phyu Zaw, "Improved live virtual migration using LRU and splay tree algorithm," International Journal of Computer Science and Telecommunications.
[8] Jin, H., Gao, W., Wu, S., Shi, X., Wu, X., and Zhou, F., "Optimizing the live migration of virtual machine by CPU scheduling," Journal of Network and Computer Applications 34 (2011) 1088-1096.
[9] Liu, H., Jin, H., Liao, X., Yu, C., and Xu, C.-Z., "Live virtual machine migration via asynchronous replication and state synchronization," IEEE Transactions on Parallel and Distributed Systems.
[10] Mo Yuanbin, Qiu Yubing, Liu Jizhong, and Ling Yanxia, "A data compression algorithm based on adaptive Huffman code for wireless sensor networks," 2011 International Conference on Intelligent Computation Technology and Automation (ICICT), 28-29 March 2011.
[11] Isci, C., Liu, J., Abali, B., Kephart, J., and Kouloheris, J., "Improving server utilization using fast virtual machine migration," IBM Journal of Research and Development, vol. 55, November/December 2010.
[12] Korir Sammy, Ren Shengbing, and Cheruiyot Wilson, "Energy efficient security preserving VM live migration in data centers for cloud computing," IJCSI International Journal of Computer Science Issues, vol. 9, issue 2, no. 3, March 2012.
[13] Jason Ansel and Kapil Arya, "DMTCP: Transparent checkpointing for cluster computations and the desktop," Computer and Information Science Faculty Publications (2009).
[14] Nilkesh Patra and Sila Siba Sankar, "Data reduction by Huffman coding and encryption by insertion of shuffled cyclic redundancy code."