Study Paper on Cloud computing Virtualisation & Resource Management
Department of Telecommunications
Telecom Engineering Center
Khurshid Lal Bhavan, Janpath, New Delhi - 110011
R. Saji Kumar, Director; J.M. Suri, DDG, I Division, Telecom Engineering Center, Department of Telecommunications, New Delhi.
Abstract
In a cloud computing architecture, centralised resources such as CPU, memory, disk space and input/output functions are shared among multiple users, so virtualisation and efficient resource management are key to its success. Virtualisation is the process of decoupling the hardware from the operating system on a physical machine. Resource management is the process of managing physical resources such as CPU, memory and network across the various virtual machines (VMs) based on policies.
This paper elaborates the policies of cloud management that are critical for cloud resource management, along with the generic and functional requirements. The various methods adopted for CPU, memory and I/O virtualisation and resource management are discussed in detail. The paper also sketches the benefits and limitations of virtualisation.
Key words
Cloud computing, Virtualisation, Cloud Resource Management, Host Machine, Virtual Machine,
Virtual Machine Monitor, Hypervisor, CPU scheduling, CPU load balancing, Memory virtualisation, IO
virtualisation.
1. Introduction
Cloud Computing is a model for enabling service users to have ubiquitous, convenient and on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications and services) that can be rapidly provisioned and released with minimal management effort or service-provider interaction.
The cloud will provide IT in the same way that public utilities provide electricity, gas and water: there is no need to own the hardware or employ the staff, and there will be multiple public cloud providers. The generalised architecture of cloud computing is given in figure-1 below.
Figure-1: Cloud computing Architecture
Infrastructure as a Service (IaaS) refers to the sharing of hardware resources for executing services, typically using virtualisation technology. With this approach, potentially multiple users use existing resources. The resources can easily be scaled up when demand increases, and are typically charged for on a pay-per-use basis. In the Platform as a Service (PaaS) approach, the offering also includes a software execution environment, such as an application server. In the Software as a Service (SaaS) approach, complete applications are hosted on the Internet.
Virtualization is the process of decoupling the hardware from the operating system on a physical machine. Virtualization can be thought of essentially as a computer within a computer, implemented in software. This is true all the way down to the emulation of certain types of devices, such as sound cards, CPUs, memory, and physical storage. Figure-2 below describes how the physical resources like servers, storage and network are separated from the operating systems and the applications using the virtual infrastructure.
Figure-2: Hardware / Software Separation in a Virtualised Environment
Cloud computing takes virtualization to the next step: virtual machines and services can be rented as needed from a cloud service provider. However, such a model requires that the resources be managed in an efficient way.
Resource Management is this process of managing the Physical Resources like CPU,
Memory, Network etc across various Virtual Machines (VM) based on policies. The
figure-3 below gives a frame work of the cloud infrastructure with resource
orchestration.
Figure-3: Framework of cloud infrastructure with Resource orchestration.
2.
Virtualisation
Virtualization is the process of decoupling the hardware from the operating system on
a physical machine. An instance of an operating system running in a virtualised
environment is known as a virtual machine. Figure-4 below gives an architecture of
the virtualisation with multiple virtual machines sharing the same hardware
resources.
Figure-4: Virtualisation Architecture
Virtualisation technologies allow multiple virtual machines, with heterogeneous operating systems, to run side by side and in isolation on the same physical machine. By emulating a complete hardware system, from processor to network card, each virtual machine can share a common set of hardware, unaware that this hardware may also be being used by another virtual machine at the same time. The operating system running in the virtual machine sees a consistent, normalized set of hardware regardless of the actual physical hardware components.
Virtualisation is adopted in cloud architecture for the following reasons.
a. Hardware independence: The guest VM sees the same hardware regardless of the host hardware.
b. Isolation: The VM's operating system is isolated from the host operating system.
c. Encapsulation: The entire VM is encapsulated into a single file.
d. Simplified administration because of hardware independence and portability, increased hardware utilization, server consolidation, decreased provisioning times and improved security.
e. Other reasons include reduced capital expenditure, reduced operating expenditure, reduced risk of data outage and reduced energy consumption.
2.1 Terminologies used in Virtualisation:
Following are some of the terminologies used in virtualisation.
Host Machine:
A host machine is the physical machine running the virtualization software. It contains the physical resources, such as memory, hard disk space, and CPU, and other resources, such as network access, that the virtual machines utilize.
Virtual Machine:
The virtual machine is the virtualized representation of a physical machine that is run
and maintained by the virtualization software. Each virtual machine, implemented as
a single file or a small collection of files in a single folder on the host system, behaves
as if it is running on an individual, physical, non-virtualized PC.
Virtualization Software:
Virtualization software is a generic term denoting software that allows a user to run
virtual machines on a host machine.
Virtual Disk:
Virtual Disk is the virtual machine’s physical representation on the disk of the host
machine. A virtual disk comprises either a single file or a collection of related files. It
appears to the virtual machine as a physical hard disk.
Shared Folders:
Shared folders enable the virtual machine to access data on the host. Most virtual
machine implementations support the use of shared folders.
Virtual Machine Monitor (VMM):
A virtual machine monitor is the software solution that implements virtualization to
run in conjunction with the host operating system. The virtual machine monitor
virtualizes certain hardware resources, such as the CPU, memory, and physical disk,
and creates emulated devices for virtual machines running on the host machine.
Hypervisor:
In contrast to the virtual machine monitor, a hypervisor runs directly on the physical
hardware. The hypervisor runs directly on the hardware without any intervening help
from the host operating system to provide access to hardware resources. The
hypervisor is directly responsible for hosting and managing virtual machines running
on the host machine.
3. Resource Management
Resource Management is the process of managing the physical resources like CPU, memory, network etc. across various virtual machines (VMs) based on policies. The figure-4 below gives the positioning of resource management among the management and distributed services in the virtualisation architecture.
Figure-4: Management & Distributed Services in Virtualisation Architecture
The goal of resource management is three-fold:
a) Performance isolation: prevents VMs from monopolizing resources and guarantees predictable service rates.
b) Efficient utilization: achieved by exploiting under-committed resources and over-committing with graceful degradation.
c) Support for flexible policies: absolute service-level agreements are met and the relative importance of VMs is controlled efficiently.
Resource management requires complex policies and decisions for multi-objective optimization in cloud computing. A cloud is a complex system with a very large number of shared resources, subject to unpredictable requests and affected by external events it cannot control. Cloud resource management is extremely challenging because the complexity of the system makes it impossible to have accurate global state information, and because of the unpredictable interactions with the environment.
3.1 Policies for Cloud management:
Policies for cloud management can be grouped into the following five classes.
1) Admission control: The explicit goal of an admission control policy is to prevent the system from accepting workload in violation of high-level system policies; for example, a system may not accept additional workload which would prevent it from completing work already in progress or contracted. Limiting the workload requires some knowledge of the global state of the system; in a dynamic system such knowledge, when available, is at best obsolete.
2) Capacity allocation: Capacity allocation means to allocate resources for individual
instances; an instance is an activation of a service. Locating resources subject to
multiple global optimization constraints requires a search of a very large search space
when the state of individual systems changes rapidly.
3) Load balancing: The common meaning of the term “load balancing” is that of
evenly distributing the load to a set of servers.
4) Energy optimization: In cloud computing a critical goal is minimizing the cost of
providing the service and, in particular, minimizing the energy consumption. This leads
to a different meaning of the term “load balancing;” instead of having the load
evenly distributed amongst all servers, we wish to concentrate it and use the smallest
number of servers while switching the others to a standby mode, a state where a
server uses very little energy.
5) Quality of service (QoS) guarantees: A service level agreement (SLA) often specifies
the rewards as well as penalties associated with specific performance metrics.
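The energy-optimization reading of "load balancing" above — concentrating load on the fewest servers so the rest can go to standby — can be sketched as a packing problem. Below is a minimal first-fit-decreasing sketch; the host capacity and VM loads are illustrative assumptions, not values from this paper or any real scheduler:

```python
def consolidate(vm_loads, host_capacity):
    """Pack VM loads onto as few hosts as possible (first-fit decreasing).

    Hosts left unused can be switched to standby mode to save energy.
    Returns the total load placed on each powered-on host.
    """
    hosts = []  # each entry is the summed load on that host
    for load in sorted(vm_loads, reverse=True):
        for i, used in enumerate(hosts):
            if used + load <= host_capacity:
                hosts[i] += load  # fits on an already powered-on host
                break
        else:
            hosts.append(load)  # power on a new host
    return hosts

# Five VMs (loads in percent of one host) fit on two hosts instead of five,
# so three hosts can go to standby.
print(consolidate([60, 40, 50, 30, 20], host_capacity=100))  # [100, 100]
```

An even-spread load balancer would instead keep all five hosts lightly loaded; the contrast between the two placements is exactly the difference in meaning described above.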
3.2 General Requirements of Resource Management
(i) Resource management provides the user with a unified interface for using all the heterogeneous resources without caring about their real type.
(ii) It supports computing, storage, and network resources management.
(iii) It supports both virtual and physical resources.
(iv) It shields the user from the changing nature (dynamicity) of the performance of cloud resources.
(v) It evaluates the performance of each resource to fulfil the QoS of each user request.
(vi) It supports a unified resource management interface between different types of hypervisors and the cloud resource management, so as to integrate different types of heterogeneous resources.
3.3 Functional Requirements of Resource Management
(i) It provides a unified interface for heterogeneous resources, whether virtualized or physical, to the upper layers for management and utilization.
(ii) It provides elastic, dynamic, on-demand and automated management for the lower layers, based on user-defined policies, by providing resource access control interfaces to the upper layers.
(iii) It provides the capability to describe groups of computing, storage and network resources for easy allocation and deployment, to satisfy the application/service resource demand of the upper layers by the use of templates and their management.
(iv) It provides unified management of the physical devices, including configuration information and topology of assets.
(v) The requirements can be classified as below –
a. Resource encapsulation
b. Resource orchestration and provisioning
c. Assets management
d. Template management
e. Cloud service monitoring
f. User resource environment management
3.3.1 Resource encapsulation
(i) The heterogeneous resources shall be accessible through a unified interface, which shall be able to create, locate, provision, recover and delete the resources.
(ii) All the physical and virtual resources are managed in a unified manner through resource encapsulation.
(iii) The attributes of each resource, including resource deployment, status, capacity, execution, exception, error and interrupt, shall be measurable and searchable.
3.3.2 Resource orchestration and provisioning
(i)
It provides a unified interface to upper-layers for management and execution.
(ii) All resources are flexibly orchestrated, deployed and provisioned, on demand and automatically, based on pre-defined policies that include high availability, load balancing, resource migration, energy efficiency and storage deployment.
(iii) It provides on-demand and automated management for the lower layers based on pre-defined policies, and access control interfaces to the upper layers.
(iv)
It is possible to dynamically allocate the resources by real-time monitoring of
applications and SLAs.
(v)
It is also possible for the services to be analyzed and to be translated into
resource requirements and to trigger appropriate actions.
3.3.3 Assets Management
(i) Asset management provides unified management of the physical devices, including asset information management.
(ii) Asset attributes (hardware: racks, servers, storage devices, network equipment, and VMs; software: hypervisors, operating systems, middleware, databases, applications, licenses, and so on) and the topology of physical devices are managed in a unified manner.
(iii) It automatically updates the asset attributes when the physical devices are changed.
3.3.4 Template management
(i)
It provides life cycle management of each resource, including creation,
publication, activation, revocation, deletion etc.
(ii)
It provides management of life cycles of templates, including creation,
publication, activation, revocation, deletion, template provision, etc.
3.3.5 Cloud service Monitoring
(i) All physical and virtual resources are monitored (such as physical servers, virtual machine monitors/hypervisors, virtual machines, physical and virtual disks, physical and virtual networks, and applications).
(ii) The architecture of the resource monitor is multi-layered, including service instance monitoring, physical resources monitoring, resource pool monitoring, user connection monitoring, software monitoring, etc.
(iii) The system is able to detect the exceptions or errors of the computing, storage and network equipment and the resource pool without affecting the monitoring of existing users.
3.3.6 Health monitoring
(i)
It is possible to monitor the health of both the physical and virtual
infrastructure like physical server hardware status, hypervisor status, virtual
machine status, physical and virtual network switches and routers, and
storage systems.
(ii) It is implemented as a service model, which can be regarded as a map displaying all of the technology components, including transactions, applications, web servers, network switches, virtualized components, and third-party cloud services.
(iii) The service model provides run-time monitoring of the constantly changing service infrastructure.
(iv) An integrated operations bridge consolidates event and performance data from both physical and virtual sources to reduce duplicate monitoring.
(v) Automatic remediation capabilities reduce mean time to repair (MTTR).
3.3.7 Performance monitoring
(i)
It looks at the performance of the CPU, memory, storage and network
from the VM guest OS as well as from the hypervisor.
(ii)
The metrics are monitored both in virtualized and non-virtualized
environments.
3.3.8 Capacity monitoring
The key metrics monitored for capacity planning are:
(i) Server utilization: peak/average server resource utilization (memory/CPU/other resources), server bottlenecks, and correlation with the number of users/VMs.
(ii)
Memory usage: Memory utilization on each server, capacity bottlenecks and
relationship with number of users/VMs and with different cloud services.
(iii)
Network usage: Peak/average network utilization, capacity/bandwidth
bottlenecks and relationship with a number of users/VMs and with different
cloud services.
(iv)
Storage utilization: Overall storage capacity metrics, VM/virtual disk
utilization, I/O performance metrics, snapshot monitoring and correlation
with a number of users/VMs and with different cloud services.
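The peak/average utilization figures listed above can be derived from periodic monitoring samples along the following lines. This is a minimal sketch; the sample values are hypothetical:

```python
def capacity_summary(samples):
    """Summarize a series of utilization samples (in percent) into the
    peak/average metrics used for capacity planning."""
    return {
        "peak": max(samples),
        "average": sum(samples) / len(samples),
    }

# Hypothetical per-interval CPU utilization samples for one server.
cpu_samples = [35, 40, 90, 55, 30]
print(capacity_summary(cpu_samples))  # peak 90, average 50
```

The same summary would be applied per server, per VM or per service to the memory, network and storage sample streams, and then correlated with the number of users/VMs for planning.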
3.3.9 Security and compliance monitoring
Security and compliance monitoring provides metrics for the following key functions:
(i) VM sprawl: metrics to monitor VM activities as VMs get cloned, copied, moved across networks, moved to different storage media, etc.
(ii) Configuration metrics: virtual server configuration monitoring, and VM configuration monitoring for software licensing policy enforcement; VI events that help enforce/detect violations of IT policy, individual security policy and organization security policy, etc.
(iii) Access control: access control monitoring and reports for role-based access control enforcement.
(iv) Compliance monitoring: metrics to validate/audit IT for conformance to various regulatory requirements.
3.3.10 Monitoring and metering for charging and billing
(i)
In a virtualized environment, where the infrastructure is centralized, it
measures resource usage by different business units, groups, and users. This
information can be used to distribute/amortize and, in some cases, recover
the cost correctly across the organization through a proper chargeback
mechanism.
(ii) To compute the correct chargeback information in a dynamic virtualized environment, it monitors virtual/physical resource usage, service usage and allocations.
(iii) It normalizes the measurement statistics across the cloud infrastructure.
(iv) It is also possible to obtain the following metrics –
a. Standard metrics: All chargeable resource metrics like CPU usage,
memory usage, storage usage (volume and time), and network usage
(bandwidth and network traffic).
b. Key Virtual Infrastructure (VI) events: VI events for virtual resource life
cycle events like start date and end date of VM creation and allocation.
c. Configuration monitoring: VM configuration in terms of assigned
resources and reservations and also applications installed to an account
for software licensing costs.
d. VM usage metrics: VM uptime, number of VMs etc.
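A chargeback computation from such metered usage might look like the sketch below. The metric names and per-unit rates are invented for illustration; real rates would come from the provider's tariff:

```python
# Hypothetical per-unit rates for the standard chargeable metrics.
RATES = {
    "cpu_hours": 0.05,         # per CPU-hour
    "gb_ram_hours": 0.01,      # per GB-hour of memory
    "gb_storage_hours": 0.002, # per GB-hour of storage
}

def chargeback(usage):
    """Compute a business unit's charge from its metered resource usage."""
    return sum(RATES[metric] * amount for metric, amount in usage.items())

# One business unit's monthly metered usage (hypothetical figures).
bill = chargeback({"cpu_hours": 100, "gb_ram_hours": 400, "gb_storage_hours": 1000})
print(round(bill, 2))  # 0.05*100 + 0.01*400 + 0.002*1000 = 11.0
```

Normalization (item iii) would adjust the metered amounts before this step, so that a CPU-hour means the same thing on every host in the cloud.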
3.3.11 Application and service monitoring
(i)
The application and service monitoring is important in the cloud computing
environment for the evaluation of SLA/QoS.
(ii)
The system monitors the basic health of application servers, with the help of
application-specific response time and throughput metrics.
(iii)
All kinds of virtualization software (server virtualization, storage virtualization,
network virtualization, etc.,) provide suitable API for collection of metrics.
(iv)
The systems have suitable analytics software for analysing the metrics and
presenting the results through a suitable GUI.
3.3.12 User resource environment management
(i)
User resource environment includes the resources allocated to a user, the
state of the resources (such as running, stopped for a virtual machine), and
the topologies among the resources.
(ii)
Cloud infrastructure provides the secure isolation between different user
resource environments to prevent the management activities in one user
resource environment from impacting other user resource environments.
(iii)
The cloud provider gives a user appropriate control of his user resource
environment.
4. CPU Resource Management and Virtualisation
The underlying physical resources are used and the virtualization layer runs
instructions only as needed to make virtual machines operate as if they were running
directly on a physical machine.
4.1 CPU Virtualization
(i) One physical CPU is virtualised into multiple virtualized CPUs (vCPUs) for multiple virtual machine instances using time-sharing technologies, so that each instance obtains at least one vCPU.
(ii)
The system administrator assigns vCPUs for virtual machines.
(iii)
It is possible to guarantee or limit the performance of a virtualized CPU in a
virtual machine instance.
(iv)
The virtual machine monitor (hypervisor) implements the CPU scheduling function, which determines the mapping between the vCPUs and the physical CPUs managed by the hypervisor.
(v)
It is possible to increase and decrease the number of CPU resources assigned
to a Virtual machine at a later stage.
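The vCPU-to-physical-CPU mapping in point (iv) can be sketched as below. This is a simplified illustration; the round-robin placement policy and the data shapes are assumptions, not the algorithm of any particular hypervisor:

```python
def assign_vcpus(vms, pcpus):
    """Map each VM's vCPUs onto physical CPUs round-robin.

    vms: {vm_name: vcpu_count}.  Returns {(vm, vcpu_index): pcpu_index}.
    With more vCPUs than physical CPUs, several vCPUs share one pCPU
    and the scheduler time-shares it among them.
    """
    mapping, next_pcpu = {}, 0
    for vm, count in vms.items():
        for i in range(count):
            mapping[(vm, i)] = next_pcpu % pcpus
            next_pcpu += 1
    return mapping

# Three VMs with 2+1+2 vCPUs share two physical CPUs by time sharing.
print(assign_vcpus({"vm1": 2, "vm2": 1, "vm3": 2}, pcpus=2))
```

Increasing the vCPU count of a VM (point v) simply adds entries to this map; the guarantee/limit controls of point (iii) would then bound how much pCPU time each mapped vCPU may consume.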
4.1.1 Software based Virtualisation:
The guest application code runs directly on the processor, while the guest privileged code is translated, and the translated code executes on the processor. The translated code is slightly larger and usually executes more slowly than the native version. As a result, guest programs with a small privileged code component run at speeds very close to native. Programs with a significant privileged code component, involving system calls, traps, or page table updates, can run more slowly in the virtualized environment.
4.1.2 Hardware based Virtualisation:
The guest code, whether application code or privileged code, runs in the guest mode.
There is no need to translate the code. As a result, system calls or trap-intensive
workloads run very close to native speed.
4.1.3 Multi-core Processors:
A dual-core processor usually can provide almost double the performance of a single-core processor, by allowing two virtual CPUs to execute at the same time. Cores
within the same processor are typically configured with a shared last-level cache used
by all cores, potentially reducing the need to access slower main memory. A shared
memory bus that connects a physical processor to main memory can limit
performance of its logical processors if the virtual machines running on them are
running memory-intensive workloads which compete for the same memory bus
resources. Each logical processor of each processor core can be used independently
by the CPU scheduler to execute virtual machines.
4.1.4 Hyper threading:
Hyper threading technology allows a single physical processor core to behave like two
logical processors. The processor can run two independent applications at the same
time. While hyper threading does not double the performance of a system, it can
increase performance by better utilizing idle resources leading to greater throughput
for certain important workload types. Hyper threading performance improvements
are highly application-dependent, and some applications might see performance
degradation with hyper threading because many processor resources (such as the
cache) are shared between logical processors.
4.2 CPU Scheduling
Maximum CPU utilization is obtained with multiprogramming, i.e. several processes are kept in memory at one time, and every time a running process has to wait, another process can take over use of the CPU. Scheduling is a critical component
of the cloud resource management. Scheduling is responsible for resource
sharing/multiplexing at several levels; a server can be shared among several virtual
machines, each virtual machine could support several applications, and each
application may consist of multiple threads. CPU scheduling supports the
virtualization of a processor, the individual threads acting as virtual processors; a
communication link can be multiplexed among a number of virtual channels, each one
of them dedicated to a single flow. A scheduling algorithm should be efficient, fair,
and starvation-free. Two distinct dimensions of resource management must be
addressed by a scheduling policy: (a) the amount/quantity of resources allocated; and
(b) the timing when access to resources is granted.
The criteria for selecting a CPU scheduling algorithm are based on:
• CPU utilization – the percentage of time that the CPU is busy executing a process.
• Throughput – the number of processes that are completed per time unit.
• Response time – the amount of time from when a request was submitted until the first response occurs (but not the time it takes to output the entire response).
• Waiting time – the amount of time before a process starts after first entering the ready queue (or the sum of the time a process has spent waiting in the ready queue).
• Turnaround time – the amount of time to execute a particular process, from the time of submission to the time of completion.
The scheduling mechanism shall also provide accurate rate-based controls, support multi-core, multi-threaded CPUs and support a grouping mechanism.
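The waiting-time and turnaround-time criteria can be made concrete with a small first-come-first-serve calculation. The burst times are illustrative, and all processes are assumed to arrive at time 0:

```python
def fcfs_metrics(burst_times):
    """Per-process waiting and turnaround times under first-come-first-serve,
    assuming all processes arrive at time 0 in the given order."""
    waiting, turnaround, clock = [], [], 0
    for burst in burst_times:
        waiting.append(clock)      # time spent in the ready queue before starting
        clock += burst             # the process runs to completion, uninterrupted
        turnaround.append(clock)   # submission (t=0) to completion
    return waiting, turnaround

w, t = fcfs_metrics([24, 3, 3])
print(w, t)  # [0, 24, 27] [24, 27, 30]
```

Note how one long burst at the head of the queue inflates the waiting time of every process behind it; this is the behaviour that the other algorithms below try to improve on.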
4.3 Scheduling Algorithms
Round-robin, first-come-first-serve (FCFS), shortest-job-first (SJF), and priority
algorithms are among the most common scheduling algorithms for best effort
applications.
4.3.1 First Come First Serve:
The process that requests the CPU first is allocated the CPU first.
4.3.2 Shortest Job First:
When the CPU becomes available, it is assigned to the process that has the smallest
next CPU burst.
4.3.3 Priority based scheduling:
It associates each process with a priority and makes a scheduling choice or preemption decision based on the priorities. For example, the process with the highest priority among the ready processes would be chosen, and that process may preempt the currently running process if it has a higher priority.
4.3.4 Proportional-Share Based Algorithm:
It associates each process with a share (entitlement) of the CPU resource. The entitled resource may not be fully consumed. When making scheduling decisions, the ratio of the consumed CPU resource to the entitlement is used as the priority of the process. If a process has consumed less than its entitlement, it is considered high priority and will likely be chosen to run next, i.e. the VM with the smallest virtual time is scheduled.
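The "smallest virtual time" selection rule can be sketched as follows; the shares and consumption figures are illustrative:

```python
def pick_next(vms):
    """Proportional-share pick: the VM with the lowest ratio of consumed CPU
    to entitlement (equivalently, the smallest virtual time) runs next."""
    return min(vms, key=lambda v: vms[v]["consumed"] / vms[v]["share"])

vms = {
    "vm_a": {"share": 2, "consumed": 10},  # virtual time 5.0
    "vm_b": {"share": 1, "consumed": 4},   # virtual time 4.0  <- furthest behind
    "vm_c": {"share": 4, "consumed": 24},  # virtual time 6.0
}
print(pick_next(vms))  # vm_b has consumed least relative to its entitlement
```

Dividing consumption by the share makes a VM with a large entitlement fall behind more slowly, which is exactly how relative importance between VMs is enforced.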
4.3.5 Round Robin Algorithm:
In the round robin algorithm, each process gets a small unit of CPU time (a time quantum), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue.
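A minimal simulation of the round robin algorithm, assuming all processes sit in the ready queue from the start (the process names and burst times are illustrative):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate round robin scheduling.

    bursts: {process: total CPU time needed}.
    Returns the sequence of (process, time_ran) slices in execution order.
    """
    queue = deque(bursts.items())
    trace = []
    while queue:
        name, left = queue.popleft()
        ran = min(quantum, left)       # run for one quantum, or less if finishing
        trace.append((name, ran))
        if left > ran:                 # not finished: preempt and requeue at tail
            queue.append((name, left - ran))
    return trace

print(round_robin({"p1": 5, "p2": 3}, quantum=2))
# [('p1', 2), ('p2', 2), ('p1', 2), ('p2', 1), ('p1', 1)]
```

The trace shows the defining property: no process waits longer than (number of ready processes - 1) quanta before running again.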
4.3.6 Multi-level queue scheduling:
Multi-level queue scheduling is used when processes can be classified into groups. A
multi-level queue scheduling algorithm partitions the ready queue into several
separate queues. The processes are permanently assigned to one queue, generally
based on some property of the process such as memory size, process priority, or
process type. Each queue has its own scheduling algorithm.
4.3.7 Asymmetric Multiprocessor scheduling:
One processor handles all scheduling decisions, I/O processing, and other system
activities. The other processors execute only user code. Because only one processor
accesses the system data structures, the need for data sharing is reduced.
4.3.8 Symmetric Multiprocessor scheduling or Symmetric Multi-threading (SMT):
In SMT, each processor schedules itself. Symmetric multiprocessing systems allow
several threads to run concurrently by providing multiple physical processors. SMT is a
feature provided in the hardware, not the software.
4.4 CPU Load balancing or Inter-processor load balancing:
On multi-processor systems, balancing CPU load across processors, or load balancing, is critical to performance. Load balancing is achieved by having a process migrate from a busy processor to an idle processor. Generally, process migration improves the responsiveness of a system and its overall CPU utilization. The features of inter-processor load balancing include per-processor dispatch and run queues, scanning remote queues periodically for fairness, pulling work whenever a physical CPU becomes idle, and pushing work whenever a virtual CPU wakes up. Multi-processor VM support gives the illusion of a dedicated multi-processor, with near-synchronous co-scheduling of vCPUs, and supports hot-addition of vCPUs.
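The "pull whenever a physical CPU becomes idle" behaviour can be sketched as below; the run-queue contents are illustrative, and real schedulers weigh locality and fairness rather than always pulling from the longest queue:

```python
def rebalance(run_queues):
    """Pull-migration sketch: each idle CPU pulls one waiting process
    from the busiest run queue.

    run_queues: {cpu_id: [waiting processes]}.  Mutated in place and returned.
    """
    for cpu, queue in run_queues.items():
        if not queue:  # this CPU is idle
            busiest = max(run_queues, key=lambda c: len(run_queues[c]))
            if run_queues[busiest]:
                queue.append(run_queues[busiest].pop())  # migrate one process
    return run_queues

print(rebalance({0: ["a", "b", "c"], 1: []}))  # CPU 1 pulls a process from CPU 0
```

Push migration is the mirror image: the busy CPU initiates the move when a new virtual CPU wakes up on it.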
4.4.1 Load balancing in NUMA Systems:
In a NUMA (Non-Uniform Memory Access) system, there are multiple NUMA nodes
that consist of a set of processors and the memory. The access to memory in the same
node is local while the access to the other node is remote. The remote access takes
longer cycles because it involves a multi-hop operation. Due to this asymmetric access
latency, keeping the memory access local or maximizing the memory-locality improves
performance. On the other hand, CPU load-balancing across NUMA nodes is also
crucial to performance.
4.4.2 Shared cache management:
Shared cache management allows multi-core processors to share a common cache memory. It is a hardware feature offering explicit cost-benefit tradeoffs for migrations, and can use hardware cache QoS techniques.
4.4.3 Load balancing on Hyper threading Architecture:
Hyper threading enables concurrently executing instructions from two hardware contexts in one processor. Although it may achieve higher performance from thread-level parallelism, the improvement is limited as the total computational resource is still capped by a single physical processor. Also, the benefit is heavily workload dependent. A wholly idle processor, with both hardware threads idle, provides more CPU resource than a single idle hardware thread with a busy sibling thread.
5. Memory virtualization
Memory virtualization divides the physical memory, allocates memory for virtual machine instances when they start up, and releases memory from virtual machines when they shut down. Every running instance of an OS sees a contiguous memory space and is isolated from the memory space of other instances. The hypervisors are capable of memory address conversion from the guest instance physical memory address to the machine physical address. The operating system of a running instance maps application virtual memory to guest instance physical memory. The memory allocation to a guest OS can be increased at a later stage. The virtualization software has an over-commit feature for the virtual machines.
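The guest-physical to machine-physical address conversion can be sketched with a toy page map. The 4 KB page size matches the common case; the map contents are invented for illustration:

```python
PAGE = 4096  # bytes per page (common 4 KB page size)

def translate(guest_phys_addr, p2m):
    """Translate a guest-physical address to a machine-physical address
    using the hypervisor's physical-to-machine (p2m) page map."""
    page, offset = divmod(guest_phys_addr, PAGE)
    return p2m[page] * PAGE + offset

# Hypothetical map: the guest's contiguous pages 0..2 actually live
# at scattered machine pages 7, 3 and 12.
p2m = {0: 7, 1: 3, 2: 12}
print(hex(translate(0x1010, p2m)))  # guest page 1, offset 0x10 -> 0x3010
```

The guest OS performs its own virtual-to-guest-physical mapping first, so every guest memory access goes through two translations; hardware assists (nested page tables) exist precisely to make this double lookup cheap.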
5.1 Re-claiming unused memory
The memory manager of the hypervisor detects whether the virtual memory is actually used by the guest OS or not. If not, the hypervisor shall be able to assign the unused part of the memory to another guest OS, so that the memory can be shared among the guest OSes. This feature is required for memory over-commitment. It is achieved through the traditional method of adding a transparent swap layer, or by using implicit co-operation.
Ballooning is a method in which the guest OS manages memory through implicit cooperation, by paging in and out of the virtual disk.
In page sharing, multiple VMs running the same OS de-duplicate redundant copies of code, data etc.
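Content-based page sharing of this kind can be sketched by keying pages on a hash of their contents. This is a simplification: real hypervisors also compare full page contents on hash match and mark shared pages copy-on-write:

```python
import hashlib

def share_pages(pages):
    """Page-sharing sketch: identical pages are stored once and
    referenced by their content hash (de-duplication)."""
    store, refs = {}, []
    for page in pages:
        digest = hashlib.sha256(page).hexdigest()
        store.setdefault(digest, page)  # keep only one copy per unique content
        refs.append(digest)             # each VM page just references that copy
    return store, refs

# Two VMs booting the same OS hold an identical kernel page.
store, refs = share_pages([b"kernel" * 100, b"kernel" * 100, b"appdata" * 100])
print(len(refs), "pages map to", len(store), "stored copies")  # 3 pages -> 2 copies
```

The memory saved is the difference between references and stored copies; a write to a shared page would break the sharing by giving the writer a private copy.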
5.2 NUMA [Non Uniform Memory Access] scheduling
Periodic rebalancing of memory usage computes VM entitlements and memory locality, assigns a "home" node to each VM, and migrates VMs and pages across nodes.
VM migration moves all vCPUs and threads associated with a VM, and is carried out to balance load and improve locality.
Page migration allocates new pages from the home node and carries out migration and replication.
6. I/O device Resource Management and Virtualization
Each virtual machine is capable of equipping its own virtual I/O devices, abstracted from the I/O devices of the physical machine, and the virtualization layer implements the mapping of virtual to physical devices. The VMs do not have any constraint on the number of virtual I/O devices. The data transferred or stored by the physical I/O devices is never shared amongst the virtual machine operating systems. The virtualization software allows redirection of virtual machine serial ports over a standard network link, thereby enabling solutions such as third-party virtual serial port concentrators for virtual machine serial console management or monitoring.
Figure-5: IO Virtualisation
7. Virtual machine Duplication and Migration
This property allows duplicating the main virtual machine such that the duplicate virtual machine will have the same operating system and installed applications as the main virtual machine. It is also possible to move an operating system and its applications between a virtual machine and a physical machine, or between virtual machines on different physical machines, with the operating system temporarily stopped.
It supports online migration of virtual machines to other physical machines
running on the same network and utilizing the same central storage. Virtualization
software allows for taking snapshots of the virtual machines to be able to revert to an
older state if required.
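The duplication and snapshot behaviour described above can be sketched with a toy VM whose state is just a dictionary. Real hypervisors also capture memory and disk images; the class and method names here are assumptions for illustration only.

```python
# Minimal sketch of VM duplication and snapshot/revert: a snapshot copies
# the current state, revert restores it, and clone produces a duplicate VM
# with the same OS and applications. Names are illustrative assumptions.
import copy

class VirtualMachine:
    def __init__(self, state):
        self.state = state
        self.snapshots = []

    def snapshot(self):
        self.snapshots.append(copy.deepcopy(self.state))

    def revert(self):
        self.state = self.snapshots.pop()     # back to the older state

    def clone(self):
        """Duplicate: same OS and installed applications as the original."""
        return VirtualMachine(copy.deepcopy(self.state))

vm = VirtualMachine({"os": "linux", "apps": ["db"]})
vm.snapshot()
vm.state["apps"].append("web")
vm.revert()
print(vm.state["apps"])   # ['db']
```

The deep copies are what make the duplicate independent: changes in the clone never leak back into the original, mirroring the isolation the paper describes.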
8.
Distributed systems
Typically in cloud architecture, there will be multiple server hardware working
logically as a single server. These servers may be physically located in a single location
or may be hosted at multiple locations. Virtualisation in such a scenario is more
complex. The techniques involve choosing an initial host when a VM powers on,
migrating running VMs across physical hosts, and dynamic load balancing.
Cluster-wide resource management requires uniform controls (the same as
available on a single host), flexible hierarchical policies and delegation,
configurable automation levels and aggressiveness, and configurable VM
affinity/anti-affinity rules.
Distributed Power Management powers off unneeded hosts and powers them
on again when needed.
Distributed I/O Management is needed because multiple hosts access the same
storage array, NICs, HBAs etc. It covers host-level I/O scheduling to arbitrate
access to local NICs and HBAs, disk I/O bandwidth management and network
traffic shaping.
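Initial placement combined with distributed power management can be sketched as below. The greedy rule (prefer the running host with the most free capacity, and wake a standby host only when no running host fits) is an assumed policy for illustration, not a specific product's algorithm.

```python
# Sketch of choosing an initial host at VM power-on, with a simple
# distributed-power-management twist: standby hosts are powered on only
# when no running host can satisfy the demand. Policy is an assumption.

def place(hosts, demand):
    """hosts: {name: {"free": MB, "on": bool}}. Returns the chosen host."""
    candidates = [h for h, s in hosts.items() if s["on"] and s["free"] >= demand]
    if not candidates:
        # power on a standby host that can satisfy the demand
        for h, s in hosts.items():
            if not s["on"] and s["free"] >= demand:
                s["on"] = True
                candidates = [h]
                break
    if not candidates:
        raise RuntimeError("no capacity in the cluster")
    best = max(candidates, key=lambda h: hosts[h]["free"])
    hosts[best]["free"] -= demand
    return best

hosts = {"h1": {"free": 2048, "on": True}, "h2": {"free": 8192, "on": False}}
print(place(hosts, 4096))   # 'h2' (powered on for the new VM)
```

Affinity/anti-affinity rules would simply filter the candidate list before the greedy choice, which is why they are listed as configurable policy rather than a separate mechanism.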
9.
Benefits of Virtualisation
Reducing hardware and software needs, improving performance and scalability,
and reducing downtime are key factors in managing costs today. Virtual machines
provide the means to achieve the following goals.
• Virtual machines allow more efficient use of resources by consolidating multiple
operating environments on underutilized servers onto a smaller number of
virtualized servers.
• Virtual machines make the manageability of systems easier. For example, you do
not need to shut down servers to add more memory or upgrade a CPU.
• The complexity of overall administration is reduced because each virtual
machine’s software environment is independent from the underlying physical
server environment.
• The environment of a virtual machine is completely isolated from the host
machine and the environments of other virtual machines so you can build out
highly-secure environments that are tailored to your specifications. For example,
you can configure a different security setting for each virtual machine. Also, any
attempt by a user to interfere with the system would be foiled because one
virtual environment cannot access another unless the virtualization stack allows
this. Otherwise, it restricts access entirely.
• You can migrate old operating systems for which it is difficult to obtain
appropriate underlying hardware for a physical machine. Along these same lines,
you can run old software that has not been, or cannot be, ported to newer
platforms.
• You can run multiple, different operating systems from different vendors
simultaneously on a single piece of hardware.
• Because virtual machines are encapsulated into files you can easily save and copy
a virtual machine. You can quickly move fully configured systems from one
physical server to another.
• Virtualization allows you to deliver a pre-configured environment for internal or
external deployment scenarios.
• Virtual machines allow for powerful debugging and performance monitoring.
Operating systems can be debugged without losing productivity and without
having to set up a more complicated debugging environment.
• The virtual machine provides a compatible abstraction so that software written
for it will run on it. For example, a hardware-level virtual machine will run all the
software, operating systems, and applications written for the hardware. Similarly,
an operating system-level virtual machine will run applications for that particular
operating system, and a high-level virtual machine will run programs written in
the high-level language.
• Because virtual machines can isolate what they run, they can provide fault and
error containment. You can insert faults proactively into software to study its
subsequent behavior. You can save the state, examine it, modify it, reload it, and
so on. In addition to this type of isolation, the virtualization layer can enforce
performance isolation so that resources consumed by one virtual machine do not
necessarily affect the performance of other virtual machines.
10.
Conclusion
Data centre and desktop computing successfully use virtualization for better
utilization of computing capacity, to balance computing load, manage complexity and
parallelism and improve security by isolation.
However, virtualization may not work well for resource-intensive applications,
where VMs may have RAM/CPU/SMP limitations, or in situations where custom
hardware devices are required. Some hardware architectures or features are
impossible to virtualize because certain registers or states are not exposed.
Mobile and embedded computing currently lag behind in virtualisation since most
hypervisors only support the x86 platform, require large memories, have poor
real-time support and are inefficient with microkernel OSs. Moreover, suitable
open-source hypervisors are not available.
Glossary of Terms
CPU: Central Processing Unit in any computing device like personal computers,
servers etc
FCFS: First Come First Serve
HBA: Host Bus Adapter for Fibre Channel interface to Storage Area Networks [SAN]
IaaS: Infrastructure as a Service
MTTR: Mean Time To Repair
NIC: Network Interface Card, which generally carries the Ethernet ports
NUMA: Non-Uniform Memory Access
OS: Operating System like Windows XP, Unix, Linux etc
PaaS: Platform as a Service
PC: Personal Computer
QoS: Quality of Service
RAM: Random Access Memory
SaaS: Software as a Service
SJF: Shortest Job First
SLA: Service Level Agreement
SMP: Symmetric Multi-Processing
SMT: Symmetric Multi-Threading
vCPU: Virtual CPU
VM: Virtual Machine
VMM: Virtual Machine Monitor