Open-Source Software Toolkits for Creating and Managing Distributed Heterogeneous Cloud Infrastructures
A.V. Pyarn
Lomonosov Moscow State University,
Faculty of Computational Mathematics and
Cybernetics
apyarn@gmail.com
Agenda
• Aim of paper
• Virtualization
• Hypervisor architecture
• IaaS and cloud toolkits
• Requirements
• Hypervisor toolstacks
• Cloud platforms
• Comparison
Aim of paper
• Show use cases of cloud toolkits as virtual polygons (testbeds) for educational purposes.
• Compare cloud toolkit design aspects, architectural features, functional capabilities, installation how-to's, and extension and support capabilities: the Xen and KVM toolstacks and the OpenNebula and OpenStack cloud toolkits.
Virtualization
Types of Virtualization
• Emulation: fully emulate the underlying hardware architecture (Oracle VirtualBox, VMware Player/Server)
• Full virtualization: simulate the base hardware architecture (VMware ESXi, vSphere, Hyper-V, KVM, Xen)
• Paravirtualization: abstract the base architecture (Xen)
• OS-level virtualization: shared kernel (and architecture), separate user spaces (OpenVZ)
Hypervisor role
• Thin, privileged abstraction layer between the hardware and operating systems
• Defines the virtual machine that guest domains see instead of physical hardware:
− Grants portions of physical resources to each guest
− Exports simplified devices to guests
− Enforces isolation among guests
Hypervisor architecture
Toolstack
Toolstack = standard Linux tools + specific third-party toolkits and API daemons: libvirt, xend, XAPI, etc.
IaaS
• IaaS = virtualization (hypervisor features) + "Amazon-style" self-service portal and convenient GUI management + billing + multitenancy
• Hypervisor toolstacks and APIs vs. third-party open-source cloud toolkits (OpenNebula, the OpenStack "datacenter virtualization" platform, etc.)
What should we use? It depends on the requirements.
Requirements
For educational polygons:
• Open-source software: hypervisor and management subsystem
• NFS or iSCSI independent storage for virtual disks and images
• Easy installation and support
• GUI: management center; optionally a self-service portal, monitoring and accounting tools
Cloud platforms
[Architecture diagram: management server(s) running the scheduler, authorization, monitoring, web interface and DB, connected agentlessly over SSH to worker nodes; each worker node runs guest OSes on a hypervisor on top of the hardware, with VM storage on shared NFS/iSCSI.]
Cloud platforms do not include a hypervisor themselves; they provide only the management role.
KVM-QEMU
• SMP hosts
• SMP guests (as of kvm-61, max 16 CPUs supported)
• Live migration of guests from one host to another
Emulated hardware:
• Video card: Cirrus CLGD 5446 PCI VGA card or dummy VGA card with Bochs VESA extensions [14]
• PCI: i440FX host PCI bridge and PIIX3 PCI-to-ISA bridge [14]
• Input device: PS/2 mouse and keyboard [14]
• Sound card: Sound Blaster 16, ENSONIQ AudioPCI ES1370, Gravis Ultrasound GF1, CS4231A compatible [14]
• Ethernet network card: AMD Am79C970A (Am7990), E1000 (Intel 82540EM, 82573L, 82544GC), NE2000, and Realtek RTL8139
• Watchdog timer: Intel 6300ESB or IB700
• RAM: 50 MB to 32 TB
• CPU: 1 to 16 CPUs
KVM-QEMU
• Ease of use +
• Shared storage +
• Live migrations +
• Management GUI + (virt-manager, the Virtual Machine Manager)
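As an illustration, a minimal live-migration sketch using virsh (the host names and the guest name "vm01" are hypothetical; assumes two KVM hosts with the guest's disk on shared NFS/iSCSI storage and libvirt installed):

# List guests known to libvirt on the source host
virsh list --all
# Live-migrate the running guest to the destination host over SSH;
# with shared storage only memory and device state need to be copied
virsh migrate --live vm01 qemu+ssh://dst-host/system
# Confirm the guest is now running on the destination host
virsh -c qemu+ssh://dst-host/system list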
XEN
Virtualization in Xen
Xen can scale to >255 physical CPUs, 128 VCPUs
per PV guest, 1TB of RAM per host, and up to 1TB
of RAM per HVM guest or 512 GB of RAM per PV
guest.
Paravirtualization (PV):
• Uses a modified Linux kernel
• Guest boots via Dom0's pygrub or Dom0's kernel
• Front-end and back-end virtual device model
• Cannot run Windows
• Guest "knows" it is a VM and cooperates with the hypervisor
Hardware-assisted full virtualization (HVM):
• Uses the same, normal OS kernel
• Guest contains its own grub and kernel
• Normal device drivers
• Can run Windows
• Guest doesn't "know" it is a VM; the hardware provides the virtualization support
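For illustration, a minimal sketch of creating a PV guest with the xl toolstack (file paths, guest name and bridge name are hypothetical; an HVM guest would instead set builder = "hvm" and boot from its own bootloader inside the image):

# Write a minimal PV guest configuration and boot it with xl
cat > /etc/xen/pv-guest.cfg <<'EOF'
name       = "pv-guest"
memory     = 1024
vcpus      = 2
bootloader = "pygrub"   # boots the guest's own kernel from its disk image
disk       = [ "file:/var/lib/xen/images/pv-guest.img,xvda,w" ]
vif        = [ "bridge=xenbr0" ]
EOF
xl create /etc/xen/pv-guest.cfg
xl list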
Xen – Cold Relocation
Motivation:
Moving a guest between hosts without shared storage, or between hosts with different architectures or hypervisor versions
Process:
1. Shut down the guest on the source host
2. Move the guest from one Domain0's file system to another's by manually copying the guest's disk image and configuration files
3. Start the guest on the destination host
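A minimal command sketch of this process (host names, paths and the guest name are hypothetical; assumes an image-file-backed guest and SSH access between the two Domain0s):

# On the source host: shut the guest down cleanly
xl shutdown pv-guest
# Copy the disk image and configuration file to the destination Domain0
scp /var/lib/xen/images/pv-guest.img root@dst-host:/var/lib/xen/images/
scp /etc/xen/pv-guest.cfg root@dst-host:/etc/xen/
# On the destination host: start the guest again
xl create /etc/xen/pv-guest.cfg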
Xen – Cold Relocation
Benefits:
• Hardware maintenance with less downtime
• Shared storage not required
• Domain0s can be different
• Multiple copies and duplications possible
Limitations:
• More manual process
• Service must be down during the copy
Xen – Live Migration
Motivation:
Load balancing, hardware maintenance, and power management
Result:
• Begins transferring the guest's state to the new host
• Repeatedly copies dirtied guest memory (due to continued execution) until complete
• Re-routes network connections; the guest keeps running with execution and network connectivity uninterrupted
Xen – Live Migration
Benefits:
• No downtime
• Network connections to and from the guest often remain active and uninterrupted
• Guest and its services remain available
Limitations:
• Requires shared storage
• Hosts must be on the same layer-2 network
• Sufficient spare resources needed on the target machine
• Hosts must be configured similarly
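A minimal sketch with the xl toolstack (host and guest names are hypothetical; assumes shared storage, the same layer-2 network and migration enabled on the destination host):

# Live-migrate the running guest to the destination host (transport over SSH by default)
xl migrate pv-guest dst-host
# Verify the guest now runs on the destination host
ssh root@dst-host xl list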
Xen Cloud Platform (XCP)
XCP includes:
Open-source Xen hypervisor
Enterprise-level XenAPI (XAPI) management
tool stack
Support for Open vSwitch (open-source,
standards-compliant virtual switch)
Features:
• Fully-signed Windows PV drivers
• Heterogeneous machine resource pool support
• Installation by templates for many different
guest OSes
XCP XenAPI Management Tool Stack
• VM lifecycle: live snapshots, checkpoint, migration
• Resource pools: live relocation, auto configuration, disaster recovery
• Flexible storage, networking, and power management
• Event tracking: progress, notification
• Upgrade and patching capabilities
• Real-time performance monitoring and alerting
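As an illustration of the XAPI tool stack, a short xe sketch (VM and host names are hypothetical; assumes an XCP resource pool):

# List VMs known to the pool
xe vm-list
# Take a live snapshot of a running VM
xe vm-snapshot vm=web01 new-name-label=web01-before-upgrade
# Live-migrate the VM to another host in the same resource pool
xe vm-migrate vm=web01 host=xcp-host2 live=true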
XCP Installation
XCP Management Software: XenCenter
XCP Toolstack
Command Line Interface (CLI) tools:
• xl toolstack → xl
• XAPI → xe
• libvirt → virsh
• xend → xm
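The same basic operation (listing domains) looks similar in each CLI; a sketch assuming the respective toolstack is active on the host:

xl list                  # xl, the default Xen 4.x toolstack
xe vm-list               # XAPI (XCP / XenServer)
virsh -c xen:/// list    # libvirt
xm list                  # legacy xend toolstack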
XCP Toolstack
Toolstack Feature Comparison
Feature                                           xl    xapi   libvirt
Purpose-built for Xen                             X     X
Basic VM operations                               X     X      X
Managed domains                                         X      X
Live migration                                    X     X      X
PCI passthrough                                   X     X      X
Host pools                                              X
Flexible, advanced storage types                        X
Built-in advanced performance monitoring (RRDs)         X
Host plugins (XAPI)                                     X
OpenNebula
What are the Main Components?
• Interfaces & APIs: OpenNebula provides many different interfaces that can be used to interact with the functionality offered to manage physical and virtual resources. There are two main ways to manage OpenNebula instances: the command line interface and the Sunstone GUI. There are also several cloud interfaces that can be used to create public clouds, OCCI and EC2 Query, plus a simple self-service portal for cloud consumers. In addition, OpenNebula features powerful integration APIs to enable easy development of new components (new virtualization drivers for hypervisor support, new information probes, etc.).
• Users and Groups
• Hosts: the main hypervisors are supported: Xen, KVM, and VMware.
• Networking
• Storage: OpenNebula is flexible enough to support as many different image storage configurations as possible. The support for multiple datastores in the storage subsystem provides great flexibility in planning the storage backend as well as important performance benefits. The main storage configurations are supported: a file system datastore, to store disk images in file form, with image transfer over ssh or shared file systems (NFS, GlusterFS, Lustre, ...); iSCSI/LVM, to store disk images in block-device form; and a VMware datastore specialized for the VMware hypervisor that handles the vmdk format.
• Clusters: clusters are pools of hosts that share datastores and virtual networks. Clusters are used for load balancing, high availability, and high-performance computing.
OpenNebula - installation
• Front-end: executes the OpenNebula services.
• Hosts: hypervisor-enabled hosts that provide the resources needed by the VMs.
• Datastores: hold the base images of the VMs.
• Service Network: physical network used to support basic services: interconnection of the storage servers and OpenNebula control operations.
• VM Networks: physical network that supports VLANs for the VMs.
OpenNebula – installation
front-end
sudo apt-get install opennebula
Front-End
The machine that holds the OpenNebula installation is called the front-end. This machine needs access to the storage datastores (e.g. directly mounted or over the network) and network connectivity to each host.
The base installation of OpenNebula takes less than 10 MB.
OpenNebula services include:
• Management daemon (oned) and scheduler (mm_sched)
• Monitoring and accounting daemon (onecctd)
• Web interface server (sunstone)
• Cloud API servers (ec2-query and/or occi)
Note that these components communicate through XML-RPC and may be installed on different machines for security or performance reasons.
Requirements for the Front-End are:
ruby >= 1.8.7
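A minimal sketch of starting the services on the front-end after installation (run as the oneadmin user; the Sunstone port shown assumes a default configuration):

# Start the OpenNebula daemon (oned) and the scheduler
one start
# Start the Sunstone web interface (by default on port 9869)
sunstone-server start
# Quick sanity check that the daemon answers
onevm list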
OpenNebula – installation
hosts
Hosts
The hosts are the physical machines that will run the VMs. During the installation you will have to configure the OpenNebula administrative account to be able to ssh to the hosts and, depending on your hypervisor, allow this account to execute commands with root privileges or make it part of a given group.
OpenNebula does not need to install any packages on the hosts; the only requirements for them are:
• ssh server running
• hypervisor working and properly configured
• Ruby >= 1.8.7
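Once a host meets these requirements it can be registered from the front-end; a sketch (the host name is hypothetical, and driver names vary between OpenNebula versions: older releases use im_kvm / vmm_kvm instead of kvm):

# Register a KVM host and check its monitoring state
onehost create kvm-host01 --im kvm --vm kvm --net dummy
onehost list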
OpenNebula – installation
storage
Storage
OpenNebula uses datastores to handle VM disk images. VM images are registered, or created (as empty volumes), in a datastore. In general, each datastore has to be accessible from the front-end using any suitable technology: NAS, SAN or direct-attached storage.
When a VM is deployed, the images are transferred from the datastore to the hosts. Depending on the actual storage technology used, this can mean a real transfer, a symbolic link or setting up an iSCSI target.
There are two configuration steps needed to perform a basic setup:
First, you need to configure the system datastore to hold images for the running VMs.
Then you have to set up one or more datastores for the disk images of the VMs; see the Filesystem Datastore documentation for details (a minimal example follows below).
OpenNebula can work without a shared FS. This will force the deployment to always clone the images and you will only be able to do cold migrations.
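A minimal sketch of defining a shared filesystem datastore (the datastore name is hypothetical; assumes an NFS mount visible on the front-end and all hosts):

# Describe the datastore in a small template file and register it
cat > nfs_ds.conf <<'EOF'
NAME   = "nfs_images"
DS_MAD = fs        # filesystem datastore driver
TM_MAD = shared    # images are shared (e.g. over NFS), not copied with ssh
EOF
onedatastore create nfs_ds.conf
onedatastore list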
OpenNebula – installation
networking
The network is needed by the OpenNebula front-end daemons to access the hosts in order to manage and monitor the hypervisors and to move image files. It is highly recommended to install a dedicated network for this purpose.
To offer network connectivity to the VMs across the different hosts, the default configuration connects the virtual machine network interface to a bridge in the physical host.
You should create bridges with the same name on all the hosts (see the sketch below). Depending on the network model, OpenNebula will dynamically create network bridges.
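A sketch of creating such a bridge on each host (interface names are examples; on Debian/Ubuntu the bridge would normally be made persistent in /etc/network/interfaces):

# Create the bridge, attach the physical NIC and bring the bridge up
brctl addbr br0
brctl addif br0 eth0
ip link set br0 up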
OpenNebula – CLI
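A sketch of a typical VM lifecycle from the CLI (image, template and network names are hypothetical; exact options differ between OpenNebula versions):

# Register a disk image, build a template around it and launch a VM
oneimage create --name "ubuntu-base" --path /tmp/ubuntu.img --datastore nfs_images
onetemplate create --name "small-vm" --cpu 1 --memory 512 --disk ubuntu-base --nic private
onetemplate instantiate "small-vm"
# Watch and manage the resulting VM
onevm list
onevm shutdown 0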
OpenNebula – Sunstone
OpenStack
Projects (written in Python):
• Compute
• Storage
• Networking
• Dashboard (GUI)
OpenStack - compute
OpenStack - installation
1. Install Ubuntu 12.04 (Precise) or Fedora 16
In order to correctly install all the dependencies, a specific version of Ubuntu or Fedora is assumed to make the process as easy as possible. OpenStack works on other flavors of Linux (and some folks even run it on Windows!). A minimal install of Ubuntu Server, ideally in a VM, is recommended if this is your first time.
2. Download DevStack
git clone git://github.com/openstack-dev/devstack.git
The devstack repo contains a script that installs OpenStack and templates for configuration files.
3. Start the install
cd devstack; ./stack.sh
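Optionally, a minimal localrc sketch (variable values are placeholders) created inside the devstack directory before running ./stack.sh, so the script does not prompt for passwords:

# Minimal devstack configuration; stack.sh reads this file if present
cat > localrc <<'EOF'
ADMIN_PASSWORD=secret
MYSQL_PASSWORD=secret
RABBIT_PASSWORD=secret
SERVICE_PASSWORD=secret
SERVICE_TOKEN=tokentoken
EOF
./stack.sh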
OpenStack - installation
OpenStack - dashboard
OpenStack - summary
1. Hard to install and maintain
2. Poor logical structure of the software
3. Not stable, many bugs
Conclusion
Xen
KVM/QEMU
OpenNebula
OpenStack