LXC & support in CloudStack
October 2014
Rahul Rege
© 2014 Sungard Availability Services, all rights reserved
Agenda
• LXC – definition, origin
• Containers terminology
• Hypervisors vs containers
• LXC building blocks
• Why LXC – Benefits, limitations
• Container lifecycle
• Support in CloudStack
• Areas for contribution
• Ideas, exercises
• Peek into Docker world
• Q&A
LXC : Definition
• LinuX Containers is a technology comprising a number of
kernel features that together enable operating-system-level
virtualization.
• Allows running multiple isolated systems sharing a common
kernel on a single host.
• Provides a virtual environment that has its own process and
network space.
• A simple command-line tool, as well as various other drivers,
can create/destroy/list containers. Creating a container is much
faster than creating a traditional VM.
Containers : terminology
• LXC brings together the capabilities to isolate processes, filesystems,
networks, global resources etc. by combining a set of features in recent
Linux kernels, such as cgroups and namespaces, and exposing
sophisticated interfaces to the kernel from userspace.
• Operating-system-level virtualization is not new. Similar
terminologies and implementations existed before.
• chroot in Unix, for example, is as old as 1982: it segregates a
part of the current filesystem as a filesystem of its own, so activity in
the new one does not affect the old.
• Jails in FreeBSD 4.0 (2000) enhanced chroot by adding
segregation of network, processes and users.
• Solaris Containers brought Zones and added support
for cloning and snapshots in 2004.
Containers : terminology (contd)
• OpenVZ is a Linux container framework released in 2005 by
SWSoft (now Parallels). It made no actual contribution to the mainline
Linux kernel; instead they released kernel patches enabling them to
implement the solution. Even though it is not very popular with Linux
kernel folks, it has a lot of containerization features.
• Linux-VServer is similar to OpenVZ, but again with no support inside the
mainline Linux kernel.
• LXC utilizes the support from various kernel features developed
over the years, such as namespaces, cgroups, chroot,
seccomp, AppArmor etc. The current stable release, LXC 1.0, was released
in Feb 2014.
Hypervisors vs Containers
[Diagram: three stacks compared – a Type 1 hypervisor (apps and bin/lib on guest OSes above a hypervisor on hardware), a Type 2 hypervisor (the same, with a host OS between hypervisor and hardware), and containers on a host (apps and bin/lib sharing the host OS directly on hardware).]
LXC : Building blocks – what's needed
We want … processes, networks, IPCs, users, disks,
mounts, hostnames and stuff.
… But we also want to limit resources, prioritize CPU
allocation or I/O, account/meter them too,
and kill at will. And security? Yes please.
LXC : Building blocks – what have we got
• namespaces – pid, network, mnt, ipc, user, uts
• cgroups – blkio, cpu, cpuset, devices, freezer, net_cls, net_prio
• security – apparmor, seccomp
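A minimal sketch of how these cgroup controllers are driven through LXC, either in the container's config file or at runtime with lxc-cgroup (the container name container1 and the limit values are only illustrations):
# in /var/lib/lxc/container1/config – static limits applied at start
lxc.cgroup.cpu.shares = 512
lxc.cgroup.memory.limit_in_bytes = 256M
# or adjusted on a running container
$ sudo lxc-cgroup -n container1 memory.limit_in_bytes 256M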
LXC : Building blocks – how to use it
• Consumers on top: lxc-utils, the libvirt lxc driver, Docker …. & more
• liblxc library underneath, with API bindings for Go, Ruby,
Python and more
• templates
Why LXC : Benefits
• Just enough operating system with a common kernel; most of
the things you can do in a VM are easily done in a
container without the overhead.
• Provisions in milliseconds.
• Near bare-metal performance.
• Ideal for development and running sandboxes, with no headache of
maintaining dependencies. Destroying and creating is very fast, with
sophisticated APIs and tools to support it.
• Run various flavors of Linux on a common kernel using
templates, e.g. Ubuntu on Fedora and vice versa.
• Tools like Docker built on top of it bring endless possibilities,
with greater power over packaging sets of applications,
versioning, automatic builds and sharing.
Why LXC : Limitations
• If your specific application needs a separate kernel, that is not
supported; a VM would be the best choice in that scenario.
• Linux only; Windows and other operating systems are not supported.
• Not a widely accepted technology yet. There are some
debates over security concerns in production-level
deployments.
• LXC by itself does not support migrating or snapshotting
containers and having them run on slightly different hardware.
(Docker does address this.)
• Support for advanced networking like VXLAN and GRE may not
be fully fledged yet. Also, businesses requiring enterprise
network virtualization support would depend on the vendors.
Container lifecycle : Setup
• Installation
$ sudo apt-get install lxc lxctl lxc-templates
• Check the install
$ lxc-checkconfig
We will be running it on Ubuntu in unprivileged mode, which may
require additional packages to support that mode (see the linked
troubleshooting reference). After fixing the packages (a sketch of the
typical user-namespace setup follows below), you can go ahead and
create your first container from a selection of distros.
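A minimal sketch of the usual user-namespace setup for unprivileged containers on Ubuntu 14.04 (the uid/gid range 100000-165535 and the lxcbr0 bridge are the common defaults, not requirements):
$ sudo usermod --add-subuids 100000-165535 $USER
$ sudo usermod --add-subgids 100000-165535 $USER
$ mkdir -p ~/.config/lxc
$ echo "lxc.id_map = u 0 100000 65536" >> ~/.config/lxc/default.conf
$ echo "lxc.id_map = g 0 100000 65536" >> ~/.config/lxc/default.conf
$ echo "$USER veth lxcbr0 10" | sudo tee -a /etc/lxc/lxc-usernet   # allow up to 10 veth devices on lxcbr0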
Container lifecycle : Create & destroy (demo)
• Create
$ lxc-create -n container1 -t download (you can optionally pass the
distro name if already downloaded; see the example after this list)
• List
$ lxc-ls -f
• Start
$ lxc-start -n container1 -d
• Stop
$ lxc-stop -n container1
• Destroy
$ lxc-destroy -n container1
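For reference, a hedged example of passing the distro to the download template non-interactively (the distro/release/arch values are just an illustration):
$ lxc-create -n container1 -t download -- -d ubuntu -r trusty -a amd64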
Support in CloudStack
• Original support for LXC was added in CloudStack 4.2.
• Useful for users who do not require the full virtualization of KVM or
Xen.
• Complements the standard hypervisors for workloads which
do not require full VM creation, or which demand only a defined set of
applications that can be quickly deployed and destroyed.
• Defined at the same level as any other host, i.e. users can
directly select LXC as a 'hypervisor' from the drop-down list.
• The creation and lifecycle management of a container
is carried out just like a VM.
Support in CloudStack : LXC host requirements
Requires Linux kernel 2.6.24 or later (for cgroups)
Recommended – CentOS/RHEL 6.3 or Ubuntu 12.04
• libvirt 1.0.0 or higher
• QEMU/KVM 1.0 or higher
Common host requirements:
• Same distribution version for all hosts within one cluster
• All hosts must have the same CPU type, count and feature flags
• Must support HVM (Intel VT or AMD-V enabled; a quick check is shown below)
• At least 1 NIC and 4 GB of RAM
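A quick sanity check on a prospective host (a small sketch; lxc-checkconfig comes from the lxc package installed earlier):
$ egrep -c '(vmx|svm)' /proc/cpuinfo   # non-zero output means Intel VT / AMD-V is exposed
$ lxc-checkconfig                      # reports on cgroups, namespaces and other required kernel options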
Support in CloudStack – host variations
Component – Variation
• Primary storage – similar to other hypervisors: NFS, shared mount points
• Secondary storage – unlike other hypervisors, where a VM is a single file, an LXC
container runs from a directory which serves as its '/'
• CloudStack agent – in /etc/cloudstack/agent/agent.properties set hypervisor.type=lxc
• Container templates – no container templates are provided by CloudStack;
templates need to be saved as a tarball on secondary storage: create an LXC
container using lxc-create with a downloaded distro of your choice, stop the
container, then tar the rootfs and export the template (a sketch follows below)
• System VM – no native systemvm for LXC (all other hypervisors have a
dedicated systemvm template); the KVM systemvm is used, so the host must
support KVM to spin up the systemvm
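A rough sketch of producing such a template tarball (the container name, distro and paths are illustrative, not mandated by CloudStack):
$ lxc-create -n tmpl -t download -- -d centos -r 6 -a amd64
$ lxc-stop -n tmpl                        # only needed if the container was started
$ cd /var/lib/lxc/tmpl/rootfs
$ sudo tar -czf /tmp/centos6-lxc.tar.gz .
The tarball is then registered on secondary storage as an LXC template through the CloudStack UI or API.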
Support in CloudStack – Limitations
• No console access (there is an open JIRA; also planned in LXC 2.0 support)
• No live migration (planned in LXC 2.0 support)
• No migration across clusters
• No snapshot support
• No upload/download volume support
• No template creation from the ROOT volume (planned in LXC 2.0 support)
• No ISO support for creating VMs (planned in LXC 2.0 support)
• Support for upcoming SDN-related technologies from players like Cisco,
BigSwitch, Midokura, F5 and others is not yet available; support for VXLAN,
NVGRE etc. is not present.
JIRA: CLOUDSTACK-6122 – LXC 2.0 enhancements
Support in CloudStack – areas for contribution
• Allowing addition of a VM as a host: as we saw, any VM can happily host
containers without added support, but since CloudStack only supports HVM-
enabled hypervisors, a VM cannot be dedicated as a container host. There
could be other design limitations to creating an exception for LXC.
• Dedicated system VM: an initial support study showed there were some
networking complexities in creating a dedicated systemvm for LXC. This could
be re-evaluated and documented. Could the host itself, or one of the containers,
act in that role?
• Support for Docker as a hypervisor type: although Docker targets more of a
PaaS or application-specific need, introducing it as a supported architecture for
IaaS vendors could be beneficial. There is an open JIRA which targets this:
Docker as hypervisor
Ideas, exercises
• Use libvirt or lxc-utils to create a number of containers and connect
them with Open vSwitch; define networking policies
• Play around with different network topologies; explore other types
of networking like VLANs, MACVLANs, empty, phys, veth
• Use containers to test your load balancers: 2 application containers
and one load-balancer container
• Change iptables rules to control containers' IP traffic policies (a small
sketch follows below). Explore ebtables, the bridge-table rules package,
for layer-2 traffic shaping
• LXC provides a Python API; you can get a good exercise writing a
complete multi-container lifecycle in Python
• Containers inside containers – tiered architectures and
networking puzzles
• Profiling – how much do we gain with containers running?
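As a starting point for the iptables exercise (a minimal sketch; the default lxcbr0 bridge and the container IP 10.0.3.100 are assumptions):
$ sudo iptables -I FORWARD -s 10.0.3.100 -j DROP   # block the container's forwarded traffic through the host
$ sudo iptables -D FORWARD -s 10.0.3.100 -j DROP   # remove the rule again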
Peek into Docker world
Resources
http://www.slideshare.net/BodenRussell/realizing-linux-containerslxc
http://www.cybera.ca/news-and-events/tech-radar/contain-your-enthusiasm-partone-a-history-of-operating-system-containers/
http://compositecode.com/2013/11/18/linux-containers-windows-containers-lxcfreebsd-jails-vserver/
http://unix.stackexchange.com/questions/127001/linux-lxc-vs-freebsd-jail
http://stackoverflow.com/questions/17989306/what-does-docker-add-to-just-plainlxc
https://cwiki.apache.org/confluence/display/CLOUDSTACK/LXC+Support+in+Cloudstack#
Thank you !