Comparison of Different Linux Containers
Ákos Kovács
Department of Telecommunications
Széchenyi István University
Győr, Hungary
Email: kovacs.akos@sze.hu
Abstract—Containerization is the new generation of virtualization. In this paper we measure the most common container techniques with industry standard benchmarking applications to determine their computing and network performance. We compare these results with native performance (without any container technique) and with the performance of KVM, the standard Linux virtualization solution. For the first time, this paper also measures the performance of the Singularity container system, a new approach to container usage in HPC systems.
Keywords—Benchmark, Computing, Container, Docker, Linux, LXC, Network, Singularity
The research leading to these results has received funding from the project MSO4SC of the European Commission's Horizon 2020 (H2020) framework programme under grant agreement n° 731063.
I. INTRODUCTION

Although containerization is a buzzword nowadays, especially in the datacenter and cloud industry, the idea is quite old. The container, or “chroot” (change root), was a Linux technology for isolating single processes from each other without the need to emulate different hardware for them. Containers are lightweight operating systems within the host operating system that runs them. A container uses native instructions on the CPU, without the need of any VMM (Virtual Machine Manager). The only limitation is that containers have to use the host operating system's kernel to access the existing hardware components, unlike virtualization, where we can use different operating systems without any restriction, at the cost of a performance overhead.
II. CONTAINER AND VIRTUAL MACHINE COMPARISON

Virtual machines are used extensively in cloud computing: most cloud services are based on virtual machines, so their performance has a big impact on the overall performance of a cloud application. Hypervisor-based virtualization techniques allow running multiple different operating systems at the same time, but from the perspective of computing power this flexibility is also the biggest disadvantage of the technology. The different operating systems mostly use different kernels; not only Windows and Linux, but the kernels of different Linux distributions may also differ from each other, as shown in Fig. 1.

Fig. 1. KVM virtualization vs. container technique [9]

Every kernel used in such a system consumes a lot of computing resources, mostly CPU and memory, which are the resources most needed for High Performance Computing. The goal of containerization is to provide more computing power to the applications by sacrificing this flexibility: all containers use the same kernel. Because of this restriction, containers also lack rich virtualization features such as live migration, so we have to pause or stop containers to move them to another host. On the other hand, most container scenarios use static containers to run business applications, which are rarely moved while they are operating. Another difference between the container technique and the virtual machine is the starting time: while a virtual machine has to boot up its own kernel, a container uses the running kernel of the host [2].
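The shared kernel is easy to observe in practice. For example, the following two commands (a quick check assuming a Docker installation on the host; the image tag is an arbitrary example) print the same kernel release, because the container has no kernel of its own:

# both commands report the host's kernel release
uname -r
docker run --rm ubuntu:16.04 uname -r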
III. STATE OF THE ART

There are many papers focusing on benchmarking different container techniques along with virtualization. A few of them focus on particular application benchmarks, such as MySQL [3], or on more complex network applications, such as a load-balanced Wordpress installation, also with MySQL [4]. The LINPACK benchmark tool is a performance testing method that is used to compare the CPU performance of the supercomputers on the top500.org list [5].

Another important aspect is the comparison of the boot-up speed of the different techniques. Booting up a container is much faster than booting up a virtual machine, because we do not have to boot a separate kernel to start an instance [6].

All these papers contribute to different aspects of comparing these techniques. There are only very few related works using containerization in an HPC environment; more specifically, none of the papers show a comparison that includes Singularity, a new container technique designed for HPC systems.

We compared different container techniques with KVM (Kernel-based Virtual Machine) virtualization and with native OS performance to determine whether there is any overhead in terms of CPU and network performance, which are the most common bottlenecks of HPC (High Performance Computing) systems [1].
IV. DIFFERENT CONTAINERS

Although containerization is not a new concept, it has become more and more popular, especially in the DevOps industry, since Docker was introduced [7]. The main benefit of containerization is that if we develop an application with special software libraries, we can ship these libraries inside the container. This has a major impact on the scientific area, because it greatly improves the reproducibility of different experiments.
A. LXC (Linux Containers)

LXC was the first widely used container technique. It uses kernel namespaces to provide resource isolation, and it uses CGroups to manage resources such as core count, memory limit, disk I/O limit, etc. [8]. A container within LXC shares the kernel with the OS, so its file systems and running processes are visible and manageable from the host OS.
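For illustration, a container similar to the ones we measured could be created, resource-limited through CGroups and started as follows (a sketch assuming the LXC 2.x userspace tools; the container name test01 and the 512 MB memory limit are arbitrary examples):

# create an Ubuntu 16.04 container from the download template
lxc-create -t download -n test01 -- -d ubuntu -r xenial -a amd64
# illustrative CGroup memory limit, appended to the container's config
echo "lxc.cgroup.memory.limit_in_bytes = 512M" >> /var/lib/lxc/test01/config
lxc-start -n test01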
B. Docker

Docker is an extension of LXC: an open platform for LXC-style lightweight virtualization. Docker distributes services and applications together with their full dependencies via Docker Hub, an open platform where everyone can create and upload new container images. Docker also extends LXC with more kernel- and application-based features to manage data, processes and advanced isolation within the host OS; this is why we measured the performance of both LXC and Docker in this paper. Docker uses a client-server architecture: the client communicates with the Docker daemon, which runs in the host operating system [2], [7].
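For example, pulling a ready-made Ubuntu image from Docker Hub and starting an interactive shell in it requires only two commands (assuming a standard Docker installation; the tag matches the image version used in our measurements):

# fetch the image from Docker Hub, then start a shell inside a container
docker pull ubuntu:16.04
docker run -it ubuntu:16.04 /bin/bash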
C. Singularity (HPC container platform)

Singularity is a new approach to container techniques. It does not require root privileges to use all the resources, which is essential in HPC systems. Singularity blocks privilege escalation within the container: if we want to run scripts with root privileges inside the container, we must be root outside the container as well. It is compatible with Docker, so users can pull and import Docker images into Singularity. It also integrates with MPI, so it is easy to use in HPC environments based on MPI. Singularity also has the capability to use HPC interconnects like InfiniBand, as well as GPUs and other accelerators, out of the box. In this paper, we do not cover MPI measurements [6].
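As an example, a Docker image can be pulled and converted into a Singularity image, then used without root privileges (a sketch assuming the Singularity 2.x command-line interface that was current at the time of writing; the name of the generated image file may differ between versions):

# import an image from Docker Hub and run a command inside it
singularity pull docker://ubuntu:16.04
singularity exec ubuntu-16.04.img cat /etc/os-release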
V. SYSTEM DESIGN AND MEASUREMENTS

A. CPU benchmarks

To test the different container techniques we used a Huawei CH121 blade server with two E5-2630 v3 @ 2.40 GHz 8-core CPUs and 128 GB of DDR4 memory. The host operating system was Ubuntu 16.10. The container images were also Ubuntu Linux based, but with version Ubuntu 16.04.1 LTS. For testing the CPU, we used the standard Linux tool Sysbench (version 0.4.12) with its CPU test argument. Sysbench uses the CPU to test whether numbers up to a configurable limit are prime. We used this technique to measure any CPU overhead of the different container techniques compared to native (no container) and KVM (virtualization) performance. To ensure the fairness of the measurement, we used 8 threads in each test, because the reference virtual machine we generated in the KVM virtualization solution had 8 cores. The test command was:

sysbench --num-threads=8 --test=cpu \
--cpu-max-prime=100000 run

B. Networking benchmarks

We used the industry standard IPerf tool (version 3.1.3) to measure the networking performance of the different containers. The IPerf tool measures the maximum utilizable bandwidth between the server node and the client node; in our experiments, the containers were always the client node. To ensure that there was no bottleneck in the system, the IPerf server ran natively on another Huawei CH140 blade server with two E5-2670 v3 @ 2.30 GHz 12-core CPUs and 128 GB of DDR4 memory, running Ubuntu 16.10 with IPerf version 3.1.3. The two blades were connected through a 10GE Huawei CX311 switch. At first, we used the default networking configuration of each container solution, to see its default behavior. The test command was:

iperf3 -i 2 -f M -c <IPerf Server IP> -t 20

In this paper, we did not measure container disk I/O performance, because when we use these container techniques we usually attach the native OS file system to the container, which is typically a local disk or an NFS share mounted on the host OS.
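For completeness, the server-side counterpart of the iperf3 command shown above requires no special configuration (in our setup it ran on the CH140 blade):

# start iperf3 in server mode, listening on the default port 5201
iperf3 -s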
VI. EXPERIMENTS

A. CPU benchmarks

First, we used the Sysbench synthetic CPU benchmark. We performed the measurement eleven times, and from the results we calculated the average value and the standard deviation. TABLE I shows our results.

TABLE I. SYSBENCH MEASUREMENT RESULTS

                Docker (s)  LXC (s)   Singularity (s)  KVM (s)   Native (s)
Average         38.3350     38.3309   38.3258          38.4071   38.3292
Std. deviation  0.0093      0.0105    0.0086           0.0189    0.0076

Fig. 2. Sysbench measurement results

As we can see, all the containerization solutions show almost the same performance as the native execution. The KVM virtualization is a bit slower, but we had to manipulate the time scale of Fig. 2 to make the difference visible at all: KVM virtualization lags behind native execution by less than 1%. The standard deviation results show that the measurements were also very stable.
B. Networking benchmarks

When we measured the network performance, we used the standard Linux bridge for networking. It is the most common way to provide network connectivity for a container. The kernel module that implements the bridge can also be used to create virtual Ethernet devices, which are connected to each other. Singularity, in contrast, uses the host's network card directly by default, because it is used in HPC systems, where maximum performance is needed [8].

First, we measured normal operation using the Linux bridge. The results are shown in TABLE II.
TABLE II. IPERF MEASUREMENT RESULTS

                Docker (MB/s)  LXC (MB/s)  Singularity (MB/s)  KVM (MB/s)  Native (MB/s)
Average         1097.4         1094.1      1122.1              1116.7      1122.2
Std. deviation  20.689         17.916      0.316               15.720      0.632

Fig. 3. IPerf measurement results

Looking at the results, we can see more significant differences than in the CPU benchmarks. The containers, with the exception of Singularity, fall behind the native and KVM network performance. Looking at the standard deviation, we can also see that the Docker and LXC results fluctuate much more than those of Singularity or the native execution.

So we examined the IPerf tool statistics for the reason. We realized that when we use the standard Linux bridge, the retransmission rates for Docker and for LXC are far higher than in any other case. This is shown in TABLE III.

TABLE III. IPERF RETRANSMISSION RATE RESULTS

         Docker   LXC   Singularity  KVM    Native
Average  1751.24  1442  2.71         150.6  0

As we can see, the retransmission rates are very high for Docker and LXC. The native retransmission rate was zero every time, so we could rule out hardware failure, and we re-ran the test with a different configuration.

When we use Docker with host networking, Docker uses the physical network card in the same way as the host OS. When we enable this mode, the services within Docker can be reached as if they were running on the native OS; in exchange, every network port used by Docker becomes unavailable to the host OS. LXC behaves the same way: we have to edit some configuration files to change the network mode, but if we do so, the retransmission rate drops dramatically.
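Switching to host networking is a one-line change in both cases: a runtime flag for Docker and a configuration setting for LXC (a sketch; the image and container names are arbitrary examples):

# Docker: attach the container directly to the host's network stack
docker run --net=host -it ubuntu:16.04 /bin/bash
# LXC 2.x: "none" keeps the container in the host's network namespace
echo "lxc.network.type = none" >> /var/lib/lxc/test01/config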
The corrected experiment results are shown in TABLE IV and TABLE V.

TABLE IV. CORRECTED IPERF RETRANSMISSION RATE RESULTS

         Docker  LXC  Singularity  KVM    Native
Average  1.5     3.7  2.71         150.6  0

TABLE V. CORRECTED IPERF MEASUREMENT RESULTS

                Docker (MB/s)  LXC (MB/s)  Singularity (MB/s)  KVM (MB/s)  Native (MB/s)
Average         1122.0         1121.8      1122.1              1116.7      1122.2
Std. deviation  0.471          0.632       0.316               15.720      0.632

Fig. 4. Corrected IPerf measurement results

As we can see, the results are now almost identical. Only the KVM virtualization remains the slowest, but this difference is also below 1%, as it was in the CPU benchmarks.

In this case, however, if we want to maximize performance, we must run containers with different network services on the same host, or use only one container with many different services per host, which reduces the usability of the containers. There are other networking methodologies for containers as well: overlay technologies can connect containers over the network, approaching networking from a different perspective such as SDN or a tunneling protocol like VXLAN. Containers are a fairly new technology, which is why only limited work has been done on this by the scientific community yet [10].
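As an illustration of the overlay approach, the Linux kernel itself can create a VXLAN tunnel endpoint with the standard iproute2 tooling; a minimal sketch (the interface name, VXLAN ID, multicast group, address and physical device are arbitrary examples) looks like this:

# create a VXLAN tunnel endpoint (VNI 42) over eth0 on the standard UDP port
ip link add vxlan0 type vxlan id 42 group 239.1.1.1 dev eth0 dstport 4789
ip addr add 10.0.0.1/24 dev vxlan0
ip link set vxlan0 up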
C. Discussion

The validity of our results is not without limits. The measurements were performed on particular computers (Huawei CH121 and CH140 blade servers); the results could be somewhat different if the experiments were executed on different servers. We tested the most widely used container technologies, but there are other competitors, such as FreeBSD Jails or Solaris Zones.
VII. POSSIBLE APPLICATION AREAS FOR CONTAINERS

We suggest changing the network configuration and using native host networking with the different container techniques, especially when they are used in HPC systems or in network-intensive applications.

Since containers showed only a very small performance degradation compared to native execution, we contend that they can be used efficiently in many different areas. They can be advantageous when a large number of computers is to be used for some purpose only temporarily, e.g. for experimenting, because one does not need to install the necessary software components on a high number of computers. We now show different possible areas of application.

To check the validity of a previously defined criterion for a good speed-up, 12 identical dual-core computers were used in [11]. A heterogeneous cluster (altogether 20 computers of four types) was used in [12] to check the newly defined good speed-up criterion for heterogeneous clusters. 26 computers of five different types were used in [13] to show how the extension of the definition of the relative speed-up for heterogeneous clusters can be used to evaluate the efficiency of parallel simulation executed by heterogeneous clusters. Finally, the number of computers in the heterogeneous cluster was raised to 38 (of six different types) in [14]. The tedious work of software installation for these experiments could have been significantly reduced by using containers: only one installation per CPU type would have been necessary.

Similarly, the nearly native CPU and networking performance of containers could have been utilized in the following experiments, too. Eight computers were used to provide a high enough load for NAT64 performance testing in [15] and for DNS64 performance testing in [16] and [17]. We believe that containers are applicable for these types of tests because of the nature of the testing method: the loss or delay of a packet does not have a significant influence on the final measurement results. However, the new benchmarking method for IPv6 transition technologies defined in [18] and implemented by dns64perf++ [19] for DNS64 benchmarking will probably require native execution.

The performance analysis of the newly invented MPT network layer multipath library [20] is expected to be another successful area of application for containers.

VIII. CONCLUSION AND FUTURE WORKS

We measured the performance of the different container techniques that are available at the moment. There are some scientific papers which evaluate different performance aspects, but they do not mention anything about Singularity, a new container technique which is tuned to be a top performer by sacrificing some agility and flexibility. We measured the CPU performance of the different containers compared to native and virtualized performance. Containers perform almost at the level of native execution, which is very good if we want to run a CPU-intensive application. Containers also harden security and can be very useful for developers who use different libraries for their application than the host OS does. The networking performance is also satisfactory: even though it is not at the native level with the default configuration, we could closely approximate the performance of native execution by modifying the configuration.

In the future, we want to test these containers along with new challengers in an HPC environment, using queue managers and MPI implementations.
REFERENCES

[1] Preeth E N, Fr. Jaison Paul Mulerickal, Biju Paul and Yedhu Sastri, "Evaluation of Docker containers based on hardware utilization", 2015 International Conference on Control Communication & Computing India (ICCC), 2015, pp. 697-700.
[2] Amr A. Mohallel, Julian M. Bass and Ali Dehghantaha, "Experimenting with docker: Linux container and base OS attack surfaces", 2016 International Conference on Information Society (i-Society), 2016, pp. 17-21.
[3] Rizki Rizki, Andrian Rakhmatsyah and M. Arief Nugroho, "Performance analysis of container-based hadoop cluster: OpenVZ and LXC", 2016 4th International Conference on Information and Communication Technology (ICoICT), 2016, pp. 1-4.
[4] Wes Felter, Alexandre Ferreira, Ram Rajamony and Juan Rubio, "An Updated Performance Comparison of Virtual Machines and Linux Containers", 2015 IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS), 2015, pp. 171-172.
[5] Ann Mary Joy, "Performance comparison between Linux containers and virtual machines", 2015 International Conference on Advances in Computer Engineering and Applications, 2015, pp. 342-346.
[6] Miguel G. Xavier, Marcelo V. Neves, Fabio D. Rossi, Tiago C. Ferreto, Timoteo Lange and Cesar A. F. De Rose, "Performance Evaluation of Container-Based Virtualization for High Performance Computing Environments", 2013 21st Euromicro International Conference on Parallel, Distributed, and Network-Based Processing, 2013, pp. 233-240.
[7] Flávio Ramalho and Augusto Neto, "Virtualization at the network edge: A performance comparison", 2016 IEEE 17th International Symposium on A World of Wireless, Mobile and Multimedia Networks (WoWMoM), 2016, pp. 1-6.
[8] David Beserra, Edward David Moreno, Patricia Takako Endo, Jymmy Barreto, Djamel Sadok and Stênio Fernandes, "Performance Analysis of LXC for HPC Environments", 2015 Ninth International Conference on Complex, Intelligent, and Software Intensive Systems, 2015, pp. 358-363.
[9] G. M. Kurtzer, "Singularity 2.1.2 - Linux application and environment containers for science" [Data set], Zenodo.
[10] Joris Claassen, Ralph Koning and Paola Grosso, "Linux containers networking: Performance and scalability of kernel modules", NOMS 2016 - 2016 IEEE/IFIP Network Operations and Management Symposium, 2016, pp. 713-717.
[11] G. Lencse and A. Varga, "Performance Prediction of Conservative Parallel Discrete Event Simulation", Proc. 2010 Industrial Simulation Conf. (ISC'2010), (Budapest, Hungary, 2010. June 7-9.), pp. 214-219.
[12] G. Lencse, I. Derka and L. Muka, "Towards the Efficient Simulation of Telecommunication Systems in Heterogeneous Distributed Execution Environments", Proc. 36th International Conf. on Telecomm. and Signal Processing (TSP 2013), (Rome, Italy, 2013. July 2-4.), Brno University of Technology, pp. 314-310.
[13] G. Lencse and I. Derka, "Testing the Speed-up of Parallel Discrete Event Simulation in Heterogeneous Execution Environments", Proc. ISC'2013, 11th Annual Industrial Simulation Conference, (Ghent, Belgium, 2013. May 22-24.), pp. 101-107.
[14] G. Lencse and I. Derka, "Measuring the Efficiency of Parallel Discrete Event Simulation in Heterogeneous Execution Environments", Acta Technica Jaurinensis, vol. 9, no. 1, pp. 42-53.
[15] G. Lencse and S. Répás, "Performance Analysis and Comparison of the TAYGA and of the PF NAT64 Implementations", Proc. 36th International Conference on Telecommunications and Signal Processing (TSP 2013), (Rome, Italy, 2013. July 2-4.), pp. 71-76.
[16] G. Lencse and S. Répás, "Performance Analysis and Comparison of Different DNS64 Implementations for Linux, OpenBSD and FreeBSD", Proc. IEEE 27th International Conf. on Advanced Information Networking and Applications (AINA 2013), (Barcelona, Spain, 2013. March 25-28.), pp. 877-884.
[17] G. Lencse and S. Répás, "Performance Analysis and Comparison of Four DNS64 Implementations under Different Free Operating Systems", Telecommunication Systems, vol. 63, no. 4, pp. 557-577.
[18] M. Georgescu, L. Pislaru and G. Lencse, "Benchmarking Methodology for IPv6 Transition Technologies", Benchmarking Working Group, Internet Draft, April 18, 2017, https://tools.ietf.org/html/draft-ietf-bmwg-ipv6-tran-tech-benchmarking-06
[19] G. Lencse and D. Bakai, "Design and implementation of a test program for benchmarking DNS64 servers", IEICE Transactions on Communications, vol. E100-B, no. 6, Jun. 2017, DOI: 10.1587/transcom.2016EBN0007
[20] B. Almási, G. Lencse and Sz. Szilágyi, "Investigating the Multipath Extension of the GRE in UDP Technology", Computer Communications, vol. 103, no. 1, pp. 29-38, May 1, 2017, DOI: 10.1016/j.comcom.2017.02.002