KVMonitor

Efficient VM Introspection in KVM
and
Performance Comparison with Xen
Kenichi Kourai
Kousuke Nakamura
Kyushu Institute of Technology
Intrusion Detection System (IDS)
IDSes detect attacks against servers
 Monitor the systems and networks of servers
 Alert administrators
Recently, attackers have attempted to disable IDSes
 Before they are detected
 This is easy because IDSes run inside the servers themselves
[Figure: an IDS running inside the server detects an intruder]
IDS Offloading
Offloading IDSes using virtual machines (VMs)
 Run a server in a VM
 Execute IDSes outside the VM
 Prevent IDSes from being compromised
 Can be provided as a cloud service
 Cloud providers can protect users' VMs
[Figure: in-VM monitoring vs. IDS offloading, where the IDS monitors the VM from the outside]
VM Introspection (VMI)
A technique for monitoring VMs from the outside
 Memory introspection
 Obtain raw memory contents and extract OS data
 Disk introspection
 Obtain raw disk data and interpret a filesystem
 Network introspection
 Obtain packets only from/to VMs
[Figure: an offloaded IDS introspects the VM's memory, disk, and network packets]
Performance of VMI
Performance has not been reported in detail
 No performance comparison
 E.g., VMwatcher [Jiang+ CCS'07]
 Implemented in Xen, QEMU, VMware, and UML
 Performance reported only for UML
 E.g., EXTERIOR [Fu+ VEE'13]
 Implemented in KVM and QEMU
 No difference between KVM and QEMU because it uses memory dumps
Performance data is important
 For users' selection of virtualization software
The Purpose of This Work
Performance comparison among virtualization
software in terms of VMI
 Target: Xen and KVM
 Widely used open source virtualization software
 Their system architectures are different
[Figure: Xen runs VMs directly on a hypervisor, while KVM runs each VM as a process on the host OS]
Implementation for KVM
No efficient implementation of VMI for KVM
 Several studies have been done for KVM
 The implementation details are unclear
 LibVMI [Payne+ '11] supports VMI for both Xen and KVM
 The performance of memory introspection in KVM is too low
 It is optimized for Xen
KVMonitor
We have developed an efficient VMI tool for KVM
 Execute an IDS as a process of the host OS
 Provide functions for introspecting memory, disks,
and NICs in QEMU
[Figure: an offloaded IDS in the host OS uses KVMonitor to monitor the memory, disk, and NIC of the VM run by QEMU and the KVM module]
Memory Introspection (1/2)
Difficult to efficiently introspect QEMU's memory
 LibVMI obtains memory contents from QEMU
KVMonitor shares VM's physical memory with
QEMU via a memory file
 Accessed as a memory-mapped file (sketched below)
 Enable direct memory introspection
[Figure: QEMU and KVMonitor map the same memory file as the VM's physical memory, letting the IDS read it directly]
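The mechanism can be illustrated with a minimal sketch. This is not KVMonitor's actual API; the file path, size, and helper names below are assumptions, on the premise that QEMU backs the VM's RAM with a file (e.g., started with -mem-path) that a monitoring process on the host can map read-only.

/* Minimal sketch (not KVMonitor's real API): map the file backing the VM's
 * physical memory and read guest-physical addresses directly.
 * MEM_FILE and MEM_SIZE are assumptions for illustration. */
#include <fcntl.h>
#include <stdint.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define MEM_FILE "/tmp/vm0.mem"      /* hypothetical memory file shared with QEMU */
#define MEM_SIZE (512ULL << 20)      /* 512 MB guest, as in the experiments */

static uint8_t *guest_mem;

/* Map the memory file once; afterwards no communication with QEMU is needed. */
int map_guest_memory(void)
{
    int fd = open(MEM_FILE, O_RDONLY);
    if (fd < 0)
        return -1;
    guest_mem = mmap(NULL, MEM_SIZE, PROT_READ, MAP_SHARED, fd, 0);
    close(fd);
    return guest_mem == MAP_FAILED ? -1 : 0;
}

/* Copy 'len' bytes at guest-physical address 'gpa' into 'buf'. */
int read_gpa(uint64_t gpa, void *buf, uint64_t len)
{
    if (gpa + len > MEM_SIZE)
        return -1;
    memcpy(buf, guest_mem + gpa, len);
    return 0;
}

Once the file is mapped, read_gpa() is an ordinary memcpy(), which is why no per-access round trip to QEMU is required.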
Memory Introspection (2/2)
IDSes usually access OS data using virtual
addresses
KVMonitor translates virtual addresses into
physical addresses
 Look up the page table for address translation
 Introspect the CR3 register using QMP
[Figure: KVMonitor reads the CR3 register via QMP and walks the page table in the memory file to translate addresses for the IDS]
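A minimal sketch of the translation step follows, assuming a 64-bit guest with 4-level paging. The CR3 value is assumed to be obtained from QEMU over QMP, as described above, and read_gpa() is the helper sketched on the previous slide; KVMonitor's real implementation may differ.

/* Minimal sketch (assumptions, not KVMonitor's real code): translate a guest
 * virtual address to a guest physical address by walking the guest's
 * 4-level x86-64 page table inside the mapped memory file. */
#include <stdint.h>

int read_gpa(uint64_t gpa, void *buf, uint64_t len);  /* from the memory-file sketch */

#define PT_ADDR_MASK  0x000ffffffffff000ULL   /* physical-address bits of an entry */
#define PT_PRESENT    0x1ULL
#define PT_LARGE      0x80ULL                 /* 1 GB / 2 MB page */

uint64_t translate(uint64_t cr3, uint64_t va)
{
    uint64_t entry = cr3 & PT_ADDR_MASK;      /* PML4 base from CR3 */

    for (int level = 3; level >= 0; level--) {
        uint64_t idx = (va >> (12 + 9 * level)) & 0x1ff;
        if (read_gpa(entry + idx * 8, &entry, 8) < 0)
            return 0;
        if (!(entry & PT_PRESENT))            /* unmapped address */
            return 0;
        if (level > 0 && (entry & PT_LARGE)) {/* large page: stop early */
            uint64_t off_mask = (1ULL << (12 + 9 * level)) - 1;
            return (entry & PT_ADDR_MASK & ~off_mask) | (va & off_mask);
        }
        entry &= PT_ADDR_MASK;
    }
    return entry | (va & 0xfff);              /* 4 KB page + offset */
}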
Disk/Network Introspection
KVMonitor introspects VM's disks via the network
block device (NBD)
 Interpret the qcow2 format in the NBD server
 Interpret the filesystem in the host OS
KVMonitor captures packets from a tap device
[Figure: the NBD server exports the qcow2 disk image to the host OS, and KVMonitor captures the VM's packets from the tap device]
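For the network side, a minimal sketch is a packet socket bound to the VM's tap device; the interface name "tap0" is an assumption, and KVMonitor's actual capture code may differ.

/* Minimal sketch (assumption: the VM's NIC is attached to "tap0"): capture
 * the VM's Ethernet frames from the tap device with a packet socket.
 * Requires root (CAP_NET_RAW). */
#include <arpa/inet.h>
#include <linux/if_ether.h>
#include <linux/if_packet.h>
#include <net/if.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_ll sll;
    memset(&sll, 0, sizeof(sll));
    sll.sll_family   = AF_PACKET;
    sll.sll_protocol = htons(ETH_P_ALL);
    sll.sll_ifindex  = if_nametoindex("tap0");   /* the VM's tap device */
    if (bind(fd, (struct sockaddr *)&sll, sizeof(sll)) < 0) {
        perror("bind");
        return 1;
    }

    unsigned char frame[2048];
    for (;;) {                      /* each frame would be passed to the IDS */
        ssize_t n = recv(fd, frame, sizeof(frame), 0);
        if (n > 0)
            printf("captured %zd-byte frame\n", n);
    }
}

Each captured frame would then be handed to the offloaded IDS (e.g., Snort) instead of being printed.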
Transcall with KVMonitor
We have ported Transcall [Iida+ '11] for Xen to KVM
 Enable offloading legacy IDSes without any
modifications
 Consist of a system call emulator and a shadow
filesystem
 Including the proc filesystem
 Analyze OS data by memory introspection
[Figure: a legacy IDS runs on Transcall, which analyzes the VM's OS data through KVMonitor and QEMU]
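To give a flavor of the memory analysis behind the shadow proc filesystem, here is a heavily hedged sketch that lists the guest's processes by walking the kernel's task list. It is not Transcall's actual code: the symbol address and structure offsets are placeholders that in practice come from the guest kernel's System.map and headers.

/* Hypothetical sketch (not Transcall's real code): enumerate the guest's
 * processes by walking the kernel's circular task list via memory
 * introspection.  The address and offsets below are placeholders. */
#include <stdint.h>
#include <stdio.h>

uint64_t translate(uint64_t cr3, uint64_t va);        /* from the earlier sketches */
int read_gpa(uint64_t gpa, void *buf, uint64_t len);

#define INIT_TASK_VA    0xffffffff81a0d020ULL  /* placeholder: &init_task */
#define OFF_TASKS_NEXT  0x238                  /* placeholder: offsetof(task_struct, tasks.next) */
#define OFF_PID         0x2d4                  /* placeholder: offsetof(task_struct, pid) */
#define OFF_COMM        0x4c8                  /* placeholder: offsetof(task_struct, comm) */

/* Read guest-virtual memory; reads crossing a page boundary are not handled here. */
static int read_gva(uint64_t cr3, uint64_t va, void *buf, uint64_t len)
{
    uint64_t pa = translate(cr3, va);
    return pa ? read_gpa(pa, buf, len) : -1;
}

void list_guest_processes(uint64_t cr3)
{
    uint64_t head = INIT_TASK_VA + OFF_TASKS_NEXT, link = head;

    do {
        uint64_t task = link - OFF_TASKS_NEXT;  /* container_of(link, task_struct, tasks) */
        uint32_t pid = 0;
        char comm[16] = "";

        read_gva(cr3, task + OFF_PID, &pid, sizeof(pid));
        read_gva(cr3, task + OFF_COMM, comm, sizeof(comm));
        printf("%5u %.15s\n", pid, comm);

        if (read_gva(cr3, link, &link, sizeof(link)) < 0)  /* follow tasks.next */
            break;
    } while (link != head);
}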
Experiments
We examined whether KVMonitor achieves
 Efficient memory introspection
 No impact on memory performance of a VM
 Effective IDS offloading
PC
 CPU: Intel Xeon E5630 (12 MB L3 cache)
 Memory: 6 GB DDR3 PC3-8500
 HDD: 250 GB SATA
 NIC: gigabit Ethernet
 Hypervisor: KVM 1.1.2
 Host OS: Linux 3.2.0
VM
 CPU: 1
 Memory: 512 MB
 Disk: 20 GB (ext3)
 Guest OS: Linux 2.6.27
KVMonitor vs. LibVMI
We measured the performance of memory
introspection
 Copy VM's physical memory 4 KB at a time
KVMonitor was
 32x faster than LibVMI
[Graph: read throughput (GB/s): KVMonitor 9.6, LibVMI 0.3]
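The benchmark can be sketched as follows, reusing the read_gpa() helper assumed in the earlier memory-file sketch; this illustrates the measurement rather than reproducing the exact benchmark code.

/* Hypothetical sketch of the read benchmark: copy the VM's physical memory
 * 4 KB at a time through the mapped memory file and report GB/s. */
#include <stdint.h>
#include <time.h>

int read_gpa(uint64_t gpa, void *buf, uint64_t len);  /* from the memory-file sketch */
#define MEM_SIZE (512ULL << 20)                        /* same 512 MB guest as before */

double measure_read_throughput(void)
{
    static uint8_t buf[4096];
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (uint64_t gpa = 0; gpa < MEM_SIZE; gpa += sizeof(buf))
        read_gpa(gpa, buf, sizeof(buf));
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double sec = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    return (double)MEM_SIZE / (1ULL << 30) / sec;      /* GB/s */
}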
Why is LibVMI so slow?
LibVMI has to issue a QMP command for each
memory access
 Memory contents are transferred from QEMU to
LibVMI
[Figure: with LibVMI, every memory access needs a QMP round trip to QEMU; with KVMonitor, the IDS reads the shared memory file directly]
In-VM Memory Performance
Doesn't using a memory file affect memory
performance of a VM?
Using a memory file was
 As efficient as malloc
[Graph: in-VM throughput (GB/s), memory file vs. malloc: read 8.6 and 8.5, write 6.6 and 6.3]
KVMonitor vs. In-VM Access
KVMonitor was faster than in-VM memory access
 Due to virtualization overhead
[Graph: read throughput (GB/s): KVMonitor 9.6, in-VM 8.6]
Offloading Legacy IDSes (1/3)
Tripwire
 Check filesystem integrity in disks
We added, deleted, and modified files
 Offloaded Tripwire detected changed files
Rule Name            ... Added Removed Modified
Monitor Filesystems        1     1       1
Total Objects scanned: 67082
Total violations found: 3
[Figure: offloaded Tripwire checks the VM's disk against its database]
Offloading Legacy IDSes (2/3)
Snort
 Inspect network packets
We performed portscans from another host
 Offloaded Snort detected portscans
[**] [1:1421:11] SNMP AgentX/tcp request [**]
[Classification: Attempted Information Leak] ...
01/28-10:47:13.406931 192.168.0.68:47962 -> 192.168.0.81:705
[Figure: offloaded Snort applies its rule sets to the VM's packets and detects the portscan]
Offloading Legacy IDSes (3/3)
Chkrootkit
 Detect rootkits using ps, netstat, and file inspection
We tampered with ps and netstat in a VM
 Offloaded chkrootkit detected tampered commands
ROOTDIR is '/'
Checking 'ps'... INFECTED
Checking 'netstat'... INFECTED
:
[Figure: offloaded chkrootkit inspects ps, netstat, and other files on the VM's disk]
Cross-view Diff (1/2)
A technique for detecting hidden malware
 Compare the results of VMI and in-VM monitoring
 A difference indicates the existence of hidden malware
[Figure: the cross-view diff engine compares the IDS's VMI result (A B C D ...) with the in-VM monitoring result (A B D ...) and reports that C is hidden]
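The engine is essentially a set difference between the trusted VMI view and the in-VM view. The sketch below uses the made-up entries from the figure and is not the tool used in this work.

/* Minimal sketch of a cross-view diff: any entry seen by the offloaded view
 * (VMI) but missing from the in-VM view is reported as hidden. */
#include <stdio.h>
#include <string.h>

static void cross_view_diff(const char **outside, int n_out,
                            const char **inside, int n_in)
{
    for (int i = 0; i < n_out; i++) {
        int found = 0;
        for (int j = 0; j < n_in; j++)
            if (strcmp(outside[i], inside[j]) == 0) {
                found = 1;
                break;
            }
        if (!found)
            printf("hidden: %s\n", outside[i]);   /* possible hidden malware */
    }
}

int main(void)
{
    const char *vmi_view[]  = { "A", "B", "C", "D" };  /* offloaded (trusted) view */
    const char *invm_view[] = { "A", "B", "D" };       /* in-VM (possibly tampered) view */

    cross_view_diff(vmi_view, 4, invm_view, 3);        /* prints "hidden: C" */
    return 0;
}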
Cross-view Diff (2/2)
We tampered with ps in a VM
 A hidden process was detected as malicious
We tampered with netstat in a VM
 A hidden port was detected as a backdoor
Results from offloaded ps:
  PID TTY      TIME CMD
    1 ?    00:00:00 init
    2 ?    00:00:00 kthreadd
    :
Results from in-VM ps:
  PID TTY      TIME CMD
    2 ?    00:00:00 kthreadd
    :
Results from offloaded netstat:
  Proto ... Local Address ...
  tcp       0.0.0.0:5900
  tcp       0.0.0.0:22
  :
Results from in-VM netstat:
  Proto ... Local Address ...
  tcp       0.0.0.0:22
  :
KVMonitor vs. Xen
We compared the performance of VMI between
KVM and Xen
 Using a VMI tool for Xen
 Memory: standard library
 Disk: loopback mount
 Network: tap device
Hypervisor: Xen 4.1.3
Dom0 OS: Linux 3.2.0
VM: fully virtualized
[Figure: in Xen, the IDS runs in Dom0 (itself a VM) and accesses the VM's memory via libxenctrl, its disk image file, and its tap device]
Memory Introspection
We measured read throughput
 Copy VM's physical memory 4 KB at a time
KVMonitor was
 48x faster than Xen
[Graph: read throughput (GB/s): KVM 9.6, Xen 0.2]
Why is Xen so slow?
Xen has to map each memory page
 It cannot map all the pages in advance
 It takes time proportional to the number of pages
KVMonitor can read a pre-mapped file
[Figure: with Xen, the IDS maps each page of the VM's memory via libxenctrl; with KVMonitor, it reads the already-mapped memory file]
Kernel Integrity Checking
We measured the execution time of the kernel integrity checker
 Read the kernel code area
 Translate virtual to physical addresses
KVMonitor was
 118x faster than Xen
[Graph: execution time (ms): KVM 1.9, Xen 224]
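A sketch of such a checker, reusing the translate() and read_gpa() helpers assumed earlier, is shown below; the kernel text range and the simple checksum are placeholders for illustration, not the checker actually measured.

/* Hypothetical sketch of a kernel integrity checker: read the guest kernel's
 * code area page by page, translating each virtual address, and compute a
 * checksum to compare against a known-good value.  The address range is a
 * placeholder taken from the guest's System.map (_text .. _etext). */
#include <stdint.h>

uint64_t translate(uint64_t cr3, uint64_t va);        /* from the earlier sketches */
int read_gpa(uint64_t gpa, void *buf, uint64_t len);

#define KTEXT_START  0xffffffff81000000ULL   /* placeholder: _text  */
#define KTEXT_END    0xffffffff81600000ULL   /* placeholder: _etext */

uint64_t checksum_kernel_text(uint64_t cr3)
{
    uint8_t page[4096];
    uint64_t sum = 0;

    for (uint64_t va = KTEXT_START; va < KTEXT_END; va += sizeof(page)) {
        uint64_t pa = translate(cr3, va);      /* one page-table walk per page */
        if (!pa || read_gpa(pa, page, sizeof(page)) < 0)
            continue;                          /* skip unmapped pages */
        for (int i = 0; i < 4096; i++)         /* a real checker would use a hash */
            sum += page[i];
    }
    return sum;
}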
Why is the speedup so much larger?
The speedup in the real IDS was much larger than in the simple benchmark
 48x (simple benchmark)
 118x (kernel checker)
Due to address translation
 In Xen, the cost of accessing the page table is high
 Only 8 bytes are read after each page mapping
[Figure: memory access patterns via libxenctrl (map & read): the simple benchmark vs. the real kernel checker]
Disk Introspection
We measured the execution time of Tripwire
 For two disk formats: raw and qcow2
KVMonitor was
 Comparable to Xen
 Raw was faster than qcow2
 The difference between the formats was larger
[Graph: Tripwire execution time (min), KVM vs. Xen: raw 7.5 and 7.5, qcow2 9.4 and 9.2]
Network Introspection
We measured the packet loss rate in Snort
 Send many packets as fast as possible
KVMonitor was
 More lightweight than Xen
 Dom0 suffered from virtualization overhead
[Graph: packet loss rate (%): KVM 6.2, Xen 10.4]
Chkrootkit
We measured the execution time of chkrootkit
KVMonitor was
 1.6x faster than Xen, thanks to efficient memory introspection and no virtualization overhead in the host OS
 2x slower than in-VM execution, due to system call traps
[Graph: execution time (sec), offloading vs. in-VM: KVM 35 and 18, Xen 55 and 21]
Related Work
VMI tools
 Livewire [Garfinkel+ NDSS'03] for VMware
 XenAccess [Payne+ ACSAC'07] for Xen
Shm-snapshot for LibVMI [Xu+ PDL'13]
 Take a VM's memory snapshot in shared memory
 It takes 1.4 seconds for 3 GB
Volatility [Walters '07]
 A memory forensics framework
 VMI for KVM is enabled by PyVMI, a Python adapter from LibVMI
Conclusion
KVMonitor
 Achieve efficient VM introspection (VMI) in KVM
 32x faster than existing LibVMI
Performance comparison with Xen
 118x faster at maximum
 Chkrootkit was 1.6x faster
Future work
 Comparison with other virtualization software
 Integration with LibVMI
Download