HyperCheck: A Hardware-Assisted Integrity Monitor
Jiang Wang, joint work with Angelos Stavrou and Anup Ghosh
CSIS, George Mason University

Outline
◦ Motivation
◦ Our approach
◦ Prototype Implementation
◦ Evaluation
◦ Future work

Motivation
Virtualization is widely deployed for servers and desktops.
◦ In 2009, 18% of server workloads were virtualized.
◦ Expected to grow to more than 50% by 2012 (Gartner Inc.).
Hypervisors (also called Virtual Machine Monitors, or VMMs) are the core component that enforces policy.
Hypervisors are the new attack target.
[Figure: virtualization stack. A privileged domain (kernel 0) and guest OSes such as Windows (kernel 1) and Linux (kernel 2) run on the hypervisor (virtual machine monitor), which runs on the hardware.]

Xen vulnerabilities:
◦ Allow the attacker to run arbitrary code in the privileged domain, e.g. CVE-2007-4993 and CVE-2007-1320.
DMA attack (Invisible Things Lab, Black Hat '08):
◦ Modify a device driver to write arbitrary data into the hypervisor via DMA, e.g. into the Interrupt Descriptor Table (IDT) or the hypercall table.

Modify the IDT directly
[Figure: the attacker overwrites entries of the original IDT so that they point to attacker code in memory.]

Copy-and-change attack
[Figure: the attacker copies the original IDT, modifies entries of the copy so they point to attacker code, and switches the CPU to the new IDT; the original IDT stays intact.]

Out-of-VMM defense mechanisms
Copilot (Petroni et al., USENIX Security '04):
◦ Cannot get the execution state, so it misses the copy-and-change attack.
◦ Can be subverted by a DMA remapping attack.
HyperGuard (Rutkowska, Black Hat '08):
◦ Uses SMM to get the execution state.
◦ The OS is frozen while the CPU is in SMM, which causes high overhead.
HyperSentry (Azab et al., CCS '10):
◦ Uses SMM to monitor the integrity of the hypervisor.
DeepWatch (Bulygin, Black Hat '08):
◦ Based on a micro-controller present on some motherboards.
◦ Needs a signature of the malware.

In-VMM defense mechanisms
HyperSafe (Wang, Oakland '10):
◦ Method: non-bypassable memory lockdown and restricted pointer indexing.
◦ Drawbacks: needs to modify the kernel; aliasing problem.

Design goals:
◦ Monitor the hypervisor code and static data.
◦ Complete execution view.
◦ Low performance overhead.
◦ No hardware modification.
◦ No software changes to the hypervisor or kernel.
◦ Provide an out-of-the-box view that cannot be subverted.

Our approach: SMM + a COTS network card (NIC).
◦ SMM has existed in all x86 CPUs since the 486.
[Figure: architecture. On the target machine, OS 1 and OS 2 run on the hypervisor; the PCI NIC and SMM sit at the hardware level, with (1) the acquiring module and (2) the register checking module in SMM. Memory snapshots are sent over the network to the analysis module on a separate monitor machine.]

System Management Mode (SMM) is another CPU mode of x86, alongside real-address mode and protected mode.
◦ To enter SMM, a System Management Interrupt (SMI) is required.
◦ SMM has a special RAM, SMRAM, which can be locked.
◦ SMM code is included in the BIOS.

SMRAM cannot be modified:
◦ Locked by hardware in flash and in memory.
◦ Can be integrated with the BIOS code.
◦ Can be set up by a trusted boot module.
Other software on the target machine is not trusted.
◦ The network card driver is therefore put into SMM.
The attacker will modify some portion of the hypervisor or kernel in memory.

Workflow:
1. The PCI NIC triggers an SMI.
2. The SMM code checks the CPU registers.
3. The SMM code sends the memory contents out via the NIC.
4. The analysis module receives the data.
5. If the snapshot differs from the previous one, raise an alarm.

Two prototypes:
◦ HyperCheck-I: QEMU-based, for easy debugging.
◦ HyperCheck-II: runs on real hardware, used for performance evaluation.
HyperCheck protects the static part of the VMM or OS:
◦ VMM code
◦ Dom0 code
◦ Linux or Windows kernel code
◦ Static control data (such as the Interrupt Descriptor Table)

Acquiring module
PCI devices with DMA support: we use commercial network cards.
◦ Challenge: they need drivers, and drivers normally reside in the untrusted OS, driver domain, or VMM.
◦ Solution: put the driver into SMM. We used an Intel e1000 NIC.

Register checking module
Resides in SMM. The CPU registers of the interrupted context are saved in SMRAM before the switch to SMM, so the handler can inspect them.
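Below is a minimal C sketch of what this check might look like. The state-save offset SAVE_AREA_CR3, the helpers smram_save, clean_idt_base, and raise_alarm, and the use of SIDT inside the SMI handler are all illustrative assumptions; real SMRAM state-save layouts are CPU-specific, and the HyperCheck prototype's code may differ.

    /*
     * Minimal sketch of the register checking step (illustrative
     * offsets and helper names; not the actual HyperCheck code).
     */
    #include <stdint.h>

    #define SAVE_AREA_CR3 0x1FF8u  /* hypothetical offset of saved CR3 */

    struct idtr {
        uint16_t limit;
        uint64_t base;
    } __attribute__((packed));

    extern volatile uint8_t *smram_save;  /* mapped SMRAM state-save area */
    extern uint64_t clean_idt_base;       /* recorded at trusted boot */
    extern void raise_alarm(const char *msg);

    void check_registers(uint64_t *cr3_out)
    {
        /* IDTR should be static after boot; inside the SMI handler,
           SIDT still reflects the interrupted context's value. */
        struct idtr r;
        __asm__ volatile ("sidt %0" : "=m"(r));
        if (r.base != clean_idt_base)
            raise_alarm("IDTR points to a relocated IDT");

        /* The interrupted context's CR3, read from the save area. */
        *cr3_out = *(volatile const uint64_t *)(smram_save + SAVE_AREA_CR3);
    }

In this sketch, CR3 is recorded rather than compared against a fixed value, since the register legitimately changes with the interrupted context; it is used to translate the virtual addresses of the monitored regions into physical addresses.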
Check two registers:
◦ IDTR (Interrupt Descriptor Table Register): static after boot.
◦ CR3: the page directory base register, used to translate virtual addresses to physical ones.

Analysis module
◦ Receives the packets from the acquiring module.
◦ Compares the current memory snapshot with the clean state (obtained right after the system boots).
◦ If they differ, it reports a potential attack.

Verifying the static property:
◦ We monitored the target code and data for one hour and did not observe any changes.
◦ They do change while the system is booting.
Detection:
◦ Detected all the simulated attacks against the Xen hypervisor, Dom0, and the Linux and Windows kernels.

Target    Monitored item      Static  Modification detected
Xen       IDT table           Y       Y
Xen       Hypercall table     Y       Y
Xen       Exception table     Y       Y
Xen       Hypervisor code     Y       Y
Dom0      System call table   Y       Y
Dom0      Kernel code         Y       Y
Dom0      IDT table           Y       Y
Linux     System call table   Y       Y
Linux     Kernel code         Y       Y
Linux     IDT table           Y       Y
Windows   System call table   Y       Y

[Figure: network overhead in million CPU cycles for packet sizes from 1 to 15 KB when sending 2.7 MB of data.]

[Figure: network overhead in million CPU cycles for data sizes from 10 to 200 MB.]

Table 1. CPU overhead comparison (times in ms)

Target           Size (MB)  HyperCheck only  SMM code  TPM
Linux            2          31               203       1022
Xen + Dom0       2.7        40               274       >1022
Windows XP       1.8        28               183       >972
Hyper-V + root   2.4        36               244       >1022
VMware ESXi 3.5  2.2        33               223       >1022

Table 2. Feature comparison

            HyperCheck  SMM   PCI   TPM
Memory      x           x     x     x
Registers   x           x           x
Overhead    low         high  low   high

Future work
Scrubbing attack
◦ The attacker modifies the hypervisor between two scans and restores it before the next scan.
◦ Countermeasure: randomize the scan interval (a minimal sketch follows the last slide).
Dynamic data
◦ The current analysis module does not know how to check dynamic data such as the stack and heap.
◦ Countermeasure: syntax analysis.

Thank you! Questions?
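Backup: a minimal sketch of the randomized scan interval proposed under future work. Everything here is an illustrative assumption (the helpers sleep_ms and trigger_smi_scan, the interval bounds, and the use of C's rand()); it is not part of the HyperCheck prototype.

    /*
     * Backup sketch: randomize the gap between two scans so an
     * attacker cannot time a scrubbing attack around a fixed period.
     */
    #include <stdint.h>
    #include <stdlib.h>

    #define MIN_INTERVAL_MS  500u   /* illustrative lower bound */
    #define MAX_INTERVAL_MS 2000u   /* illustrative upper bound */

    extern void sleep_ms(uint32_t ms);   /* platform timer, assumed */
    extern void trigger_smi_scan(void);  /* asks the NIC to raise an SMI */

    void scan_loop(void)
    {
        for (;;) {
            /* Pick an unpredictable gap; a production version should
               use a cryptographic RNG rather than rand(). */
            uint32_t gap = MIN_INTERVAL_MS +
                (uint32_t)rand() % (MAX_INTERVAL_MS - MIN_INTERVAL_MS + 1);
            sleep_ms(gap);
            trigger_smi_scan();
        }
    }

Because the attacker cannot predict when the next snapshot will be taken, restoring the modified hypervisor just in time becomes a race the attacker is likely to lose eventually.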