Memory-Based Rack Area Networking
Presented by: Cheng-Chun Tu
Advisor: Tzi-cker Chiueh
Stony Brook University & Industrial Technology Research Institute
Disaggregated Rack Architecture
The rack becomes a basic building block for cloud-scale data centers
CPU/memory/NICs/disks embedded in self-contained servers
Disk pooling in a rack
NIC/Disk/GPU pooling in a rack
Memory/NIC/Disk pooling in a rack
Rack disaggregation
Pooling of HW resources for global allocation and an independent upgrade cycle for each resource type
Requirements
High-Speed Network
I/O Device Sharing
Direct I/O Access from VM
High Availability
Compatible with existing technologies
I/O Device Sharing
• Reduce cost: one I/O device per rack rather than one per host
• Maximize utilization: statistical multiplexing benefit
• Power efficiency: less intra-rack networking hardware and a smaller device count
• Reliability: a pool of devices available for backup
[Figure: non-virtualized hosts (applications on an OS) and virtualized hosts (VMs on a hypervisor) connected through a 10Gb Ethernet / InfiniBand switch to shared devices: GPUs/coprocessors, SAS controllers with HDD/flash-based RAIDs, Ethernet NICs, and other I/O devices.]
PCI Express
PCI Express is a promising candidate
Gen3 x16 = 128 Gbps with low latency (150 ns per hop); see the arithmetic below
New hybrid top-of-rack (TOR) switch consists of PCIe ports and Ethernet ports
Universal interface for I/O devices
Network, storage, graphics cards, etc.
Native support for I/O device sharing
I/O Virtualization
SR-IOV enables direct I/O device access from VMs
Multi-Root I/O Virtualization (MR-IOV)
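For reference, the 128 Gbps headline is the raw signaling rate; the effective figure works out as:
16 lanes x 8 GT/s (Gen3) = 128 GT/s raw
128 GT/s x (128/130 encoding) = about 126 Gbps of effective bandwidth per direction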
Challenges
Single Host (Single-Root) Model
Not designed for interconnecting or sharing among multiple hosts (multi-root)
Share I/O devices securely and efficiently
Support socket-based applications over PCIe
Direct I/O device access from guest OSes
Observations
PCIe: a packet-based network (TLP)
But everything in it comes down to memory addresses
Basic I/O Device Access Model
Device Probing
Device-Specific Configuration
DMA (Direct Memory Access)
Interrupt (MSI, MSI-X)
Everything is through memory access!
Thus, “Memory-Based” Rack Area Networking
Proposal: Marlin
Unify rack area network using PCIe
Extend server’s internal PCIe bus to the TOR PCIe switch
Provide efficient inter-host communication over PCIe
Enable clever ways of resource sharing
Share network, storage device, and memory
Support for I/O Virtualization
Reduce context switching overhead caused by interrupts
Global shared memory network
Non-cache-coherent; enables global communication through direct load/store operations
PCIe Architecture, SR-IOV, MR-IOV, and NTB (Non-Transparent Bridge)
INTRODUCTION
PCIe Single Root Architecture
• Multi-CPU, single root complex hierarchy
• Single PCIe hierarchy; single address/ID domain
• BIOS/system software probes the topology, then partitions and allocates resources
• Each device owns a range (or ranges) of physical addresses: BAR addresses, MSI-X, and device IDs
• Strict hierarchical routing
[Figure: a root complex above a tree of transparent-bridge (TB) switches and endpoints; each switch holds a routing-table BAR range (e.g., 0x10000-0x90000), so a write to physical address 0x55000 is routed down to Endpoint1, whose BAR0 covers 0x50000-0x60000. TB: Transparent Bridge]
Single Host I/O Virtualization
• Direct communication:
  • Directly assigned to VMs
  • Hypervisor bypassing
• Physical Function (PF):
  • Configures and manages the SR-IOV functionality
• Virtual Function (VF):
  • Lightweight PCIe function
  • With the resources necessary for data movement
• Intel VT-x and VT-d
  • CPU/chipset support for VMs and devices
Makes one device "look" like multiple devices
Can we extend virtual NICs to multiple hosts?
[Figure: Intel 82599 SR-IOV Driver Companion Guide — a PF plus multiple VFs serving Host1-Host3.]
Multi-Root Architecture
• Interconnects multiple hosts
• No coordination between root complexes (RCs)
• One domain per root complex → Virtual Hierarchy (VH)
• Endpoint4 is shared by VH1 and VH2
• Requires Multi-Root Aware (MRA) switches and endpoints:
  • New switch silicon, new endpoint silicon, a new management model
  • Lots of HW upgrades; not (or rarely) available
How do we enable MR-IOV without relying on the Virtual Hierarchy?
[Figure: three host domains (Host1-Host3), each with its own root complex, attached through an MRA switch and an MR PCIM to shared device domains with MR endpoints; MR Endpoint4 is shared by VH1 and VH2.]
Non-Transparent Bridge (NTB)
• Isolation of the two hosts' PCIe domains
  • Two-sided device; each host stops PCI enumeration at NTB-D
  • Yet allows status and data exchange
• Translation between domains
  • PCI device ID: by querying the ID lookup table (LUT)
  • Address: between the primary side and the secondary side
• Examples: external NTB devices; CPU-integrated (Intel Xeon E5)
[Figure: Multi-Host System and Intelligent I/O Design with PCI Express — Host A ([1:0.1]) and Host B ([2:0.2]) joined through an NTB.]
NTB Address Translation
NTB address translation: from the primary side to the secondary side
Configuration: map addrA in the primary side's BAR window to addrB on the secondary side
Example: addrA = 0x8000 at BAR4 on HostA; addrB = 0x10000 in HostB's DRAM
One-way translation (sketched below):
HostA read/write at addrA (0x8000) == read/write at addrB
HostB read/write at addrB has nothing to do with addrA on HostA
Figure: Multi-Host System and Intelligent I/O Design with PCI Express
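Below is a minimal user-space sketch of the one-way translation just described; how the BAR4 window gets mapped into the process, and the exact offsets, are assumptions for illustration only.

```c
#include <stdint.h>

/* HostA's view of its NTB BAR4 window (obtaining this mapping is
 * outside the scope of this sketch). */
volatile uint8_t *bar4_window;

void write_to_peer(uint32_t value)
{
        /* A store at offset 0x8000 inside the window is translated by the
         * NTB and lands at addrB (0x10000) in HostB's DRAM.  HostB's own
         * accesses to 0x10000 are not redirected back to HostA: the
         * translation is one-way. */
        *(volatile uint32_t *)(bar4_window + 0x8000) = value;
}
```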
Sharing SR-IOV NIC securely and efficiently [ISCA’13]
I/O DEVICE SHARING
Global Physical Address Space
Leverage the unused physical address space (2^48 = 256 TB) and map each host into the MH's address space
Each machine can then write to another machine's entire physical address space (see the sketch below)
Local addresses sit below 64G; global addresses above 64G are windows onto other hosts (e.g., the MH reaches a CH by writing to 200G, and a CH reaches a peer by writing to 100G)
MH: Management Host; CH: Compute Host
[Figure: the MH's physical address space — local memory and MMIO below 64G, and NTB-mapped windows above 64G covering the physical memory, CSR/MMIO, and VF regions of CH1, CH2, ..., CHn, each behind the target host's IOMMU.]
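A minimal sketch of that layout, assuming (purely for illustration) 64 GB windows laid out back-to-back above the 64G boundary; the slide does not fix the window size or ordering, so treat the constants as placeholders.

```c
#include <stdint.h>
#include <stdio.h>

#define WINDOW (64ULL << 30)            /* assumed 64 GB window per host */

/* Global address a host uses to reach physical address `local` inside
 * the host occupying window `slot` (slot 1 = first window above 64G). */
static uint64_t global_addr(unsigned slot, uint64_t local)
{
        return (uint64_t)slot * WINDOW + local;
}

int main(void)
{
        /* 8 GB into the third window is 200G, matching the
         * "MH writes to 200G" example on the slide. */
        printf("%lluG\n", (unsigned long long)(global_addr(3, 8ULL << 30) >> 30));
        return 0;
}
```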
Address Translations
hpa -> host physical addr.
hva -> host virtual addr.
gva -> guest virtual addr.
gpa -> guest physical addr.
dva -> device virtual addr.
CPUs and devices can access a remote host's memory address space directly.
[Figure: the translation chain each access path traverses — a CH's CPU (hva, via the page table, to hpa), a CH's device (dva, via the IOMMU, to hpa), a CH VM's CPU (gva -> gpa -> hpa via the guest page table and EPT), the MH's CPU writing to 200G, and the MH's device doing P2P — with NTB and remote-IOMMU translations before reaching the CH's physical address space.]
Virtual NIC Configuration
4 Operations: CSR, device configuration, Interrupt, and DMA
Observation: everything is memory read/write!
Sharing: a virtual NIC is backed by a VF of an SR-IOV NIC, with memory accesses redirected across PCIe domains
Native I/O device sharing is realized by memory address redirection!
System Components
[Figure: system components — non-virtualized and virtualized compute hosts (CHs), each attached through an NTB to the PCIe switch; the management host (MH) sits on the upstream port and owns the PF; the control path runs through the MH, while the data path goes directly between the CHs' VFs and the SR-IOV device (plus non-SR-IOV devices).]
Parallel and Scalable Storage Sharing
Proxy-based sharing of a non-SR-IOV SAS controller
Each CH has a pseudo SCSI driver that redirects commands to the MH
The MH has a proxy driver that receives the requests and lets the SAS controller direct DMA and interrupts to the CHs (sketched below)
Two of the 4 operations are direct:
Redirected CSR and device configuration involve the MH's CPU
DMA and interrupts are forwarded directly to the CHs
[Figure: in Marlin, CH1's pseudo SAS driver sends SCSI commands over PCIe to the proxy SAS driver on the MH, while DMA and interrupts flow directly between the SAS device and the CH; in the iSCSI alternative, the initiator on CH2 sends commands and data over TCP/Ethernet through an iSCSI target on the management host, which becomes the bottleneck.]
See also: A3CUBE's Ronnie Express
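A hypothetical sketch of that command redirection; the descriptor layout, the shared ring, and the doorbell are illustrative inventions, not the actual driver interface.

```c
#include <stdint.h>

struct marlin_scsi_cmd {
        uint8_t  cdb[16];       /* SCSI command descriptor block          */
        uint64_t ch_dma_addr;   /* CH-physical buffer, reachable over NTB */
        uint32_t len;
        uint32_t tag;
};

/* CH side: the pseudo driver posts the descriptor into an MH-visible ring
 * and rings a doorbell; only this control path touches the MH's CPU.  The
 * SAS controller later DMAs straight to ch_dma_addr, and its completion
 * interrupt is steered back to the CH. */
void ch_submit(volatile struct marlin_scsi_cmd *ring_slot,
               const struct marlin_scsi_cmd *cmd,
               volatile uint32_t *doorbell)
{
        *ring_slot = *cmd;      /* posted writes across the PCIe fabric */
        *doorbell  = cmd->tag;
}
```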
Security Guarantees: 4 cases
[Figure: the MH and two CHs, each CH running VM1 and VM2 on a VMM with directly assigned VFs, all attached to the PCIe switch fabric and to an SR-IOV device exposing a PF and VF1-VF4; arrows contrast legitimate device assignment with unauthorized accesses.]
VF1 is assigned to VM1 in CH1, yet without protection it could corrupt several other memory areas.
Security Guarantees
Intra-host
A VF assigned to a VM can only access memory assigned to that VM; access to other VMs' memory is blocked by the host's IOMMU
Inter-host
A VF can only access the CH it belongs to; access to other hosts is blocked by those CHs' IOMMUs
Inter-VF / inter-device
A VF cannot write to another VF's registers; isolated by the MH's IOMMU
Compromised CH
Not allowed to touch other CHs' memory or the MH's; blocked by the other CHs'/MH's IOMMUs
The global address space for resource sharing is secure and efficient!
Topic: Marlin Top-of-Rack Switch, Ethernet over PCIe (EOP),
CMMC (Cross-Machine Memory Copying), High Availability
INTER-HOST COMMUNICATION
Marlin TOR switch
[Figure: the Marlin PCIe hybrid switch — compute hosts attach through NTB ports, the master/slave MHs through the upstream and TB ports; intra-rack traffic stays on PCIe, while inter-rack traffic exits over 10/40GbE Ethernet into the inter-rack Ethernet fabric.]
Each host has 2 interfaces: inter-rack and inter-host
Inter-rack traffic goes through an SR-IOV Ethernet device
Intra-rack (inter-host) traffic goes through PCIe
Inter-Host Communication
HRDMA: Hardware-based Remote DMA
Move data from one host's memory to another host's memory using the DMA engine in each CH
How to support socket-based applications?
Ethernet over PCIe (EOP)
A pseudo Ethernet interface for socket applications
How to achieve app-to-app zero copy?
Cross-Machine Memory Copying (CMMC)
Copies from the address space of one process on one host to the address space of another process on another host (sketched below)
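A minimal user-space sketch of the CMMC idea under assumptions of my own: a device node (here called /dev/marlin_ntb, a made-up name) exposes a window onto the remote process's buffer, and plain stores through the mapping cross the PCIe fabric.

```c
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
        int fd = open("/dev/marlin_ntb", O_RDWR);   /* hypothetical device node */
        if (fd < 0)
                return 1;

        /* Map 1 MB of the peer window; the offset would select the
         * remote process's buffer. */
        void *remote = mmap(NULL, 1 << 20, PROT_READ | PROT_WRITE,
                            MAP_SHARED, fd, 0);
        if (remote == MAP_FAILED) {
                close(fd);
                return 1;
        }

        const char msg[] = "hello from the other host";
        memcpy(remote, msg, sizeof(msg));   /* CPU stores cross the PCIe fabric */

        munmap(remote, 1 << 20);
        close(fd);
        return 0;
}
```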
Cross Machine Memory Copying
Device Support RDMA
Several DMA transactions, protocol overhead, and devicespecific optimization.
InfiniBand/Ethernet RDMA
Payload
RX buffer
DMA to internal
device memory
DMA to receiver buffer
IB/Ethernet
fragmentation/encapsulation,
DMA to the IB link
Native PCIe RDMA, Cut-Through forwarding
Payload
PCIe
PCIe
RX buffer
DMA engine
(ex: Intel Xeon E5 DMA)
CPU load/store operations (non-coherent)
26
Inter-Host Inter-Processor INT
With InfiniBand/Ethernet, the I/O device generates the interrupt
[Figure: CH1 sends a packet over InfiniBand/Ethernet; the receiving NIC on CH2 raises the interrupt and CH2's IRQ handler runs.]
Marlin's inter-host inter-processor interrupt
Does not use the NTB's doorbell, due to its high latency
CH1 issues one memory write that is translated into an MSI at CH2 (total latency: 1.2 us); see the sketch below
[Figure: CH1 writes the data/MSI to address 96G+0xfee00000; the NTB and PCIe fabric translate it to CH2's address 0xfee00000, invoking CH2's IRQ handler.]
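A minimal sketch of that one-write interrupt, assuming peer_window already maps CH2's physical address space (based at 96G in the example) and that delivery follows the standard x86 MSI format, where the data word carries the vector in its low byte.

```c
#include <stdint.h>

/* CH1's mapping of CH2's physical address space through the NTB. */
volatile uint8_t *peer_window;

void send_cross_host_ipi(uint8_t vector)
{
        /* The posted write is translated so it lands on CH2's physical
         * address 0xfee00000, inside the x86 MSI address range, and is
         * delivered as an interrupt with the given vector. */
        *(volatile uint32_t *)(peer_window + 0xfee00000) = vector;
}
```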
Shared Memory Abstraction
Two machines share one global memory region
Non-cache-coherent; no LOCK# over PCIe
Implement software locks using Lamport's bakery algorithm (see the sketch below)
Memory can also be dedicated to a single host
[Figure: compute hosts reaching a remote memory blade across the PCIe fabric.]
Reference: Disaggregated Memory for Expansion and Sharing in Blade Servers [ISCA’09]
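A minimal sketch of Lamport's bakery lock over such a shared region; NHOSTS and the placement of choosing[]/number[] in the shared window are assumptions, and real code would also need the flushes or uncached mappings appropriate to a non-coherent PCIe fabric.

```c
#include <stdatomic.h>

#define NHOSTS 4

/* Assumed to live in the globally shared (non-coherent) memory region. */
static _Atomic int choosing[NHOSTS];
static _Atomic int number[NHOSTS];

static void bakery_lock(int me)
{
        atomic_store(&choosing[me], 1);
        int max = 0;
        for (int j = 0; j < NHOSTS; j++) {
                int n = atomic_load(&number[j]);
                if (n > max)
                        max = n;
        }
        atomic_store(&number[me], max + 1);     /* take a ticket */
        atomic_store(&choosing[me], 0);

        for (int j = 0; j < NHOSTS; j++) {
                if (j == me)
                        continue;
                while (atomic_load(&choosing[j]))
                        ;                       /* wait for j's ticket draw */
                while (atomic_load(&number[j]) != 0 &&
                       (atomic_load(&number[j]) < atomic_load(&number[me]) ||
                        (atomic_load(&number[j]) == atomic_load(&number[me]) &&
                         j < me)))
                        ;                       /* defer to smaller tickets */
        }
}

static void bakery_unlock(int me)
{
        atomic_store(&number[me], 0);
}
```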
Control Plane Failover
The master MH (MMH) is connected to the upstream port of virtual switch 1 (VS1), and the backup MH (BMH) to the upstream port of virtual switch 2 (VS2).
When the MMH fails, VS2 takes over all the downstream ports by issuing a port re-assignment (which does not affect peer-to-peer routing state).
[Figure: the Marlin switch partitioned into VS1 and VS2; the master and slave MHs each hold an upstream port and a TB link to Ethernet, and the downstream ports serve the compute hosts.]
Multi-Path Configuration
Equip two NTBs per host (Prim-NTB and Back-NTB), i.e., two PCIe links to the TOR switch
Map the backup path into a backup address space
Detect failures via PCIe AER
Both the MH and the CHs switch paths by remapping virtual-to-physical addresses (sketched below)
An MH write to 200G goes through the primary path; a write to 1T+200G goes through the backup path
[Figure: the MH's physical address space (up to 2^48) with CH1 mapped twice — through the primary path near 192G and through the backup path near 1T+128G — above the MH's own memory and MMIO.]
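A hypothetical sketch of the switch-over, assuming the driver keeps both mappings alive and simply redirects accesses when AER reports the primary link dead; the real mechanism (remapping virtual-to-physical translations) may differ in detail, and all names here are illustrative.

```c
#include <stdint.h>

volatile uint8_t *primary_win;  /* CH1 via Prim-NTB, e.g. near 192G     */
volatile uint8_t *backup_win;   /* CH1 via Back-NTB, e.g. near 1T+128G  */
volatile uint8_t *active_win;   /* mapping the data path currently uses */

/* Invoked when PCIe AER reports an uncorrectable error on the primary link. */
void on_primary_link_failure(void)
{
        /* Only the address mapping changes; the switch's peer-to-peer
         * routing state is left untouched. */
        active_win = backup_win;
}
```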
Topic: Direct SR-IOV Interrupt, Direct Virtual Device Interrupt, Direct Timer Interrupt
DIRECT INTERRUPT DELIVERY
DID: Motivation
Of the 4 operations, the interrupt path is not direct!
Unnecessary VM exits, e.g., 3 exits per Local APIC timer interrupt
[Figure: timeline of a virtualized LAPIC timer — the timer set-up, the interrupt injection when the software timer expires (root mode), and the end-of-interrupt each cross between guest (non-root) and host (root) mode.]
Existing solutions:
Focus on SR-IOV and leverage a shadow IDT (IBM ELI)
Focus on PV and require guest kernel modification (IBM ELVIS)
Hardware upgrades: Intel APIC-v or AMD VGIC
DID delivers ALL interrupts directly, without paravirtualization
Direct Interrupt Delivery
Definition:
An interrupt destined for a VM goes directly to the VM without any software intervention, i.e., it reaches the VM's IDT directly.
[Figure: interrupts from virtual devices (back-end drivers), the local APIC timer, and SR-IOV devices delivered straight to the VMs' cores, bypassing the hypervisor.]
Approach: disable the external-interrupt exiting (EIE) bit in the VMCS
Challenges: the mis-delivery problem — delivering an interrupt to an unintended VM
Routing: which core is the VM running on?
Scheduling: is the VM currently de-scheduled or not?
Signaling completion of the interrupt to the controller (direct EOI)
Direct SRIOV Interrupt
Baseline: every external interrupt triggers a VM exit, allowing KVM to inject a virtual interrupt through the emulated LAPIC
DID disables EIE (external-interrupt exiting), so interrupts can reach the VM's IDT directly
How to force a VM exit when EIE is disabled? NMI
[Figure: left, the baseline — the SR-IOV VF's interrupt passes the IOMMU, causes a VM exit while VM M is running, and KVM injects a virtual interrupt; right, DID — the interrupt reaches the running VM directly, and an NMI is used when the interrupt targets a de-scheduled VM M.]
Virtual Device Interrupt
Assume VM M has a virtual device with vector #v
Traditionally, the I/O thread sends an IPI that kicks the VM out (VM exit) and the hypervisor injects virtual interrupt v
DID: the virtual-device thread (back-end driver) issues an IPI with vector #v directly to the CPU core running the VM
The device's handler in the VM is invoked directly
If VM M is de-scheduled, fall back to injecting an IPI-based virtual interrupt (see the sketch below)
[Figure: the traditional kick-and-inject path versus DID's direct IPI with vector v from the I/O thread's core to the VM's core.]
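A hypothetical user-level sketch of that dispatch rule; the vm struct and the two helpers are stand-ins for hypervisor internals, not real KVM interfaces.

```c
#include <stdio.h>

struct vm {
        int cpu;        /* core the VM is pinned to                */
        int running;    /* non-zero while in guest (non-root) mode */
};

static void send_ipi(int cpu, int vector)               /* hardware IPI stand-in */
{
        printf("IPI vector %d -> core %d (handled by the guest IDT)\n", vector, cpu);
}

static void inject_virtual_irq(struct vm *vm, int vector)    /* software path */
{
        printf("VM on core %d de-scheduled: queue virtual irq %d\n", vm->cpu, vector);
}

static void did_deliver(struct vm *vm, int vector)
{
        if (vm->running)
                send_ipi(vm->cpu, vector);      /* DID: no VM exit          */
        else
                inject_virtual_irq(vm, vector); /* delivered at next VM entry */
}

int main(void)
{
        struct vm m = { .cpu = 2, .running = 1 };
        did_deliver(&m, 0x51);          /* goes straight to the guest       */
        m.running = 0;
        did_deliver(&m, 0x51);          /* falls back to software injection */
        return 0;
}
```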
Direct Timer Interrupt
• Today:
  – The x86 timer lives in the per-core local APIC (LAPIC) registers
  – KVM virtualizes the LAPIC timer for the VM with a software-emulated LAPIC
  – Drawback: high latency, due to several VM exits per timer operation
DID delivers the timer directly to VMs:
Disable timer-related MSR trapping in the VMCS bitmap
The timer interrupt is not routed through the IOMMU, so while VM M runs on core C, M exclusively uses C's LAPIC timer
The hypervisor revokes the timer when M is de-scheduled
DID Summary
DID delivers all sources of interrupts directly
SR-IOV, virtual device, and timer
Enables direct end-of-interrupt (EOI)
No guest kernel modification
More time spent in guest mode
[Figure: timeline comparison — without DID, SR-IOV, timer, and PV interrupts and their EOIs each bounce between guest and host mode; with DID they are handled entirely in guest mode.]
IMPLEMENTATION & EVALUATION
Prototype Implementation
CH: Intel i7 3.4GHz / Intel Xeon E5, 8-core CPU, 8 GB of memory
MH: Supermicro E3 tower, 8-core Intel Xeon 3.4GHz, 8 GB of memory
Link: Gen2 x8 (32 Gbps)
NTB/Switch: PLX 8619, PLX 8696
OS/hypervisor: Fedora 15 / KVM, Linux 2.6.38 / 3.6-rc4
VM: pinned to 1 core, 2 GB RAM
NIC: Intel 82599
[Figure: the same topology as the System Components slide — non-virtualized and virtualized compute hosts behind NTBs, the management host on the upstream port, and SR-IOV / non-SR-IOV devices on the PCIe switch; the control path runs through the MH and the data path goes directly to the VFs.]
Test-bed
[Figure: photo of the test-bed — 1U servers with Intel 82599 NICs and Intel NTBs behind a PLX Gen3 48-lane, 12-port PEX 8748 switch with PEX 8717 NTBs.]
Software Architecture of CH
[Figure: CH software stack — in user space, RDMA applications use the CMMC API (zero-copy) and network applications use the socket API (one-copy) over TCP/IP; in kernel space, the CMMC driver, EOP driver, and HRDMA/NTB driver carry intra-rack traffic over PCIe, while the Intel VF driver carries inter-rack traffic over Ethernet; VMs on QEMU/KVM receive direct interrupts through DID, with MSI-X coming from the I/O devices.]
I/O Sharing Performance
[Figure: bandwidth (Gbps, 0-10) versus message size (1-64 KB) for SRIOV, MRIOV, and MRIOV plus copying overhead.]
Inter-Host Communication
• TCP unaligned: packet payload addresses are not 64B aligned
• TCP aligned + copy: allocate a buffer and copy the unaligned payload
• TCP aligned: packet payload addresses are 64B aligned
• UDP aligned: packet payload addresses are 64B aligned
[Figure: bandwidth (Gbps, up to ~22) versus message size (1 KB-64 KB) for TCP unaligned, TCP aligned+copy, TCP aligned, and UDP aligned.]
Interrupt Invocation Latency
DID adds only 0.9 us of overhead
KVM latency is much higher due to 3 VM exits
Setup: the VM runs cyclictest, measuring the latency from hardware interrupt generation to user-level handler invocation (highest priority, 1K interrupts/sec)
KVM shows 14 us because of 3 exits per interrupt: external interrupt, programming the x2APIC (TMICT), and EOI
Memcached Benchmark
DID improves TIG (Time In Guest) by 18%
DID improves performance by 3x
TIG: the percentage of time the CPU spends in guest mode
Setup: a Twitter-like workload, measuring the peak requests served per second (RPS) while maintaining 10 ms latency
PV / PV-DID: intra-host memcached client/server
SRIOV / SRIOV-DID: inter-host memcached client/server
Discussion
Ethernet / InfiniBand
Designed for longer distances and larger scale
InfiniBand has limited sources (only Mellanox and Intel)
QuickPath / HyperTransport
Cache-coherent inter-processor links
Short distance, tightly integrated within a single system
NUMAlink / SCI (Scalable Coherent Interface)
High-end shared-memory supercomputers
PCIe is more power-efficient
Its transceivers are designed for short-distance connectivity
Contribution
We design, implement, and evaluate a PCIe-based rack area network
A PCIe-based global shared-memory network built from standard, commodity building blocks
Secure I/O device sharing with native performance
A hybrid TOR switch with inter-host communication
High availability: control-plane and data-plane failover
DID hypervisor: low virtualization overhead
[Figure: the Marlin platform — processor boards, a PCIe switch blade, and an I/O device pool.]
Other Works/Publications
SDN
Peregrine: An All-Layer-2 Container Computer Network, CLOUD'12
SIMPLE-fying Middlebox Policy Enforcement Using SDN, SIGCOMM'13
In-Band Control for an Ethernet-Based Software-Defined Network, SYSTOR'14
Rack Area Networking
Secure I/O Device Sharing among Virtual Machines on Multiple Hosts, ISCA'13
Software-Defined Memory-Based Rack Area Networking, under submission to ANCS'14
A Comprehensive Implementation of Direct Interrupt, under submission to ASPLOS'14
Questions?
THANK YOU