SIGCOMM WREN’09
Hash, Don’t Cache:
Fast Packet Forwarding for Enterprise Edge Routers
Minlan Yu
Princeton University
minlanyu@cs.princeton.edu
Joint work with Jennifer Rexford
Enterprise Edge Router
• Enterprise edge routers
– Connect upstream providers and internal routers
• A few outgoing links
– A small data structure for each next hop
[Figure: an enterprise network connected to upstream Provider 1 and Provider 2 through an edge router]
Challenges of Packet Forwarding
• Full-route forwarding table (FIB)
– For load balancing, fault tolerance, etc.
– More than 250K entries, and growing
• Increasing link speed
– Over 10 Gbps
• Requires large, expensive memory
– Leads to complicated, costly high-end routers
• More cost-efficient, less power-hungry solution?
– Perform fast packet forwarding in a small SRAM
Using a Small SRAM
• Route caching is not a viable solution
– Store the most frequently used entries in cache
– Bad performance during cache miss
• Low throughput and high packet loss
– Bad performance under worst-case workloads
• Malicious traffic with a wide range of destinations
• Route changes, link failures
• Our solution should be workload independent
– Fit the entire FIB in the small SRAM
Bloom Filter
• Bloom filters in fast memory (SRAM)
– A compact data structure for a set of elements
– Calculate s hash functions to store element x
– Easy to check membership
– Reduce memory at the expense of false positives
[Figure: element x is hashed by h1(x), h2(x), …, hs(x); the corresponding bits in the array V0 … Vm-1 are set to 1]
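For concreteness, a minimal Bloom filter sketch in Python (the double-hashing scheme and all names are illustrative, not taken from the paper):

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: an m-bit array and s hash functions."""

    def __init__(self, m, s):
        self.m = m              # number of bits in the array
        self.s = s              # number of hash functions
        self.bits = [0] * m

    def _positions(self, x):
        # Derive s positions from two digests (double hashing).
        h1 = int.from_bytes(hashlib.md5(x.encode()).digest(), "big")
        h2 = int.from_bytes(hashlib.sha1(x.encode()).digest(), "big")
        return [(h1 + i * h2) % self.m for i in range(self.s)]

    def add(self, x):
        for p in self._positions(x):
            self.bits[p] = 1

    def contains(self, x):
        # True means "possibly in the set": false positives are possible,
        # false negatives are not.
        return all(self.bits[p] for p in self._positions(x))
```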
Bloom Filter Forwarding
• One Bloom filter (BF) per next hop
– Store all addresses forwarded to that next hop
• Consider flat addresses in the talk
– See paper for extensions to longest prefix match
[Figure: the packet destination is queried against one Bloom filter per next hop (Nexthop 1 … Nexthop T); a hit selects the outgoing link]
• T is small for enterprise edge routers
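A sketch of this per-next-hop lookup, reusing the BloomFilter class sketched above (function and variable names are illustrative):

```python
def build_forwarding_bfs(fib, m, s, num_next_hops):
    """fib: dict mapping destination address -> next-hop index in [0, T)."""
    bfs = [BloomFilter(m, s) for _ in range(num_next_hops)]
    for dest, nh in fib.items():
        bfs[nh].add(dest)
    return bfs

def lookup(bfs, dest):
    """Return the indices of every next hop whose Bloom filter matches dest.
    Exactly one match is correct; any extras are false positives."""
    return [nh for nh, bf in enumerate(bfs) if bf.contains(dest)]
```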
Contributions
• Make efficient use of limited fast memory
– Formulate and solve optimization problem to
minimize false-positive rate
• Handle false positives
– Leverage properties of enterprise edge routers
• Adapt Bloom filters for routing changes
– Leverage counting Bloom filter in slow memory
– Dynamically adjust Bloom filter size
Outline
• Optimize memory usage
• Handle false positives
• Handle routing dynamics
Memory Usage Optimization
• Consider fixed forwarding table
• Goal: Minimize overall false-positive rate
– Probability one or more BFs have a false positive
• Input:
– Fast memory size M
– Number of destinations per next hop
– The maximum number of hash functions
• Output: the size of each Bloom filter
– Larger BFs for next hops with more destinations
Constraints and Solution
• Constraints
– Memory constraint
• Sum of all BF sizes ≤ fast memory size M
– Bound on number of hash functions
• To bound CPU calculation time
• Bloom filters share the same hash functions
• Proved to be a convex optimization problem
– An optimal solution exists
– Solved by IPOPT (Interior Point OPTimizer)
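With the usual approximation f_i ≈ (1 − e^(−k·n_i/m_i))^k for a Bloom filter with m_i bits, n_i elements, and k hash functions, the optimization can be sketched as follows. This toy version uses SciPy's SLSQP solver rather than IPOPT, fixes one k for all filters, and treats the sizes as continuous; it illustrates the formulation, it is not the paper's implementation:

```python
import numpy as np
from scipy.optimize import minimize

def optimize_bf_sizes(n, M, k):
    """n[i]: number of destinations for next hop i; M: total bits of fast
    memory; k: number of hash functions shared by all filters.
    Minimizes the probability that at least one BF yields a false positive."""
    n = np.asarray(n, dtype=float)

    def overall_fp(m):
        f = (1.0 - np.exp(-k * n / m)) ** k      # per-filter false-positive rate
        return 1.0 - np.prod(1.0 - f)            # P(some filter is wrong)

    m0 = M * n / n.sum()                          # start proportional to n_i
    res = minimize(overall_fp, m0, method="SLSQP",
                   bounds=[(1.0, float(M))] * len(n),
                   constraints=[{"type": "eq",    # memory constraint: sum = M
                                 "fun": lambda m: m.sum() - M}])
    return res.x                                  # bits allocated to each BF

# Illustrative numbers: 200K entries split over 10 next hops, 600 KB of SRAM.
sizes = optimize_bf_sizes([20000] * 10, 600 * 1024 * 8, 8)
```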
Evaluation of False Positives
– FIB with 200K entries, 10 next hops
– 8 hash functions
– Takes at most 50 msec to solve the optimization
[Figure: overall false-positive rate (fraction, log scale from 1 down to 1e-6) vs. memory size (0 to 1000 KB)]
Outline
• Optimize memory usage
• Handle false positives
• Handle routing dynamics
False Positive Detection
• Multiple matches in the Bloom filters
– One of the matches is correct
– The others are caused by false positives
[Figure: the packet destination hits multiple Bloom filters (Nexthop 1 … Nexthop T); only one of the hits is correct]
Handle False Positives on Fast Path
• Leverage multi-homed enterprise edge router
• Send to a random matching next hop
– Packets can still reach the destination, even if they occasionally leave through a less-preferred outgoing link
– No extra traffic, but may cause packet loss
• Send duplicate packets
– Send copy of packet to all matching next hops
– Guarantees reachability, but introduces extra traffic
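A sketch of the two fast-path strategies, reusing the lookup() helper sketched earlier (illustrative only):

```python
import random

def forward_random(bfs, dest):
    """Send to one matching next hop chosen at random: no extra traffic,
    but the packet may occasionally leave on a less-preferred link."""
    matches = lookup(bfs, dest)
    return random.choice(matches) if matches else None

def forward_duplicate(bfs, dest):
    """Send a copy to every matching next hop: reachability is guaranteed,
    at the cost of extra traffic on false-positive links."""
    return lookup(bfs, dest)
```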
Prevent Future False Positives
• For a packet that experiences a false positive
– Conventional lookup in the background
– Cache the result
• For the subsequent packets
– No longer experience false positives
• Compared to conventional route cache
– Much smaller (only for false-positive destinations)
– Not easily invalidated by an adversary
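A sketch of the small false-positive cache (slow_path_lookup stands in for the conventional FIB lookup; for brevity this version does the lookup synchronously rather than in the background):

```python
fp_cache = {}   # destination -> verified next hop, only for false-positive hits

def forward_with_cache(bfs, dest, slow_path_lookup):
    if dest in fp_cache:                  # this destination hit a FP before
        return fp_cache[dest]
    matches = lookup(bfs, dest)
    if len(matches) == 1:                 # common case: a single, correct hit
        return matches[0]
    nh = slow_path_lookup(dest)           # full lookup (in the paper: background)
    fp_cache[dest] = nh                   # later packets skip the false positive
    return nh
```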
Outline
• Optimize memory usage
• Handle false positives
• Handle routing dynamics
Problem of Bloom Filters
• Routing changes
– Add/delete entries in BFs
• Problem of Bloom Filters (BF)
– Do not allow deleting an element
• Counting Bloom Filters (CBF)
– Use a counter instead of a bit in the array
– CBFs can handle adding/deleting elements
– But require more memory than BFs
Update on Routing Change
• Use CBF in slow memory
– Assist BF to handle forwarding-table updates
– Easy to add/delete a forwarding-table entry
[Figure: deleting a route decrements the corresponding counters in the CBF in slow memory; a bit in the fast-memory BF is cleared only when its CBF counter reaches 0]
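A counting-Bloom-filter sketch showing how adds and deletes in the slow-memory CBF keep the fast-memory BF consistent. Here the CBF and BF use the same size and hash functions, matching the figure above; all names are illustrative:

```python
class CountingBloomFilter:
    """CBF in slow memory paired with a BF in fast memory."""

    def __init__(self, m, s):
        self.counts = [0] * m
        self.bf = BloomFilter(m, s)        # same positions as the CBF

    def add(self, x):
        for p in self.bf._positions(x):
            self.counts[p] += 1
            self.bf.bits[p] = 1

    def delete(self, x):
        for p in self.bf._positions(x):
            self.counts[p] -= 1
            if self.counts[p] == 0:        # clear the BF bit only at count 0
                self.bf.bits[p] = 0
```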
Occasionally Resize BF
• Under significant routing changes
– Number of addresses in BFs changes significantly
– Re-optimize BF sizes
• Use CBF to assist resizing BF
– Large CBF and small BF
– Easy to expand BF size by contracting CBF
[Figure: a large CBF is easy to contract to a small BF (e.g., down to size 4) by folding counters, while a small BF is hard to expand]
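A sketch of contracting a larger CBF into a smaller BF by folding counters, assuming the new size divides the CBF size so that hash positions can simply be taken modulo the new size (an illustration of the idea, not the paper's exact procedure):

```python
def contract_cbf_to_bf(counts, new_size):
    """Fold a CBF with len(counts) counters into a BF of new_size bits:
    bit j is set if any counter at a position p with p % new_size == j
    is nonzero."""
    assert len(counts) % new_size == 0
    bits = [0] * new_size
    for p, c in enumerate(counts):
        if c > 0:
            bits[p % new_size] = 1
    return bits

# For example, an 8-counter CBF contracts to a 4-bit BF:
print(contract_cbf_to_bf([0, 0, 1, 1, 0, 0, 0, 3], 4))   # -> [0, 0, 1, 1]
```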
BF-based Router Architecture
[Figure: BF-based router architecture]
Prototype and Evaluation
• Prototype in kernel-level Click
• Experiment environment
– 3.0 GHz 64-bit Intel Xeon
– 2 MB L2 data cache, used as fast memory size M
• Forwarding table
– 10 next hops, 200K entries
• Peak forwarding rate
– 365 Kpps for 64-byte packets
– 10% faster than conventional lookup
Conclusion
• Improve packet forwarding for enterprise edge routers
– Use Bloom filters to represent forwarding table
• Only require a small SRAM
– Optimize usage of a fixed small memory
• Multiple ways to handle false positives
– Leverage properties of enterprise edge routers
• React quickly to FIB updates
– Leverage Counting Bloom Filter in slow memory
Ongoing Work: BUFFALO
• Bloom filter forwarding in large enterprise networks
– Deploy BF-based switches in the entire network
– Forward all the packets on the fast path
• Gracefully handling false positives
– Randomly select a matching next hop
– Techniques to avoid loops and bound path stretch
www.cs.princeton.edu/~minlanyu/writeup/conext09.pdf
Thanks
• Questions?