Ditto: A System for Opportunistic Caching in Multi-hop Mesh Networks
Fahad Rafique Dogar
Joint work with: Amar Phanishayee, Himabindu Pucha, Olatunji Ruwase, and Dave Andersen
Carnegie Mellon University

Wireless Mesh Networks (WMNs)
• Cost effective, with greater coverage
• Testbeds: RoofNet@MIT, MAP@Purdue, …
• Commercial: Meraki (100,000 users of San Francisco's 'Free the Net' service)

Throughput Problem in WMNs
• Interference
• The gateway (GW) becomes a bottleneck

Exploiting Locality through Caching
• Path of the transfer: Alice, P1, P3, GW
• P1 and P3 perform on-path caching
• P2 can perform opportunistic caching by overhearing the transfer
• On-path + opportunistic caching -> Ditto

Ditto: Key Contributions
• Built an opportunistic caching system for WMNs
• Insights on opportunistic caching
– Is it feasible?
– Key factors
• Evaluation on two testbeds
• Ditto's throughput compared with on-path caching and no caching
– Up to 7x improvement over on-path caching
– Up to 10x improvement over no caching

Outline
• Challenge and Opportunity
• Ditto Design
• Evaluation
• Related Work

Challenge for Opportunistic Caching
• Wireless networks experience high loss rates
– Usually dealt with through link-layer retransmissions
• An overhearing node also experiences losses
– Unlike P1, P2 cannot ask for retransmissions
– Successful overhearing of a large file is unlikely
• Main challenge: lossy overhearing

More Overhearing Opportunities
• Path of the transfer: Alice, P1, P3, …
• P2 may overhear the same data on more than one hop of a multi-hop transfer
• This reduces the problem of lossy overhearing

Outline
• Challenge and Opportunity
– Lossy overhearing
– Multiple opportunities to overhear
• Ditto Design
– Chunk-based transfers
– Ditto proxy
– Sniffer
• Evaluation
• Related Work

Chunk-Based Transfers
• Motivation
– Lossy overhearing -> smaller caching granularity
• Idea
– Divide each file into smaller chunks (8-32 KB)
– Use the chunk as the unit of transfer
• Ditto uses the Data-Oriented Transfer (DOT) [1] system for chunk-based transfers

[1] Tolia et al., An Architecture for Internet Data Transfer. NSDI 2006.

Data-Oriented Transfer (DOT)
• Chunking: a file (e.g., foo.txt) is split into chunks, and each chunk is named by a cryptographic hash (chunkID1, chunkID2, chunkID3)
• Transfer: the receiver application requests foo.txt; the sender application responds with the file's chunk IDs {A, B, C}; the receiver's DOT layer then issues chunk requests, which the sender's DOT layer answers with chunk responses

An Example Ditto Transfer
• The application-level exchange of chunk IDs is the same as in DOT
• Chunk requests and responses pass through a Ditto proxy at each hop

Ditto Proxy
• Performs both on-path and opportunistic caching
• Separate TCP connection on each hop
• Next hop chosen from routing-table information

Sniffer
• Path of the transfer: Alice, P1, P3, … (P2 overhears)
• TCP stream identification through the tuple (src IP, src port, dst IP, dst port)
• Placement within the stream based on TCP sequence number
• Next step: inter-stream chunk reassembly

Inter-Stream Chunk Reassembly
• Look for the Ditto header to find chunk boundaries
• Exploits multiple overhearing opportunities

Outline
• Challenges and Opportunities
• Ditto Design
• Evaluation
– Testbeds
– Experimental setup
– Key results
– Summary
• Related Work

Emulab Wireless Testbed

MAP Campus Testbed (Purdue Univ.)
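An aside on the chunk naming described in the DOT slides above: since each chunk is named by a cryptographic hash, chunk IDs depend only on content. The sketch below is illustrative only, assuming SHA-1 chunk IDs and a fixed 16 KB chunk size; DOT's actual chunking scheme and hash choice are not specified in these slides, and the `chunk_ids` helper is hypothetical.

```python
# Illustrative sketch (not Ditto's code): split a file into fixed-size
# chunks and name each chunk by a cryptographic hash, so any node that
# holds a chunk, however obtained, can serve and verify it.
import hashlib

CHUNK_SIZE = 16 * 1024  # 16 KB, inside Ditto's 8-32 KB range

def chunk_ids(data: bytes, chunk_size: int = CHUNK_SIZE) -> list[str]:
    """Return the list of chunk IDs (SHA-1 hex digests) for `data`."""
    return [
        hashlib.sha1(data[off:off + chunk_size]).hexdigest()
        for off in range(0, len(data), chunk_size)
    ]
```

Because the IDs are content-derived, a receiver can fetch any chunk from any cache, whether an on-path proxy or an opportunistic overhearer, and check integrity by re-hashing.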
[Figure: map of the campus testbed, with the gateway node marked]

Experimental Setup
• Mode: 802.11b
• Rate: auto
• Other parameters: default
• Routing: static
• Cross traffic: none
• Cache eviction: none
• File sizes: 1-5 MB
• Chunk sizes: 8 KB, 16 KB, 32 KB

Evaluation Scenarios
• Measuring overhearing effectiveness
– The gateway transfers a file to one receiver; all other nodes act as observers
– Each observer reports the number of chunks it successfully reconstructs
– Each node takes a turn as the receiver

Reconstruction Efficiency
• Around 30% of the observers reconstruct at least 50% of the chunks
• Around 60% of the observers don't reconstruct anything

Reconstruction Efficiency
• Around 50% of the observers reconstruct at least 50% of the chunks

Zooming In: Campus Testbed
• Nodes close to the gateway can shield it from becoming a bottleneck

Throughput Evaluation
• Leaf nodes request the same file from the gateway
– e.g., a software update on all nodes
• Different request patterns:
– Sequential, staggered
– Random order of receivers
• Schemes: Ditto compared with on-path caching and end-to-end transfers (E2E, no caching)

Throughput Improvement in Ditto (Campus Testbed)
• E2E: median = 540 Kbps
• On-path caching: median = 1380 Kbps
• Ditto: median = 5370 Kbps

Evaluation Summary
• Proximity: nodes closer to the gateway have very high reconstruction efficiency and can shield the gateway from becoming a hotspot
• Throughput: up to an order of magnitude improvement with Ditto
• Inter-stream reassembly: few multiple-overhearing opportunities in practice; 10% improvement where applicable
• Chunk size: 8-32 KB chunks provide good reconstruction efficiency with low overhead

Related Work
• Hierarchical caching [Fan98, Das07, …]
– Caching is more effective on lossy wireless links
– Ditto's overhearing feature is unique
• Packet-level caching [Spring00, Afanasyev08]
– Ditto is purely opportunistic
– Ditto exploits similarity at inter-request timescales
• Making the best of broadcast [MORE, ExOR, …]
– Largely orthogonal

Conclusion
• Opportunistic caching works!
– Key ideas: chunk-based transfer, inter-stream chunk reconstruction
– Feasibility established on two testbeds
– Nodes closer to the gateway can shield it from becoming a bottleneck
• Significant benefit to end users
– Up to 7x throughput improvement over on-path caching
– Up to 10x throughput improvement over no caching

Thank you!
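Backup: as an illustrative aside, the sniffer pipeline from the design section (place overheard bytes by TCP sequence number within each stream, find chunk boundaries via the Ditto header, then reassemble chunks across streams) can be sketched as below. This is a minimal sketch, not Ditto's implementation: the `MAGIC` value, the header layout (magic + 1-byte chunk ID + 2-byte length), and all function names are hypothetical, and real code would also handle sequence wraparound, retransmissions, and headers that are themselves partly lost.

```python
# Minimal sketch of the sniffer's reassembly steps (assumed formats).

MAGIC = b"DITTO"
HDR = len(MAGIC) + 3  # magic + chunk id + 16-bit body length

def linearize(segments):
    """Place overheard TCP segments of one stream by sequence number.
    `segments` is a list of (seq, payload) pairs; unheard bytes are None."""
    if not segments:
        return []
    base = min(seq for seq, _ in segments)
    end = max(seq + len(p) for seq, p in segments)
    buf = [None] * (end - base)
    for seq, payload in segments:
        for i, b in enumerate(payload):
            buf[seq - base + i] = b
    return buf

def find_chunks(buf):
    """Scan one stream for Ditto headers and return the (possibly partial)
    chunk bodies as {chunk_id: [byte or None, ...]}."""
    chunks = {}
    for i in range(len(buf) - HDR + 1):
        hdr = buf[i:i + HDR]
        if any(b is None for b in hdr) or bytes(hdr[:len(MAGIC)]) != MAGIC:
            continue
        cid = hdr[len(MAGIC)]
        length = (hdr[len(MAGIC) + 1] << 8) | hdr[len(MAGIC) + 2]
        body = buf[i + HDR:i + HDR + length]
        chunks[cid] = body + [None] * (length - len(body))
    return chunks

def inter_stream_reassemble(streams):
    """Merge partial chunks across streams: a byte lost on one overheard
    hop may have been heard on another hop of the same transfer."""
    merged = {}
    for segments in streams:
        for cid, body in find_chunks(linearize(segments)).items():
            cur = merged.setdefault(cid, [None] * len(body))
            for i, b in enumerate(body[:len(cur)]):
                if cur[i] is None:
                    cur[i] = b
    # only fully overheard chunks count as reconstructed
    return {cid: bytes(body) for cid, body in merged.items()
            if all(b is not None for b in body)}
```

The merge step is what the slides call inter-stream chunk reassembly: because each hop is a separate TCP connection, sequence numbers cannot be compared across streams, so the chunk boundary found via the header is the alignment point instead.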