NPS: A Non-interfering Web Prefetching System
Ravi Kokku, Praveen Yalagandula,
Arun Venkataramani, Mike Dahlin
Laboratory for Advanced Systems Research
Department of Computer Sciences
University of Texas at Austin
Summary of the Talk
Prefetching should be done aggressively, but safely
 Safe: Non-interference with demand requests
 Contributions:
 A self-tuning architecture for web prefetching
• Aggressive when spare resources are abundant
• Safe when resources are scarce
 NPS: A prototype prefetching system
• Immediately deployable
Outline
Prefetch aggressively as well as safely
 Motivation
 Challenges/principles
 NPS system design
 Conclusion
What is Web Prefetching?
 Speculatively fetch data that will be accessed in
the future
 Typical prefetch mechanism [PM96, MC98, CZ01]
[Diagram: Client sends demand requests to the Server; the Server returns responses plus hint lists; the Client then issues prefetch requests and receives prefetch responses]
Why Web Prefetching?
 Benefits [GA93, GS95, PM96, KLM97, CB98, D99, FCL99,
KD99, VYKSD01, …]
 Reduces response times seen by users
 Improves service availability
 Encouraging trends
 Numerous web applications getting deployed
• News, banking, shopping, e-mail…
 Technology is improving rapidly
• Growing capacities and falling prices of disks and networks
Prefetch Aggressively
Why doesn’t everyone prefetch?
 Requires extra resources on servers, networks, and clients
 Interference with demand requests
 Two types of interference
• Self-interference – applications hurt themselves
• Cross-interference – applications hurt others
 Interference at various components
• Servers – Demand requests queued behind prefetch
• Networks – Demand packets queued or dropped
• Clients – Caches polluted by displacing more useful data
Example: Server Interference
 Common load vs. response-time curve
 Constant-rate prefetching reduces server capacity
[Graph: Avg. demand response time (s) vs. demand connection rate (100-800 conns/sec) for demand-only traffic and for constant-rate prefetching with Pfrate=1 and Pfrate=5]
Prefetch Aggressively, BUT SAFELY
Outline
Prefetch aggressively as well as safely
 Motivation
 Challenges/principles
 Self-tuning
 Decoupling prediction from resource management
 End-to-end resource management
 NPS system design
 Conclusion
Goal 1: Self-tuning System
 Proposed solutions use “magic numbers”
 Prefetch thresholds [D99, PM96, VYKSD01, …]
 Rate limiting [MC98, CB98]
 Limitations of manual tuning
 Difficult to determine “good” thresholds
• Good thresholds depend on spare resources
 “Good” threshold varies over time
 Sharp performance penalty when mistuned
 Principle 1: Self-tuning
 Prefetch according to spare resources
 Benefit: Simplifies application design
Goal 2: Separation of Concerns
 Prefetching has two components
 Prediction – Which objects would be beneficial to prefetch?
 Resource management – How many can we actually prefetch?
 Traditional techniques do not differentiate
 Prefetch if prob(access) > 25%
 Prefetch only top 10 important URLs
 Wrong way! We lose the flexibility to adapt
 Principle 2: Decouple prediction from resource management
 Prediction: Application identifies all useful objects
• In decreasing order of importance
 Resource management: Uses Principle 1 (see the sketch below)
• Aggressive when resources are abundant
• Safe when resources are scarce
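A minimal sketch of the decoupling; predictUsefulObjects and currentBudget are hypothetical stand-ins for the application's predictor and for the resource manager of Principle 1, not parts of the real system.

```javascript
// Hypothetical sketch: the predictor ranks everything worth prefetching,
// and a separate resource manager decides how many hints to hand out.
const predictUsefulObjects = page => ["/next.html", "/big-image.gif", "/styles.css"]; // ranked, most important first (stub)
const currentBudget = () => 2;                       // set by the resource manager (stub)

function hintsToServe(pageUrl) {
  const ranked = predictUsefulObjects(pageUrl);      // prediction: all useful objects
  const budget = currentBudget();                    // resource management: how many we can afford
  return ranked.slice(0, budget);                    // aggressive with spare resources, empty with none
}
```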
Goal 3: Deployability
 Ideal resource management vs. deployability
 Servers
• Ideal: OS scheduling of CPU, Memory, Disk…
• Problem: Complexity – N-Tier systems, Databases, …
 Networks
• Ideal: Use differentiated services/ router prioritization
• Problem: every router would need to support it
 Clients
• Ideal: OS scheduling, transparent informed prefetching
• Problem: Millions of deployed browsers
 Principle 3: End-to-end resource management
 Server – External monitoring and control
 Network – TCP-Nice
 Client – Javascript tricks
Outline
Prefetch aggressively as well as safely
 Motivation
 Principles for a prefetching system
 Self-tuning
 Decoupling prediction from resource management
 End-to-end resource management
 NPS prototype design
• Prefetching mechanism
• External monitoring
• TCP-Nice
• Evaluation
 Conclusion
Prefetch Mechanism
[Diagram: Client exchanges demand requests, hint lists, and prefetch requests with a server machine hosting the Demand Server, Prefetch Server, Hint Server, Fileset, and Munger]
1. Munger adds Javascript to html pages
2. Client fetches html page
3. Javascript on html page fetches hint list
4. Javascript on html page prefetches objects (see the sketch below)
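A minimal sketch of steps 3 and 4, assuming the Munger injects something like this into each page; the /hints endpoint, its JSON shape, and the use of fetch are illustrative assumptions, not the prototype's actual Javascript.

```javascript
// Hypothetical snippet along the lines of what the Munger appends to a page.
async function prefetchFromHints(pageUrl) {
  const reply = await fetch("/hints?page=" + encodeURIComponent(pageUrl)); // assumed hint server endpoint
  const hints = await reply.json();          // e.g. { urls: [...] } or { retryAfterMs: 5000 } (assumed shape)
  if (hints.retryAfterMs) {                  // hint server has no spare budget right now
    setTimeout(() => prefetchFromHints(pageUrl), hints.retryAfterMs);
    return;
  }
  for (const url of hints.urls) {            // most important objects first
    await fetch(url);                        // pulls the object into the browser cache, subject to its cache headers
  }
}
// Start only after the demand page itself has finished loading.
window.addEventListener("load", () => prefetchFromHints(location.href));
```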
End-to-end Monitoring and Control
[Diagram: the Client runs "while(1) { getHint(); prefetchHint(); }"; the Hint Server runs "if (budgetLeft) send(hints); else send('return later');"; the Monitor periodically issues GET http://repObj.html to the Demand Server and times the 200 OK reply]
 Principle: Low response times ⇒ server not loaded
 Periodic probing for response times
 Estimation of spare resources (budget) at server – AIMD (sketched below)
 Distribution of budget
• Control the number of clients allowed to prefetch
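A minimal sketch of the monitor's AIMD loop, assuming Node 18+ for fetch; the probe URL, probe period, latency threshold, and AIMD constants are assumptions, only the probe-and-AIMD idea comes from the talk.

```javascript
// Hypothetical monitor loop: probe the demand server, grow the budget
// additively while it looks idle, cut it multiplicatively when it looks loaded.
let budget = 0;                                      // number of clients allowed to prefetch

async function probeDemandServer() {
  const start = Date.now();
  await fetch("http://demand-server/repObj.html");   // representative demand object (hostname made up)
  return Date.now() - start;                         // demand response time in ms
}

setInterval(async () => {
  const latencyMs = await probeDemandServer();
  if (latencyMs < 100) {                             // 100 ms "not loaded" threshold is an assumption
    budget += 1;                                     // additive increase
  } else {
    budget = Math.floor(budget / 2);                 // multiplicative decrease
  }
  // The hint server hands out hint lists only while budget remains,
  // replying "return later" once it is exhausted.
}, 1000);
```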
Monitor Evaluation (1)
[Graph: Avg. demand response time (s) vs. demand connection rate (0-800 conns/sec), comparing No-Prefetching, the Monitor, and manual tuning with Pfrate=1 and Pfrate=5]
 End-to-end monitoring makes prefetching safe
Monitor Evaluation (2)
[Graph: Bandwidth (Mbps) vs. demand connection rate (0-800 conns/sec); curves for No-Prefetching and for demand and prefetch traffic under manually tuned pfrate=1]
 Manual tuning is too damaging at high load
Monitor Evaluation (2)
[Graph: Bandwidth (Mbps) vs. demand connection rate (0-800 conns/sec); adds Demand:Monitor and Prefetch:Monitor curves to the No-Prefetching and pfrate=1 curves above]
 Manual tuning is either too timid or too damaging
 End-to-end monitoring is both aggressive and safe
Network Resource Management
 Demand and prefetch on separate connections
 Why is this required?
 HTTP/1.1 persistent connections
 In-order delivery of TCP
 So prefetch responses can delay demand responses on a shared connection
 How to ensure separation?
 Prefetching on a separate server port (see the sketch below)
 How to use the prefetched objects?
 Javascript tricks – in the paper
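A minimal sketch of the port separation; the hostname is made up, and port 8085 is borrowed from the evaluation setup later in the talk.

```javascript
// Hypothetical illustration: hint URLs are rewritten to a dedicated prefetch
// port so prefetch traffic never shares a persistent HTTP/1.1 connection
// with demand traffic on port 80.
function toPrefetchUrl(url) {
  const u = new URL(url, "http://demand-server/");   // demand server on the default port
  u.port = "8085";                                   // prefetch server port
  return u.toString();
}
// toPrefetchUrl("/images/logo.gif") === "http://demand-server:8085/images/logo.gif"
```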
Network Resource Management
 Prefetch connections use TCP Nice
 TCP Nice
 A mechanism for background transfers
 End-to-end TCP congestion control
 Monitors RTTs and backs off when it detects congestion (sketched below)
 Previous study [OSDI 2002]
• Provably bounds self- and cross-interference
• Utilizes significant spare network capacity
 Server-side deployable
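A rough sketch of the Nice back-off test, applied once per round trip; the real mechanism sits inside the kernel's TCP stack, and the threshold and trigger fractions below are assumed knobs rather than the paper's exact constants.

```javascript
// Rough sketch: if too many recent packets saw RTTs above an "early
// congestion" level, back off multiplicatively; Nice lets the window
// fall below one packet per RTT.
function niceWindow(rttSamples, minRtt, maxRtt, cwnd) {
  const threshold = minRtt + 0.2 * (maxRtt - minRtt);           // 0.2 is an assumed fraction
  const congested = rttSamples.filter(rtt => rtt > threshold).length;
  if (congested > 0.5 * rttSamples.length) {                    // 0.5 is an assumed trigger fraction
    return Math.max(cwnd / 2, 1 / 8);                           // multiplicative back-off, sub-packet window allowed
  }
  return cwnd;                                                  // otherwise behave like ordinary congestion control
}
```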
End-to-end Evaluation
 Measure avg. response times for demand reqs.
 Compare with No-Prefetching and Hand-tuned
 Experimental setup
[Experimental setup: an httperf client replays an IBM server trace over the network under test (cable modem or Abilene) to a server machine running the Demand Server (Apache on port 80), the Prefetch Server (Apache on port 8085), the Hint Server driven by PPM prediction, and the fileset]
Prefetching with Abundant Resources
[Bar chart: Avg. demand response time (ms) over a cable modem and over Abilene for No prefetching, Hand-tuned, and NPS; reported values range from about 12 ms to 37 ms]
 Both Hand-tuned and NPS give benefits
 Note: Hand-tuned uses the best manually chosen setting
Tuning the No-Avoidance Case
[Bar chart: Avg. demand response time (ms) over a cable modem for No prefetching, NPS, and hand-tuned prefetch thresholds from T=0.01 to T=0.5; reported values range from about 12 ms to 24 ms]
 Hand-tuning takes effort
 NPS is self-tuning
Prefetching with Scarce Resources
[Bar chart: Avg. demand response time (ms) with a loaded server and with a loaded network for No prefetching, Hand-tuned, and NPS; reported values range from about 32 ms to 221 ms]
 Hand-tuned increases demand response times by 2-8x
 NPS causes little damage to demand
Conclusions
 Prefetch aggressively, but safely
 Contributions
 A prefetching architecture
• Self-tuning
• Decouples prediction from resource management
• Deployable – few modifications to existing infrastructure
 Benefits
• Substantial improvements with abundant resources
• No damage with scarce resources
 NPS prototype
http://www.cs.utexas.edu/~rkoku/RESEARCH/NPS/
Thanks
Prefetching with Abundant Resources
[Bar chart: Avg. demand response time (ms) over a cable modem, Abilene, and a trans-atlantic link for No prefetching, Hand-tuned, and NPS; reported values range from about 12 ms to 135 ms]
 Both Hand-tuned and NPS give benefits
 Note: Hand-tuned uses the best manually chosen setting
Client Resource Management
 Resources – CPU, memory and disk caches
 Heuristics to control cache pollution
 Limit the space prefetch objects take
 Short expiration time for prefetched objects
 Mechanism to avoid CPU interference
 Start prefetching only after all demand loading is done (sketched below)
• Handles self-interference – the more common case
 What about cross-interference?
• Client modifications might be necessary
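A minimal sketch of these heuristics; the byte cap and example URLs are assumptions, and short expirations would in practice be set via Cache-Control headers on the prefetch server's responses.

```javascript
// Hypothetical client-side heuristics: start only after the demand page has
// loaded, and stop once an assumed per-page byte cap is reached so prefetched
// data cannot crowd much demand data out of the cache.
const PREFETCH_BYTE_CAP = 512 * 1024;            // assumed cap, not a number from the talk
let prefetchedBytes = 0;

async function prefetchWithCap(urls) {
  for (const url of urls) {
    if (prefetchedBytes >= PREFETCH_BYTE_CAP) break;
    try {
      const resp = await fetch(url);
      prefetchedBytes += (await resp.arrayBuffer()).byteLength;
    } catch {
      // ignore failed prefetches; they are purely speculative
    }
  }
}

window.addEventListener("load", () => prefetchWithCap(["/next1.html", "/next2.gif"])); // illustrative URLs
```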