Balancing Interactive Performance
and Budgeted Resources
in Mobile Applications
Brett Higgins
The Most Precious Computational Resource
“The most precious resource in a computer
system is no longer its processor, memory, disk,
or network, but rather human attention.”
– Garlan et al., 2002
User attention vs. energy/data
[Figure: user attention weighed against money ($$$) and a 3 GB monthly data cap]
Balancing these tradeoffs is difficult!
• Mobile applications must:
  • Select the right network for the right task
  • Understand complex interactions between:
    • Type of network technology
    • Bandwidth/latency
    • Timing/scheduling of network activity
    • Performance
    • Energy/data resource usage
  • Cope with uncertainty in predictions
• Opportunity: system support
Spending Principles
• Spend resources effectively.
• Live within your means.
• Use it or lose it.
Thesis Statement
By:
• Providing abstractions to simplify multinetworking,
• Tailoring network use to application needs, and
• Spending resources to purchase reductions in delay,
Mobile systems can help applications
significantly improve user-visible performance
without exhausting limited energy & data resources.
Roadmap
• Introduction
• Application-aware multinetworking
• Intentional Networking
• Purchasing performance with limited resources
• Informed Mobile Prefetching
• Coping with cloudy predictions
• Meatballs
• Conclusion
The Challenge
• Diversity of networks
• Diversity of behavior
  • Email: fetch messages
  • YouTube: upload video
• Match traffic to available networks
Current approaches: two extremes
• All details hidden ("please insert packets") → result: mismatched traffic
• All details exposed → result: hard for applications
Solution: Intentional Networking
• System measures available networks
• Applications describe their traffic
• System matches traffic to networks
Abstraction: Multi-socket
• Multi-socket: virtual connection
• Analogue: task scheduler
• Measures performance of each alternative
• Encapsulates transient network failure
[Figure: client and server connected by a multi-socket that spans multiple physical networks]
Abstraction: Label
• Qualitative description of network traffic
• Interactivity: foreground vs. background
• Analogue: task priority
[Figure: labeled traffic flowing between client and server over the multi-socket]
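To make the pairing concrete, here is a minimal self-contained Python sketch of how these two abstractions might look to an application. The names and API shape are illustrative assumptions, not the actual Intentional Networking interface: the application opens one virtual connection and tags each send with a label, and the system routes foreground traffic to the measured low-latency network and background traffic to the high-bandwidth one.

```python
from enum import Enum

class Label(Enum):
    FOREGROUND = 1   # interactive: user is waiting on this traffic
    BACKGROUND = 2   # deferrable bulk transfer

class MultiSocket:
    """Toy multi-socket: one virtual connection over several real networks."""
    def __init__(self, transports):
        # transports: name -> send function; in a real system these would be
        # per-network sockets whose bandwidth/latency the system measures.
        self.transports = transports
        self.low_latency = "cellular"     # stand-ins for live measurements
        self.high_bandwidth = "wifi"

    def send(self, data, label):
        # Match traffic to networks: interactive data takes the low-latency
        # path, bulk data the high-bandwidth path.
        name = self.low_latency if label is Label.FOREGROUND else self.high_bandwidth
        self.transports[name](data)

# Usage with stub transports:
msock = MultiSocket({"wifi": print, "cellular": print})
msock.send(b"fetch inbox headers", Label.FOREGROUND)
msock.send(b"sync attachments", Label.BACKGROUND)
```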
Evaluation Results: Email
• Vehicular network trace - Ypsilanti, MI
[Figure: on-demand fetch time and background (BG) fetch time, in seconds, for best-bandwidth, best-latency, aggregate, and Intentional Networking strategies. Intentional Networking cuts on-demand fetch time by 7x while adding only 3% to background fetch time.]
Intentional Networking: Summary
• Multinetworking state of the art
• All details hidden: far from optimal
• All details exposed: far too complex
• Solution: application-aware system support
• Small amount of information from application
• Significant reduction in user-visible delay
• Only minimal background throughput overhead
• Can build additional services atop this
Roadmap
• Introduction
• Application-aware multinetworking
• Intentional Networking
• Purchasing performance with limited resources
• Informed Mobile Prefetching
• Coping with cloudy predictions
• Meatballs
• Conclusion
Mobile networks can be slow
• Without prefetching: data fetching is slow → user: angry
• With prefetching: fetch time hidden → user: happy
Mobile prefetching is complex
• Lots of challenges to overcome:
  • How do I balance performance, energy, and cellular data?
  • Should I prefetch now or later?
  • Am I prefetching data that the user actually wants?
  • Does my prefetching interfere with interactive traffic?
  • How do I use cellular networks efficiently?
• But the potential benefits are large!
Who should deal with the complexity?
Users?
Developers?
What apps end up doing
The Data Hog
The Data Miser
Informed Mobile Prefetching
• Prefetching as a system service
• Handles complexity on behalf of users/apps
• Apps specify what and how to prefetch
• System decides when to prefetch
[Figure: the application calls prefetch() on IMP, the system prefetching service]
Informed Mobile Prefetching
• Tackles the challenges of mobile prefetching
  • Balances multiple resources via cost-benefit analysis
  • Estimates future cost, decides whether to defer
  • Tracks accuracy of prefetch hints
  • Keeps prefetching from interfering with interactive traffic
  • Considers batching prefetches on cellular networks
Balancing multiple resources
[Figure: benefit (performance) weighed against costs (battery energy and cellular data: $$$, 3 GB cap)]
Balancing multiple resources
• Performance (user time saved)
• Future demand fetch time
• Network bandwidth/latency
• Battery energy (spend or save)
• Energy spent sending/receiving data
• Network bandwidth/latency
• Wireless radio power models (powertutor.org)
• Cellular data (spend or save)
• Monthly allotment
• Straightforward to track
Weighing benefit and cost
• IMP maintains exchange rates: Joules → seconds, bytes → seconds (see the sketch after this list)
• One value for each resource
• Expresses importance of resource
• Combine costs in common currency
• Meaningful comparison to benefit
• Adjust over time via feedback
• Goal-directed adaptation
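A minimal sketch of the mechanism just described, with assumed names (this is not IMP's actual code): each resource cost is converted to seconds through its current exchange rate, and the prefetch proceeds only if the expected time saved exceeds the combined cost. Goal-directed adaptation corresponds to raising a rate when its budget drains too fast and lowering it when there is surplus.

```python
def should_prefetch(time_saved_s, energy_j, data_bytes,
                    rate_s_per_joule, rate_s_per_byte):
    """Cost-benefit test in a common currency (seconds)."""
    cost_s = energy_j * rate_s_per_joule + data_bytes * rate_s_per_byte
    return time_saved_s > cost_s

# Feedback loop (run periodically): if the budget is depleting faster than
# the billing period elapses, make that resource look more expensive.
def adjust_rate(rate, spend_fraction, period_fraction, step=1.1):
    return rate * step if spend_fraction > period_fraction else rate / step
```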
Users don’t always want what apps ask for
• Some messages may not be read
• Low-priority
• Spam
• Should consider the accuracy of hints
• Don’t require the app to specify it
• Just learn it through the API
• App tells IMP when it uses data
• (or decides not to use the data)
• IMP tracks accuracy over time
Evaluation Results: Email
[Figure: average email fetch time (seconds), energy usage (J), and 3G data usage (MB) for fixed strategies, IMP, and an optimal predictor (100% hits). IMP achieves ~300 ms average fetch time (2-8x better), uses less energy than all other strategies (including WiFi-only), and stays under the 3G data budget marker; only WiFi-only used less 3G data (but…). IMP meets all resource goals.]
IMP: Summary
• Mobile prefetching is a complex decision
• Applications choose simple, suboptimal strategies
• Powerful mechanism: purchase performance w/ resources
• Prefetching as a system service
  • Application provides needed information
  • System manages tradeoffs (spending vs. performance)
  • Significant reduction in user-visible delay
  • Meets specified resource budgets
• What about other performance/resource tradeoffs?
Roadmap
• Introduction
• Application-aware multinetworking
• Purchasing performance with limited resources
• Coping with cloudy predictions
• Motivation
• Uncertainty-aware decision-making (Meatballs)
• Decision methods
• Reevaluation from new information
• Evaluation
• Conclusion
Mobile Apps & Prediction
• Mobile apps rely on predictions of:
• Networks (bandwidth, latency, availability)
• Computation time
• These predictions are often wrong
• Networks are highly variable
• Load changes quickly
• Wrong prediction causes user-visible delay
• But mobile apps treat them as oracles!
Example: code offload
• Response-time distribution: 10 sec with probability 99.9%, 100 sec with probability 0.1%
• Running the offload redundantly drops the likelihood of a 100 sec response time from 0.1% to 0.0001%
Example: code offload
• Same response-time distribution: 10 sec (99.9%), 100 sec (0.1%)
• Elapsed time: 0 sec
  • Expected response time: 10.09 sec
  • Expected response time (redundant): 10.00009 sec
Example: code offload
• Same distribution; elapsed time: 9 sec
  • Expected response time: 1.09 sec
  • Expected response time (redundant): 1.00909 sec
Example: code offload
• Same distribution; elapsed time: 11 sec
  • Expected response time: 89 sec
  • Expected response time (redundant): 10.09 sec
• Key idea: conditional expectation (once 10 sec have passed, the fast outcome is ruled out)
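A short worked check of these numbers (a sketch, not code from the thesis): the expected remaining response time conditions the distribution on the time already elapsed.

```python
# Response time is 10 s with probability 0.999 and 100 s with 0.001.
DIST = [(10.0, 0.999), (100.0, 0.001)]

def expected_remaining(elapsed, dist=DIST):
    # Keep only outcomes still possible given T > elapsed, renormalize,
    # and average the time still left to wait.
    alive = [(t, p) for t, p in dist if t > elapsed]
    total = sum(p for _, p in alive)
    return sum((t - elapsed) * p for t, p in alive) / total

print(expected_remaining(0))    # 10.09
print(expected_remaining(9))    # 1.09
print(expected_remaining(11))   # 89.0 -- the fast outcome is ruled out
```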
Spending for a good cause
• It’s okay to spend extra resources!
• …if we think it will benefit the user.
• Spend resources to cope with uncertainty
• …via redundant operation
• Quantify uncertainty, use it to make decisions
• Benefit (time saved)
• Cost (energy + data)
Meatballs
• A library for uncertainty-aware decision-making
• Application specifies its strategies
• Different means of accomplishing a single task
• Functions to estimate time, energy, cellular data usage
• Application provides predictions, measurements
• Allows library to capture error & quantify uncertainty
• Library helps application choose the best strategy
• Hides complexity of decision mechanism
• Balances cost & benefit of redundancy
Roadmap
• Introduction
• Application-aware multinetworking
• Purchasing performance with limited resources
• Coping with cloudy predictions
• Motivation
• Uncertainty-aware decision-making (Meatballs)
• Decision methods
• Reevaluation from new information
• Evaluation
• Conclusion
Deciding to operate redundantly
• Benefit of redundancy: time savings
• Cost of redundancy: additional resources
• Benefit and cost are expectations
• Consider predictions as distributions, not spot values
• Approaches:
• Empirical error tracking
• Error bounds
• Bayesian estimation
Empirical error tracking
• Compute error upon new measurement
• Weighted sum over joint error distribution
• For multiple networks:
• Time: min across all networks
• Cost: sum across all networks
[Figure: empirical error distributions ε(BP1) and ε(BP2), where error = observed / predicted; sample errors span roughly 0.5 to 1.4]
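A sketch of the computation under stated assumptions (not the library's code): the expected benefit of sending on both networks is a weighted sum over the joint error distribution, taking the min of the per-network times; the cost side would instead sum resource usage across networks.

```python
def redundancy_time_savings(size, bp1, bp2, errors1, errors2):
    """Expected time saved by using both networks vs. network 1 alone.
    errors are (multiplier, weight) samples, error = observed / predicted."""
    e_single = e_both = 0.0
    for e1, w1 in errors1:
        for e2, w2 in errors2:
            t1 = size / (bp1 * e1)       # actual time if net 1's error is e1
            t2 = size / (bp2 * e2)
            w = w1 * w2
            e_single += w * t1
            e_both += w * min(t1, t2)    # the redundant send finishes first
    return e_single - e_both
```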
Error bounds
• Range + p(next value is in range)
• Student’s-t prediction interval
• Calculate bound on time savings of redundancy
[Figure: prediction intervals on each network's bandwidth (Mbps) yield bounds T1, T2 on the time to send 10 Mb. The predictor says "use network 2", but the overlapping intervals bound the maximum time savings available from redundancy.]
Error bounds
• Calculate bound on net gain of redundancy
max(benefit) – min(cost) = max(net gain)
• Use redundancy if max(net gain) > 0
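As a sketch (assumed structure, not the thesis code): given prediction intervals on each network's transfer time and a lower bound on the extra resource cost converted to seconds, the test is a single comparison.

```python
def max_net_gain(chosen, other, min_extra_cost_s):
    """chosen/other: (lo, hi) bounds on transfer time in seconds.
    Max benefit: the chosen network at its slowest vs. either at its fastest."""
    max_savings = chosen[1] - min(chosen[0], other[0])
    return max_savings - min_extra_cost_s

# Use redundancy only if even this optimistic bound is positive:
# if max_net_gain(t2_bounds, t1_bounds, cost_lo) > 0: send_on_both()
```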
Bayesian Estimation
• Basic idea:
  • Given a prior belief about the world,
  • and some new evidence,
  • update our beliefs to account for the evidence,
  • using the likelihood of the evidence
  • (the updated belief is the posterior distribution)
• Via Bayes' Theorem:
  posterior = likelihood × prior / p(evidence)
  (p(evidence) is a normalization factor; it ensures the posterior sums to 1)
Bayesian Estimation
• Example: “will it rain tomorrow?”
• Prior: historical rain frequency
• Evidence: weather forecast (a simple yes/no)
• Posterior: believed rain probability given the forecast
• Likelihood:
  • When it rains, how often has the forecast agreed?
  • When it doesn't rain, how often has the forecast agreed?
• Via Bayes' Theorem:
  posterior = likelihood × prior / p(evidence)
  (p(evidence) is a normalization factor; it ensures the posterior sums to 1)
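A toy numeric version of the rain example, with assumed (illustrative) frequencies:

```python
prior_rain = 0.3           # historical rain frequency
p_yes_given_rain = 0.8     # forecast said "rain" on days it rained
p_yes_given_dry = 0.1      # forecast said "rain" on days it stayed dry

# Evidence: today's forecast says "rain".
p_evidence = p_yes_given_rain * prior_rain + p_yes_given_dry * (1 - prior_rain)
posterior_rain = p_yes_given_rain * prior_rain / p_evidence
print(round(posterior_rain, 3))   # 0.774
```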
Bayesian Estimation
• Applied to Intentional Networking:
• Prior: bandwidth measurements
• Evidence: bandwidth prediction + implied decision
• Posterior: new belief about bandwidths
• Likelihood:
  • When network 1 wins, how often has the prediction agreed?
  • When network 2 wins, how often has the prediction agreed?
• Via Bayes' Theorem:
  posterior = likelihood × prior / p(evidence)
  (p(evidence) is a normalization factor; it ensures the posterior sums to 1)
Decision methods: summary
• Empirical error tracking
• Captures error distribution accurately
• Computationally intensive
• Error bounds
• Computationally cheap
• Prone to overestimating uncertainty
• Bayesian
• Computationally cheap(er than brute-force)
• Appears more reliant on history
Reevaluation: conditional distributions
[Figure: response-time distribution (10 sec: 99.9%, 100 sec: 0.1%) and the decision as a function of elapsed time: one server at 0 sec elapsed, two servers at 10 sec, …]
Evaluation: methodology
• Network trace replay, energy & data measurement
• Same as IMP
• Metric: weighted cost function
• time + c_energy × energy + c_data × data

                                          Low-cost   Mid-cost   High-cost
  c_energy                                0.00001    0.0001     0.001
  Energy per 1 second delay reduction     100 J      10 J       1 J
  Battery life reduction under average
  use (normally 20 hours)                 6 min      36 sec     3.6 sec
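The metric transcribes directly (a sketch; the c_data weights are not shown on this slide):

```python
def weighted_cost(time_s, energy_j, data, c_energy, c_data):
    # Lower is better; the weights price energy and data in seconds.
    return time_s + c_energy * energy_j + c_data * data
```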
Evaluation: IntNW, walking trace
[Figure: weighted cost (normalized) of simple strategies vs. Meatballs under no/low/mid/high energy cost. Annotations: low-resource strategies improve (up to 2x); error bounds leans toward redundancy; Meatballs matches the best strategy (within 24%).]
Evaluation: PocketSphinx, server load
[Figure: weighted cost (normalized) of simple strategies vs. Meatballs for PocketSphinx under varying server load. Annotations: error bounds leans toward redundancy; Meatballs matches the best strategy (within 23%).]
Meatballs: Summary
• Uncertainty in predictions causes user-visible delay
• Impact of uncertainty is significant
• Considering uncertainty can improve performance
• Library can help applications consider uncertainty
  • Small amount of information from the application
  • Library hides the complex decision process
  • Redundant operation mitigates uncertainty's impact
  • Sufficient resources are commonly available
• Spend resources to purchase delay reduction
Conclusion
• Human attention is the most precious resource
• Tradeoffs between performance, energy, and cellular data
• Difficult for applications to get right
• Necessitates system support – proper abstractions
• We provide abstractions for:
• Using multiple networks effectively
• Budgeted resource spending
• Hedging against uncertainty with redundancy
• Overall results
• Improved performance without overspending resources
Future work
• Intentional Networking
• Automatic label inference
• Avoid modifying the server side (generic proxies)
• Predictor uncertainty
• Better handling of idle periods
• Increase uncertainty estimate as measurements age
• Add periodic active measurements?
Abstraction:
Atomicity & Ordering Constraints
• App specifies boundaries, partial ordering
• Receiving end enforces delivery atomicity, order
[Figure: chunks 1-3 delivered to the server; use_data() runs only when a chunk has fully arrived and its ordering constraints are satisfied]
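A self-contained toy model of the receiving end (assumed semantics, not the actual implementation): a chunk is handed to use_data() only after it has fully arrived and all of its declared predecessors have been delivered.

```python
def use_data(chunk):
    print("delivered:", chunk)

class OrderedReceiver:
    def __init__(self):
        self.pending = {}        # chunk_id -> (data, set of dependencies)
        self.delivered = set()

    def on_chunk(self, chunk_id, data, deps=()):
        self.pending[chunk_id] = (data, set(deps))
        self._drain()

    def _drain(self):
        progress = True
        while progress:
            progress = False
            for cid, (data, deps) in list(self.pending.items()):
                if deps <= self.delivered:      # all predecessors delivered
                    use_data(data)
                    self.delivered.add(cid)
                    del self.pending[cid]
                    progress = True

rx = OrderedReceiver()
rx.on_chunk(3, b"chunk 3", deps=(1, 2))   # held until 1 and 2 arrive
rx.on_chunk(1, b"chunk 1")                # delivered immediately
rx.on_chunk(2, b"chunk 2")                # delivered, then chunk 3 follows
```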
Evaluation Results: Vehicular Sensing
• Trace #2: Ypsilanti, MI
[Figure: urgent update time (seconds) and background (BG) throughput (KB/sec) for best-bandwidth, best-latency, aggregate, and Intentional Networking. Intentional Networking cuts urgent update time by 48% at a 6% throughput cost.]
Evaluation Results: Vehicular Sensing
• Trace #2: Ypsilanti, MI
[Figure: the same urgent update time and BG throughput comparison, adding a two-process Intentional Networking configuration.]
IMP Interface
• Data fetching is app-specific
[Figure: the application calls prefetch() on IMP]
• App passes a fetch callback
  • Receives a handle for the data
• App provides needed info:
  • When a prefetch is desired
  • When a demand fetch occurs
  • When data is no longer wanted
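A self-contained toy of this interface (method names are assumptions, not IMP's published API): the app hands IMP a fetch callback, gets back a handle, and reports demand fetches and cancellations so the system can track hint accuracy.

```python
class ToyIMP:
    def __init__(self):
        self.hints = self.hits = 0
        self.cache = {}

    def prefetch(self, fetch_cb, key):
        self.hints += 1
        if self._worth_it():             # cost-benefit decision goes here
            self.cache[key] = fetch_cb(key)
        return key                       # handle for the data

    def demand_fetch(self, fetch_cb, handle):
        self.hits += 1                   # the hint proved accurate
        return self.cache.pop(handle, None) or fetch_cb(handle)

    def cancel(self, handle):
        self.cache.pop(handle, None)     # the hint proved inaccurate

    def accuracy(self):
        return self.hits / self.hints if self.hints else 1.0

    def _worth_it(self):
        return True                      # stub for the real decision

imp = ToyIMP()
h = imp.prefetch(lambda k: f"body of {k}", "msg-42")
print(imp.demand_fetch(lambda k: f"body of {k}", h), imp.accuracy())
```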
How to estimate the future?
• Current cost: straightforward
• Future cost: trickier
• Not just predicting network conditions
• When will the user request the data?
• Simplify: use average network conditions
• Average bandwidth/latency of each network
• Average availability of WiFi
• Future benefit
• Same method as future cost
Balancing multiple resources
• Cost/benefit analysis
• How much value can the resources buy?
• Used in disk prefetching (TIP; SOSP ‘95)
• Prefetch benefit: user time saved
• Prefetch cost: energy and cellular data spent
• Prefetch if benefit > cost
• How to meaningfully weigh benefit and cost?
Error bounds
• How to calculate bounds?
• Chebyshev’s inequality?
• Simple, used before; provides loose bounds
• Bounds turned out to be too loose
• Confidence interval?
• Bounds true value of underlying distribution
• Quantities (e.g., bandwidth) neither known nor fixed
• Prediction interval
• Range containing the next value (with some probability)
• Student’s-t distribution, alpha=0.05
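A minimal sketch of the prediction-interval computation (requires SciPy; the alpha = 0.05 choice follows the slide):

```python
import math
from statistics import mean, stdev
from scipy.stats import t

def prediction_interval(samples, alpha=0.05):
    """Range expected to contain the NEXT observation with prob. 1 - alpha."""
    n = len(samples)
    m, s = mean(samples), stdev(samples)
    half = t.ppf(1 - alpha / 2, n - 1) * s * math.sqrt(1 + 1 / n)
    return m - half, m + half

lo, hi = prediction_interval([5.2, 4.8, 6.1, 5.5, 4.9])  # bandwidth, Mbps
```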
How to decide which network(s) to use?
• First, compute the expected time on the "best" network:
  T1 = Σ s/B1 × posterior(B1 | BP1 > BP2)
• Next, compute the expected time using multiple networks:
  T_all = Σ min(s/B1, s/B2) × posterior(B1, B2 | BP1 > BP2)
• Finally, do the same with resource costs.
• If cost < benefit, use redundancy.
Here is where each component comes from:

T_all = Σ min(s/B1, s/B2) × posterior(B1, B2 | BP1 > BP2)
      = Σ s/max(B1, B2) × [L(BP1 > BP2 | B1, B2) × prior(B1, B2)] / p(BP1 > BP2)

[Figure: prior(B1) and prior(B2) are histograms of measured bandwidths (e.g., B1 = 1, 8, 9; B2 = 4, 5, 6); the likelihood L(BP1 > BP2 | B1, B2) is tabulated per bandwidth pair; p(BP1 > BP2) and p(BP1 < BP2) normalize.]
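A toy numeric version of this computation over discrete priors (illustrative numbers; the likelihood table records how often the predictor said BP1 > BP2 when the true bandwidths were (b1, b2)):

```python
def t_all(size, prior1, prior2, lik):
    # p(BP1 > BP2): normalizer over all (b1, b2) pairs
    z = sum(lik[(b1, b2)] * p1 * p2
            for b1, p1 in prior1 for b2, p2 in prior2)
    return sum((size / max(b1, b2)) * lik[(b1, b2)] * p1 * p2 / z
               for b1, p1 in prior1 for b2, p2 in prior2)

prior1 = [(1, 0.2), (8, 0.5), (9, 0.3)]       # bandwidth histogram (Mbps)
prior2 = [(4, 0.3), (5, 0.4), (6, 0.3)]
lik = {(b1, b2): (0.9 if b1 > b2 else 0.1)    # toy likelihood table
       for b1, _ in prior1 for b2, _ in prior2}
print(t_all(10.0, prior1, prior2, lik))       # expected seconds for 10 Mb
```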
The weight of history
• A measurement’s age decreases its usefulness
• Uncertainty may have increased or decreased
• Meatballs gives greater weight to new observations
• Brute-force, error bounds: decay weight on old samples
• Bayesian: decay weight in prior distributions
Reevaluating redundancy decisions
• Confidence in a decision changes with new information
• Suppose we decided not to transmit redundantly
• Lack of response is new information
• Consider conditional distributions of predictions
• Meatballs provides interface for:
• Deferred re-evaluation
• “Tipping point” calculation: when will decision change?
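A sketch of the tipping-point search under assumed semantics (not the library's interface): scan forward in elapsed time for the first moment the expected net gain of redundancy turns positive.

```python
def tipping_point(net_gain_at, horizon_s, step_s=0.1):
    """net_gain_at(t): expected net gain of going redundant at elapsed t."""
    t = 0.0
    while t <= horizon_s:
        if net_gain_at(t) > 0:
            return t        # schedule re-evaluation for this moment
        t += step_s
    return None             # the decision never flips within the horizon
```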
Applications
• Intentional Networking
• Bandwidth, latency of multiple networks
• Energy consumption of network usage
• Reevaluation (network delay/change)
• Speech recognition (PocketSphinx)
  • Local/remote execution time
  • Local CPU energy model
  • For remote execution: network effects
  • Reevaluation (network delay/change)
Evaluation: IntNW, driving trace
[Figure: weighted cost (normalized) of simple strategies vs. Meatballs on the driving trace, under no/low/mid/high energy cost. Annotation: not much benefit from using WiFi.]
Evaluation: PocketSphinx, walking trace
[Figure: weighted cost (normalized) of simple strategies vs. Meatballs for PocketSphinx on the walking trace. Annotations: the benefit of redundancy persists more (>2x, 2335%); Meatballs matches the best strategy.]