Battling demons and vampires on your lunch break…
Switchboard: A Matchmaking System for Multiplayer Mobile Games
Justin Manweiler, Sharad Agarwal, Ming Zhang, Romit Roy Choudhury, Paramvir Bahl
ACM MobiSys 2011
Breakthrough of Mobile Gaming
● iPhone App Store: 350K applications
 Games: 20% of apps, 80% of downloads
 47% of time on mobile apps is spent gaming
● Windows Phone 7: top 10+ apps are games
John Carmack (Wolfenstein 3D, Doom, Quake)…
“multiplayer in some form is where the
breakthrough, platform-defining things are
going to happen in the mobile space”
Mobile Games: Now and Tomorrow
Increasing interactivity:
 Single-player (mobile today)
 Turn-based multiplayer (mobile today)
 Fast-action multiplayer (mobile soon)
Key Challenge
Bandwidth is fine: 250 kbps is enough to host a 16-player Halo 3 game
Delay bounds are much tighter:

Game Type              Latency Threshold
First-person, Racing   ≈ 100 ms
Sports, Role-playing   ≈ 500 ms
Real-time Strategy     ≈ 1000 ms

Challenge: find groups of peers that can play well together
The Matchmaking Problem
[Figure: clients in a matchmaking pool; pairwise connection latencies must stay within the game's end-to-end latency threshold. Match to satisfy total delay bounds.]
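The matchmaking constraint above can be stated in code. Below is a minimal sketch (not Switchboard's published algorithm; the thresholds come from the table two slides back, and all other names are hypothetical) of the feasibility test a matchmaker must answer: does every pair in a candidate group fit within the genre's delay bound?

```python
# Minimal sketch (not Switchboard's actual algorithm): check whether a
# candidate group of clients satisfies a genre's end-to-end delay bound,
# using predicted pairwise RTTs. All names here are hypothetical.
from itertools import combinations

# Latency thresholds from the slide, in milliseconds.
THRESHOLD_MS = {
    "fps_racing": 100,
    "sports_rpg": 500,
    "rts": 1000,
}

def group_is_playable(group, predicted_rtt_ms, genre):
    """True if every client pair's predicted RTT fits the genre's bound.

    group            -- iterable of client IDs
    predicted_rtt_ms -- dict mapping frozenset({a, b}) to a predicted RTT
    genre            -- key into THRESHOLD_MS
    """
    bound = THRESHOLD_MS[genre]
    return all(
        predicted_rtt_ms[frozenset(pair)] <= bound
        for pair in combinations(group, 2)
    )

# Example: three clients with pairwise predictions.
rtts = {
    frozenset({"a", "b"}): 90,
    frozenset({"a", "c"}): 450,
    frozenset({"b", "c"}): 300,
}
print(group_is_playable(["a", "b", "c"], rtts, "sports_rpg"))  # True
print(group_is_playable(["a", "b", "c"], rtts, "fps_racing"))  # False
```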
Instability in a Static Environment
[Plot: median latency (ms) over time of day (9:36 AM to 12:00 noon) for a stationary phone; the median swings roughly between 150 and 310 ms. Takeaway: due to instability, must consider the latency distribution.]
End-to-end Latency over 3G
[CDF plot: RTT (ms, 0 to 600) for AT&T-to-AT&T traffic sent directly (peer-to-peer) and relayed via Bing, Duke, and UW; vertical markers show the genre thresholds (first-person shooter, racing, sports, real-time strategy). Takeaway: peer-to-peer reduces latency and is cost-effective.]
The Matchmaking Problem
● Targeting 3G: play anywhere
● Link performance measurement / prediction
 Latency, not bandwidth: interactivity is key
 At game timescales
● Grouping
 P2P scalability
Requirements for 3G Matchmaking
● Latency estimation has to be accurate
 Or games will be unplayable / fail
● Grouping has to be fast
 Or impatient users will give up before a game is initiated
● Matchmaking has to be scalable
 For game servers
 For the cellular network
 For user mobile devices
State of the Art
● Latency estimation
 Pyxida, stable network coordinates; Ledlie et al. [NSDI 07]
 Vivaldi, distributed latency est.; Dabek et al. [SIGCOMM 04]
Latency estimation and matchmaking for wired networks are established
● Game matchmaking
 Htrae, game matchmaking in wired networks; Agarwal et al. [SIGCOMM 09]
● General 3G network performance
 3GTest w/ 30K users; Huang et al. [MobiSys 2010]
 Interactions with applications; Liu et al. [MobiCom 08]
 Empirical 3G performance; Tan et al. [InfoCom 07]
 TCP/IP over 3G; Chan & Ramjee [MobiCom 02]
A “Black Box” for Game Developers
[Diagram: two phones reach the Internet through the cellular data path (RNC → SGSN → GGSN). The “Black Box” turns crowdsourced link performance measurement (over time) into end-to-end performance estimates for game developers.]
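One way to read the diagram: the black box hides measurement behind a simple query. A hypothetical sketch of such an interface (the class, method, and store shape are assumptions, not Switchboard's published API):

```python
# Hypothetical sketch of the black box a game developer might query
# (class, method, and store shape are assumptions, not Switchboard's
# published API): answers come from crowdsourced measurements, not
# on-demand probes between the two phones.
class LatencyBlackBox:
    def __init__(self, pair_rtt_samples):
        # pair_rtt_samples: callable (client_a, client_b) -> list of RTTs (ms)
        self.pair_rtt_samples = pair_rtt_samples

    def predicted_pair_rtt_ms(self, client_a, client_b, pct=90):
        """Predict the pair's RTT at the given percentile from samples."""
        samples = sorted(self.pair_rtt_samples(client_a, client_b))
        k = min(len(samples) - 1, int(pct / 100 * len(samples)))
        return samples[k]

# A matchmaker can then test, e.g.:
#   box.predicted_pair_rtt_ms("alice", "bob") <= 100  # first-person bound
```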
Crowdsourcing 3G over Time
[Diagram: latency similarity by time; measurements from nearby points in time can be shared.]
Crowdsourcing 3G over Space
[Diagram: latency similarity by distance; measurements from nearby phones can be shared.]
Can we crowdsource HSDPA 3G?
● How does 3G performance vary over time?
 How quickly do old measurements “expire”?
 How many measurements are needed to characterize the latency distribution?
 … (our goal is not to identify the exact causes)
● How does 3G performance vary over space?
 Signal strength? Mobility speed?
 Phones under the same cell tower?
 Same part of the cellular network?
 …
Details of the parameter space are left to the paper
Methodology
● Platform
 Windows Mobile and Android phones
 HSDPA 3G on AT&T and T-Mobile
● Carefully deployed phones
 Continuous measurements
 Simultaneous, synchronized traces at multiple sites
● Several locations
 Princeville, Hawaii
 Redmond and Seattle, Washington
 Durham and Raleigh, North Carolina
 Los Angeles, California
Stability over Time (in a Static Environment)
Redmond, AT&T, 15m Intervals
[CDF plot: RTT (ms, 120 to 240) over successive 15-minute intervals for two phones in Redmond on AT&T; the black line is phone 1, all other lines phone 2. Takeaways: similar latencies under the same tower; performance drifts over longer time periods; live characterization is necessary and feasible.]
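Since distributions look stable at roughly 15-minute granularity (see also the backup slide on interval size), a natural building block is to bin samples into 15-minute windows. A minimal sketch, assuming (timestamp, RTT) samples; the helper names are mine:

```python
# Minimal sketch, assuming (timestamp_s, rtt_ms) samples and helper
# names of my own: bin RTTs into the 15-minute windows the measurements
# above suggest, then summarize each window's distribution.
from collections import defaultdict

INTERVAL_S = 15 * 60

def bin_rtts(samples):
    """Group (timestamp_s, rtt_ms) samples into 15-minute bins."""
    bins = defaultdict(list)
    for ts, rtt in samples:
        bins[int(ts // INTERVAL_S)].append(rtt)
    return bins

def percentile(values, p):
    """Nearest-rank percentile, dependency-free."""
    ordered = sorted(values)
    k = max(0, min(len(ordered) - 1, round(p / 100 * len(ordered)) - 1))
    return ordered[k]

# e.g. per-bin 90th percentiles:
#   {b: percentile(v, 90) for b, v in bin_rtts(samples).items()}
```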
Stability over Space (at the same time)
[CDF plot: RTT difference at the 90th percentile (ms, 0 to 200) between phones at six towers (S-home, Latona, U Village, Herkimer, Northgate, 1st Ave). Takeaways: similarity at the same cell tower; substantial variation overall; divergence between nearby towers.]
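The x-axis metric above can be read as a distance between two phones' latency distributions. A minimal sketch of one plausible computation (an assumption about the exact formula), reusing percentile() from the previous sketch:

```python
# Minimal sketch of the similarity metric in these plots (the exact
# computation is an assumption): compare two phones by the gap between
# their 90th-percentile RTTs. Reuses percentile() from the sketch above.
def rtt_p90_difference(rtts_phone1, rtts_phone2):
    """Absolute difference of the two phones' 90th-percentile RTTs (ms)."""
    return abs(percentile(rtts_phone1, 90) - percentile(rtts_phone2, 90))

# Small values mean one phone's measurements can stand in for the
# other's, e.g. when both camp on the same cell tower.
```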
Switchboard: crowdsourced matchmaking
[Architecture diagram: games talk to the Switchboard cloud service on MSFT Azure, which comprises a Grouping Agent and a Network Testing Service; inside the testing service, a Measurement Controller gathers latency data and a Latency Estimator serves predictions to the Grouping Agent.]
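To make the Grouping Agent concrete, here is a deliberately simple greedy sketch (an illustration only, not the grouping algorithm from the paper), reusing group_is_playable() from the earlier sketch:

```python
# Deliberately simple greedy sketch of a Grouping Agent (illustration
# only, not the paper's algorithm): place an arriving client into the
# first group it is mutually playable with, else open a new group.
def assign_to_group(client, groups, predicted_rtt_ms, genre, max_size=8):
    """Add `client` to a compatible group in `groups` (list of lists)."""
    for group in groups:
        if len(group) < max_size and group_is_playable(
                group + [client], predicted_rtt_ms, genre):
            group.append(client)
            return group
    groups.append([client])
    return groups[-1]
```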
Scalability through Reuse…
● Across Time
 Stable distribution over 15-minute time intervals
● Across Space
 Phones can share probing tasks equitably for each tower (see the sketch after this list)
● Across Games
 Shared cloud service for any interactive game
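The per-tower reuse above can be pictured as splitting a tower's probing budget among the phones camped on it; a minimal sketch under that assumption (not the paper's actual scheduler):

```python
# Minimal sketch of per-tower probe sharing (the mechanism's details are
# an assumption, not the paper's scheduler): phones under one tower
# split that tower's probing budget, so per-phone overhead shrinks as
# participation grows.
def probes_per_phone(probes_needed_per_interval, num_phones_on_tower):
    """Equitable share of a tower's probing work, rounded up."""
    n = max(1, num_phones_on_tower)
    return -(-probes_needed_per_interval // n)  # ceiling division

# e.g. 60 probes per 15-minute interval shared by 12 phones -> 5 each
print(probes_per_phone(60, 12))  # 5
```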
Client Matchmaking Delay
[CDF plot: time until placed in a group (s, 0 to 700), at client arrival rates of 1/s and 10/s (per-client curves and totals). Takeaway: Switchboard clients benefit from deployment at scale.]
Conclusion
● Latency: key challenge for fast-action multiplayer
● 3G latency variability makes prediction hard
● Crowdsourcing enables scalable 3G latency estimation
● Switchboard: crowdsourced matchmaking for 3G
Thank you!
QUESTIONS?
cs.duke.edu/~jgm
jgm@cs.duke.edu
Stability over Time (in a Static Environment)
[Plot: mean sequential difference (ms, 0 to 35) between successive intervals vs. interval size (minutes, 0 to 250), at the 25th, 50th, 90th, and 95th percentiles. Takeaway: at timescales longer or shorter than 15 minutes, successive interval pairs have less similarity.]
How Many Measurements Are Required?
[CDF plot: RTT difference at the 90th percentile (ms, 0 to 50) when an interval is characterized from samples taken every 5, 15, 45, 60, or 90 seconds. Takeaway: a reasonably small number of measurements is required per 15-minute interval.]
End-to-End Performance
[CDF plot: RTT (ms, 0 to 400) for Durham FRH to San Antonio FRH, Durham phone to FRH, San Antonio phone to FRH, and phone to phone. Takeaway: end-to-end performance is predictable as the sum of edge and Internet latencies.]
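A worked sketch of that composition (the segment values below are illustrative, not measurements from the talk):

```python
# Worked sketch of the slide's takeaway (segment values below are
# illustrative, not measurements from the talk): compose independently
# measured segments instead of probing each phone pair directly.
def predicted_e2e_rtt_ms(edge_a_ms, internet_ms, edge_b_ms):
    """Phone-to-phone RTT ~= phone A edge + Internet core + phone B edge."""
    return edge_a_ms + internet_ms + edge_b_ms

print(predicted_e2e_rtt_ms(120, 50, 130))  # 300 ms predicted end-to-end
```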
ICMP Probes by Client Arrival Rate
[CDF plot: ICMP probes per client (0 to 60), at arrival rates of 1, 2, 5, and 10 clients/s. Takeaway: more clients means less probing overhead for each.]
Scalability by Client Arrival Rate
[Plot: client-to-server traffic (Kbps, 0 to 80) over 60 minutes, at arrival rates of 1, 2, 5, and 10 clients/s. Takeaway: after the initial warming period, later clients reuse the effort of earlier clients.]
Group Size by Client Arrival Rate
[CDF plot: size of client group (2 to 11), at arrival rates of 1, 2, 5, and 10 clients/s. Takeaway: the availability of high-quality matches increases with utilization.]
Timescale Statistical Analysis
[CDF plot: Kolmogorov–Smirnov test p-values between successive intervals (log scale, 1e-5 to 1; sig = 90, α = 0.1), for interval sizes of 2, 15, 60, 120, and 240 minutes; inter-interval instability where p < α.]
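The test behind this plot is the standard two-sample KS test; a minimal sketch using SciPy's stock routine (the wrapper function is mine, not code from the paper):

```python
# Minimal sketch of the statistic behind this plot, using SciPy's stock
# two-sample KS test (the wrapper below is mine, not code from the
# paper): flag inter-interval instability where p < alpha.
from scipy.stats import ks_2samp

ALPHA = 0.1

def intervals_look_stable(rtts_a, rtts_b, alpha=ALPHA):
    """True if we cannot reject that two intervals share a distribution."""
    _, p_value = ks_2samp(rtts_a, rtts_b)
    return p_value >= alpha
```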