Implementation/Infrastructure Support for Collaborative Applications
Prasun Dewan
1
Infrastructure vs. Implementation Techniques
• Implementation techniques are interesting when general
– Apply to a class of applications
• The coding of such a general implementation technique is an infrastructure.
• Sometimes implementation techniques apply to a very narrow application set
– Operation transformation for text editors.
• These may not qualify as infrastructures
• Will study implementation techniques applying to both small and large application sets.
2
Collaborative Application
[Diagram: a collaborative application whose per-user instances are coupled to each other.]
3
Infrastructure-Supported Sharing
[Diagram: clients coupled to each other through a sharing infrastructure.]
4
Systems: Infrastructures
• NLS (Engelbart ’68): Post Xerox
• Colab (Stefik ’85): Xerox
• VConf (Lantz ‘86): Stanford
• Rapport (Ahuja ’89): Bell Labs
• XTV (Abdel-Wahab, Jeffay & Feit ‘91): UNC/ODU
• Rendezvous (Patterson ‘90): Bellcore
• Suite (Dewan & Choudhary ‘92): Purdue
• TeamWorkstation (Ishii ’92): Japan
• Weasel (Graham ’95): Queens
• Habanero (Chabert et al ‘98): U. Illinois
• JCE (Abdel-Wahab ‘99): ODU
• Disciple (Marsic ‘01): Rutgers
5
Systems: Products
• VNC (Li, Stafford-Fraser, Hopper ’01): ATT Research
• NetMeeting: Microsoft
• Groove
• Advanced Reality
• LiveMeeting (pay-by-minute service model): Microsoft
• Webex (service model)
6
Issues/Dimensions
• Architecture
• Concurrency control
• Session management
• Access control
• Firewall traversal
• Interoperability
• Composability
[Diagram: each dimension maps collaborative systems (Colab. Sys. 1, Colab. Sys. 2, …) to alternative implementations (Implementation 1 … Implementation 3).]
7
Infrastructure-Supported Sharing
[Diagram: clients coupled to each other through a sharing infrastructure.]
8
Architecture?
• Infrastructure/client (logical) components
• Component (physical) distribution
9
Shared Window Logical Architecture
[Diagram: a single application layered over per-user windows coupled near-WYSIWIS.]
10
Centralized Physical Architecture
• XTV (‘88), VConf (‘87), Rapport (‘88), NetMeeting
[Diagram: one X client; pseudo servers relay its input/output between it and each user’s X server.]
11
Replicated Physical Architecture
• Rapport, VConf
[Diagram: one X client per user; pseudo servers broadcast input among the replicas, each driving its user’s X server.]
12
Relaxing WYSIWIS?
[Diagram: the application again layered over near-WYSIWIS coupled per-user windows.]
13
Model-View Logical Architecture
[Diagram: a shared model with a view and window per user.]
14
Centralized Physical Model
• Rendezvous (‘90, ’95)
[Diagram: a central model; each user has a local view and window.]
15
Replicated Physical Model
• Sync ’96, Groove
[Diagram: a model replica per user, kept consistent by the infrastructure; each replica drives a local view and window.]
16
Comparing the Architectures
[Diagram, side by side: centralized windows (one app, I/O relayed through pseudo servers), replicated windows (an app per user, input broadcast), centralized model (one model, per-user views and windows), and replicated model (a model per user).]
Architecture design space?
17
Architectural Design Space
• Model/view are application-specific
• Text editor model
– Character string
– Insertion point
– Font
– Color
• Need to capture these differences in the architecture
18
Single-User Layered Interaction
[Diagram: a PC as a stack of layers 0..N; abstraction increases toward layer 0, and layer N drives the physical devices. The stack divides into communication layers and I/O layers.]
19
Single-User Interaction
[Diagram: a single PC running I/O layers 0..N, abstraction increasing toward layer 0.]
20
Example I/O Layers
[Diagram: model above widget above window above framebuffer, abstraction increasing toward the model.]
21
Layered Interaction with an Object
• Interactor = abstraction representation + syntactic sugar
[Diagram: the abstraction {“John Smith”, 2234.57} rendered by successively more concrete interactors; each layer’s interactor serves as the next layer’s abstraction.]
22
Single-User Interaction
[Diagram: a single PC running I/O layers 0..N, abstraction increasing toward layer 0.]
23
Identifying the Shared Layer
• Layers 0..S form the program component; layers S+1..N form the user-interface component.
• Layers above the shared layer S will also be shared; lower layers may diverge.
24
Replicating UI Component
[Diagram: the user-interface component, layers S+1..N, replicated on each user’s PC.]
25
Centralized Architecture
[Diagram: a single copy of layers 0..S serves replicated layers S+1..N on each user’s PC.]
26
Replicated (P2P) Architecture
[Diagram: all layers 0..N replicated on each user’s PC.]
27
Implementing Centralized Architecture
[Diagram: the central layers 0..S sit behind a master input relayer and output broadcaster; a slave I/O relayer on each PC connects the local layers S+1..N to the master.]
28
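The master/slave relaying above can be sketched in a few lines. This is an illustrative toy, not any system's actual code: all class and method names are invented, and the "program" is a one-line shared editor.

```python
# Sketch of a centralized architecture: one central program instance; slave
# I/O relayers forward local input to the master and render broadcast output.
class MasterSite:
    """Runs layers 0..S: master input relayer plus output broadcaster."""
    def __init__(self, program):
        self.program = program      # the single, central program instance
        self.slaves = []            # one slave I/O relayer per user

    def input_from(self, slave, event):
        # Master input relayer: every user's input feeds the one program...
        output = self.program(event)
        # ...and the output broadcaster sends the result to every slave.
        for s in self.slaves:
            s.render(output)

class SlaveRelayer:
    """Runs layers S+1..N; owns no program state of its own."""
    def __init__(self, master):
        self.master, self.screen = master, []
        master.slaves.append(self)

    def user_input(self, event):
        self.master.input_from(self, event)   # relay input to the master

    def render(self, output):
        self.screen.append(output)            # render broadcast output

# A toy "program": append each typed character to a shared document.
doc = []
def editor(ch):
    doc.append(ch)
    return "".join(doc)

master = MasterSite(editor)
u1, u2 = SlaveRelayer(master), SlaveRelayer(master)
u1.user_input("a")
u2.user_input("b")
# All users see identical output because all input funnels through one program.
assert u1.screen == u2.screen == ["a", "ab"]
```

Display consistency falls out for free here: the central program serializes concurrent input, which is why later slides note that centralized architectures get it automatically.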
Replicated Architecture
[Diagram: each PC runs its own layers 0..S and S+1..N; an input broadcaster at each replica sends local input events to every replica’s layers 0..S.]
29
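The replicated counterpart can be sketched similarly. Again purely illustrative (invented names); note that convergence here relies on inputs arriving sequentially; concurrent input needs the merging machinery discussed later.

```python
# Sketch of a replicated architecture: a program replica per site; an input
# broadcaster delivers every local input event to all replicas.
class ReplicaSite:
    def __init__(self):
        self.doc = []        # this site's replica of the layer 0..S state
        self.peers = []      # all sites in the session, including self

    def connect(self, sites):
        self.peers = sites

    def user_input(self, ch):
        # Input broadcaster: deliver the event to every replica, including our own.
        for site in self.peers:
            site.apply(ch)

    def apply(self, ch):
        self.doc.append(ch)  # each replica executes the input locally

sites = [ReplicaSite() for _ in range(3)]
for s in sites:
    s.connect(sites)
sites[0].user_input("a")
sites[2].user_input("b")
# With sequential (non-concurrent) input, all replicas converge.
assert all(s.doc == ["a", "b"] for s in sites)
```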
Classifying Previous Work
• XTV
• NetMeeting App Sharing
• NetMeeting Whiteboard
• Shared VNC
• Habanero
• JCE
• Suite
• Groove
• LiveMeeting
• Webex
Classified along two dimensions: shared layer, and replicated vs. centralized.
30
Classifying Previous Work
• Shared layer
– X Windows (XTV)
Shared
– Microsoft Windows (NetMeeting Layer
App Sharing)
– VNC Framebuffer (Shared VNC)
Rep vs.
– AWT Widget (Habanero, JCE)
Central
– Model (Suite, Groove, LiveMeeting)
• Replicated vs. centralized
– Centralized (XTV, Shared VNC, NetMeeting App. Sharing,
Suite, PlaceWare)
– Replicated (VConf, Habanero, JCE, Groove, NetMeeting
Whiteboard)
31
Service vs. Server vs. Local Communication
• Local: user site sends data
– VNC, XTV, VConf, NetMeeting Regular
• Server: organization’s site, connected by LAN to the user site, sends data
– NetMeeting Enterprise, Sync
• Service: external sites, connected by WAN to the user site, send data
– LiveMeeting, Webex
32
Push vs. Pull of Data
• Consumer pulls new data by sending request for it
in response to
– notification
• MVC
– receipt of previous data
• VNC
• Producer pushes data for consumers
– As soon as data are produced
• NetMeeting, Real-time sync
– When user requests
• Asynchronous Sync
33
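The push vs. pull dimension above can be made concrete with a toy producer and two consumers; names are invented for this sketch. It also shows the VNC-style property that a pull consumer implicitly coalesces: it only ever fetches the latest state.

```python
# Push: producer sends data to consumers as soon as it is produced.
# Pull: consumer requests new data (e.g. after rendering the previous data).
class Producer:
    def __init__(self):
        self.version, self.data = 0, None
        self.push_consumers = []

    def update(self, data):
        self.version += 1
        self.data = data
        for c in self.push_consumers:   # push as soon as data are produced
            c.receive(data)

    def pull(self, since_version):      # answer a consumer's pull request
        if self.version > since_version:
            return self.version, self.data
        return since_version, None

class PushConsumer:
    def __init__(self, producer):
        self.seen = []
        producer.push_consumers.append(self)
    def receive(self, data):
        self.seen.append(data)

class PullConsumer:
    def __init__(self, producer):
        self.producer, self.version, self.seen = producer, 0, []
    def poll(self):                     # e.g. on receipt of previous data
        self.version, data = self.producer.pull(self.version)
        if data is not None:
            self.seen.append(data)

p = Producer()
pushed, pulled = PushConsumer(p), PullConsumer(p)
p.update("v1")
p.update("v2")
pulled.poll()
assert pushed.seen == ["v1", "v2"]   # every push was delivered...
assert pulled.seen == ["v2"]         # ...but the pull consumer skipped v1
```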
Dimensions
• Shared layer level
• Replicated vs. centralized
• Local vs. server vs. service broadcast
• Push vs. pull of data
• …
34
Evaluating Design Space Points
• Coupling flexibility
• Automation
• Ease of learning
• Reuse
• Interoperability
• Firewall traversal
• Concurrency and correctness
• Security
• Performance
– Bandwidth usage
– Computation load
– Scaling
– Join/leave time
– Response time
• Feedback to actor (local, remote)
• Feedthrough to observers (local, remote)
– Task completion time
35
Sharing Low-Level vs. High-Level Layer
• Sharing a layer nearer the
data
– Greater view independence
– Bandwidth usage less
• For large data sometimes
visualization is compact.
– Finer-grained access and
concurrency control
• Shared window system
support floor control.
– Replication problems better
solved with more app
semantics
• Sharing a layer nearer the
physical device
– Have referential
transparency
• Green object no meaning
if objects colored
differently
– Higher chance layer is
standard.
• Sync vs. VNC
• promotes reusability and
interoperability
• More on this later.
•Sharing flexibility limited with fixed layer
sharing
•Need to support multiple layers.
36
Centralized vs. Replicated: Dist. Comp. vs. CSCW
• Distributed computing:
– More reads (output) favor replicated
– More writes (input) favor centralized
• CSCW:
– Input immediately delivered without distributed commitment.
– Floor control or operation transformation for correctness
37
Bandwidth Usage in Replicated vs. Centralized
• Remote I/O bandwidth only an issue when network bandwidth < 4 Mbps (Nieh et al ‘2000)
– DSL link = 1 Mbps
• Input sent in the replicated case is less than the output sent in the centralized case
– Input is produced by humans
– Output is produced by faster computers
38
Feedback in Replicated vs. Centralized
• Replicated: Computation time on local computer
• Centralized
– Local user
• Computation time on local computer
– Remote user
• Computation time on hosting computer plus roundtrip time
– In server/ service model an extra LAN/ WAN link
39
Influence of Communication Cost
• Window-sharing remote feedback
– Noticeable in NetMeeting.
– Intolerable in PlaceWare’s service model.
• PowerPoint presentation feedback time
– Not noticeable in the Groove & Webex replicated model.
– Noticeable in NetMeeting for the remote user.
• Not typically noticeable in Sync with a shared model
• Depends on the amount of communication with the remote site
– Which depends on the shared layer
40
Case Study: Colab. Video Viewing
41
Case Study: Collaborative Video Viewing (Cadiz, Balachandran et al. 2000)
• Two users collaboratively executing media player commands
• Centralized NetMeeting sharing added unacceptable video latency
• A replicated architecture was later created using T.120
• Part of the problem: the centralized system shared video through the window layer
42
Influence of Computation Cost
• Computation-intensive apps
– Replicated case: local computer’s computation power matters.
– Central case: central computer’s computation power matters.
– Central architecture can give better feedback, especially with a fast network [Chung and Dewan ’99]
– Asymmetric computation power => asymmetric architecture (server/desktop, desktop/PDA)
43
Feedthrough
• Time to show results at remote site.
• Replicated:
– One-way input communication time to remote site.
– Computation time on local replica
• Centralized:
– One-way input communication time to central host
– Computation time on central host
– One-way output communication time to remote site.
• Server/service model add latency
• Less significant than remote feedback:
– Active user not affected.
• But must synchronize with audio
– “can you see it now?”
44
Task completion time
• Depends on
– Local feedback
• Assuming hosting user inputs
– Remote feedback
• Assuming non hosting user inputs
• Not the case in presentations, where centralized favored
– Feedthrough
• If interdependencies in task
• Not the case in brainstorming, where replicated favored
– Sequence of user inputs
• Chung and Dewan ’01
– Used Mitre log of floor exchanges and assumed interdependent tasks
– Task completion time usually smaller in replicated case
– Asymmetric centralized architecture good when computing power
asymmetric (or task responsibility asymmetric?).
45
Scalability and Load
• Centralized architecture with powerful server more suitable.
• Need to separate application execution from distribution.
– PlaceWare
– Webex
• Related to firewall traversal. More later.
• Many collaborations do not require scaling
– 2-3 collaborators in joint editing
– 8-10 collaborators in CAD tools (NetMeeting Usage Data)
– Most calls are not conference calls!
• Adapt between replicated and centralized based on #
collaborators
– PresenceAR goals
46
Display Consistency
• Not an issue with floor control systems.
• Other systems must ensure that concurrent input appears to all users to be processed in the same (logical) order.
• Automatically supported in a central architecture.
• Not so in replicated architectures, as local input is processed without synchronizing with other replicas.
47
Synchronization Problems
[Diagram: users 1 and 2 concurrently input “Insert d,1” and “Insert e,2” on “abc”. Each replica applies its local operation first and the remote operation unchanged, so the replicas diverge: user 1 gets “deabc”, user 2 gets “daebc”.]
48
Peer-to-Peer Merger
• Ellis and Gibbs ‘89, Groove, …
[Diagram: users 1 and 2 again concurrently input “Insert d,1” and “Insert e,2”. A merger at each replica transforms the incoming remote operation against the concurrent local one; at user 1, “Insert e,2” becomes “Insert e,3”, so both replicas converge to “daebc”.]
49
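The transformation in the diagram above can be sketched as a tiny operation-transformation function for concurrent inserts. This is a toy that handles only the insert/insert case from the slides (ties at equal positions would additionally need a site-id tiebreak, which is omitted here); it is not the full Ellis and Gibbs algorithm.

```python
# Transform a remote insert against a concurrent local insert so that both
# replicas converge. Positions are 1-based, as in the slides.
def transform(op, against):
    """Shift op's position past a concurrent insert at an earlier position."""
    pos, ch = op
    a_pos, _ = against
    if a_pos <= pos:
        pos += 1
    return (pos, ch)

def apply(doc, op):
    pos, ch = op
    return doc[:pos - 1] + ch + doc[pos - 1:]

local, remote = (1, "d"), (2, "e")   # user 1's and user 2's concurrent ops

# User 1 applies its own op, then the transformed remote op: e,2 becomes e,3.
site1 = apply(apply("abc", local), transform(remote, local))
# User 2 applies its own op, then the transformed local op (unchanged here).
site2 = apply(apply("abc", remote), transform(local, remote))

assert site1 == site2 == "daebc"     # both replicas converge
```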
Local and Remote Merger
• Curtis et al ’95, LiveMeeting, Vidot ‘02
• Feedthrough via an extra WAN link
• Can recreate state through the central site
[Diagram: a central merger transforms operations before relaying them to replicas, and a local merger at each replica transforms incoming operations; both replicas converge to “daebc”.]
50
Centralized Merger
• Munson & Dewan ‘94
• Asynchronous and synchronous
– Blocking remote merge
• Understands atomic change sets
• Flexible remote merge semantics
– Modify or delete can win
[Diagram: a single merger at the central program transforms concurrent operations, turning “Insert e,2” into “Insert e,3”, before they are applied; both users converge to “daebc”.]
51
Merging vs. Concurrency Control
• Real-time merging is called optimistic concurrency control
• A misnomer, because it does not support serializability.
• More on this later.
52
Reading Centralized Resources
[Diagram: both replicas issue read “f”; all reads go to the one site holding file f, a central bottleneck, though the read-file operation executes infrequently.]
53
Writing Centralized Resources
[Diagram: the broadcast input makes both replicas issue write “f”, “c” to the central file f, so the write executes twice and f becomes “abcc”.]
54
Replicating Resources
• Groove Shared Space & Webex replication
• Pre-fetching
• Incremental replication (diff-based) in Groove
[Diagram: each site holds its own replica of file f; the broadcast write executes at each replica, leaving both copies in the same state.]
55
Non-Idempotent Operations
[Diagram: the broadcast input “mail joe, msg” executes at both replicas, so the non-idempotent mail operation is performed twice.]
56
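One way around the duplicated mail above is to route external effects through a single proxy that suppresses duplicates by operation id. This toy proxy is an invented illustration in the spirit of the "externality proxy" idea cited on the next slide, not Groove's actual Bot mechanism.

```python
# Route non-idempotent external effects through one deduplicating proxy, so a
# broadcast operation executed by every replica has its effect exactly once.
class MailProxy:
    def __init__(self):
        self.sent_ids = set()   # ids of operations already performed
        self.outbox = []        # the real external effects

    def mail(self, op_id, to, msg):
        if op_id in self.sent_ids:
            return              # another replica already performed this op
        self.sent_ids.add(op_id)
        self.outbox.append((to, msg))

proxy = MailProxy()
# Both replicas execute the same broadcast input, carrying the same op id.
proxy.mail("op-42", "joe", "hi")
proxy.mail("op-42", "joe", "hi")
assert proxy.outbox == [("joe", "hi")]   # joe gets the message only once
```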
Separate Program Component
• Groove Bot: a dedicated machine for external access
• Only some users can invite a Bot into a shared space
• Only some users can invoke Bot functionality
• Bot data can be given only to some users
• Similar idea of a special “externality proxy” in Begole 01
[Diagram: non-idempotent operations (mail joe, msg) go to a single separate program component (Program’), while idempotent edits (insert d,1) are still broadcast to every replica.]
57
Two-Level Program Component
• Dewan & Choudhary ’92, Sync, LiveMeeting
• Extra communication hop and centralization
• Easier to implement
[Diagram: each replica forwards input to a central two-level program (Program++), which broadcasts idempotent operations (insert d,1) back to the replicas and executes non-idempotent ones (mail joe, msg) once.]
58
Classifying Previous Work
• Shared layer
– X Windows (XTV)
– Microsoft Windows (NetMeeting App Sharing)
– VNC framebuffer (Shared VNC)
– AWT widget (Habanero, JCE)
– Model (Suite, Groove, PlaceWare)
• Replicated vs. centralized
– Centralized (XTV, Shared VNC, NetMeeting App Sharing, Suite, PlaceWare)
– Replicated (VConf, Habanero, JCE, Groove, NetMeeting Whiteboard)
59
Layer-Specific
• So far, layer-independent discussion.
• Now concrete layers to ground the discussion:
• Screen sharing
• Window sharing
• Toolkit sharing
• Model sharing
60
Centralized Window Architecture
[Diagram: user 1 presses “a”; the local I/O relayer sends the keystroke to the window client via the central output broadcaster & I/O relayer, which broadcasts the resulting draw request to every user’s window server.]
61
UI Coupling in Centralized Architecture
• Existing approach
– T.120, PlaceWare
• UI coupling need not be supported
– XTV
[Diagram: user 3 moves a window; the move is relayed through the central output broadcaster & I/O relayer and applied to the corresponding window at every site.]
62
Distributed Architecture for UI Coupling
• Need a multicast server at each LAN
• Can be supported by T.120
[Diagram: user 1’s window move is multicast directly by its I/O relayer to the other sites’ relayers, bypassing the central site.]
63
Two Replication Alternatives
[Diagram: two configurations of replicated layers S and S-1 at each site.]
• Replicate layer S by
– S-1 sending input events to all S instances
– S sending events directly to all peers
• Direct communication allows partial sharing (e.g. windows)
• Harder to implement automatically by the infrastructure
64
Semantic Issue
• Should window positions be coupled?
• Coupling leads to window wars (Stefik et al ’85)
• Can uncouple windows
– But then users cannot refer to the “upper left” shared window
• Compromise
– Create a virtual desktop for the physical desktop of a particular user
65
UI Coupling and Virtual Desktop
66
Raw Input with Virtual Desktop
[Diagram: the central output broadcaster, I/O relayer & VD knows about the virtual desktop; remote users’ raw input (key press at virtual-desktop coordinates x’, y’) is mapped by per-site VD & I/O relayers between local windows and virtual-desktop coordinates.]
67
Translation without Virtual Desktop
[Diagram: the central output broadcaster, I/O relayer & translator maps between the window client’s window ids and coordinates and each site’s local windows.]
68
Coupled Expose Events: NetMeeting
69
Coupled Exposed Regions
• T.120 (virtual desktop)
[Diagram: user 3 brings a window to the front; the expose event is relayed to the window client, and the resulting draw is broadcast to every site’s window server.]
70
Coupled Expose Events: PlaceWare
71
Uncoupled Expose Events
• XTV (no virtual desktop)
• The expose event is not broadcast, so remote computers do not blacken the region
• Potentially stale data
[Diagram: only user 3’s expose event reaches the window client; the resulting draw is still sent to all window servers.]
72
Uncoupled Expose Events
• A centralized collaboration-transparent app draws to the areas of the last user who sent an expose event.
– It may be sent only local expose events
• If it redraws the entire window anyway, everyone is coupled.
• If it draws only the exposed areas:
– Send the draw request only to the inputting user
– Works as long as unexposed but visible regions are not changing.
– Assumes a draw request can be associated with an expose event.
• To support this accurately, the system needs to send the app the union of the exposed regions received from multiple users
73
Window-based Coupling
• Couplable properties
– Size
– Contents
– Positions
– Stacking order
– Exposed regions
• In a shared window system some must be coupled and others may be.
• Mandatory
– Window sizes
– Window contents
• Optional
– Window positions
– Window stacking order
– Window exposed regions
• Optional coupling can be done with or without a virtual desktop
– Remote and local windows could mix, rather than have remote windows embedded in a virtual desktop window.
– Can lead to “window wars” (Stefik et al ’87)
74
Example of Minimal Window Coupling
75
Replicated Window Architecture
[Diagram: user 1 presses “a”; each input distributor broadcasts the keystroke to every replica’s program, and each program draws into its local window.]
76
Replicated Window Architecture with UI Coupling
[Diagram: user 3’s window move is broadcast by the input broadcasters and applied to every replica’s UI.]
77
Replicated Window Architecture with Expose Coupling
[Diagram: user 3 raises a window; the expose event is broadcast by the input distributors, and each replica’s program redraws the exposed window locally.]
78
Replicated Window System
• Only centralized systems have been implemented commercially
– NetMeeting
– PlaceWare
– Webex
• Replicated can offer more efficiency and pass through firewalls that limit large traffic
• Must be done carefully to avoid correctness problems
• Harder but possible at the window layer
– Chung and Dewan ’01
– Assumes floor control, as centralized systems do
– Also called intelligent app sharing
79
Screen Sharing
• Sharing the screen shares the window system (and all applications running on top of it)
– Cannot share the windows of a subset of apps
– Shares complete computer state
– The lowest layer gives the coarsest sharing granularity.
80
Sharing the (VNC) Framebuffer Layer
81
VNC Centralized Frame Buffer Sharing
[Diagram: the central site runs the window client, window server, and framebuffer; the output broadcaster & I/O relayer sends pixmap rectangles (frame diffs) to each user’s framebuffer and relays their key and mouse events back.]
82
Replicated Screen Sharing?
• Replication hard, if not impossible
– Each computer would run a framebuffer server and share input
– Requires replication of the entire computer state
• Either all computers are identical and have received the same input since they were bought
• Or, at the start of the sharing session, one computer’s entire environment is downloaded
• Hence centralization with a virtual desktop
83
Sharing Pixmaps vs. Drawing Operations
• Sharing pixmaps
– Potentially larger size
– Obtaining pixmap changes is difficult
• Do framebuffer diffs
• Put hooks into the window system
• Do own translation
– Single output operation
– Standard operation
– No context needed for interpretation
– Multiple operations can be coalesced into a single pixmap
• Per-user coalescing and compression
• Based on network congestion and computation power of the user
– Pixmap can be compressed
• Sharing drawing operations
– Smaller size
– Obtaining drawing operations is easy
• Create a proxy that traps them
– Many output operations
– Non-standard operations
– Fonts, colormaps etc. need to be replicated
• Reliable protocol needed
• Possibly non-standard operations for distributing state
• Session initiation takes longer
– Compression but not coalescing possible
84
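The "do framebuffer diffs" option above can be sketched minimally: compare two frames and compute the bounding rectangle of changed pixels, so only that region (plus its position) need be sent. Purely illustrative; real systems (e.g. VNC) track multiple rectangles and richer encodings.

```python
# Framebuffer diff sketch: find the dirty rectangle between two frames.
def dirty_rect(old, new):
    """Return (x, y, w, h) bounding all changed pixels, or None if identical."""
    changed = [(x, y)
               for y in range(len(new))
               for x in range(len(new[0]))
               if old[y][x] != new[y][x]]
    if not changed:
        return None                       # nothing to send this round
    xs = [x for x, _ in changed]
    ys = [y for _, y in changed]
    return (min(xs), min(ys), max(xs) - min(xs) + 1, max(ys) - min(ys) + 1)

old = [[0, 0, 0, 0],
       [0, 0, 0, 0],
       [0, 0, 0, 0]]
new = [[0, 0, 0, 0],
       [0, 1, 1, 0],
       [0, 0, 1, 0]]
assert dirty_rect(old, new) == (1, 1, 2, 2)   # a 2x2 update region at (1, 1)
assert dirty_rect(new, new) is None           # unchanged frame: send nothing
```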
T.120 Mixed Model
• Send either drawing operations or pixmaps.
• Pixmaps are sent when
– The remote site does not support the operation
– Multiple graphics operations need to be combined into a single pixmap because of network congestion or computation overload
• Feedthrough and fidelity costs of pixmaps paid only when required
• More complex: mechanisms and policies for conversion
85
Pixmap Compression
• Combine pixmap updates to overlapping regions into one update.
– In VNC, diffs of the framebuffer are done.
– In T.120, rectangles are computed from updates.
• When the data already exist at the destination, send the x, y of the source (VNC and T.120)
– Scrolling and moving windows
– A function of pixmap cache size
• Diffs with previous rows of the pixmap (T.120)
• Single color with pixmap subrectangles (VNC)
– Background with foreground shapes
• JPEG for still data, MPEG for moving data
• A larger number of operations conflicts with interoperability.
• Reduces statelessness
– Efficiency gain vs. loss
86
T.120 Drawing Operation Compression
• Identify operands of previous operations (within some history) rather than sending the new value (T.120)
– E.g., the graphics context is often repeated
• Both kinds of compression are useless when bandwidth is abundant
– And can unduly increase latency.
87
T.120 Pointer Coalescing
• Multiple input pointer updates combined into one
• Multiple output pointer updates combined into one.
• Reduced user experience
• Bandwidth usage of pointer updates is small.
• Reduces jitter in variable-latency situations.
– If events are time-stamped
• Consistent with not sending incremental movements and resizing of shapes in whiteboards.
88
Flow Control Algorithms
• T.120 push-based approach
– Sender pushes data to a group of receivers
– Computes the end-to-end rate of the slowest receiver by looking at the application queue
– Works with overlays (firewalls)
– Adapts compression and coalescing based on this
– Very slow computers leave the collaboration.
• VNC pull-based rate
– Each client pulls data at its consumption rate
– Gets diffs since the last pull, with no intermediate points
– Per-client diffs must be maintained
– Data might be sent along the same path multiple times
– Could replicate updates at all LANs (federations) [Chung 01]
89
Experimental Data
• Pull-based vs. Push-based flow control
• Sharing pixmaps vs. drawing operations
• Replicated vs. centralized architecture
90
Remote Feedback Experiments
• Nieh et al, 2000: Remote
single-user access experiments.
Window Client
Master I/O Distributor
– VNC
– RDP (T. 120 based)
• Measured
Slave I/O Distributor
Win. Server
Win. Server
User 1
– Latency (Remote feedback time)
– Data transferred
• Give idea of performance seen
by remote user in centralized
architecture, comparing
– Sharing of pixmap vs.drawing
operations
– Pull-based vs. no flow control
91
High Bandwidth Experiments
• Letter A
– Latency
• VNC (Linux): 60 ms
• RDP (Win2K, T.120-based): 200 ms
– Data transferred
• VNC: 0.4 KB
• RDP: 0.3 KB
• Previewers send text as bitmaps (Hanrahan)
• Red box fill
– Latency
• VNC (Linux): 100 ms
• RDP (Win2K, T.120-based): 220 ms
– Data transferred
• VNC: 1.2 KB
• RDP: 0.5 KB
• Compression reduces data but increases latency
92
Web Page Experiments
• Time to execute a web page script
– Load 54*2 pages (text and bitmaps)
– Scroll down 200 pixels
– Common parts: blue left column, white background, PC Magazine logo
• Load time
– 4-100 Mbps: < 50 seconds
– 100 Mbps
• RDP: 35 s
• VNC: 24 s
– 128 Kbps
• RDP: 297 s
• VNC: 25 s
• Data transferred
– 100 Mbps
• Web browser: 2 MB
• RDP: 12 MB
• VNC: 4 MB
– 128 Kbps
• RDP: 12 MB
• VNC: 1 MB
• Data loss reduces load time
93
Animation Experiments
• 98 KB Macromedia Flash, 315 550x400 frames
• FPS
– 100 Mbps: RDP 18, VNC 15
– 512 Kbps: RDP 8, VNC 15
– 128 Kbps: RDP 2, VNC 16
• Data transferred
– 100 Mbps: RDP 3 MB, VNC 2.5 MB
– 512 Kbps: RDP 2 MB, VNC 1.2 MB
– 128 Kbps: RDP 2 MB, VNC 0.3 MB
• 18 fps acceptable, < 8 fps intolerable
• Data loss increases fps
• LAN speed required for tolerable animations
94
Cyclic Animation Experiments
• Wong and Seltzer 1999, RDP Win NT
• Animated 468x60 pixel GIF banner: 0.01 Mbps
• Animated scrolling news ticker: 0.01 Mbps
• Bezier screen saver (10 bezier curves repeated): 0.1 Mbps
• GIF banner and scrolling news ticker simultaneously: 1.60 Mbps
• Client-side cache of pixmaps
– Cache not big enough to accommodate both animations
– LRU policy not ideal for cyclic animations
• 10 Mbps can accommodate only 5 users
• Load put by other UI operations?
95
Network Loads of UI Ops
• Wong and Seltzer 1999, RDP Win NT
• Typing
– A 75 wpm word typist generated 6.26 Kbps
• Mousing
– Random, continuous: 2 Kbps
– Usefulness of mouse filtering in T.120?
• Menu navigation
– Depth-first selection from the Windows Start menu: 1.17 Kbps
– Alt-right-arrow in Word: 39.82 Kbps
– Office 97 with animation: 48.88 Kbps
• Scrolling
– Word document, PgDn key held: 60 Kbps
96
Relative Occurrence of Operations
• Danskin & Hanrahan ’94, X
• Two 2D drawing programs, a PostScript previewer, the x11perf benchmark, and 5 grad students doing daily work
• Most output responses are small
– ~100 bytes
– TCP/IP adds 50% overhead
• Bytes used
1. Images (53-byte average size; B/W bitmap rectangles)
2. Geometry (half: clearing letter rectangles)
3. Text
4. Window enter and leave
5. Mouse, font, window movement, etc. events negligible
• Startup has lots of overhead, ~20 s
• Grad students vs. real people?
97
User Classes vs. Load & Bandwidth Usage
• Terminal services study
• Knowledge worker
– Makes own work
– Marketing, authoring
– Excel, Outlook, IE, Word
– Keeps apps open all the time
• Structured task worker
– Claims processing, accounts payable
– Outlook, Word
– Uses each app for less time, closing and opening apps
• Data entry worker
– Transcription, typists, order entry
– SQL, forms
• Simulation scripts run to measure how many of each class can be supported before a 10% degradation in server response
– 2x Pentium III Xeon 450 MHz
– 40 structured task workers
– 70 knowledge workers
– 320 data entry workers
• Network utilization
– Structured task: 1950 bps
– Knowledge worker: 1200 bps
– Data entry: 495 bps
• In a central architecture, perhaps a separate multicaster
• Encryption has little effect
98
Regular vs. Bursty Traffic
• Droms and Dyksen ’90, X traffic
• Regular
– 8-hour systems programmer usage
• 236 bps, 1.58 packets per second
– Compares well with network file system traffic
• Bursts
– 40,000 bps, 100 pps
– Individual apps
• twm and xwd: > 100,000 bps, 100 pps
• xdvi: 60,000 bps, 90 pps
– Comparable to animation loads
• Bandwidth requirements as much as a remote file system
99
Bandwidth in Replicated vs. Centralized
• Input in the replicated case is less data than output in the centralized case
– Several mouse events can be discarded
– Output can be buffered.
• X input vs. output (Ahuja ’90)
– Unbuffered: 6 times as many messages sent in centralized
– Buffered: 3.6 times as many messages sent
– Average input and output message size: 25 bytes
• RDP: each keystroke message is 116 bytes
• Letter a, box fill, text scroll: < 1 KB
• Bitmap load: 100 KB
100
Generic Shared Layers Considered
• Framebuffer
• Window
101
Shared Widgets
• Layer above window is Toolkit
• Abstractions offered
– Text
– Sliders
– Other “Widgets”
102
Sharing the (Swing) Toolkit Layer
• Different window sizes
• Different looks and feel
• Independent scrolling
103
Window Divergence
• Independent scrolling
• Multiuser scrollbar
• Semantic telepointer
104
Shared Toolkit
• Unlike a window system, a toolkit is not a network layer
• So it is more difficult to intercept I/O
• Input is easier to intercept by subscribing to events; hence popular replicated implementations done for Java AWT & Swing
– Abdel-Wahab et al 1994 (JCE), Chabert et al 1998 (NCSA’s Habanero), Begole 01
– GlassPane can be used in Swing
• A frame can be associated with a glass pane whose transparent property is set to true
• Mouse and keyboard events are sent to the glass pane
• Centralized done for Java Swing by intercepting output and input (Chung ’02)
– Modified the JComponent constructor to turn the debug option on
– Graphics object wrapped in a DebugGraphics object
– DebugGraphics class changed to intercept actions
– Cannot modify Graphics, as it is an abstract class subclassed by platform-dependent classes
105
Shared Toolkit
• Commercial shared toolkits are not widely available.
• An intermediate point between model and window sharing.
• Like model sharing
– Independent window sizes and scrolling
– Concurrent editing of different widgets
– Merging of concurrent changes to a replicated text widget
• Like window sharing
– No new programming model/abstractions
– Existing programs
106
Replicated Widgets
[Diagram: user 1’s “Insert w,d,1” is broadcast by the input distributors; each site’s toolkit replica applies it, so both replicated text widgets go from “abc” to “adbc”.]
107
Sharing the Model Layer
• The same model can be bound to different
widgets!
• Not possible with toolkit sharing
108
Sharing the Model Layer
[Diagram: the layer stack — model, toolkit, window, framebuffer — with abstraction increasing toward the model; the model is the program component, the rest the user-interface component.]
109
Sharing the Model Layer
[Diagram: as above, with a view and controller beneath the shared model; issue: the cost of accessing the remote model.]
110
Sharing the Model Layer
[Diagram: as above; optimization: send the changed model state in the notification.]
111
Sharing the Model Layer
[Diagram: as above; issue: no standard protocol between the model and the view/controller.]
112
Centralized Architecture
[Diagram: a central program with an output broadcaster & I/O relayer serving per-user UIs through local I/O relayers; the broadcaster and relayers cannot be standard.]
113
Replicated Architecture
[Diagram: a program replica and input broadcaster per user; the input broadcaster cannot be standard.]
114
Model Collaboration Approaches
• Communication facilities of varying
abstractions for manual implementation.
• Define Standard I/O for MVC
• Replicated types
• Mix these abstractions
115
Unstructured Channel Approach
• T.120 and other multicast approaches
– Used for data sharing in the whiteboard
• Provide byte-stream-based IPC primitives
• Add multicast to the session capability
• The programmer uses these to create relayers and broadcasters
116
RPC
• Communicate PL types rather than unstructured byte streams
– Synchronous or asynchronous
• Use RPC
– Many Java-based colab platforms use RMI
117
M-RPC
• Provide multicast RPC (Greenberg and Marwood ’92, Dewan and Choudhary ’92) to a subset of the sites participating in a session:
– processes of a programmer-defined group of users
– processes of all users in the session
– processes of users other than the current inputter
– the current inputter
– all processes of a specific user
– a specific process
118
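The multicast scoping above can be sketched with a toy session dispatcher. Everything here is invented for illustration (the scope names, the Session/Whiteboard classes); it mirrors the GroupKit idiom on the next slide of drawing an idea blue locally and red at the others.

```python
# Toy multicast RPC: dispatch a call to a chosen subset of session processes.
class Session:
    def __init__(self):
        self.processes = {}      # process name -> process object

    def join(self, name, proc):
        self.processes[name] = proc

    def mrpc(self, scope, caller, method, *args):
        if scope == "all":                     # all processes in the session
            targets = list(self.processes.values())
        elif scope == "others":                # all but the current inputter
            targets = [p for n, p in self.processes.items() if n != caller]
        elif scope == "inputter":              # only the current inputter
            targets = [self.processes[caller]]
        else:                                  # a specific named process
            targets = [self.processes[scope]]
        for p in targets:
            getattr(p, method)(*args)          # invoke the remote procedure

class Whiteboard:
    def __init__(self):
        self.ideas = []
    def insert_idea(self, colour, idea):
        self.ideas.append((colour, idea))

s = Session()
procs = {n: Whiteboard() for n in ("u1", "u2", "u3")}
for n, p in procs.items():
    s.join(n, p)

s.mrpc("inputter", "u1", "insert_idea", "blue", "reuse")   # locally blue
s.mrpc("others", "u1", "insert_idea", "red", "reuse")      # red elsewhere
assert procs["u1"].ideas == [("blue", "reuse")]
assert procs["u2"].ideas == procs["u3"].ideas == [("red", "reuse")]
```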
GroupKit Example
proc insertIdea idea {
    insertColouredIdea blue $idea
    gk_toOthers "insertColouredIdea red $idea"
}
119
Model Collaboration Approaches
• Communication facilities of varying abstractions for manual implementation.
• Define standard I/O for MVC
• Replicated types
• Mix these abstractions
120
Sharing the Model Layer
[Diagram: the layer stack again; approach: define a standard protocol between the model (program component) and the view/controller (user-interface component).]
121
Sharing the Model Layer
[Diagram: as above, with the controller folded into the view; the standard protocol connects model and view.]
122
Standard Model-View Protocol
Model
• Can be in terms of model
objects or view elements.
Displayed
element • View elements are varied
– Bar charts, Pie charts
• Model elements can be
defined by standard types
• Single-user I/O model
View
– Output: Model sends its
displayed elements to view and
updates to them.
– Input: View sends input
updates to displayed model
elements
• Dewan & Choudhary ‘90
123
IM Model
/* dmc Editable String, IM_History */
typedef struct { unsigned num; struct String *message_arr; } IM_History;

IM_History im_history;
String message;

Load () {
    /* Create a view of the element named "IM History", whose type is
       "IM_History" and whose value is at address &im_history */
    Dm_Submit (&im_history, "IM History", "IM_History");
    Dm_Submit (&message, "Message", "String");
    /* Whenever "Message" is changed by the user, call updateMessage() */
    Dm_Callback ("Message", &updateMessage);
    /* Show (a la map) the views of "IM History" and "Message" */
    Dm_Engage ("IM History");
    Dm_Engage ("Message");
}

updateMessage (String variable, String new_message) {
    im_history.message_arr[im_history.num++] = new_message;
    Dm_Insert ("IM History", im_history.num, new_message);
}
124
Multiuser Model-View Protocol
• Multi-user I/O model
– Output Broadcast: Output
messages broadcast to all
views.
– Input relay: Multiple views
send input messages to
model.
– Input coupling: Input
messages can be sent to
other views also
Model
• Dewan & Choudhary ’91
View
View
125
IM Model
/* dmc Editable String, IM_History */
typedef struct { unsigned num; struct String *message_arr; } IM_History;

IM_History im_history;
String message;

Load () {
    Dm_Submit (&im_history, "IM History", "IM_History");
    Dm_Submit (&message, "Message", "String");
    Dm_Callback ("Message", &updateMessage);
    Dm_Engage ("IM History");
    Dm_Engage ("Message");
}

/* Called by any user's view */
updateMessage (String variable, String new_message) {
    im_history.message_arr[im_history.num++] = new_message;
    /* Dm_Insert relays the insert to all views */
    Dm_Insert ("IM History", im_history.num, new_message);
}
126
Replicated Objects in Central
Architecture
Model
View
• Distributed view needs to create a local replica of the displayed object.
• Can build
replication into
types
View
127
Replicating Popular Types for Central
and Replicated Architectures
Model
View
View
Model
Model
View
View
• Create replicated versions of selected popular types.
• Changes in a type instance are automatically made in all of its replicas (in views or models)
– No need for explicit I/O
• Can select which values in a layer are replicated
• Architectures
– replicated architecture (Greenberg and Marwood ’92, Groove)
– semi-centralized (Munson & Dewan ’94, PlaceWare)
128
Example Replicated Types
• Popular primitive types: String, int, boolean …
(Munson & Dewan ’94, PlaceWare, Groove)
• Records of simple types (Munson & Dewan ’94,
Groove)
• Dynamic sequences (Munson & Dewan ’94,
Groove, PlaceWare)
• Hashtables (Greenberg & Marwood ’92, Munson
& Dewan ’94, Groove)
• Combinations of these types/constructors (Munson
& Dewan ’94, PlaceWare, Groove)
129
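A replicated sequence of the kind listed above might look like the following sketch. The class name and wiring (`ReplicatedSequence`, `connect`) are hypothetical, not the actual API of Munson & Dewan '94 or Groove; the point is only that every local mutation is applied to the local copy and re-executed at peer replicas, so the programmer never does explicit I/O.

```java
import java.util.*;

// Sketch of a replicated dynamic sequence (hypothetical API): local
// mutations are applied locally and re-executed at every peer replica,
// so replicas converge without explicit I/O by the programmer.
public class ReplicatedSequence<E> {
    private final List<E> elems = new ArrayList<>();
    private final List<ReplicatedSequence<E>> peers = new ArrayList<>();

    // Symmetrically link two replicas.
    public void connect(ReplicatedSequence<E> peer) {
        peers.add(peer);
        peer.peers.add(this);
    }

    // Local call: update this replica, then broadcast to peers.
    public void insert(int i, E e) {
        applyInsert(i, e);
        for (ReplicatedSequence<E> p : peers) p.applyInsert(i, e);
    }

    // Remote entry point: update only the local copy (re-broadcasting
    // here would loop forever).
    private void applyInsert(int i, E e) { elems.add(i, e); }

    public E get(int i) { return elems.get(i); }
    public int size() { return elems.size(); }
}
```

A real implementation would add locks or merging (next slide) to handle concurrent conflicting inserts; this sketch assumes one mutation at a time.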
Kinds of Distributed Objects
• By reference (Java and .NET)
– reference sent to remote site
– remote method invocations result in calls at the local site
• By value (Java and .NET)
– deep copy of object sent
– remote method invocations result in calls at the remote site
– copies diverge
• Replicated objects
– deep copy of object sent
– remote method invocations result in both local and remote calls
– either locks or merging used to detect/fix conflicts
130
Alternative model sharing
approaches
1. Stream-based communication
2. Regular RPC
3. Multicast RPC
4. Replicated objects (/generic model-view protocol)
131
Replicated Objects vs.
Communication Facilities
• Higher abstraction
– No notion of other sites
– Just make change
• Cannot use existing types directly
– E.g. in Munson & Dewan ’94, ReplicatedSequence
• Architecture flexibility
– PlaceWare bound to central architecture
– Replicas in client and server of different types, e.g. VectorClient &
VectorServer
• Abstraction flexibility
– Set of types whose replication supported by infrastructure automatically
– Programmer-defined types not automatically supported
• Sharing flexibility
– Who and when coupled burnt into shared value
• Use for new apps
132
Replicated Objects vs.
Communication Facilities
• PlaceWare has much richer set than WebEx
– Ability to include Polling as a slide in a
PowerPoint presentation
– Seating arrangement
• Not as useful for converting existing apps.
– Need to convert standard types to replicated
types
– Repartitioning to separate shared and unshared
models
133
Stream based vs. Others
• Lowest-level
– Serialize and deserialize objects
– Multiplex and demultiplex operation invocations into
and from stream
• Stream-based communication (wire protocol) is
language independent
• No need to learn non standard syntax and
compilers
• May be the right abstraction for converting
existing apps into collaborative ones.
134
Case Study: Collaborative Video
Viewing (Cadiz, Balachandran et al. 2000)
• Replicated architecture created using the T 120 multicast layer.
• Exchanged command
names
• Implementer said it
was easy to learn and
use.
135
RPC vs. Others
• Intermediate ease of learning, ease of usage,
flexibility
• Use when:
– Overhead of channel usage < overhead of RPC
learning
– Appropriate replicated types
• Not available, or
– Who and when coupled, architecture burnt into replicated
type
• learning overhead > RPC usage overhead
136
M-RPC vs. RPC
• Higher-level abstraction
• Do not have to know exact site roster
– Others, all, current
• Can be automatically mapped to stream-based multicast
• Use M-RPC when possible
137
Combining Approaches
• System combining benefits of multiple abstractions?
– Flexibility of lower-level and automation of higher-level
• Co-existence
• Migratory path
• New abstractions
138
Coexistence
Support all of these abstractions in one system
• RPC and shared objects (Dewan &
Choudhary ’91, Greenberg & Marwood
’92, Munson & Dewan ’94, and
PlaceWare)
139
Migratory Path
Problem of simple co-existence:
• Low-level abstraction effort not reused.
– Allow the use of the low-level abstraction to create higher-level abstractions
• Framework allowing RPC to be used to create new shared objects (Munson & Dewan ’94, PlaceWare).
– E.g. RPC used to build a file directory
– E.g. shared hash table
• Can be difficult to use and learn
• Low-level abstraction still needed when controlling who and when coupled
140
New abstractions: Broadcast
Methods
Stefik et al ’85: mixes shared objects and RPC
• Declare one or more methods of an arbitrary class as broadcast
• Method invoked on all corresponding instances in other processes in session
• Arbitrary abstraction flexibility

public class Outline {
    String getTitle();
    broadcast void setTitle(String title);
    Section getSection(int i);
    int getSectionCount();
    broadcast void setSection(int i, Section s);
    broadcast void insertSection(int i, Section s);
    broadcast void removeSection(int i);
}
141
Broadcast Methods Usage
Associates/
Replicas
Associates/
Replicas
Association
Model
bm
lm
View
lm
Broadcast
method
Model
lm
View
lm
lm
Window
Window
User 1
User 2
142
Problems with Broadcast Methods
• Language support needed
– C#?
• Single multicast group
– Cannot do subset of participants
• Selecting broadcast methods requires much care
– Sharing at method rather than data level

public class Outline {
    String getTitle();
    broadcast void setTitle(String title);
    Section getSection(int i);
    int getSectionCount();
    broadcast void setSection(int i, Section s);
    broadcast void insertSection(int i, Section s);
    broadcast void removeSection(int i);
    broadcast void insertAbstract (Section s) {
        insertSection (0, s);
    }
}

A broadcast method should not call another broadcast method!
143
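Since standard Java has no `broadcast` keyword, the semantics can be simulated by hand, as in this sketch (the class `BroadcastOutline` and its `associate` wiring are hypothetical, not Colab's mechanism). It also makes the caveat above concrete: the broadcast wrapper re-invokes the local effect at every associate, so a broadcast method that called another broadcast method would trigger a second round of broadcasts from every site.

```java
import java.util.*;

// Hand-simulated broadcast method (hypothetical sketch): setTitle runs
// locally and is re-invoked on the corresponding instances in the
// other processes of the session.
public class BroadcastOutline {
    private String title = "";
    private final List<BroadcastOutline> associates = new ArrayList<>();

    // Symmetrically link corresponding instances at two sites.
    public void associate(BroadcastOutline peer) {
        associates.add(peer);
        peer.associates.add(this);
    }

    // Simulates "broadcast void setTitle(String title)".
    public void setTitle(String t) {
        localSetTitle(t);
        for (BroadcastOutline a : associates) a.localSetTitle(t);
    }

    // The local (non-broadcast) effect of the method; remote sites
    // invoke this entry point so no re-broadcast loop occurs.
    private void localSetTitle(String t) { title = t; }

    public String getTitle() { return title; }
}
```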
Method vs. State based Sharing
• Method-based sharing for indirectly sharing state.
• Programmer provides mapping between state and
methods that change it.
• With an infrastructure-known mapping, replicated types can be automatically implemented.
• Mapping of internal state and methods is not sufficient because of host-dependent data (especially in UI abstractions)
• Need mapping of external (logical) state.
144
Property-based Sharing
• Roussev & Dewan ’00
• Synchronize external state or properties
• Properties deduced automatically from programming patterns
– Getter and setter for record fields
– Hashtables and sequences
• System keeps properties consistent
– Parameterized coupling model
• Patterns can be programmer-defined

public class Outline {
    String getTitle();
    void setTitle(String title);
    Section getSection(int i);
    int getSectionCount();
    void setSection(int i, Section s);
    void insertSection(int i, Section s);
    void removeSection(int i);
    void insertAbstract (Section s) {
        insertSection(0, s);
    }
}
145
Programmer-defined conventions
getter = <PropType> get<PropName>()
setter = void set<PropName>(<PropType>)
insert = void insert<PropName> (int, <ElemType>)
remove = void remove<PropName> (int)
lookup = <ElemType> elementAt<PropName>(int)
set = void set<PropName> (int, <ElemType>)
count = int get<PropName>Count()
146
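The getter/setter convention above can be recognized mechanically via reflection. The following is a minimal sketch (the `PropertyScanner` class is hypothetical, not the Roussev & Dewan implementation) that deduces simple properties: a name is a property when the class has both a `get<PropName>()` returning some type and a matching `void set<PropName>(<PropType>)`.

```java
import java.lang.reflect.Method;
import java.util.*;

// Sketch of pattern-based property deduction (hypothetical class): a
// name is a simple property if the class has both
// <PropType> get<PropName>() and void set<PropName>(<PropType>).
public class PropertyScanner {
    public static Set<String> simpleProperties(Class<?> c) {
        Map<String, Class<?>> getters = new HashMap<>();
        Set<String> setters = new HashSet<>();
        for (Method m : c.getMethods()) {
            String n = m.getName();
            if (n.startsWith("get") && m.getParameterCount() == 0)
                getters.put(n.substring(3), m.getReturnType());
            if (n.startsWith("set") && m.getParameterCount() == 1)
                setters.add(n.substring(3) + ":" +
                            m.getParameterTypes()[0].getName());
        }
        // Keep only names whose getter and setter types agree.
        Set<String> props = new TreeSet<>();
        for (Map.Entry<String, Class<?>> g : getters.entrySet())
            if (setters.contains(g.getKey() + ":" + g.getValue().getName()))
                props.add(g.getKey());
        return props;
    }

    // Example bean following the conventions.
    public static class Shape {
        private int x;
        public int getX() { return x; }
        public void setX(int x) { this.x = x; }
        public int getArea() { return 0; }   // getter only: not a property
    }
}
```

The sequence patterns (insert/remove/elementAt/count) would be recognized the same way, by matching the method-name and signature templates on the slide.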
Multi-Layer Sharing with Shared
Objects
Story so far:
• Need separate sharing
implementation for each
layer
– Framebuffer: VNC
– Window: T. 120
– Toolkit: GroupKit
• Problem with data layer
since no standard protocol
• Create shared objects for
this layer
• But objects occur at
each layer
– Framebuffer
– Window
– TextArea
• Why not use shared
object abstraction for
any of these layers?
147
Sharing Various Layers
Model
View
Parameterized
Coupler
Model
View
Toolkit
Toolkit
Window
Window
Framebuffer
Framebuffer
148
Sharing Various Layers
Model
Model
View
View
Toolkit
Parameterized
Coupler
Toolkit
Window
Window
Framebuffer
Framebuffer
149
Sharing Various Layers
Model
Model
View
Toolkit
View
Parameterized
Coupler
Toolkit
Window
Window
Framebuffer
Framebuffer
150
Experience with Property Based
Sharing
• Used for
– Model
– AWT/Swing Toolkit
– Existing Graphics Editor
• Requires well-written code
– Existing code may not be
151
Multi-layer Sharing
• Two ways to implement a colab. application
– Distribute I/O
• Input distributed in replicated architectures
• Output distributed in centralized architectures
• Different implementations (XTV, NetMeeting) distributed different I/O
– Define replicated objects
• A single implementation used for multiple layers
• Single implementation in the Distribute I/O approach?
152
Translator-based Multi-Layer Support
for I/O Distribution
• Chung & Dewan ‘01
• Abstract Inter-Layer Communication Protocol
– input (object)
– output(object)
– …
• Translator between specific and abstract protocol
• Adaptive Distributor supporting arbitrary, external mappings
between program and UI components
• Bridges gap between
– window sharing (e.g. T 120 app sharing) and higher-level sharing (e.g.
T 120 whiteboard sharing)
• Supports both centralized and replicated architectures and dynamic transitions between them.
153
I/O Distrib: Multi-Layer Support
Layer 0
Layer 0
Layer 0
Layer S
Layer S
Layer S
Layer N-1
Layer N-1
Layer N-1
Translator
Adaptive Distributor
Translator
Translator
Adaptive Distributor
Adaptive Distributor
Layer N
Layer N
Layer N
PC
154
I/O Distrib: Multi-Layer Support
Layer 0
Layer 0
Layer 0
Layer S
Layer S
Layer S
Translator
Adaptive Distributor
Translator
Adaptive Distributor
Translator
Adaptive Distributor
Layer S+1
Layer S+1
Layer S+1
Layer N
Layer N
Layer N
PC
155
Experience with Translators
• VNC
• X
• Java Swing
• User Interface Generator
• Web Services
• Requires translator code, which can be non-trivial
156
Infrastructure vs. Meta-Infrastructure
Text Editor
Outline Editor
application
Pattern Editor
application
application
application
application
application
application
Checkers
application
X
JavaBeans
Java’s
Swing
VNC
Property/Translator-based Distributor/Coupler
Infrastructure
Meta-Infrastructure
157
The End of Comp 290-063
Material
(Remaining Slides FYI)
158
Using Legacy Code
• Issue: how to add collaboration awareness to
single-user layer
– Model
– Toolkit
– Window System
– …
• Goal
– Want as little coupling as possible between existing and new code
159
Adding Collaboration Awareness to Layer
Colab. Transp.
Colab. Aware
Suite
Ad-Hoc
Colab. Transp.
JCE
Colab. Aware
Extend ColabTransp. Class
Colab. Transp.
Colab. Aware
Sync
Colab. Transp.
Extend Colab.
Aware Class
Colab. Aware
Colab. Aware Delegate
Roussev
’00
160
Proxy Delegate
Calling Object
X Client
Pseudo Server
X Server
XTV
Adapter Object COLA
Called Object
161
Identifying Replicas
• Manual connection:
– Translators identify peers (Chung and Dewan ’01)
• Automatic:
– Central downloading:
• Central copy linked to downloaded objects (PlaceWare, Suite, Sync)
– Identical programs: Stefik et al ’85
• Assume each site runs the same program and instantiates programs in the same order
• Connect corresponding instances (at same virtual address) automatically.
– Identical instantiation order intercepted
• Connect Nth instantiated object intercepted by system
• E.g. Nth instantiated windows correspond
– External descriptions (Groove)
• Assume an external description describing models and corresponding views
• System instantiates models and automatically connects remote replicas of them.
• Gives programmers events to connect models to local objects (views, controllers).
– No dynamic control over shared objects.
• Semi-manual (Roussev and Dewan ’00)
– Replicas with same GIDs automatically connected.
– Programmer assigns GIDs to top-level objects, system to contained objects
162
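The semi-manual GID scheme can be sketched as a registry (the `GidRegistry` class and its methods are hypothetical, not the Roussev & Dewan API): each site registers its local objects under global ids, and objects sharing a GID across sites become peers to which changes are propagated.

```java
import java.util.*;

// Sketch of GID-based replica identification (hypothetical API):
// replicas registering the same GID from different sites are
// automatically connected as peers.
public class GidRegistry {
    // GID -> (site -> local object)
    private final Map<String, Map<String, Object>> replicas = new HashMap<>();

    public void register(String gid, String site, Object local) {
        replicas.computeIfAbsent(gid, k -> new HashMap<>()).put(site, local);
    }

    // All replicas of 'gid' other than the one at 'site' -- the set a
    // change made at 'site' must be propagated to.
    public Collection<Object> peersOf(String gid, String site) {
        Map<String, Object> bySite = new HashMap<>(
            replicas.getOrDefault(gid, Collections.emptyMap()));
        bySite.remove(site);
        return bySite.values();
    }
}
```

In the scheme on the slide the programmer would call `register` only for top-level objects; the system would register contained objects under derived GIDs.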
Connecting Replicas vs. Layers
• Object correspondence
established after containing
layer correspondence.
• Only some objects may be
linked
• Layer correspondence
established by session
management
• E.g. Connecting whiteboards
vs. shapes in NetMeeting
163
Basic Session Management
Operations
Create/ Delete
(Conference 1)
Add/Delete
(App3)
Conference 1
App1
App2
User 1
App3
List/Query/
Set/ Notify
Properties
User 2
Join/Leave (User 2)
164
Basic Firewall
reply
send
protected site
unprotected proxy
call
send
open
communicating site
• Limit network
communication to and from
protected sites
• Do not allow other sites to
initiate connections to
protected sites.
• Protected sites initiate connections through proxies that can be closed if problems arise
• Can get back results
– Bidirectional writes
– Call/reply
165
Protocol-based Firewall
sip
open
reply
protected site
• May be restricted to
certain protocols
– HTTP
– SIP
unprotected proxy
sip
http
call
open
communicating site
166
Firewalls and Service Access
• User/client at protected site.
• Service at unprotected site.
• Communication and
dataflow initiated by
protected client site
reply
protected user
unprotected proxy
http-rpc
rpc
call
open
unprotected service
– Can result in transfer of data
to client and/or server
• If no restriction on protocol
use regular RPC
• If only HTTP provided,
make RPC over HTTP
– Web services/Soap model
167
Firewalls and Collaboration
open
protected user
• Communicating sites
may all be protected.
• How do we allow
opens to protected
user?
protected user
168
Firewalls and collaboration
close
send
open
protected user
unprotected forwarder
close
send
open
protected user
• Session-based forwarder
• Protected site opens connection
to forwarder site outside firewall
for session duration
• Communicating site also opens
connection to forwarder site.
• Forwarder site relays messages to
protected site
• Works well if unrestricted access
allowed and used
• What if restricted protocol?
169
Restricted Protocol
• If only restricted protocol then
communication on top of it as in service
solution
• Adds overhead.
170
Restricted protocols and data to
protected site
• HTTP does not allow data flow to be initiated by
unprotected site
• Polling
– Semi-synchronous collaboration
• Blocked gets (PlaceWare)
– Blocked server calls in general in one-way call model
– Must refresh after timeouts
• SIP for MVC model
– Model sends small notifications via SIP
– Client makes call to get larger data
– RPC over SIP?
171
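The "blocked gets" idea can be sketched with a blocking queue (a hypothetical `BlockedGetChannel`, not PlaceWare's actual API): the client's get is held at the server until a notification is available or a timeout expires, after which the client must refresh by re-issuing the request.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Sketch of the "blocked gets" pattern (hypothetical class): the
// client issues a request that the server holds open until a
// notification arrives or a timeout expires; on timeout the client
// must re-issue (refresh) the request.
public class BlockedGetChannel {
    private final BlockingQueue<String> pending = new LinkedBlockingQueue<>();

    // Server side: a state change to push toward the protected client.
    public void notifyClient(String update) { pending.add(update); }

    // Client side: blocks up to timeoutMs; null means "timed out,
    // refresh and call again".
    public String blockedGet(long timeoutMs) {
        try {
            return pending.poll(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return null;
        }
    }
}
```

Carried over HTTP, each `blockedGet` would be one long-held GET from the protected site, which is why data can reach that site without the unprotected site ever initiating a connection.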
Firewall-unaware clients
• Would like to isolate specific apps from worrying about protocol choice and translation.
• PlaceWare provides RPC
– Can go over HTTP or not
• Groove apps do not communicate directly – just use shared objects and don’t define new ones
– Can go either way
• Groove and PlaceWare try unrestricted first and then HTTP
• UNC system provides standard property-based notifications to the programmer and allows them to be delivered as:
– RMI
– Web service
– SIP
– Blocked gets
– Protected-site polling
172
Forwarder & Latency
protected user
• Adds latency
– Can have multiple forwarders bound to
different areas (Webex)
– Adaptive based on firewall detection
(Groove)
unprotected forwarder
• Try to open directly first
• If the open fails because of a firewall, open the system-provided forwarder
• asymmetric communication possible
– Messages to user go through forwarder
– Messages from user go directly
protected user
– Groove is also a service based model!
– PlaceWare always has latency and it
shows
173
Forwarder & Congestion Control
protected user
unprotected forwarder
protected user
• Breaks congestion-control algorithms
– Congestion on the link between the protected site and the forwarder, which the algorithms control, may differ from end-to-end congestion
– T 120-like end-to-end congestion control remains relevant
Different congestions
174
Forwarder + Multicaster
• Forwarder can multicast to other users on behalf of the sending user
• Separation of application processing and distribution
– Supported by PlaceWare, Webex
• Reduces messages on the link to the forwarder
• Separate multicaster useful even if no firewalls
– In Groove, if a (possibly unprotected) user is connected via a slow network, a single message is sent to the forwarder, which then multicasts it
• Forwarder can be a much more powerful machine
• Forwarder can be connected to higher-speed networks
– May need a hierarchy of multicasters (T 120), especially for dumb-bell topologies
• T 120 provides a multicaster without a firewall solution
[diagram: protected users ↔ unprotected forwarder + multicaster]
175
Forwarder + State Loader
• Forwarder can also maintain state in terms of object attributes
• Slow and latecomer sites pull state asynchronously from the state loader
– Avoids a message from the forwarder to a protected site containing state
– Alternative to multicast
– Extra message to the forwarder for pulling adds latency and traffic
– Each site pulls at its consumption rate
• Works for MVC-like I/O models
– VNC: framebuffer rectangles
– PlaceWare: PPT slides
– Chung & Dewan ’98: log of arbitrary input/output events converted to object state
• Useful even if no firewalls
• Goes against the stateless-server idea
– State should be an optimization
[diagram: protected users read from unprotected forwarder + state loader]
176
Forwarder + Multicaster + State Loader
• Multicaster for rapidly changing information
• State loader for slower-changing information
• Solution adopted in PlaceWare
– Multicast for window sharing
– State loading for PPT slides
• VNC results show pull model works for window sharing
• Greenberg ’02 shows pull model works for video
[diagram: protected users ↔ unprotected forwarder + multicaster + state loader]
177
Interoperability
• Cannot make assumptions about remote
sites
• Important in collaboration because one non-conforming site can prevent adoption of collaboration technology
• Devise “standard” protocols for various
collaboration aspects to which specific
protocols can be translated
178
Examples of Collaboration Aspects
• Codecs in media (SIP)
• Window/Frame-based sharing
– Caching capability for bitmaps, colormaps
– Graphics operations supported
– Bits per pixel
– Virtual desktop size
179
Layer and Standard Protocols
• Easier to agree on lower level layer
• Every computer has a framebuffer with similar
properties.
• Windows are less standard
– WinCE and Windows not same
• Toolkits even less so
– Java Swing and AWT
• Data in different languages and types
– Interoperation very difficult
180
Data Standard
• Web Services
– Everyone converts to it
• Object properties based on patterns
translated to Web services?
• XAF
181
Multiple Standards
• More than one standard can exist
– With different functionality/performance
• How to negotiate?
• Same techniques can be used to negotiate
user policies
– E.g. which form of concurrency control or
coupling
182
Enumeration/Selection Approach
• One party proposes a series of protocols
– M= audio 4006 RTP/AVP 0 4
– A = rtpmap: 0 PCMU/8000
– A = rtpmap: 4 GSM/8000
• Other party picks one of them
– M= audio 4006 RTP/AVP 0 4
– A = rtpmap: 4 GSM/8000
183
Extending to multiple parties
• One party proposes a series of protocols
• Other responds with subsets supported
• Proposing party picks some value in
intersection.
• Multiple rounds of negotiation
184
Single-Round
• Assume
– Alternative protocols can be ordered into levels,
where support for protocol at level l indicates
support for all protocols at levels less than l
• Broadcast level and pick min of values
received
185
Capability Negotiation
• Protocol not named but function of capabilities
– Set of drawing operations supported.
• Increasing levels can represent increasing
capability sets.
– Sets of drawing operations
• Increasing levels can represent increasing capability values
– Max virtual desktop size
186
Uniform Local Algorithm
• Apply same local algorithm at all sites to choose
level and hence associated “collapsed” capability
set
– Min
• Of capability set implies an AND
• Bits per pixel, drawing operation sets
– Max
• Of boolean values implies an OR
• Virtual desktop size
– Something else based on # and identity of sites
supporting each level
187
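The uniform local algorithm amounts to every site broadcasting its level and then applying the same deterministic fold over the full set of received values. A minimal sketch (class and method names are illustrative, not from any cited system):

```java
// Sketch of the uniform local algorithm for capability negotiation:
// every site broadcasts its level, then applies the same deterministic
// fold to the full set, so all sites pick the same collapsed capability.
public class CapabilityNegotiation {
    // AND-like capabilities (e.g. bits per pixel, drawing-op sets):
    // the session can only use what every site supports, so take min.
    public static int agreeMin(int[] levels) {
        int r = levels[0];
        for (int l : levels) r = Math.min(r, l);
        return r;
    }

    // OR-like capabilities (e.g. max virtual desktop size): the
    // session adopts the largest value any site offers, so take max.
    public static int agreeMax(int[] levels) {
        int r = levels[0];
        for (int l : levels) r = Math.max(r, l);
        return r;
    }
}
```

Because each site runs the identical fold over the identical inputs, no further message rounds are needed for all sites to agree.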
UI Policy Negotiation
• Can use the same mechanism for UI policy negotiation
• Examples
– Unconditional-grant floor control: in T 120, each node can say
• Yes
• No
• Ask Floor Controller
• Yes < Ask Floor Controller < No
• Use min of these for least permissive control
– Sharing control: in many systems each node can say:
• Share scrollbar
• Not share < share
• Use min for least permissive sharing
188
Office Apps
• Multiple versions of office apps exist
• Use similar scheme for negotiating
capabilities of office apps in conference
– pdf capability < viewer < full app
– Office 10 < Office 11 < Office 12
189
Conversion to Standard Protocols
• May need to convert a richer protocol to a “lowest common denominator” protocol with lesser capabilities
• Also may not wish to settle for the lowest-common-denominator protocol, and instead do per-site conversion
• Drawing operation to bitmap in T 120
• Fine-grained locks to floor control in Dewan and
Sharma ‘91
190
Composability
• Collaboration infrastructure must perform several tasks
– Session management
– Set up (centralized/replicated/hybrid) architecture
– I/O distribution
• filtering
• Multipoint communication
• Latecomer and slow user accommodation
– Access and concurrency control
– Firewall traversal
• Multiple ways to perform each of these functions
• Implement separate functions in different composable
modules
• Difficult because they must work with each other
191
T 120 Composable Architecture
• Multicast layer
– Multicast + tokens
• Session Management
– Session operations + capability negotiation
• Application template
– Standard structure of client of multicast + session management
• Window-based sharing
– Centralized architecture for window sharing
– Uses session management + multicast
• Whiteboard
– Replicated or centralized whiteboard sharing
– Uses session management + multicast
192
T 120 Layers
User Application(s)
(Using Both Standard and Non-Standard Application Protocols)
Node
Controller
User Application(s)
(Using Std. Appl. Protocols)
User Application(s)
(Using Non-Std Protocols)
Rec. T.127 (MBFT)
...
Rec. T.126 (SI)
Application Protocol Entity
...
Non-Standard Application
Protocol Entity
Rec. T.120
Application Protocol
Recommendations
Generic Conference Control (GCC)
Rec. T.124
Multipoint Communication Service (MCS)
Rec. T.122/T.125
Network Specific Transport Protocols
Rec. T.123
193
T.120 Infrastructure Recommendations
Composability Advantages
• Can use needed components
– Just the multicast channel
• Can substitute layers
– Different multicast implementation
• Orthogonality
– Level of sharing not bound to multicast
– Architecture not bound to multicast
194
Composability Disadvantages
• May have to do much more work.
• T 120 component model
– Create application protocol entity and relate to
actual application
– Create/ join multicast channel
• Suite & PlaceWare monolithic model
– Instantiating an application automatically
performs above tasks
195
Combining Advantages
• Provide high-level abstractions representing popular ways of interacting with subsets of these components
• e.g. Implementing APE for Java applets
196
Improving T120 Componentization
• Add object abstraction on top of application
protocol entity
• Web Service
• Object with properties?
197
Improving T120 Componentization
• Separate input and output sharing
• Some nodes will be input only
– E.g. PDAs sharing a projected presentation
198
Using Mobile Computer for Input
Program
UI
UI
UI
199
Use mobile computers for input (e.g. polls)
Generic Conference Abstraction
• Conference (T 120, PlaceWare)
• Room (MUDs, Jupiter)
• Space (Groove)
• Session (SIP)
– Different from application session
• May be persistent and asynchronous
– Space, Room
200
Basic Session Management
Operations
Create/ Delete
(Conference 1)
Add/Delete
(App3)
Conference 1
App1
App2
User 1
App3
List/Query/
Set/ Notify
Properties
User 2
Join/Leave (User 2)
201
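The operations in the diagram above can be sketched as a minimal session-manager interface. This is a hypothetical API (not T 120's or PlaceWare's): just create/delete a conference, add apps, join/leave users, and list/query the conference's properties.

```java
import java.util.*;

// Sketch of the basic session-management operations (hypothetical
// API): create/delete conferences, add/delete applications, join/leave
// users, and list conference properties.
public class SessionManager {
    private final Map<String, Set<String>> apps = new HashMap<>();
    private final Map<String, Set<String>> users = new HashMap<>();

    public void create(String conf) {
        apps.put(conf, new TreeSet<>());
        users.put(conf, new TreeSet<>());
    }
    public void delete(String conf) { apps.remove(conf); users.remove(conf); }

    public void addApp(String conf, String app) { apps.get(conf).add(app); }
    public void deleteApp(String conf, String app) { apps.get(conf).remove(app); }

    public void join(String conf, String user) { users.get(conf).add(user); }
    public void leave(String conf, String user) { users.get(conf).remove(user); }

    // List/Query operations on conference properties.
    public Set<String> listApps(String conf) { return apps.get(conf); }
    public Set<String> listUsers(String conf) { return users.get(conf); }
}
```

The advanced operations on the next slide (eject, transfer, timed conferences) would be further methods layered on this roster-and-properties core.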
Advanced Session Management
• Join/Leave subset of (possibly queried) apps (T 120)
• Eject user (T 120)
• Transfer users from one conference to another (T 120)
• Timed conference
– Set conference duration (T 120, PlaceWare)
– Query duration left (T 120)
– Extend duration (T 120)
• Schedule conference and modify schedule (PlaceWare)
• Keep interaction log, and query it (PlaceWare)
• Terminate when no active users (PlaceWare, T 120)
• In persistent conferences, in-core version automatically created
– When first user joins (PlaceWare)
– When conference manager launched (Groove)
202
Centralized Session Management
• Add app
– loads and starts program at:
• invoker’s site (XTV, Suite)
• or some other site (T 120)
– joins all existing users
• Join conference
– loads, starts, and binds local UI to central program
[diagram: Program → Output Broadcaster & I/O Relayer → I/O Relayers → UIs of Users 1 and 2]
203
Replicated Session Management
• Add app
– loads and starts program replica at invoker’s site (XTV, Groove)
– joins all existing users
• Join conference
– loads, starts, and binds replicas
[diagram: program replicas with input broadcasters above each user’s UI]
204
Architecture Flexibility of Session
Management
• Architecture specific
– Groove, PlaceWare, …
• Architecture semi-dependent
– T 120
• Single APE abstraction “started” when user connects
• APE abstraction bound to architecture
• Architecture independent
– Chung and Dewan, 01
• Single “loggable” abstraction connected to central or replicated logger
• Loggable not bound to architecture
• Join operation specifies architecture
205
Architecture-independent Session
Management
(a) creating a new session
(b) joining an existing session
Chung & Dewan ’01: one app session per
conference
206
Application-Session Management
Coordination
• Session management must know about attachment points.
• In centralized architecture:
– POD and Applet – PlaceWare
– X app and X server – XTV
– Generic APE – T 120
– Java “loggable” objects – Chung & Dewan ’98
• In replicated architecture: program replicas and UIs
– model and views (Groove & Sync)
– APE (T 120)
– Java “loggable” objects – Chung & Dewan ’01
• These are registered with session management.
207
Explicit & Implicit Join/Leave
• Explicit
– Create, join, and leave operations explicitly
executed
• Implicit
– Automatic or side effect of other operations
208
Implicit Join/Leave
Session joining/leaving as a side effect of:
• Artifacts being edited
– Editing the same object joins the editors in a conference
– Dewan & Choudhary ’91, Edwards
– Important MSFT Office 12 scenario
• Intersection of auras in a virtual environment
– Benford and DIVE
– Applications and users have auras
– Joining a conference results from a user’s aura intersecting an application’s aura
• Conference has a single application session
– Office 12 fixes it thus
• Not general.
• No control – with options, semi-implicit
209
Explicit Join/Leave
Autonomous Joining
Conference 1
App1
T 120
Conference 1
App2
User 1
Invitation-based Joining
App3
User 2
Join
User 2
App1
App2
User 1
T 120 Accept
SIP
Groove
App3
User 2
Invite
User 2
210
Autonomous vs. Invitation-based Join
Autonomous:
• Less message traffic and per-user overhead
– No invitations sent
• Needs discovery (e.g. notifications), name resolution, and a separate access-control mechanism
• Overhead amortized in recurring conferences
• Suitable for large, planned conferences
Invitation-based:
• Implicit notification
• Low overhead to create a small conference.
• Raises mobility issues
– User may have multiple devices
– Can register device (SIP, Groove)
– Privacy issues
• Raises firewall issues
– Invitee must accept connections
211
Examples
• Invitation-based
– NetMeeting, Messenger
• Autonomous
– PlaceWare, Webex
• Both
– T 120
– Integrate messenger and PlaceWare?
212
Open vs. Closed Session
Management
• Closed Session Management
– Policies bound (PlaceWare)
• Name vs. Invitation
• Implicit vs. explicit
• UI
• Open Session Management
– Multiple policies can be implemented ( T. 120, SIP,
Roseman & Greenberg ’92) using an API
– Defaults may be provided (Roseman & Greenberg ‘92)
213
API for 2-Party Invites
[protocol: User Agent A sends “invite A” to User Agent B; B replies “accept”; “bye” ends the session]
• SIP model
• N-party?
• Name-based?
214
2-Party, Autonomous
[protocol: User Agents A and B talk to a Conference Agent]
• GroupKit
• + create X OK?
• + create X OK
• Delete like Create
• Bye terminates
215
N-Party, Autonomous
[protocol: User Agent A sends “Join X, C” to the Conference Agent, which sends “Joined X, C” to User Agents B and C]
• GroupKit, T 120
• Leave like Join
– Event may not be broadcast, as the leaver can do so (T 120)
• + LastUserLeft event
216
N-Party, Autonomous and Invitation-based
User
Agent B
User
Agent A
Accept
X, C
Invite X, C
Conference
Agent
User
Agent C
217
Example GroupKit Policies
• Open registration
– Anyone can invite
– Conference persists after last user
• Centrally facilitated
– Only convener can invite
• Room-based session management
– Anyone can join name (room)
218
Performance Problems
• Operations are heavyweight
– Require OK? and success events sent to each user
– joining expensive in T 120
• Could use publish/subscribe
– Build n-party, name-based on top of SIP
publish/subscribe and invite/accept/delete model?
– Mobility supported
– Need extra (conference) argument to invite
219
Improving Programming
User
Agent B
User
Agent A
Accept
X, C
Invite X, C
Conference
Agent
User
Agent C
• Shared data type
• Success events
generated on
update to it
– Joined X, C
• GroupKit
220
Session-Aware Applications
• Applications may want session events
– To display information
– To create (centralized or replicated) application session
possibly involving multicast channels
– To exchange capabilities (interoperation)
• Each app on a site can subscribe directly from
conference agent (GroupKit)
– Multiple events sent to a node
• Each app subscribes from user agent (T 120)
– IPC latency
– User agent implements conference agent interface
221
Improving Session Access Control
• Create, delete, leave, join, protected through events
• Could also protect add/delete application
– Add/delete app Ok?, OK and Success
• Protect discovery of conferences
– Listed attribute in T 120
• Protect query of conference information
– PlaceWare
• “Lock”/“Unlock” conference (T 120)
– Allow/disallow more joins
– Set user limit (PlaceWare)
• Protect how late users can join (PlaceWare)
222
Improving Access Control
• Can support ACLs and passwords
– Password protected attribute and extra join parameter (T 120,
PlaceWare)
– ACL parameter (PlaceWare)
– More efficient, but earlier binding, than interactive OK? events
• Regular, Interactive, and Optimistic access control
– Tech fest demo
• Can protect groups of conferences together
– As files in a directory
– PlaceWare place is group of conferences similarly protected
• Can specify groups of users
– PlaceWare
223
Session vs. Application Access
Control
• Controls session
operations
– Create, Delete conf.
– Join, Leave user
– Add, Remove App
– Query…
• Indirectly provides coarse-grained
application access
– If cannot join, cannot use
applications
• May want to prevent joins
for performance rather
than security reasons
• Control interaction with
applications.
– Presenter vs. audience
privileges. (PlaceWare &
Webex)
– Telepointer editable only by
creator (T 120, PlaceWare,
GroupKit, NetMeeting
Whiteboard, Webex)
• Access denied for
authorization rather than
performance reasons.
224
Shared Layer & Application Access
Control
• Higher-level sharing implies finer-granularity access control
– Screen sharing: protected operations
• Provide input
– Window sharing: protected operations
• Display window
• Input in window
• Add to NetMeeting to support digital rights?
– PPT sharing: protected operations
• Change shared slide vs. change private slide (Webex)
• In many cases screen sharing is enough
– PlaceWare PPT sharing: audience vs. presenter is equivalent to providing
input control
• App-specific controls may be needed
225
Operation-specific access control
• Allow each operation to determine who can execute it
– Dewan and Choudhary ’91 and Groove
• Operation can query environment for user
– PlaceWare
• Operation is a remote procedure call
• Caller identity automatically added as an extra argument
• Integrated with RPC proxy generation
• Add such a facility to Indigo?
• Dewan and Shen ’92
– Can build app-specific access control without access awareness
– Extends the notion of generic “file rights” to generic “tree-based
collaboration rights”
– Assumes system intercepts operation before it is executed
– Would apply to XAF-like tree model
226
Meta Access Control
• Who sets the access privileges?
• Convener
– PlaceWare
– T 120
• Group ownership, delegation etc
– Dewan & Shen ‘96
227
Access vs. Concurrency Control
• Access control
– Controls whether user is authorized to execute operation
• Concurrency control
– Controls whether authorized users’ actions conflict with
others’ and schedules conflicting actions
• Can share common mechanism
– for preventing operation from being executed.
• In T 120 window sharing, UI can be identical
– Only one user allowed to enter input
– UI allows mediator to give application control to users
– Control passed to a user because of AC or CC
228
Shared Layer & Concurrency
Control
• Higher level sharing implies more concurrency
– screen sharing
• Cannot distinguish between different kinds of input
• Multiple input events make an operation
• Must prevent concurrency
– window sharing (add to NetMeeting and PlaceWare?)
• Can allow concurrent input in multiple windows
• Probably will not conflict
– Same word document in multiple windows
– Whiteboard
• can allow concurrent editing of different objects
• Probably will not conflict
– Object and connecting line
• App-specific concurrency control may be needed
229
Pessimistic vs. Optimistic CC
• Two alternatives to serializable transactions
• Pessimistic
– Prevent conflicting operation before it is executed
– Implies locks and possibly remote checking
• Optimistic
– Abort conflicting operation after it executes
– Involves replication, check pointing/compensating
transactions
– Not actually implemented in collaborative systems
• Aborting user (vs. programmed) transactions not acceptable
– Merge and optimistic locking variations
230
Merging
• Like optimistic
– Allow operation to execute without local checks
• But no aborts
– Merge conflicting operations
– E.g. insert 1,a || insert 2,b ⇒ (insert 1,a; insert 3,b) || (insert 2,b;
insert 1,a)
• Serializability not guaranteed
– Strange results possible
– E.g. concurrent dragging of an object in PlaceWare whiteboard
• App-specific
231
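The insert-shifting in the example above can be sketched as a tiny transformation function. This is an illustrative sketch only, not any system's actual merge algorithm; the 1-based positions and the tie-breaking flag are my assumptions.

```python
def apply_insert(doc, pos, ch):
    # insert ch so it becomes the pos-th character (1-based, as on the slide)
    return doc[:pos - 1] + ch + doc[pos - 1:]

def transform(op, concurrent, concurrent_wins_ties=False):
    # shift op's position right if a concurrent insert landed at or before it
    pos, ch = op
    other_pos, _ = concurrent
    if other_pos < pos or (other_pos == pos and concurrent_wins_ties):
        return (pos + 1, ch)
    return (pos, ch)

# the slide's example: insert 1,a || insert 2,b, replayed at two sites
a, b = (1, "a"), (2, "b")
site1 = apply_insert(apply_insert("xy", *a), *transform(b, a))  # a first, then transformed b
site2 = apply_insert(apply_insert("xy", *b), *transform(a, b))  # b first, then transformed a
```

Both sites end up with the same document, which is exactly what the transformed sequences on the slide achieve.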
App-Specific Merging
• Text editor specific
– Sun ’89, …
• Tree editor specific
– ECSCW ’03
– Apply to XAF and Office Apps?
• Programmer writes merge procedures
– Per file in Coda (Kistler and Satya ’92)
– Per object in Rover (Joseph et al ’95, and PlaceWare)
– Per relation in Bayou and Longhorn WinFS (Terry et al ’95, terry@microsoft.com)
• Programmer creates merge specifications (Munson & Dewan ’94)
– Object decomposed into properties
– Properties merged according to merge matrix
– Less flexible but easier to use
– Accommodates all existing policies
– Implement in C# objects?
232
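The merge-matrix idea can be sketched in a few lines: an object is decomposed into properties, and each property is merged by looking up a rule keyed by which sides changed it. The property names and the particular matrix entries below are invented for illustration, in the spirit of Munson & Dewan '94, not taken from it.

```python
def merge_object(base, mine, theirs, matrix):
    # merge property by property; the matrix maps
    # (did I change it?, did they change it?) to a rule picking the merged value
    merged = {}
    for prop in base:
        rule = matrix[(mine[prop] != base[prop], theirs[prop] != base[prop])]
        merged[prop] = rule(base[prop], mine[prop], theirs[prop])
    return merged

# one possible matrix: non-conflicting changes win; on conflict, keep local
MATRIX = {
    (False, False): lambda b, m, t: b,
    (True,  False): lambda b, m, t: m,
    (False, True):  lambda b, m, t: t,
    (True,  True):  lambda b, m, t: m,  # the conflict policy is just another entry
}
```

Swapping a single matrix entry changes the policy (e.g. remote-wins, or raise for interactive resolution), which is what makes the matrix form easy to use but less flexible than arbitrary merge procedures.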
Synchronous vs. Asynchronous
Merge
• Synchronous
– Efficient
– Less work to destroy
– Can accommodate simple-minded merge
– Replicated operation transformation
• Asynchronous
– Opposite
– Centralized, merge procedures and matrices
• Faster computers allow complex synchronous merging
– Centralized merge matrix
• Merging of drawing operations still an issue
233
Merging vs. Locking
• Requires replication
– With its drawbacks and advantages
• Requires high-level local operations
– Cannot work with replicated window-based systems
• Conflicts cannot be merged
– Require an interactive phase
• No lock delays
• More concurrency
• Disconnected (asynchronous) interaction
234
Response time for locks
• Central lock information
– Well known site knows who has locks
– Delay in contacting the site
• Distributed lock information (T 120)
– Lock information sent to all sites
– More traffic but less delay
• Still delay in getting lock from current
holder
235
Optimistic Locking
• Greenberg et al, 94
– In general remote checking can take time
– Allow operation as in optimistic until lock
response received
– At that point continue operation or abort
• Abort damage potentially small
• Office 12 scenarios
236
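The optimistic-locking idea above can be sketched as: edit immediately, hold the edits as tentative while the lock request is in flight, then commit or discard when the response arrives. The class and method names are invented for illustration; this is a sketch of the Greenberg et al. idea, not their implementation.

```python
class OptimisticLockSession:
    """Operate as if the lock were already held, until the lock server responds."""

    def __init__(self):
        self.committed = []  # edits known to be safe
        self.tentative = []  # edits made while the lock request is in flight

    def edit(self, op):
        self.tentative.append(op)  # no waiting: apply locally right away

    def on_lock_response(self, granted):
        if granted:
            self.committed += self.tentative  # continue as a normal lock holder
        # if denied, the tentative edits are aborted; the damage is bounded
        # by what accumulated during one lock round trip
        self.tentative = []
        return granted
```

The abort damage is potentially small because it is limited to the edits made during the remote check.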
Floor Control
• Host only (T 120)
– Person hosting app has control
– Usually convener
• Mediated ( T 120)
– Anyone can request the floor
– One or more of the other users have to agree (especially the current floor holder)
– Can pass control to another, if latter accepts
• Facilitated
– Facilitator distributes floor (PlaceWare)
– Special case of mediated when floor passed through facilitator
• Unconditional grant (T 120)
– Anyone can take current floor by clicking
– Special case of mediated where no user has to agree.
• End user negotiation to decide on policy interactively ( T 120)
– Interoperability solution works
• API to set and implement policy programmatically (T 120)
237
Fine-Grained Concurrency Control
• Provide an API (T 120)
– Allocate/de-allocate token
– Test
– Grab exclusively/non-exclusively
– Release
– Request/give token
• Munson and Dewan ’96
– Lock hierarchical object properties
– Associate lock tables with properties
– Hierarchical locking
• Office 12 scenarios use fine-grained locks
238
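The T 120-style token API above can be sketched as a small in-memory service. The method names mirror the bullet list, but the exact semantics (e.g. how an exclusive grab interacts with non-exclusive holders) are my assumptions, not T.120's.

```python
class TokenService:
    def __init__(self):
        self.holders = {}    # token id -> set of current holders
        self.exclusive = {}  # token id -> whether it is currently held exclusively

    def allocate(self, tid):
        self.holders.setdefault(tid, set())

    def test(self, tid):
        return bool(self.holders[tid])  # is the token held by anyone?

    def grab(self, tid, user, exclusive=True):
        held = self.holders[tid]
        if exclusive and held:
            return False  # exclusive grab fails if anyone holds the token
        if not exclusive and held and self.exclusive.get(tid):
            return False  # cannot share an exclusively held token
        held.add(user)
        self.exclusive[tid] = exclusive
        return True

    def release(self, tid, user):
        self.holders[tid].discard(user)

    def give(self, tid, holder, recipient):
        if holder not in self.holders[tid]:
            return False
        self.holders[tid].discard(holder)  # pass the token to another user
        self.holders[tid].add(recipient)
        return True
```

A token used this way is a generalization of a floor: grabbing it exclusively is floor control, grabbing it non-exclusively supports fine-grained shared access.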
Interoperability
• Cannot make assumptions about remote
sites
• Important in collaboration because one
non-conforming site can prevent adoption of
collaboration technology
• Devise “standard” protocols for various
collaboration aspects to which specific
protocols can be translated
239
Examples of Collaboration Aspects
• Codecs in media (SIP)
• Window/Frame-based sharing
– Caching capability for bitmaps, colormaps
– Graphics operations supported
– Bits per pixel
– Virtual desktop size
240
Layer and Standard Protocols
• Easier to agree on a lower-level layer
– Every computer has a framebuffer with similar
properties
• Windows are less standard
– WinCE and Windows not same
• Toolkits even less so
– Java Swing and AWT
• Data in different languages and types
– Interoperation very difficult
241
Data Standard
• Web Services
– Everyone converts to it
• Object properties based on patterns
translated to Web services?
• XAF
242
Multiple Standards
• More than one standard can exist
– With different functionality/performance
• How to negotiate?
• Same techniques can be used to negotiate
user policies
– E.g. which form of concurrency control or
coupling
243
Enumeration/Selection Approach
• One party proposes a series of protocols
– m=audio 4006 RTP/AVP 0 4
– a=rtpmap:0 PCMU/8000
– a=rtpmap:4 GSM/8000
• Other party picks one of them
– m=audio 4006 RTP/AVP 4
– a=rtpmap:4 GSM/8000
244
Extending to multiple parties
• One party proposes a series of protocols
• Other responds with subsets supported
• Proposing party picks some value in
intersection.
• Multiple rounds of negotiation
245
Single-Round
• Assume
– Alternative protocols can be ordered into levels,
where support for protocol at level l indicates
support for all protocols at levels less than l
• Broadcast level and pick min of values
received
246
Capability Negotiation
• Protocol not named but function of capabilities
– Set of drawing operations supported.
• Increasing levels can represent increasing
capability sets.
– Sets of drawing operations
• Increasing levels can represent increasing
capability values
– Max virtual desktop size
247
Uniform Local Algorithm
• Apply same local algorithm at all sites to choose
level and hence associated “collapsed” capability
set
– Min
• Of capability set implies an AND
• Bits per pixel, drawing operation sets
– Max
• Of boolean values implies an OR
• Virtual desktop size
– Something else based on # and identity of sites
supporting each level
248
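The single-round scheme of the last three slides can be sketched directly: each site broadcasts one ordered level, and every site applies the same local rule (min) to the values received, so all sites agree without further rounds. The capability sets below are invented examples; the level ordering assumption is the one stated on the Single-Round slide.

```python
# hypothetical cumulative capability levels: supporting level l implies
# support for every protocol at levels below l
CAPS_BY_LEVEL = {
    1: {"bitmap"},
    2: {"bitmap", "line"},
    3: {"bitmap", "line", "text"},
}

def negotiate(advertised_levels):
    # uniform local algorithm: every site independently takes the min of the
    # broadcast levels, so all sites converge on the same collapsed capability set
    level = min(advertised_levels)
    return level, CAPS_BY_LEVEL[level]
```

Because the sets are cumulative, min over levels acts as an AND over capability sets; max over levels would act as an OR, as the slide notes for boolean-valued capabilities.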
UI Policy Negotiation
• Can use same mechanism for UI policy negotiation
• Examples
– Unconditional grant floor control: In T 120, each node can say
• Yes
• No
• Ask Floor Controller
• Yes < Ask Floor Controller < No
• Use min of this for least permissive control
– Sharing control: in many systems each node can say:
• Share scrollbar
• Not share < share
• Use min for least permissive sharing
249
Office Apps
• Multiple versions of office apps exist
• Use similar scheme for negotiating
capabilities of office apps in conference
– pdf capability < viewer < full app
– Office 10 < Office 11 < Office 12
250
Conversion to Standard Protocols
• May need to convert a richer protocol to a
“lowest common denominator” protocol with
lesser capabilities
• Alternatively, to avoid losing capabilities to the
lowest common denominator, may do per-site conversion
• Drawing operation to bitmap in T 120
• Fine-grained locks to floor control in Dewan and
Sharma ‘91
251
Basic Firewall
[Diagram: a protected site opens a connection through an unprotected proxy to a communicating site; sends, calls, and replies flow over that connection]
• Limit network
communication to and from
protected sites
• Do not allow other sites to
initiate connections to
protected sites.
• Protected sites initiate
connection through proxies
that can be closed if
problems
• can get back results
– Bidirectional writes
– Call/reply
252
Protocol-based Firewall
• May be restricted to
certain protocols
– HTTP
– SIP
[Diagram: the protected site opens SIP connections through the unprotected proxy; the communicating site opens SIP/HTTP calls to the proxy]
253
Firewalls and Service Access
• User/client at protected site
• Service at unprotected site
• Communication and
dataflow initiated by
protected client site
– Can result in transfer of data
to client and/or server
• If no restriction on protocol,
use regular RPC
• If only HTTP provided,
make RPC over HTTP
– Web services/SOAP model
[Diagram: the protected user opens a call through an unprotected proxy to an unprotected service, as RPC or HTTP-RPC; the reply flows back]
254
Firewalls and Collaboration
• Communicating sites
may all be protected
• How do we allow
opens to a protected
user?
[Diagram: two protected users, each behind a firewall, trying to open connections to one another]
255
Firewalls and collaboration
• Session-based forwarder
• Protected site opens connection
to forwarder site outside firewall
for session duration
• Communicating site also opens
connection to forwarder site
• Forwarder site relays messages to
protected site
• Works well if unrestricted
write/write allowed and used
• How to support RPC and higher-level
protocols in both directions?
• What if restricted protocol?
[Diagram: each protected user opens (and can close) a connection to an unprotected forwarder, which relays sends between them]
256
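The session-based forwarder can be sketched as an in-memory relay: each site "opens" its connection by registering a delivery callback, and the forwarder relays every send to the other sites in the session. The names are illustrative; a real forwarder would sit on sockets outside both firewalls, with each protected site initiating its connection outbound.

```python
class Forwarder:
    def __init__(self):
        self.sessions = {}  # session id -> {site id: delivery callback}

    def open(self, session, site, deliver):
        # a protected site initiates this connection outbound, so its
        # firewall permits it for the duration of the session
        self.sessions.setdefault(session, {})[site] = deliver

    def send(self, session, sender, message):
        # relay the message to every other site connected to the session
        for site, deliver in self.sessions.get(session, {}).items():
            if site != sender:
                deliver(sender, message)

# usage: two protected sites meet at the forwarder
a_inbox, b_inbox = [], []
fwd = Forwarder()
fwd.open("conf1", "A", lambda s, m: a_inbox.append((s, m)))
fwd.open("conf1", "B", lambda s, m: b_inbox.append((s, m)))
fwd.send("conf1", "A", "hello")
```

Neither site ever accepts an inbound open; the forwarder is the only party that receives connections, which is what lets the scheme traverse both firewalls.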
RPC in both directions
• Would like RPC to be invoked by and on protected
site (via forwarder).
• Two one-way RPCs
– Proxies generated separately
– Create independent channels opened by each party
– Implies forwarder opens connection to protected site.
• PlaceWare two-way RPC
– Proxies generated together
– Use single channel opened by (client) protected site.
257
Restricted Protocol
• If only a restricted protocol is allowed, layer the
communication on top of it, as in the service
solution
• Adds overhead.
• Groove and PlaceWare try unrestricted first
and then HTTP
258
Restricted protocols and data to
protected site
• HTTP does not allow data flow to be initiated by
unprotected site
• Polling
– Semi-synchronous collaboration
• Blocked gets (PlaceWare)
– Blocked server calls in general in one-way call model
– Must refresh after timeouts
• SIP for MVC model
– Model sends small notifications via SIP
– Client makes call to get larger data
– RPC over SIP?
259
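The "blocked gets" idea above is ordinary long polling and can be sketched with a queue: the protected client holds a get open; the server answers as soon as a notification arrives, or the request times out and the client must refresh. The class and method names are assumptions for illustration, not PlaceWare's API.

```python
import queue

class NotificationChannel:
    def __init__(self):
        self._pending = queue.Queue()

    def notify(self, event):
        # server side: push a (small) notification toward the protected client
        self._pending.put(event)

    def blocked_get(self, timeout):
        # client side: the outstanding request blocks until data or timeout;
        # on timeout the caller must issue a fresh request ("refresh")
        try:
            return self._pending.get(timeout=timeout)
        except queue.Empty:
            return None

chan = NotificationChannel()
chan.notify("slide-changed")
first = chan.blocked_get(timeout=0.1)    # data was already waiting
second = chan.blocked_get(timeout=0.05)  # nothing pending: times out
```

Because the data-carrying response answers a request the client initiated, the flow stays compatible with an HTTP-only firewall.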
Firewall-unaware clients
• Would like to isolate specific apps from worrying about protocol choice
and translation
• PlaceWare provides RPC
– Can go over HTTP or not
• Groove components use SOAP to communicate
– Apps do not communicate directly – just use shared objects and don’t
define new ones
• UNC system provides standard property-based notifications to the
programmer and allows them to be delivered as:
– RMI
– Web service
– SIP
– Blocked gets
– Protected site polling
260
Forwarder & Latency
• Adds latency
– Can have multiple forwarders bound to
different areas (Webex)
– Adaptive based on firewall detection
(Groove)
• Try to open directly first
• If that fails because of a firewall, open the
system-provided forwarder
• Asymmetric communication possible
– Messages from user go through
forwarder
– Messages to user go directly
– Groove is also a service-based model!
– PlaceWare always has latency, and it
shows
[Diagram: two protected users communicating via an unprotected forwarder]
261
Forwarder & Congestion Control
• Breaks congestion control
algorithms
– Congestion on the link between a
protected site and the forwarder,
controlled by these algorithms, may
differ from end-to-end congestion
– T 120-like end-to-end
congestion control relevant
[Diagram: two protected users connected through an unprotected forwarder; the two links experience different congestion]
262
Forwarder + Multi-caster
• Forwarder can multicast to other users
on behalf of sending user
• Separation of application processing
and distribution
– Supported by PlaceWare, Webex
• Reduces messages in link to forwarder
• Separate multicaster useful even if no
firewalls
• Forwarder can be much more powerful
machine
– In Groove, if a (possibly unprotected) user is
connected via a slow network, a single
message is sent to the forwarder, which is
then multicast
• T 120 provides multi-caster without
firewall solution
• Forwarder can be connected to higher-speed
networks
– May need hierarchy of multicasters (T
120), especially dumb-bell
[Diagram: protected users connected to an unprotected forwarder + multicaster]
263
Forwarder + State Loader
• Forwarder can also maintain state in
terms of object attributes
• Slow and latecomer sites pull state
asynchronously from state loader
– Avoids message from forwarder to
protected site containing state
– Alternative to multicast
– Extra message to forwarder for pulling
adds latency and traffic
– Each site pulls at its consumption rate
• Works for MVC-like I/O models
– VNC: framebuffer rectangles
– PlaceWare: PPT slides
– Chung & Dewan ’98: log of arbitrary
input/output events converted to object
state
• Useful even if no firewalls
• Goes against stateless server idea
– State should be an optimization
[Diagram: protected users read state from an unprotected forwarder + state loader]
264
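A state loader can be sketched as an attribute cache on the forwarder: updates overwrite the latest value per (object, attribute), and slow or latecomer sites pull at their own rate, seeing only the newest state instead of a backlog of messages. The names are illustrative assumptions.

```python
class StateLoader:
    def __init__(self):
        self._state = {}  # (object id, attribute) -> latest value

    def update(self, obj, attr, value):
        # fast-changing updates overwrite each other; no backlog accumulates
        self._state[(obj, attr)] = value

    def read(self, obj, attr):
        # a latecomer or slow site pulls current state asynchronously
        return self._state.get((obj, attr))

loader = StateLoader()
loader.update("slide", "number", 3)
loader.update("slide", "number", 7)  # the intermediate value is never pulled
```

Overwriting is what makes this an alternative to multicast for slow consumers: a site that polls once sees only the value it actually needs.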
Forwarder + multicaster + state
loader
• Multicaster for rapidly
changing information
• State loader for slower-changing
information
• Solution adopted in
PlaceWare
– Multicast for window sharing
– State loading for PPT slides
• VNC results show pull model
works for window sharing
• Greenberg ’02 shows pull
model works for video
[Diagram: protected users connected to an unprotected forwarder + multicaster + state loader]
265
Composability
• Collaboration infrastructure must perform several tasks
– Session management
– Set up (centralized/replicated/hybrid) architecture
– I/O distribution
• filtering
• Multipoint communication
• Latecomer and slow user accommodation
– Access and concurrency control
– Firewall traversal
• Multiple ways to perform each of these functions
• Implement separate functions in different composable
modules
• Difficult because they must work with each other
266
T 120 Composable Architecture
• Multicast layer
– Multicast + tokens
• Session Management
– Session operations + capability negotiation
• Application template
– Standard structure of client of multicast + session management
• Window-based sharing
– Centralized architecture for window sharing
– Uses session management + multicast
• Whiteboard
– Replicated or centralized whiteboard sharing
– Uses session management + multicast
267
T 120 Layers
[Diagram: T.120 Infrastructure Recommendations, layers bottom to top:]
– Network Specific Transport Protocols (Rec. T.123)
– Multipoint Communication Service (MCS) (Rec. T.122/T.125)
– Generic Conference Control (GCC) (Rec. T.124)
– T.120 Application Protocol Recommendations (e.g. Rec. T.126 (SI), Rec. T.127 (MBFT)) and non-standard application protocol entities
– Node Controller
– User Application(s), using both standard and non-standard application protocols
268
Composability Advantages
• Can use needed components
– Just the multicast channel
• Can substitute layers
– Different multicast implementation
• Orthogonality
– Level of sharing not bound to multicast
– Architecture not bound to multicast
269
Composability Disadvantages
• May have to do much more work.
• T 120 component model
– Create application protocol entity and relate to
actual application
– Create/ join multicast channel
• Suite & PlaceWare monolithic model
– Instantiating an application automatically
performs above tasks
270
Combining Advantages
• Provide high-level abstractions representing
popular ways of interacting with subsets of
these components
• e.g. Implementing APE for Java applets
271
Improving T120 Componentization
• Add object abstraction on top of application
protocol entity
• Web Service
• Object with properties?
272
Improving T120 Componentization
• Separate input and output sharing
• Some nodes will be input only
– E.g. PDAs sharing projected presentation
273
Using Mobile Computer for Input
[Diagram: one Program driving multiple UIs; mobile computers provide input (e.g. polls)]
274
Summary
• Multiple policies for
– Architecture
– Session management
– Coupling, Concurrency, Access Control
– Interoperability
– Firewalls
– Componentization
• Existing systems such as Groove, PlaceWare, NetMeeting
are not that different, sharing many policies.
• Pros and cons of each policy
• Flexible system possible
275
Recommendations: window sharing
• Centralized window sharing.
– Remove expose coupling in window sharing
– Add window-based access and concurrency
control
– Provide multi-party sharing, through firewalls,
without extra latency
• Investigate replicated window sharing
– Will go through firewalls because low
bandwidth
276
Recommendations: model sharing
• Decouple architecture and data sharing
– Use delegation based model
• Provide a replicated type for XAF tree
model.
• Use property based sharing to share
collaboration-unaware C# objects
277
Recommendations: Multi-Layer
Sharing
• Allow users to choose the level of sharing
– Transparently change system (NetMeeting,
PlaceWare)
– Provide layer-neutral sharing
• Allow users to select the architecture,
possibly dynamically
– From peer to peer to server-based to service
based depending on single collaborator, local
multiple collaborators, and remote collaborators
278
Recommendations: experiments
• Need more experimental data
– Sharing different layers
– Centralized, replicated, and hybrid archs
• Need benchmarks
– MSR usage scenarios?
279
Recommendations: Communication
• Use standard Indigo layer, with
modifications
– Sending data to protected site
• Use SIP
• Provide PlaceWare 2-way RPC
– Access aware methods
• Add M-RPC
• Build over multicast
280
Recommendations: Communication
and componentization
• Have a separate stream multicast for
language neutrality and lightweightness
• Need M-RPC so it can be mapped to above
layer
281
Recommendations: Coupling
• Add externally configurable filtering
component to determine what, when, and
who.
282
Recommendations: Concurrency
Control
• Support
– Various kinds of floor control
– Fine-grained token-based control
– Optimistic and regular locks
– Property-based locking on top
– Property-based merging of arbitrary C# types
283
Recommendations: Session
Management
• Build N-party session management on top
of SIP to get mobility
• Support
– Implicit and explicit
– Name-based and invitation-based
284
Recommendations: Custom
Collaborative Applications
• Model sharing in existing office
applications
• Use capability negotiation
• Create shared object type for XAF
285
Recommendations: Composability
• Extend T120 component model with
– Replicated types
– M-RPC
– SIP features
286
Recommendation
• Lots of research in this area
• Use input from research also when deciding
on new products
287
THE END (The rest are extra slides)
288
Partial Sharing
[Diagram: two views of a document, partly uncoupled and partly coupled]
289
Merging vs. Concurrency Control
• Real-time Merging called Optimistic Concurrency Control
• Misnomer because it does not support serializability.
• Related, because concurrency control prevents the
problem merging tries to fix
– Collaboration awareness needed
– User intention may be violated
– Correctness vs. latency tradeoff
• CC may be
– floor control: e.g. NetMeeting App Sharing
– fine-grained: e.g. NetMeeting Whiteboard
• Selecting an object implicitly locks it.
• Approach being used in design of some office apps.
290
Evaluating Shared Layer and
Architecture
• Mixed centralized-replicated architecture
• Pros and cons of layering choice
• Pros and cons of architecture choice
• Should implement entire space rather than single
points
– Multiple points
• NetMeeting App Sharing, NetMeeting Whiteboard, PlaceWare,
Groove
– Reusable code
• T 120
• Chung and Dewan ‘01
291
Centralized Architecture
[Diagram: a single Program connected to an Output Broadcaster & I/O Relayer; per-user I/O Relayers drive the UIs of Users 1, 2, and 3]
292
Replicated Architecture
[Diagram: a Program replica per user; each replica's Input Broadcaster distributes input to the other replicas, and each replica drives the local UI of Users 1, 2, and 3]
293
Limitations
• In OO system must create new types for sharing
– No reuse of existing single-user types
– E.g. in Munson & Dewan ’94, ReplicatedVector
• Architecture flexibility
– PlaceWare bound to central architecture
– Replicas in client and server of different types, e.g. VectorClient & VectorServer
• Abstraction flexibility
– Set of types whose replication is supported by infrastructure automatically
– Programmer-defined types not automatically supported
• Sharing flexibility
– Who and when coupled burnt into shared value
• Single language assumed
– Interoperability of structured types very difficult
– XML-based solution needed
294
Translating Language Calls to SOAP
• Semi-Automatic translation from Java & C#
exist
• Bean objects automatically translated.
• Other objects must be translated manually.
• Could use pattern and property-based
approach to do translation (Roussev &
Dewan ’00)
295
Property-based Notifications
• Assume protected site gets notified (small amt of
data) and then pulls data in response a la MVC
• Provide standard property-based notifications to
programmer
• Communicate them using
– RMI
– Web service
– SIP
– Blocked gets
– Protected site polling
• Semi-synchronous collaboration
296
Shared Layer Conclusion
• Infrastructure should support as many shared
layers as possible
• NetMeeting/T. 120
– Desktop sharing
– Window sharing
– Data sharing (at high cost)
• PlaceWare
– Data sharing (at low cost)
• Should and can support a larger set of layers at
low cost (Chung and Dewan ’01)
297
Classifying Previous Work
• Shared layer
– X Windows (XTV)
– Microsoft Windows (NetMeeting App Sharing)
– VNC Framebuffer (Shared VNC)
– AWT Widget (Habanero, JCE)
– Data (Suite, Groove, PlaceWare)
• Replicated vs. centralized
– Centralized (XTV, Shared VNC, NetMeeting App. Sharing,
Suite, PlaceWare)
– Replicated (VConf, Habanero, JCE, Groove, NetMeeting
Whiteboard)
298
Suite Text Editor
299
Suite Text Editor Type
/*dmc Editable String */
String text = "hello world";

Load () {
    Dm_Submit (&text, "Text", "String");
    Dm_Engage ("Text");
}
300
Multiuser Outline
301
Outline Type
/*dmc Editable Outline */
typedef struct { unsigned num; struct section *sec_arr; } SubSection;
typedef struct section {
    String Name; String Contents; SubSection Subsections;
} Section;
typedef struct { unsigned num; Section *sec_arr; } Outline;
Outline outline;

Load () {
    Dm_Submit (&outline, "Outline", "Outline");
    Dm_Engage ("Outline");
}
302
Talk
303
Talk Program
/*dmc Editable String */
String UserA = "", UserB = "";
int talkers = 0;

Load () {
    if (talkers < 2) {
        talkers++;
        Dm_Submit (&UserA, "UserA", "String");
        Dm_Submit (&UserB, "UserB", "String");
        if (talkers == 1)
            Dm_SetAttr ("View: UserB", AttrReadOnly, 1);
        else
            Dm_SetAttr ("View: UserA", AttrReadOnly, 1);
        Dm_Engage_Specific ("UserA", "UserA", "Text");
        Dm_Engage_Specific ("UserB", "UserB", "Text");
    }
}
304
N-User IM
/*dmc Editable Outline */
typedef struct { unsigned num; struct String *message_arr; } IM_History;
IM_History im_history;
String message;

Load () {
    Dm_Submit (&im_history, "IM History", "IM_History");
    Dm_SetAttribute ("IM History", "ReadOnly", 1);
    Dm_Engage ("IM History");
    Dm_Submit (&message, "Message", "String");
    Dm_Update ("Message", &updateMessage);
    Dm_Engage ("Message");
}

updateMessage (String variable, String new_message) {
    im_history.message_arr[im_history.num++] = new_message;
}
305
Broadcast Methods
[Diagram: two replicated Models, each above a View, Toolkit, and Window for Users 1 and 2; a broadcast method (bm) propagates updates between the model replicas]
306
Connecting Applications
• Replicas connected when containing applications
connected in (collaborative) sessions.
• Collaborative application session created when
application is added to a conference.
• Conference created by a convener to which others
can join.
• Management of conference and application
sessions called conference/session management.
307
Mobility Issues
• Invitee registers current device(s) with
system
• System sends invitation to all current
devices
• Supported by Groove and SIP
308
Connecting Applications
• Replicas connected when containing applications
connected in (collaborative) sessions.
• Collaborative application session created when
application is added to a conference.
• Conference created by a convener to which others
can join.
• Management of conference and application
sessions called conference/session management.
309
Synchronization in Replicated
Architecture
abc
abc
dabc
deabc
X Client
Insert d,1
Insert e,2
Pseudo Server
Insert d,1
aebc
daebc
X Client
Insert e,2
Insert d,1
Pseudo Server
Insert e,2
X Server
X Server
User 1
User 2
310
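The divergence on this slide is easy to reproduce: replaying the same two inserts in different orders at two replicas, with no transformation, leaves them in different states. This is just an illustration of the problem; the 1-based insert convention matches the slide.

```python
def insert(doc, pos, ch):
    # 1-based insertion, as on the slide: Insert d,1 on "abc" gives "dabc"
    return doc[:pos - 1] + ch + doc[pos - 1:]

# User 1's replica applies its local operation first, then the remote one
site1 = insert(insert("abc", 1, "d"), 2, "e")  # "deabc"
# User 2's replica receives the same operations in the opposite order
site2 = insert(insert("abc", 2, "e"), 1, "d")  # "daebc"
```

Fixing this is exactly what the operation-transformation merge discussed earlier does: each replica shifts the position of a remote insert past the concurrent local one before applying it.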
Comparing the Architectures
[Diagram: architectures compared by the position of the pseudo server and the resulting shared abstraction: pseudo server below the window (screen/window sharing), pseudo server between app and window (shared abstraction = model + view), and pseudo server between model and view (shared abstraction = model); each shown in centralized and replicated form]
311