
Advanced Distributed Systems, Concurrency
Pablo de la Fuente
Departamento de Informática
E.T.S. de Ingeniería Informática
Campus Miguel Delibes
47011 Valladolid
Spain
pfuente@infor.uva.es
Organization
General concepts.
Types of distributed systems.
Communication models.
Concurrency and distributed systems
Examples
Development of Computer Technology
• 1950s: serial processors
• 1960s: batch processing
• 1970s: time-sharing
• 1980s: personal computing
• 1990s: parallel, network, and distributed processing
• 2000s: wireless networks and mobile computing?
© Jie Wu, Distributed System Design, 1999
What is a Distributed System?
There are many definitions of "distributed system".
Coulouris et al. (2001): a system in which hardware or software components located at networked computers communicate and coordinate their actions only by message passing.
Emmerich (1997): a distributed system consists of a collection of autonomous computers, connected through a network and distribution middleware, which enables computers to coordinate their activities and to share the resources of the system, so that users perceive the system as a single, integrated computing facility.
Sinha (1997): a collection of processors interconnected by a communication network, in which each processor has its own local memory and other peripherals, and the communication between two processors of the system takes place by message passing over the communication network. This definition points to the distinction between local and remote resources.
Centralised versus Distributed System
Centralised:
• One component with non-autonomous parts
• Component shared by users all the time
• All resources accessible
• Software runs in a single process
• Single point of control
• Single point of failure
Distributed:
• Multiple autonomous components
• Components are not shared by all users
• Resources may not be accessible
• Software runs in concurrent processes on different processors
• Multiple points of control
• Multiple points of failure
Another vision
• A distributed system is a collection of independent computers that
appear to the users of the system as a single computer.
• Distributed systems are "seamless": the interfaces among
functional units on the network are for the most part invisible to the
user.
System structure from the physical (a) or logical point of view (b).
© Jie Wu, Distributed System Design, 1999
Common Characteristics
What are we trying to achieve when we construct a distributed system?
Certain common characteristics can be used to assess distributed systems
• Resource Sharing
• Openness
• Concurrency
• Scalability
– Size
– Geographical distance
– Administrative structure
• Fault Tolerance
• Transparency
• Heterogeneity (mobile code and mobile agents)
– Networks
– Hardware
– Operating systems and middleware
– Programming languages
• Security
Schroeder's Definition
• A list of symptoms of a distributed system
– Multiple processing elements (PEs)
– Interconnection hardware
– PEs fail independently
– Shared states
Enslow's Definition
Distributed system = distributed hardware + distributed control + distributed
data
A system could be classified as a distributed system if all three
categories (hardware, control, data) reach a certain degree of
decentralization.
© Jie Wu, Distributed System Design, 1999
Enslow's model of distributed systems.
© Jie Wu, Distributed System Design, 1999
Structure of a distributed system
Diagram (layered services): other services (security, file service); naming (user names, name resolution, addresses, routes); resource allocation (deadlock detection, resources); process synchronization, communication security and process management; interprocess communication (transport protocols supporting the interprocess communication primitives).
© A. Goscinski, 1992
Transparency
Distributed systems should be perceived by users and application programmers
as a whole rather than as a collection of cooperating components.
Transparency has different dimensions that were identified by ANSA.
These represent various properties that distributed systems should have.
The dimensions: access, location, concurrency, replication, failure, migration (mobility), performance, and scaling transparency.
Transparencies
Access transparency: enables local and remote resources to be accessed
using identical operations.
Location transparency: enables resources to be accessed without knowledge
of their location.
Concurrency transparency: enables several processes to operate
concurrently using shared resources without interference between them.
Replication transparency: enables multiple instances of resources to be used
to increase reliability and performance without knowledge of the replicas by
users or application programmers.
Failure transparency: enables the concealment of faults, allowing users and
application programs to complete their tasks despite the failure of hardware
or software components.
Mobility transparency: allows the movement of resources and clients within a
system without affecting the operation of users or programs.
Performance transparency: allows the system to be reconfigured to improve
performance as loads vary.
Scaling transparency: allows the system and applications to expand in scale
without change to the system structure or the application algorithms.
Instructor’s Guide for Coulouris, Dollimore and Kindberg Distributed Systems: Concepts and Design Edn. 3
© Addison-Wesley Publishers 2000
Software and hardware service layers in distributed systems
Applications, services
Middleware
Platform:
– Operating system
– Computer and network hardware
Instructor’s Guide for Coulouris, Dollimore and Kindberg Distributed Systems: Concepts and Design Edn. 3
© Addison-Wesley Publishers 2000
A working Model of a distributed system
Diagram: each host (Host1 ... Hostn) consists of computer hardware, a network operating system and middleware; components Component1 ... Componentn run on top of the middleware on each host, and the hosts communicate over a network.
© W. Emmerich, 2000
Organization
General concepts.
Types of distributed systems.
Communication models.
Concurrency and distributed systems
Examples
Process types in a distributed system
Filter: data transformer. It receives streams of data values from its input
channel, performs some computation on those values and sends streams of
results to its output channels.
Client: a triggering process. Clients make requests that trigger reactions from servers.
Server: a reactive process. A server waits for requests to be made, then reacts to them.
Peer: a collection of identical processes that interact to provide a service or to solve a problem.
There are several process-interaction patterns in distributed systems. Each interaction paradigm is an example or model of a communication pattern and associated programming technique that can be used to solve a variety of interesting distributed programming problems.
Clients invoke individual servers
Diagram: each client sends an invocation to a server and receives a result; a server may itself invoke another server in the same way. (Key: circles are processes, boxes are computers.)
Instructor’s Guide for Coulouris, Dollimore and Kindberg Distributed Systems: Concepts and Design Edn. 3
© Addison-Wesley Publishers 2000
A service provided by multiple servers
Diagram: several clients interact with a service that is provided by multiple server processes.
Instructor’s Guide for Coulouris, Dollimore and Kindberg Distributed Systems: Concepts and Design Edn. 3
© Addison-Wesley Publishers 2000
Web proxy server
Diagram: clients reach web servers either directly or through a proxy server that stands between them.
Instructor’s Guide for Coulouris, Dollimore and Kindberg Distributed Systems: Concepts and Design Edn. 3
© Addison-Wesley Publishers 2000
A distributed application based on peer processes
Diagram: peer applications, each combining application code with coordination code, interact directly with one another.
Instructor’s Guide for Coulouris, Dollimore and Kindberg Distributed Systems: Concepts and Design Edn. 3
© Addison-Wesley Publishers 2000
Web applets
a) a client request results in the downloading of applet code from the web server
b) the client then interacts with the applet locally
Instructor’s Guide for Coulouris, Dollimore and Kindberg Distributed Systems: Concepts and Design Edn. 3
© Addison-Wesley Publishers 2000
Thin clients and compute servers
Diagram: a thin client running on a network computer or PC accesses, across the network, an application process running on a compute server.
Instructor’s Guide for Coulouris, Dollimore and Kindberg Distributed Systems: Concepts and Design Edn. 3
© Addison-Wesley Publishers 2000
Spontaneous networking in a hotel
Diagram: guests' devices (laptop, PDA, camera, TV/PC) join the hotel wireless network, use a discovery service to find local services such as a music service and an alarm service, and reach the Internet through a gateway.
Instructor’s Guide for Coulouris, Dollimore and Kindberg Distributed Systems: Concepts and Design Edn. 3
© Addison-Wesley Publishers 2000
Processes and channels
Diagram: process p executes send, placing message m in its outgoing message buffer; m travels over the communication channel into process q's incoming message buffer, from which q executes receive.
Instructor’s Guide for Coulouris, Dollimore and Kindberg Distributed Systems: Concepts and Design Edn. 3
© Addison-Wesley Publishers 2000
Omission and arbitrary failures
Class of failure       Affects             Description
Fail-stop              Process             Process halts and remains halted. Other processes may detect this state.
Crash                  Process             Process halts and remains halted. Other processes may not be able to detect this state.
Omission               Channel             A message inserted in an outgoing message buffer never arrives at the other end's incoming message buffer.
Send-omission          Process             A process completes a send, but the message is not put in its outgoing message buffer.
Receive-omission       Process             A message is put in a process's incoming message buffer, but that process does not receive it.
Arbitrary (Byzantine)  Process or channel  Process/channel exhibits arbitrary behaviour: it may send/transmit arbitrary messages at arbitrary times, commit omissions; a process may stop or take an incorrect step.
Instructor’s Guide for Coulouris, Dollimore and Kindberg Distributed Systems: Concepts and Design Edn. 3
© Addison-Wesley Publishers 2000
Timing failures
Class of failure  Affects  Description
Clock             Process  Process's local clock exceeds the bounds on its rate of drift from real time.
Performance       Process  Process exceeds the bounds on the interval between two steps.
Performance       Channel  A message's transmission takes longer than the stated bound.
Instructor’s Guide for Coulouris, Dollimore and Kindberg Distributed Systems: Concepts and Design Edn. 3
© Addison-Wesley Publishers 2000
Organization
General concepts.
Types of distributed systems.
Communication models.
Concurrency and distributed systems
Examples
Synchronization and Communication
• The correct behaviour of a concurrent program depends on
synchronization and communication between its processes.
• Synchronization: the satisfaction of constraints on the interleaving
of the actions of processes (e.g. an action by one process only
occurring after an action by another).
• Communication: the passing of information from one process to
another.
– Concepts are linked since communication requires
synchronization, and synchronization can be considered as
contentless communication.
– Data communication is usually based upon either shared
variables or message passing.
Distributed programming mechanisms
Busy waiting
Semaphores
Monitors
RPC
Message passing
Rendezvous
Message-Based Communication and Synchronization
• Use of a single construct for both synchronization and communication
• Three issues:
– the model of synchronization
– the method of process naming
– the message structure
Diagram: process P1 executes send message and process P2 executes receive message, each on its own time axis.
© Alan Burns and Andy Wellings, 2001
Process Synchronization
• Variations in the process synchronization model arise from the semantics of the send operation
• Asynchronous (or no-wait) (e.g. POSIX)
– Requires buffer space. What happens when the buffer is full?
Diagram: P1 executes send message and continues immediately; the message is buffered until P2 executes receive message.
© Alan Burns and Andy Wellings, 2001
Process Synchronization
• Synchronous (e.g. CSP, occam2)
– No buffer space required
– Known as a rendezvous
Diagram: P1 executes send message and is blocked until P2 executes receive message and takes M.
© Alan Burns and Andy Wellings, 2001
Process Synchronization
• Remote invocation (e.g. Ada)
– Known as an extended rendezvous
• Analogy:
– The posting of a letter is an asynchronous send
– A telephone is a better analogy for synchronous communication
Diagram: P1 sends message M and remains blocked until P2, having executed receive message, sends back the reply.
© Alan Burns and Andy Wellings, 2001
Asynchronous and Synchronous Sends
• Asynchronous communication can implement synchronous communication (see the sketch after this slide):
P1:                    P2:
  asyn_send (M)          wait (M)
  wait (ack)             asyn_send (ack)
• Two synchronous communications can be used to construct a remote invocation:
P1:                    P2:
  syn_send (message)     wait (message)
  wait (reply)           ... construct reply ...
                         syn_send (reply)
© Alan Burns and Andy Wellings, 2001
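A minimal sketch of the first construction, using POSIX message queues as the asynchronous primitive; the queue names, message sizes and the forked pair of processes are illustrative choices, not part of the slides. Build on Linux with cc sync_send.c -lrt.

/* Sketch: a synchronous send built from two asynchronous sends.
 * P1: asyn_send(M); wait(ack).  P2: wait(M); asyn_send(ack). */
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

#define MSG_SIZE 64

int main(void)
{
    struct mq_attr attr = { .mq_maxmsg = 4, .mq_msgsize = MSG_SIZE };
    mqd_t data = mq_open("/demo_data", O_CREAT | O_RDWR, 0600, &attr);
    mqd_t ack  = mq_open("/demo_ack",  O_CREAT | O_RDWR, 0600, &attr);
    char buf[MSG_SIZE];

    if (fork() == 0) {                            /* P2 */
        mq_receive(data, buf, sizeof buf, NULL);  /* wait (M) */
        printf("P2 received: %s\n", buf);
        mq_send(ack, "ack", 4, 0);                /* asyn_send (ack) */
        return 0;
    }

    /* P1 */
    mq_send(data, "M", 2, 0);                     /* asyn_send (M) */
    mq_receive(ack, buf, sizeof buf, NULL);       /* wait (ack) */
    /* Only now does P1 know that M has been taken: the pair of
       asynchronous exchanges behaves like one synchronous send. */

    wait(NULL);
    mq_unlink("/demo_data");
    mq_unlink("/demo_ack");
    return 0;
}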
Disadvantages of Asynchronous Send
• Potentially infinite buffers are needed to store unread messages
• Asynchronous communication is out-of-date; most sends are
programmed to expect an acknowledgement
• More communications are needed with the asynchronous model,
hence programs are more complex
• It is more difficult to prove the correctness of the complete system
• Where asynchronous communication is desired with synchronized
message passing then buffer processes can easily be constructed;
however, this is not without cost
© Alan Burns and Andy Wellings, 2001
Process Naming
• Two distinct sub-issues:
– direction versus indirection
– symmetry
• With direct naming, the sender explicitly names the receiver:
send <message> to <process-name>
• With indirect naming, the sender names an intermediate entity (e.g. a channel, mailbox, link or pipe):
send <message> to <mailbox>
• With a mailbox, message passing can still be synchronous
• Direct naming has the advantage of simplicity, whilst indirect naming aids the decomposition of the software; a mailbox can be seen as an interface between parts of the program
© Alan Burns and Andy Wellings, 2001
Process Naming
• A naming scheme is symmetric if both sender and receiver name each other (directly or indirectly):
send <message> to <process-name>
wait <message> from <process-name>
send <message> to <mailbox>
wait <message> from <mailbox>
• It is asymmetric if the receiver names no specific source but accepts messages from any process (or mailbox):
wait <message>
• Asymmetric naming fits the client-server paradigm
• With indirect naming, the intermediary could have:
– a many-to-one structure
– a many-to-many structure
– a one-to-one structure
– a one-to-many structure
© Alan Burns and Andy Wellings, 2001
Message Structure
• A language usually allows any data object of any defined type (predefined or user-defined) to be transmitted in a message
• Need to convert to a standard format for transmission across a network in a heterogeneous environment (see the sketch below)
• Operating systems typically allow only arrays of bytes to be sent
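A minimal sketch of such a conversion, assuming network byte order as the standard format; the record layout and function name are invented for illustration.

/* Sketch: marshalling a typed record into an array of bytes in a
 * machine-independent layout, since bytes are all the OS will send. */
#include <arpa/inet.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Pack two 32-bit fields big-endian into buf; returns bytes used. */
size_t marshal_point(uint8_t buf[8], uint32_t x, uint32_t y)
{
    uint32_t nx = htonl(x), ny = htonl(y);   /* host -> network order */
    memcpy(buf,     &nx, sizeof nx);
    memcpy(buf + 4, &ny, sizeof ny);
    return 8;
}

int main(void)
{
    uint8_t buf[8];
    marshal_point(buf, 1, 258);
    for (int i = 0; i < 8; i++)
        printf("%02x ", buf[i]);   /* same bytes on any host */
    printf("\n");
    return 0;
}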
The Ada Model
• Ada supports a form of message-passing between tasks
• Based on a client/server model of interaction
• The server declares a set of services that it is prepared to offer other tasks
(its clients)
• It does this by declaring one or more public entries in its task specification
• Each entry identifies the name of the service, the parameters that are
required with the request, and the results that will be returned
© Alan Burns and Andy Wellings, 2001
Message Characteristics
Principal operations: send, receive.
Anonymity: anonymous or non-anonymous message passing.
Synchronization: synchronous, asynchronous, rendezvous.
Receipt of messages: unconditional or selective.
Multiplicity: one-to-one, many-to-one, many-to-many.
Selective waiting
• So far, the receiver of a message must wait until the specified
process, or mailbox, delivers the communication
• A receiver process may actually wish to wait for any one of a
number of processes to call it
• Server processes receive request messages from a number of
clients; the order in which the clients call being unknown to the
servers
• To facilitate this common program structure, receiver processes are
allowed to wait selectively for a number of possible messages
• Based on Dijkstra’s guarded commands
x < y → m := x
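A minimal sketch (not from the slides) of the effect of selective waiting in C, using pipes and poll(2): the receiver waits for a message from either of two client processes, whose calling order is unknown to it.

#include <poll.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int s[2], t[2];
    pipe(s); pipe(t);

    /* Two "client" processes, S and T, each sending one request. */
    if (fork() == 0) { write(s[1], "from S", 7); return 0; }
    if (fork() == 0) { write(t[1], "from T", 7); return 0; }

    struct pollfd fds[2] = { { .fd = s[0], .events = POLLIN },
                             { .fd = t[0], .events = POLLIN } };
    poll(fds, 2, -1);                /* wait selectively on both */
    for (int i = 0; i < 2; i++)
        if (fds[i].revents & POLLIN) {
            char buf[16] = {0};
            read(fds[i].fd, buf, sizeof buf - 1);
            printf("selected: %s\n", buf);
        }
    wait(NULL); wait(NULL);
    return 0;
}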
Non-determinism and Selective Waiting
• Concurrent languages make few assumptions about the execution
order of processes
• A scheduler is assumed to schedule processes non-deterministically
• Consider a process P that will execute a selective wait construct
upon which processes S and T could call
Non-determinism and Selective Waiting
• P runs first; it is blocked on the select. S (or T) then runs and
rendezvous with P
• S (or T) runs, blocks on the call to P; P runs and executes the select; a
rendezvous takes place with S (or T)
• S (or T) runs first and blocks on the call to P; T (or S) now runs and is
also blocked on P. Finally P runs and executes the select on which T
and S are waiting
• The three possible interleavings lead to P having none, one or two
calls outstanding on the selective wait
• If P, S and T can execute in any order then, in the latter case, P should be able to choose to rendezvous with S or T; it will not affect the program's correctness
Non-determinism and Selective Waiting
• A similar argument applies to any queue that a synchronization
primitive defines
• Non-deterministic scheduling implies all queues should release
processes in a non-deterministic order
• Semaphore queues are often defined in this way; entry queues
and monitor queues are specified to be FIFO
• The rationale here is that FIFO queues prohibit starvation but if
the scheduler is non-deterministic then starvation can occur
anyway!
POSIX Message Queues
• POSIX supports asynchronous, indirect message passing through the
notion of message queues
• A message queue can have many readers and many writers
• A priority may be associated with each message in a queue
• Intended for communication between processes (not threads)
• Message queues have attributes which indicate their maximum size,
the size of each message, the number of messages currently queued
etc.
• An attribute object is used to set the queue attributes when the queue
is created
POSIX Message Queues
• Message queues are given a name when they are created
• To gain access to a queue, a process opens it by name with mq_open
• mq_open is used both to create a queue and to open an already existing one (see also mq_close and mq_unlink)
• Sending and receiving messages is done via mq_send and
mq_receive
• Data is read/written from/to a character buffer.
• If the queue is full (on a send) or empty (on a receive), the process is blocked unless the attribute O_NONBLOCK has been set for the queue, in which case an error return is given (see the sketch after this slide)
• If senders and receivers are waiting when a message queue becomes
unblocked, it is not specified which one is woken up unless the priority
scheduling option is specified
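A minimal sketch of the calls named above; the queue name, sizes and flag choices are illustrative, and error handling is reduced to the O_NONBLOCK case the slide mentions. Build on Linux with cc mq_demo.c -lrt.

#include <errno.h>
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>

int main(void)
{
    /* Attribute object used to set the queue attributes at creation. */
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 128 };

    /* mq_open both creates and opens; O_NONBLOCK asks for an error
       return instead of blocking when the queue is full or empty. */
    mqd_t q = mq_open("/demo_queue", O_CREAT | O_RDWR | O_NONBLOCK,
                      0600, &attr);
    if (q == (mqd_t)-1) { perror("mq_open"); return 1; }

    if (mq_send(q, "hello", 6, 0) == -1)      /* EAGAIN if queue full */
        perror("mq_send");

    char buf[128];                            /* must be >= mq_msgsize */
    ssize_t n = mq_receive(q, buf, sizeof buf, NULL);
    if (n >= 0)
        printf("received %zd bytes: %s\n", n, buf);
    else if (errno == EAGAIN)
        printf("queue empty: non-blocking error return\n");

    mq_close(q);
    mq_unlink("/demo_queue");
    return 0;
}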
POSIX Message Queues
• A process can also indicate that a signal should be sent to it when an
empty queue receives a message and there are no waiting receivers
• In this way, a process can continue executing whilst waiting for messages to arrive on one or more message queues
• It is also possible for a process to wait for a signal to arrive; this allows
the equivalent of selective waiting to be implemented
• If the process is multi-threaded, each thread is considered to be a
potential sender/receiver in its own right
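A minimal sketch of that notification mechanism via mq_notify, assuming SIGUSR1 as the chosen signal; the names are illustrative and the registration is one-shot. Build on Linux with cc notify_demo.c -lrt.

/* Sketch: ask for SIGUSR1 when a message arrives at an empty queue
 * with no waiting receivers, so the process can compute meanwhile. */
#include <fcntl.h>
#include <mqueue.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>

static volatile sig_atomic_t got_msg = 0;
static void on_sigusr1(int sig) { (void)sig; got_msg = 1; }

int main(void)
{
    struct mq_attr attr = { .mq_maxmsg = 4, .mq_msgsize = 64 };
    mqd_t q = mq_open("/notify_demo", O_CREAT | O_RDWR, 0600, &attr);

    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_sigusr1;
    sigaction(SIGUSR1, &sa, NULL);

    sigset_t block, old;
    sigemptyset(&block);
    sigaddset(&block, SIGUSR1);
    sigprocmask(SIG_BLOCK, &block, &old);     /* avoid a wakeup race  */

    struct sigevent sev = { .sigev_notify = SIGEV_SIGNAL,
                            .sigev_signo  = SIGUSR1 };
    mq_notify(q, &sev);                       /* one-shot registration */

    mq_send(q, "wake", 5, 0);                 /* triggers the signal  */
    while (!got_msg)
        sigsuspend(&old);                     /* wait for the signal  */

    char buf[64];
    mq_receive(q, buf, sizeof buf, NULL);
    printf("notified, drained: %s\n", buf);

    mq_close(q);
    mq_unlink("/notify_demo");
    return 0;
}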
Remote Procedure Call (RPC)
Remote Procedure Call is a procedure P that a caller process C gets a server process S to execute, as if C had executed P in C's own address space.
RPCs support:
– distributed computing at a higher level than sockets
– architecture/OS-neutral passing of simple and complex data types
– common application needs like name resolution, security, etc.
Diagram: the caller process calls the procedure and waits for the reply; the server process receives the request and starts the procedure execution; procedure P executes; the server sends the reply and waits for the next request, while the caller resumes execution. A minimal sketch of this request/reply pattern follows.
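The sketch below imitates that pattern with a socketpair and fork in place of a real network, stub generator and marshalling layer; a real RPC system would convert arguments to an architecture-neutral format (XDR, in ONC RPC). The procedure and all names are invented for illustration.

#include <stdio.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

/* The "remote" procedure P, executed in the server's address space. */
static int square(int x) { return x * x; }

int main(void)
{
    int sv[2];
    socketpair(AF_UNIX, SOCK_STREAM, 0, sv);

    if (fork() == 0) {                  /* server process S            */
        int arg, res;
        read(sv[1], &arg, sizeof arg);  /* receive request             */
        res = square(arg);              /* procedure P executes        */
        write(sv[1], &res, sizeof res); /* send reply (a real server   */
        return 0;                       /* would loop for the next)    */
    }

    /* Caller process C: send the argument and block until the reply  */
    /* arrives, as if P had executed in C's own address space.        */
    int arg = 7, res;
    write(sv[0], &arg, sizeof arg);     /* call procedure              */
    read(sv[0], &res, sizeof res);      /* ... and wait for reply      */
    printf("square(%d) = %d\n", arg, res);

    wait(NULL);
    return 0;
}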
Remote Procedure Call (RPC)
• Allow programs to call procedures located on other machines.
• Traditional (synchronous) RPC and asynchronous RPC.
Remote Procedure Call (RPC)
RPC Standards
There are at least three widely used forms of RPC:
– the Open Network Computing (ONC) version of RPC
– the Distributed Computing Environment (DCE) version of RPC
– Microsoft's COM/DCOM proprietary standard
ONC RPC, specified in RFC 1831, is a widely available RPC standard on UNIX and other OSs; it was developed by Sun Microsystems.
DCE RPC is a widely available standard on many OSs; it supports RPCs, a directory of RPC servers, and security services.
COM is a proprietary, adaptive extension of the DCE standard.
Java RMI
Remote Method Invocation (RMI) is Java's object-oriented counterpart of RPC: a client invokes methods on a remote object as if it were local.
Robustness
• Exception handling in high level languages
• Four Types of Communication Faults
– A message transmitted from a node does not reach its intended
destinations
– Messages are not received in the same order as they were sent
– A message gets corrupted during its transmission
– A message gets replicated during its transmission
Failures in RPC
If a remote procedure call terminates abnormally (the timeout expires), there are four possibilities.
– The receiver did not receive the call message.
– The reply message did not reach the sender.
– The receiver crashed during the call execution and either remains crashed or does not resume execution after crash recovery.
– The receiver is still executing the call, in which case the execution
could interfere with subsequent activities of the client.
Concurrency. Summary
• semaphore — a non-negative integer that can only be acted upon
by WAIT and SIGNAL atomic procedures
• Two more structured primitives are: condition critical regions and
monitors
• Suspension in a monitor is achieved using condition variable
• POSIX mutexes and condition variables give monitors with a procedural interface (see the sketch after this list)
• Ada’s protected objects give structured mutual exclusion and high-level synchronization via barriers
• Java’s synchronized methods provide monitors within an object-oriented framework
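A minimal sketch connecting two of these points: a semaphore (a non-negative integer acted upon by WAIT and SIGNAL) built from a POSIX mutex and condition variable, i.e. a monitor with a procedural interface. The type and function names are illustrative. Build with cc sema.c -lpthread.

#include <pthread.h>

typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  nonzero;
    unsigned        value;          /* the non-negative integer */
} sema_t;

void sema_init(sema_t *s, unsigned initial)
{
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->nonzero, NULL);
    s->value = initial;
}

/* WAIT: suspend inside the "monitor" until the value is positive. */
void sema_wait(sema_t *s)
{
    pthread_mutex_lock(&s->lock);
    while (s->value == 0)               /* guard re-checked on wakeup */
        pthread_cond_wait(&s->nonzero, &s->lock);
    s->value--;
    pthread_mutex_unlock(&s->lock);
}

/* SIGNAL: increment the value and wake one suspended process. */
void sema_signal(sema_t *s)
{
    pthread_mutex_lock(&s->lock);
    s->value++;
    pthread_cond_signal(&s->nonzero);
    pthread_mutex_unlock(&s->lock);
}

int main(void)
{
    sema_t s;
    sema_init(&s, 1);
    sema_wait(&s);      /* enter a critical section */
    sema_signal(&s);    /* leave it                 */
    return 0;
}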
State Model
• A process executes three types of events: internal
actions, send actions, and receive actions.
• A global state: a collection of local states and the state
of all the communication channels.
System structure from logical point of view.
© Jie Wu, Distributed System Design, 1999
Thread
• lightweight process (maintains minimal information in its context)
• multiple threads of control per process
• multithreaded servers (vs. single-threaded processes)
A multithreaded server in a dispatcher/worker model.
© Jie Wu, Distributed System Design, 1999
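A minimal sketch of the dispatcher/worker model with POSIX threads; the request representation (an integer), queue size and shutdown convention are invented for illustration. Build with cc workers.c -lpthread.

#include <pthread.h>
#include <stdio.h>

#define NWORKERS 3
#define NREQS    6
#define QSIZE    16

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  nonempty = PTHREAD_COND_INITIALIZER;
static int queue[QSIZE], head = 0, tail = 0;

/* Worker: repeatedly take a request from the shared queue and serve it. */
static void *worker(void *arg)
{
    long id = (long)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (head == tail)                 /* wait for a request    */
            pthread_cond_wait(&nonempty, &lock);
        int req = queue[head++];
        pthread_mutex_unlock(&lock);
        if (req < 0) return NULL;            /* poison pill: shut down */
        printf("worker %ld serving request %d\n", id, req);
    }
}

int main(void)
{
    pthread_t w[NWORKERS];
    for (long i = 0; i < NWORKERS; i++)
        pthread_create(&w[i], NULL, worker, (void *)i);

    /* Dispatcher: accept requests and hand them to the workers. */
    for (int r = 0; r < NREQS + NWORKERS; r++) {
        pthread_mutex_lock(&lock);
        queue[tail++] = (r < NREQS) ? r + 1 : -1;   /* -1 stops one worker */
        pthread_cond_signal(&nonempty);
        pthread_mutex_unlock(&lock);
    }
    for (int i = 0; i < NWORKERS; i++)
        pthread_join(w[i], NULL);
    return 0;
}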
Real-time ordering of events
Diagram: real-time ordering of events. Processes X, Y and Z perform send and receive events, exchanging messages m1, m2 and m3 against physical time; a participant A receives m1, m2 and m3 at real times t1, t2 and t3.
Instructor’s Guide for Coulouris, Dollimore and Kindberg Distributed Systems: Concepts and Design Edn. 3
© Addison-Wesley Publishers 2000
The happened-before relation
The happened-before relation (denoted by →) is defined as follows:
Rule 1: If a and b are events in the same process and a was executed before b, then a → b.
Rule 2: If a is the event of sending a message by one process and b is the event of receiving that message by another process, then a → b.
Rule 3: If a → b and b → c, then a → c.
Relationship between two events.
Two events a and b are causally related if a → b or b → a.
Two distinct events a and b are said to be concurrent if neither a → b nor b → a (denoted as a || b).
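As an aside not on the slides, Lamport's logical clocks are the classic mechanism consistent with this relation: if a → b then L(a) < L(b). A minimal sketch, with the event mapping chosen for illustration:

#include <stdio.h>

typedef struct { int clock; } process_t;

static int max(int a, int b) { return a > b ? a : b; }

/* Rule 1: internal and send events tick the local clock. */
static int local_event(process_t *p) { return ++p->clock; }

/* A send is a local event whose timestamp travels with the message. */
static int send_event(process_t *p) { return local_event(p); }

/* Rule 2: a receive advances past the sender's timestamp. */
static int receive_event(process_t *p, int msg_ts)
{
    p->clock = max(p->clock, msg_ts) + 1;
    return p->clock;
}

int main(void)
{
    process_t a = {0}, b = {0};
    int t1 = send_event(&a);            /* a sends m1:    L = 1 */
    int t2 = receive_event(&b, t1);     /* b receives m1: L = 2 */
    int t3 = send_event(&b);            /* b sends m2:    L = 3 */
    printf("L = %d < %d < %d, respecting ->\n", t1, t2, t3);
    return 0;
}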
A time-space representation
A time-space view of a distributed system.
Rule 1:
a0 → a1 → a2 → a3
b0 → b1 → b2 → b3
c0 → c1 → c2 → c3
Rule 2:
a0 → b3
b1 → a3; b2 → c1; b0 → c2
Snapshot of Global States
A simple distributed algorithm to capture a consistent global state.
A system with three processes Pi, Pj, and Pk.
Organization
General concepts.
Types of distributed systems.
Communication models.
Concurrency and distributed systems
Examples
Theoretical foundations
We have seen a state model defined by:
A process executes three types of events: internal actions,
send actions, and receive actions.
A global state: a collection of local states and the state of all
the communication channels.
Diagram: processes connected by communication channels, with send and receive events passing between them.
Properties
• Safety: the system (program) never enters a bad state.
• Liveness: the system (program) eventually enters a good
state.
– Examples of safety property: partial correctness, mutual exclusion,
and absence of deadlock.
– Examples of liveness property: termination and eventual entry to a
critical section.
Three Ways to Demonstrate the Properties
– Testing and debugging (run the program and see what happens)
– Operational reasoning (exhaustive case analysis)
– Assertional reasoning (abstract analysis)
Concurrency and process algebras
A formal study of concurrency enables:
• understanding the essential nature of concurrency
• reasoning about the behavior of concurrent systems
• developing tools to aid in producing correct systems
The pi-calculus of Robin Milner and its forerunner CCS provide:
• an algebra (operators, expressions, reaction rules)
• an interpretation for concurrent/communicating/mobile processes
A process is an autonomous entity possessing named ports through
which it may communicate with other processes. The name of the
process and its ports are introduced as:
ProcessName(port-list) = behavior of the process
In the description of the process’ behavior, port names with overbars are
interpreted as “output” ports while names without overbars are often
interpreted as “input’” ports.
The process below models a simple client that has one output port,
“request” and one input port, “reply”.
A calculus of communicating systems (CCS)
A Simple Example
Agent C
Dynamic system is network of agents.
Each agent has own identity persisting over time.
Agent performs actions (external communications or internal actions).
Behavior of a system is its (observable) capability of communication.
Agent has labeled ports.
Input port in.
Output port out.
Behavior of C:
C := in(x).C'(x)
C'(x) := out(x).C
The behaviour can also be represented by: C:= in(x).out(x).C
A calculus of communicating systems (CCS)
Behaviour Descriptions (I)
Agent names can take parameters.
Prefix in(x)
Handshake in which value is received at port in and becomes the value of variable x.
Agent expression in(x).C'(x)
Perform handshake and proceed according to definition of C'.
Agent expression out(x).C
Output the value of x at port out and proceed according to the definition of C.
Scope of local variables:
Input prefix introduces variable whose scope is the agent expression C.
Formal parameter of defining equation introduces variable whose scope is the
equation.
A calculus of communicating systems (CCS)
Behaviour Descriptions (II)
C := in(x).out(x).C
A := in(x).in(y).out(x).out(y).A
How do behaviours differ?
A inputs two values and outputs two values.
C inputs and outputs a single value.
Agent expression C^C.
Combinator A1^A2 (defined later).
Agent formed by linking out of A2 to in of A1.
^ is associative.
A calculus of communicating systems (CCS)
Behaviour Descriptions (III)
Diagram: agent D with input ports in and ackin and output ports out and ackout.
To define an agent D whose interface looks like the figure:
D := in(x).out(x).ackout.ackin.D
Note that there are no value parameters in the actions ackout and ackin; these actions represent pure synchronization between agents.
We can also define a linking combinator ^; it links the two right-hand ports of the first agent to the two left-hand ports of the second. Thus the combination of n copies of D will be defined by:
D(n) := D ^ D ^ ... ^ D   (n times)
A calculus of communicating systems (CCS)
Example
Let us define a simple semaphore as follows:
Sem := wait.signal.Sem
Diagram: agent Sem with ports wait and signal.
Equivalently, in two steps:
Sem := wait.Sem'
Sem' := signal.Sem
We need other agents in order for the system to evolve (their wait and signal stand for the complementary output actions, whose overbars are lost in this rendering):
Process := wait.Process'
Process' := signal.Process
The total behaviour will be: (Process | Sem)\{wait, signal}
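By the composition and restriction rules introduced on the following slides, this restricted system can evolve only through internal handshakes on wait and signal; using the ->tau notation defined later:
(Process | Sem)\{wait, signal}
  ->tau (Process' | Sem')\{wait, signal}   (handshake on wait)
  ->tau (Process | Sem)\{wait, signal}     (handshake on signal)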
A calculus of communicating systems (CCS)
The five operations of the calculus are:
Operation     Example(s)
Prefix        in(x).P, geth.Q
Summation     P + Q
Composition   P | Q
Restriction   P \ {l1,...,ln}
Relabeling    P[l'1/l1,...,l'n/ln]
A calculus of communicating systems (CCS)
Summation
Basic combinator '+'
P+Q behaves like P or like Q.
When one performs its first action, other is discarded.
If both alternatives are allowed, selection is non-deterministic.
Combining forms
Summation P+Q of two agents.
Sequencing alpha.P of action alpha and agent P.
Different levels of abstractions
Agent can be expressed directly in terms of its interaction with environment.
Agent can be expressed indirectly in terms of its composition of agents.
A calculus of communicating systems (CCS)
Examples
A vending machine (a big chocolate costs 2p, a small one costs 1p):
V := 2p.big.collect.V + 1p.little.collect.V
A multiplier, agent Twice with input port in and output port out (output actions may take expressions):
Twice := in(x).out(2*x).Twice
A calculus of communicating systems (CCS)
Composition
Composite agent A|B:
A := a.A', A' := c.A
B := c.B', B' := b.B
(Diagram: A and B linked through the complementary port c; A also has port a, B has port b.)
A' ->c A allows A'|B ->c A|B
A' ->c A and B ->c B' allows A'|B ->tau A|B'
Completed (perfect) action tau:
– simultaneous action of both agents
– internal to the composed agent
Act = L ∪ {tau}
Internal versus external actions:
– internal actions are ignored; only external actions are visible
– two systems are equivalent if they exhibit the same pattern of external actions
– P ->tau P1 ->tau ... ->tau Pn is equivalent to P ->tau Pn
A calculus of communicating systems (CCS)
Restrictions
Restriction (A|B)\c
P ->alpha P' allows P\L ->alpha P'\L   (if alpha not in L)
Transition (derivation) tree:
(A|B)\c
  ->a (A'|B)\c
    ->tau (A|B')\c
      ->b (A|B)\c ->a (A'|B)\c ...
      ->a (A'|B')\c ->b (A'|B)\c ...
A calculus of communicating systems (CCS)
Transition Graph
(A|B)\c = a.tau.C
C := a.b.tau.C + b.a.tau.C
Composite system: behaviour defined without use of the composition combinator | or the restriction combinator \.
Internal communication: alpha.tau.P = alpha.P, so
(A|B)\c = a.D
D := a.b.D + b.a.D
Bibliography (I)
George Coulouris, Jean Dollimore and Tim Kindberg. Distributed Systems: Concepts and Design, 3rd edition. Addison-Wesley, 2001. http://www.cdk3.net/
Andrew S. Tanenbaum and Maarten van Steen. Distributed Systems: Principles and Paradigms. Prentice-Hall, 2002. http://www.cs.vu.nl/~ast/books/ds1/
M. L. Liu. Distributed Computing: Concepts and Application. Addison-Wesley, 2003. http://www.csc.calpoly.edu/~mliu/book/index.html
P. K. Sinha. Distributed Operating Systems: Concepts and Design. IEEE Press, 1993.
Sape J. Mullender, editor. Distributed Systems, 2nd edition. ACM Press, 1993. http://wwwhome.cs.utwente.nl/~sape/gos0102.html
Bibliography (II)
J. Wu. Distributed System Design. CRC Press, 1999.
Gregory R. Andrews. Foundations of Multithreaded, Parallel, and Distributed Programming. Addison-Wesley, 2000. http://www.cs.arizona.edu/people/greg/
J. Magee and J. Kramer. Concurrency: State Models and Java Programs. John Wiley, 1999. http://www-dse.doc.ic.ac.uk/concurrency/
Jean Bacon. Concurrent Systems: Operating Systems, Database and Distributed Systems, an Integrated Approach. Addison-Wesley, 2nd edition, 1998. http://www.cl.cam.ac.uk/users/jmb/cs.html