Distributed Operating Systems
CS551
Colorado State University Computer Science Department
at Lockheed-Martin
Lecture 3 -- Spring 2001
7 February 2001

Topics

 Real Time Systems (and networks)
 Interprocess Communication (IPC)
– message passing
– pipes
– sockets
– remote procedure call (RPC)
 Memory Management
Real-time systems

 Real-time systems:
– systems where “the operating system must ensure that certain actions are taken within specified time constraints” Chow & Johnson, Distributed Operating Systems & Algorithms, Addison-Wesley (1997).
– systems that “interact with the external world in a way that involves time … When the answer is produced is as important as which answer is produced.” Tanenbaum, Distributed Operating Systems, Prentice-Hall (1995).
Real-time system examples

 Examples of real-time systems
– automobile control systems
– stock trading systems
– computerized air traffic control systems
– medical intensive care units
– robots
– space vehicle computers (space & ground)
– any system that requires bounded response time
Soft real-time systems

 Soft real-time systems
– “missing an occasional deadline is all right” Tanenbaum, Distributed Operating Systems, Prentice-Hall (1995).
– “have deadlines but are judged to be in working order as long as they do not miss too many deadlines” Chow & Johnson, Distributed Operating Systems & Algorithms, Addison-Wesley (1997).
– Example: multimedia system
Hard real-time systems

 Hard real-time systems
– “only judged to be correct if every task is guaranteed to meet its deadline” Chow & Johnson, Distributed Operating Systems & Algorithms, Addison-Wesley (1997).
– “a single missed deadline … is unacceptable, as this might lead to loss of life or an environmental catastrophe” Tanenbaum, Distributed Operating Systems, Prentice-Hall (1995).
Hard/Soft real-time systems

 “A job should be completed before its deadline to be of use (in soft real-time systems) or to avert disaster (in hard real-time systems). The major issue in the design of real-time operating systems is the scheduling of jobs in such a way that a maximum number of jobs satisfy their deadlines.” Singhal & Shivaratri, Advanced Concepts in Operating Systems, McGraw-Hill (1994).
Firm real-time systems

 Firm real-time systems
– “similar to a soft real-time system, but tasks that have missed their deadlines are discarded” Tanenbaum, Distributed Operating Systems, Prentice-Hall (1995).
– “where missing a deadline means you have to kill off the current activity, but the consequence is not fatal” Chow & Johnson, Distributed Operating Systems & Algorithms, Addison-Wesley (1997).
– E.g. partially filled bottle on assembly line
Types of real-time systems

 Re-active:
– interacts with environment
 Embedded:
– controls specialized hardware
 Event-triggered:
– unpredictable, asynchronous
 Time-triggered:
– predictable, synchronous, periodic
Myths of real-time systems (Tanenbaum)

 “Real-time systems are about writing device drivers in assembly code.”
– involve more than device drivers
 “Real-time computing is fast computing.”
– computer-powered telescopes
 “Fast computers will make real-time systems obsolete.”
– no -- only more will be expected of them
Message passing

 A form of communication between two processes
 A physical copy of the message is sent from one process to the other
 Blocking vs non-blocking
Blocking message passing

 Sending process must wait after send until an acknowledgement is made by the receiver
 Receiving process must wait for expected message from sending process
 A form of synchronization
 Receipt determined
– by polling common buffer
– by interrupt
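A minimal sketch of the blocking behavior above, on a POSIX system, using two kernel pipes via Python's os module (this construction is illustrative, not from the slides): the sender does not return from its send until the receiver acknowledges, which is exactly the synchronization the slide describes.

```python
import os

# Two one-way pipes: one carries the message, the other the acknowledgement.
data_r, data_w = os.pipe()   # sender -> receiver
ack_r, ack_w = os.pipe()     # receiver -> sender

pid = os.fork()
if pid == 0:                          # child: receiver
    msg = os.read(data_r, 64)         # blocks until the message arrives
    os.write(ack_w, b"ACK")           # acknowledge receipt
    os._exit(0)
else:                                 # parent: sender
    os.write(data_w, b"hello")
    ack = os.read(ack_r, 3)           # blocks until the receiver acknowledges
    os.waitpid(pid, 0)
    print(ack.decode())               # -> ACK
```

Both reads block, so the two processes synchronize at the exchange, as in Figure 3.1.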
Figure 3.1 Blocking Send and Receive Primitives: No Buffer. (Galli, p.58)
Figure 3.2 Blocking Send and Receive Primitives with Buffer. (Galli, p.58)
Non-blocking message passing

 Asynchronous communication
 Sending process may continue immediately after sending a message -- no wait needed
 Receiving process accepts and processes message -- then continues on
 Control
– buffer -- receiver can tell if message still there
– interrupt
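The buffer-polling case above can be sketched on a POSIX system by putting a pipe's read end in non-blocking mode (an illustrative construction, not from the slides): the read returns immediately, so the receiver can tell whether a message is waiting instead of stalling.

```python
import os

r, w = os.pipe()
os.set_blocking(r, False)          # make reads return immediately

try:
    os.read(r, 64)                 # nothing has been sent yet
    arrived = True
except BlockingIOError:            # read would have blocked: buffer empty
    arrived = False
print(arrived)                     # -> False

os.write(w, b"ping")               # sender continues immediately after send
msg = os.read(r, 64)               # now a message is waiting in the buffer
print(msg)                         # -> b'ping'
```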
Process Address

 One-to-one addressing
– explicit
– implicit
 Group addressing
– one-to-many
– many-to-one
– many-to-many
One-to-one Addressing

 Explicit address
– specific process must be given as parameter
 Implicit address
– name of service used as parameter
– willing to communicate with any client
– acts like send_any and receive_any
Figure 3.3 Implicit Addressing for Interprocess Communication. (Galli, p.59)
Figure 3.4 Explicit Addressing for Interprocess Communication. (Galli, p.60)
Group addressing

 One-to-many
– one sender, multiple receivers
– broadcast
 Many-to-one
– multiple senders, but only one receiver
 Many-to-many
– difficult to assure order of messages received
Figure 3.5 One-to-Many Group Addressing. (Galli, p.61)
Many-to-many ordering

 Incidental ordering
– least structured, fastest
– acceptable if all related messages received in any order
 Uniform ordering
– all receivers receive messages in same order
 Universal ordering
– all messages must be received in exactly the same order as sent
Figure 3.6 Uniform Ordering. (Galli, p.62)
Pipes

 “interprocess communication APIs”
 “implemented by a finite-size, FIFO byte-stream buffer maintained by the kernel”
 “serves as a unidirectional communication link so that one process can write data into the tail end of the pipe while another process may read from the head end of the pipe”
 Chow & Johnson, Distributed Operating Systems & Algorithms, Addison-Wesley (1997).
Pipes, continued

 “created by a pipe system call, which returns two pipe descriptors (similar to a file descriptor), one for reading and the other for writing … using ordinary write and read operations” (C&J)
 “exists only for the time period when both reader and writer processes are active” (C&J)
 “the classical producer and consumer IPC problem” (C&J)
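The pipe system call described above maps directly onto Python's os module on a POSIX system; this sketch (illustrative, not from the slides) shows the producer/consumer pattern C&J mention, with a parent writing into the tail end and a forked child reading from the head end.

```python
import os

r, w = os.pipe()                    # kernel-maintained FIFO byte-stream buffer

pid = os.fork()
if pid == 0:                        # child: consumer reads from the head end
    os.close(w)                     # close the unused write descriptor
    data = os.read(r, 1024)
    os._exit(0 if data == b"produced bytes" else 1)
else:                               # parent: producer writes into the tail end
    os.close(r)
    os.write(w, b"produced bytes")  # ordinary write operation
    os.close(w)
    _, status = os.waitpid(pid, 0)
    code = os.waitstatus_to_exitcode(status)
    print(code)                     # -> 0 (child saw the produced bytes)
```

The descriptors are shared because the child was forked from the parent — which is why, as the next slides note, an unnamed pipe only works between related processes.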
Figure 3.7 Interprocess Communication Using Pipes. (Galli, p.63)
Unnamed pipes

 “Pipe descriptors are shared by related processes” (e.g. parent, child)
– Chow & Johnson, Distributed Operating Systems & Algorithms, Addison-Wesley (1997).
 Such a pipe is considered unnamed
 Cannot be used by unrelated processes
– a limitation
Named pipes

 “For unrelated processes, there is a need to uniquely identify a pipe since pipe descriptors cannot be shared. One solution is to replace the kernel pipe data structure with a special FIFO file. Pipes with a path name are called named pipes.” (C&J)
 “Since named pipes are files, the communicating processes need not exist concurrently” (C&J)
Named pipes, continued

 “Use of named pipes is limited to a single domain within a common file system.” (C&J)
– a limitation
– …. Therefore, sockets….
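A sketch of the named-pipe idea (illustrative, not from the slides), on a POSIX system: mkfifo creates the special FIFO file C&J describe, and the two sides then find it by path name rather than by shared descriptors. Here a fork merely stands in for the second process; only the path is shared.

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "fifo")
os.mkfifo(path)                      # create the special FIFO file

pid = os.fork()
if pid == 0:                         # writer: opens the FIFO by path name
    fd = os.open(path, os.O_WRONLY)
    os.write(fd, b"via named pipe")
    os.close(fd)
    os._exit(0)
else:                                # reader: also opens it by path name
    fd = os.open(path, os.O_RDONLY)
    msg = os.read(fd, 64)
    os.close(fd)
    os.waitpid(pid, 0)
    os.unlink(path)                  # the FIFO file persists until removed
    print(msg.decode())              # -> via named pipe
```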
Sockets

 “a communication endpoint of a communication link managed by the transport services” (C&J)
 “created by making a socket system call that returns a socket descriptor for subsequent network I/O operations, including file-oriented read/write and communication-specific send/receive” (C&J)
Figure 1.4 The ISO/OSI Reference Model. (Galli, p.9)
Sockets, continued

 “A socket descriptor is a logical communication endpoint (LCE) that is local to a process; it must be associated with a physical communication endpoint (PCE) for data transport. A physical communication endpoint is specified by a network host address and transport port pair. The association of a LCE with a PCE is done by the bind system call.” (C&J)
Types of socket communication

 Unix
– local domain
– a single system
 Internet
– world-wide
– includes port and IP address
Types, continued

 Connection-oriented
– uses TCP
 “a connection-oriented reliable stream transport protocol” (C&J)
 Connectionless
– uses UDP
 “a connectionless unreliable datagram transport protocol” (C&J)
Connection-oriented socket communication

 Server: socket → bind → listen → accept → read → write
 Client: socket → connect → write → read
– accept and connect form the rendezvous; the client's write carries the request, and the server's write carries the reply
 Adapted from Chow & Johnson
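The call sequence above can be sketched end to end over loopback TCP using Python's socket wrapper around the Unix primitives (an illustrative sketch; the serve helper and message contents are invented here, not from the slides):

```python
import socket
import threading

# Server side: socket -> bind -> listen -> accept -> read -> write
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))        # associate the LCE with a PCE (host, port)
server.listen(1)
port = server.getsockname()[1]       # ephemeral port chosen by the kernel

def serve():
    conn, _ = server.accept()        # rendezvous with the client's connect
    request = conn.recv(64)          # read the request
    conn.sendall(b"reply to " + request)  # write the reply
    conn.close()

t = threading.Thread(target=serve)
t.start()

# Client side: socket -> connect -> write -> read
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))  # rendezvous with the server's accept
client.sendall(b"request")           # write the request
reply = client.recv(64)              # read the reply
client.close()
t.join()
server.close()
print(reply.decode())                # -> reply to request
```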
Connectionless socket communication

 Each peer process: socket (creates an LCE) → bind (associates the LCE with a PCE)
 Peers then exchange datagrams directly with sendto/recvfrom
 Adapted from Chow & Johnson
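A sketch of the connectionless exchange above over loopback UDP (illustrative, not from the slides; UDP is unreliable in general, though on the loopback interface this normally succeeds): each peer binds its own PCE, and there is no listen/accept rendezvous.

```python
import socket

# Each peer: socket creates an LCE, bind associates it with a PCE.
peer_a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
peer_a.bind(("127.0.0.1", 0))
peer_b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
peer_b.bind(("127.0.0.1", 0))

# Datagrams travel directly with sendto/recvfrom -- no connection.
peer_a.sendto(b"datagram", peer_b.getsockname())
data, sender = peer_b.recvfrom(64)   # recvfrom also reports the sender's PCE
peer_b.sendto(b"echo:" + data, sender)
back, _ = peer_a.recvfrom(64)

peer_a.close()
peer_b.close()
print(back.decode())                 # -> echo:datagram
```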
Socket support

 Unix primitives
– socket, bind, connect, listen, send, receive, shutdown
– available through C libraries
 Java classes
– Socket
– ServerSocket
Figure 3.8 Socket Analogy. (Galli, p.66)
Figure 3.9 Remote Procedure Call Stubs. (Galli, p.73)
Figure 3.10 Establishing Communication for RPC. (Galli, p.74)
Table 3.1 Summary of IPC Mechanisms and Their Use in Distributed System Components. (Galli, p.77)

IPC Method        Distributed  Real-Time   Parallel
                  Components   Components  Components
Message Passing   Yes          Yes         Yes
Pipes             No           Yes         Yes
Sockets           Yes          Yes         Yes
RPC               Yes          Yes         Yes
Memory Management

 Review
 Simple memory model
 Shared memory model
 Distributed shared memory
 Memory migration
Virtual memory (pages, segments)

 Virtual memory
 Memory management unit
 Pages - uniform sized
 Segments - of different sizes
 Internal fragmentation
 External fragmentation
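A small worked example of the fragmentation distinction above (the page size and process size here are illustrative, not from the slides): uniform pages waste the unused tail of a process's last page (internal fragmentation), while segments fit exactly but leave variable-sized holes between them (external fragmentation).

```python
import math

PAGE = 4096          # assumed uniform page size, 4 KB
need = 10_000        # assumed process size in bytes

# Pages are uniform, so the allocation is rounded up to whole pages;
# the unused tail of the last page is internal fragmentation.
pages = math.ceil(need / PAGE)            # 3 pages
internal_frag = pages * PAGE - need       # 3*4096 - 10000 = 2288 bytes

# A segment would be sized to fit (no internal fragmentation), at the
# cost of external fragmentation: holes between variable-sized segments.
print(pages, internal_frag)               # -> 3 2288
```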
Figure 4.1 Fragmentation in Page-Based Memory versus a Segment-Based Memory. (Galli, p.83)
Page replacement algorithms

 Page fault
 Thrashing
 First fit
 Best fit
– LRU (NRU)
– second chance
 Worst fit
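The second-chance policy named above can be sketched as follows (one common formulation, not taken from Galli): pages are kept in FIFO order, but a page whose reference bit is set is spared once, with its bit cleared and the page moved to the back of the queue.

```python
from collections import deque

def second_chance_evict(queue, referenced):
    """queue: deque of page ids, oldest first.
    referenced: set of page ids whose reference bit is set.
    Returns the id of the evicted page."""
    while True:
        page = queue.popleft()
        if page in referenced:          # give this page a second chance
            referenced.discard(page)    # clear its reference bit
            queue.append(page)          # move it to the back of the FIFO
        else:
            return page                 # evict the oldest unreferenced page

# Page 1 is oldest but was referenced, so it is spared and page 2 is evicted.
q = deque([1, 2, 3])
victim = second_chance_evict(q, referenced={1})
print(victim, list(q))                  # -> 2 [3, 1]
```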
Figure 4.2 Algorithms for Choosing Segment Location. (Galli, p.84)
Simple memory model

 Parallel UMA systems
– thrashing can occur when parallel processes each want its own pages in memory
– time to service all memory requests expensive, unless memory large
– virtual memory expensive
– caching can be expensive
Shared memory model

 NUMA
 Memory bottleneck
 Cache consistency
– Snoopy cache
– enforce critical regions
– disallow caching shared memory data
Figure 4.3 Snoopy Cache. (Galli, p.89)
Distributed shared memory

 BBN Butterfly
 Reader-writer problems
Figure 4.4 Centralized Server for Multiple Reader/Single Writer DSM. (Galli, p.92)
Figure 4.5 Partially Distributed Invalidation for Multiple Reader/Single Writer DSM. (Galli, p.92)
Figure 4.6 Dynamic Distributed Multiple Reader/Single Writer DSM. (Galli, p.93)
Figure 4.7 Dynamic Data Allocation for Multiple Reader/Single Writer DSM. (Galli, p.96)
Figure 4.8 Stop-and-Copy Memory Migration. (Galli, p.99)