Operating Systems Assignment 2 Sridhar Godavarthy (U32086851)

Q 3.4 With respect to the RPC mechanism, consider the “exactly once” semantic. Does the algorithm
for implementing this semantic execute correctly even if the “ACK” message back to the client is lost
due to a network problem? Describe the sequence of messages and whether "exactly once" is still
preserved.
A)
Assumptions: the message was received at the server, and the server sent an ACK.
Yes, it will execute correctly:
1. Client makes an RPC call to the server.
2. Server executes the call and sends an ACK message, which is lost.
3. Client times out and resends the same RPC call (with the original timestamp).
4. Server receives the RPC call, matches it against its history of already-executed calls by timestamp, does not re-execute it, and only resends the ACK.
5. Steps 3 and 4 repeat until the client receives the ACK.
Step 4 ensures that "exactly once" is preserved.
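Step 4 can be sketched in C as a server-side duplicate filter. This is an illustrative sketch only: the names (handle_rpc, rpc_call, HISTORY_MAX) and the flat history table are my inventions, not part of any real RPC library; a real server would also persist and eventually prune the history.

```c
#define HISTORY_MAX 128

/* One entry per RPC call the server has already executed. */
struct rpc_call { int client_id; long timestamp; };

static struct rpc_call history[HISTORY_MAX];
static int history_len = 0;
static int exec_count = 0;   /* how many times the real work actually ran */

/* Returns 1 if the call was executed, 0 if it was a duplicate.
   In either case the server would send (or resend) an ACK. */
int handle_rpc(int client_id, long timestamp)
{
    /* Step 4: check the history for a call with the same timestamp. */
    for (int i = 0; i < history_len; i++)
        if (history[i].client_id == client_id &&
            history[i].timestamp == timestamp)
            return 0;                    /* duplicate: skip work, resend ACK */

    if (history_len < HISTORY_MAX)
        history[history_len++] = (struct rpc_call){ client_id, timestamp };
    exec_count++;                        /* the actual procedure runs here */
    return 1;
}
```

A retransmitted call (same client, same timestamp) is ACKed but not re-executed, which is exactly what preserves "exactly once".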
Q 3.5 Assume that a distributed system is susceptible to server failure. What mechanisms would be
required to guarantee the “exactly once” semantics for execution of RPCs?
Use a mirrored server configuration with hot swap (failover). By maintaining the history of requests (which includes the client, request timestamp, and request status) on storage that survives the failure and is visible to the mirror, the backup server can respond to any client in a manner that adheres to the "exactly once" semantic.
Q 3.6 Describe the differences among short-term, medium-term and long-term scheduling.
Short-term scheduling: selects from processes already in memory that are ready to execute. It runs frequently, selecting a new process roughly once every 100 ms.
Medium-term scheduling: swaps processes out of and into memory, reducing the degree of multiprogramming. This scheduler adapts to changes in memory requirements and improves the process mix.
Long-term scheduling: selects processes from the pool on a mass-storage device and loads them into memory. It executes much less frequently, often with minutes between invocations.
Q 3.7 Describe the actions taken by a kernel to context-switch between processes.
1. Saves the context of the currently running process into its PCB:
a. CPU registers
b. Process state
c. Memory-management information
2. Preserves the address space of the current process.
3. Suspends the current process.
4. Restores the context of the new process from its PCB.
Additional operations may be performed depending on the hardware architecture.
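The steps above can be sketched as plain C. This is purely illustrative: a real PCB holds hardware-specific register state and the save/restore is done in architecture-specific assembly; the struct fields, constants, and function names here are my own.

```c
/* Illustrative PCB: the fields mirror items 1a-1c above. */
struct pcb {
    unsigned long registers[16];    /* a. CPU registers */
    int state;                      /* b. process state (0 = RUNNING, 1 = READY) */
    unsigned long page_table_base;  /* c. memory-management information */
};

/* Mirrors steps 1-4: save the outgoing context, suspend the process,
   and load the incoming process's saved context. */
void context_switch(struct pcb *current, const struct pcb *next,
                    unsigned long live_regs[16])
{
    for (int i = 0; i < 16; i++)
        current->registers[i] = live_regs[i];  /* 1. save current context */
    current->state = 1;                        /* 3. suspend (mark READY) */
    for (int i = 0; i < 16; i++)
        live_regs[i] = next->registers[i];     /* 4. restore new context */
}

/* Tiny demo: after the switch, process B's saved register value is live. */
unsigned long demo_switch(void)
{
    struct pcb a = {{0}, 0, 0}, b = {{0}, 0, 0};
    unsigned long live[16] = {7};   /* process A currently "running" */
    b.registers[0] = 42;            /* process B's saved context */
    context_switch(&a, &b, live);
    return live[0];
}
```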
Q 3.11 Give an example of a situation in which ordinary pipes are more suitable than named pipes and
an example of a situation where named pipes are more suitable than ordinary pipes.
Named pipes can be used to listen for requests from other processes (similar to TCP/IP ports). If the calling processes know the name, they can send requests to it. Unnamed pipes cannot be used for this purpose.
Ordinary pipes are useful where communication needs to happen only between two specific processes known beforehand, typically a parent and the child it creates. Named pipes would involve too much overhead in such a scenario.
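The ordinary-pipe case can be sketched with POSIX pipe() and fork(); the function name run_pipe_demo is mine. It works precisely because both endpoints are known beforehand: the parent and the child it creates.

```c
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

/* Parent creates an anonymous pipe, forks a child that writes "hello"
   into it, then reads the message back. Returns bytes read, or -1. */
int run_pipe_demo(char *buf, size_t len)
{
    int fd[2];
    if (pipe(fd) == -1)
        return -1;

    pid_t pid = fork();
    if (pid == 0) {                        /* child: writer end only */
        close(fd[0]);
        const char *msg = "hello";
        write(fd[1], msg, strlen(msg) + 1);
        close(fd[1]);
        _exit(0);
    }
    close(fd[1]);                          /* parent: reader end only */
    ssize_t n = read(fd[0], buf, len);
    close(fd[0]);
    waitpid(pid, NULL, 0);
    return (int)n;
}
```

A named pipe (FIFO) would instead be created with mkfifo() under a filesystem name, so unrelated processes could open it.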
Q 3.12 Consider the RPC mechanism. Describe the undesirable consequences that could arise from
not implementing the “at most once” or the “exactly once” semantics. Describe possible uses for a
mechanism that has neither of these guarantees.
An RPC call could be executed multiple times, or never, depending on whether the acknowledgement is received. If an operation that changes state is repeated, the system may be left in an inconsistent state.
Such a mechanism can be used where only read-only (idempotent) operations are performed. Since these transactions do not cause a change in state, multiple invocations will not affect the correctness of the system.
Q 3.13 What will be the output at line A.
A) 5
Q 3.14 What are the benefits and disadvantages of each of the following?
a. Synchronous and Asynchronous communication
Synchronous communication is simple to implement, as the sender and receiver block and hence there is only one communication in flight at a time. However, the processes are blocked until the communication is complete and cannot perform any other work. In asynchronous communication, processes may handle multiple communications at once and therefore need mechanisms to identify, store, and respond to each message individually, as well as a mechanism to detect when a new message has arrived and break from the regular execution path to handle it. However, asynchronous communication has the advantage that the process's flow is not obstructed while waiting for a response.
b. Automatic and explicit buffering
In automatic buffering, the system provides the buffers and the program does not have to manage them. Buffer capacity grows dynamically with utilization, so the sender never blocks. This is complex to implement and may not be possible to implement in a memory-efficient way.
In explicit buffering, the system fixes the buffer size. The implementation is simple and memory usage is efficient, but the buffer cannot grow dynamically: at most n messages can reside in it, after which the sender must block. The requirements therefore need to be known beforehand.
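The explicit (fixed-size) case can be sketched as a bounded mailbox; the names (mailbox, mb_send, mb_recv, CAP) are mine. A send into a full mailbox fails here, which is the point at which a real sender would have to block.

```c
#define CAP 3   /* fixed capacity: at most n = 3 messages */

struct mailbox { int msgs[CAP]; int head, count; };

/* Returns 1 on success, 0 when the mailbox is full (sender must block). */
int mb_send(struct mailbox *mb, int msg)
{
    if (mb->count == CAP)
        return 0;
    mb->msgs[(mb->head + mb->count) % CAP] = msg;
    mb->count++;
    return 1;
}

/* Returns 1 on success, 0 when the mailbox is empty (receiver must block). */
int mb_recv(struct mailbox *mb, int *msg)
{
    if (mb->count == 0)
        return 0;
    *msg = mb->msgs[mb->head];
    mb->head = (mb->head + 1) % CAP;
    mb->count--;
    return 1;
}
```

Automatic buffering would instead grow the msgs array on demand, so mb_send would never fail.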
c. Send by copy and Send by Reference
In Send by Copy, a copy of the variable is sent. Any modifications to this variable in the callee will
not affect the variable in the caller.
In Send by Reference, an internal pointer to the original variable is sent. This allows modification of
the variable from the callee.
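In C this difference is just value versus pointer parameters; the function names below are mine.

```c
/* Send by copy: the callee gets its own copy, so the caller's
   variable is untouched. */
void set_by_copy(int x)       { x = 99; }

/* Send by reference: the callee gets a pointer to the original,
   so the assignment is visible to the caller. */
void set_by_reference(int *x) { *x = 99; }
```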
d. Fixed-size and Variable-size messages.
Fixed-size messages: system implementation is straightforward, as the required data structures are of fixed size. However, programming becomes harder because a message cannot exceed that size; larger data must be broken up or sent by a different mechanism.
Variable-size messages: system implementation is more complex, as the data structures need to grow dynamically. Programming, however, is easier, since the programmer need not worry about the size of the message.
Q 4.2 What are two differences between user level threads and kernel level threads? Under what
circumstances is one type better than the other?
User threads are managed without kernel support, whereas kernel threads are managed directly by the kernel. User threads belong to a process and must be mapped to kernel threads (1:1, n:1, or n:m) in order to run.
User threads are better when thread creation and switching must be cheap, since no kernel intervention is required. Kernel threads are better when threads may block on system calls or need to run in parallel on multiple processors, since the kernel can schedule another thread in the meantime.
Q 4.3 Describe the action the kernel takes to context-switch between kernel level threads.
Save the CPU register state (including stack pointer and program counter) of the current thread, then load the saved register state of the new thread. No address-space change is needed if both threads belong to the same process.
Q 4.7 Provide two programming examples in which multithreading does not perform better than a
single threaded solution
1. Simple, short programs. Eg: int main(void) { printf("hello world\n"); return 0; } The cost of creating and managing threads outweighs any benefit.
2. Inherently sequential programs. Eg. a program that counts the number of keystrokes a user has entered; each step depends on the previous one, so the work cannot be divided among threads.
Q 4.10 Which of the following components of program are shared across threads in a multithreaded
process?
Heap Memory
Global Variables
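A minimal pthreads sketch of this sharing (the helper names are mine): a worker thread writes to a global variable and to a heap cell, and after pthread_join the creating thread observes both writes, because all threads of a process share global data and the heap (each thread has only its own stack and registers).

```c
#include <pthread.h>
#include <stdlib.h>

int global_value = 0;            /* global data: shared by all threads */

static void *worker(void *arg)
{
    int *heap_cell = arg;        /* heap memory: also shared */
    global_value = 7;
    *heap_cell = 9;
    return NULL;
}

/* Spawn a worker, join it, and return the sum of both shared writes. */
int run_sharing_demo(void)
{
    int *heap_cell = malloc(sizeof *heap_cell);
    *heap_cell = 0;
    pthread_t t;
    pthread_create(&t, NULL, worker, heap_cell);
    pthread_join(t, NULL);       /* join makes the writes visible safely */
    int sum = global_value + *heap_cell;
    free(heap_cell);
    return sum;
}
```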
Q 4.12
Design is much simpler, as there is no need to handle threads and processes separately.
There is more overhead when using threads in Linux, since each thread carries process-like state, which works against the lightweight idea of threads. However, the advantage Linux offers is that the degree of resource sharing between threads can be controlled through the flags passed to clone().
Q 4.13 The program shown in Figure 4.14 uses Pthreads API. What would be the output from the
program at LINE C and LINE P?
C = 0; P = 5
Q 4.14
Kernel Threads < No. of processors
Since the user threads are multiplexed onto a smaller pool of kernel threads, some user threads must wait for a kernel thread. Each kernel thread maps onto an available processor, but because there are fewer kernel threads than processors, some processors remain unused. Hence, even though there are runnable user threads and idle processors, they cannot be brought together.
Kernel Threads = No. of processors
Every kernel thread can run on its own processor, so this is the scenario with maximum potential utilization. As one user thread is swapped off a kernel thread, the next user thread is mapped onto it. Of course, depending on the state of the threads and the work being performed (e.g. I/O-bound jobs), a processor may still sit idle while its kernel thread waits.
Kernel Threads > No. of processors
At any instant only as many kernel threads as there are processors can run; the remaining kernel threads wait. However, if a running kernel thread blocks (e.g. on I/O), the kernel can switch its processor to a waiting kernel thread, keeping the processors busy.
Q 5.3
a. FCFS: turnaround time = [(8 - 0) + (12 - 0.4) + (13 - 1)] / 3 = 10.53
b. Non-preemptive SJF: P1 starts first and, since there is no preemption, must complete. P2 and P3 arrive and wait in the ready queue. When P1 finishes at time 8, the shorter job P3 runs and finishes at 9; P2 then runs from 9 to 13.
Turnaround time = [(8 - 0) + (9 - 1) + (13 - 0.4)] / 3 = 9.53
c. With future knowledge, the CPU idles until time 1: P3 runs from 1 to 2, P2 from 2 to 6, and P1 from 6 to 14.
Turnaround time = [(2 - 1) + (6 - 0.4) + (14 - 0)] / 3 = 6.87
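The averages above can be checked mechanically (the helper names below are mine): each average turnaround time is the mean of (completion time - arrival time) over the three processes.

```c
#include <math.h>

/* a. FCFS: P1 done at 8, P2 at 12, P3 at 13. */
double fcfs_avg(void)
{
    return ((8 - 0) + (12 - 0.4) + (13 - 1)) / 3.0;
}

/* b. Non-preemptive SJF: P1 done at 8, P3 at 9, P2 at 13. */
double sjf_avg(void)
{
    return ((8 - 0) + (9 - 1) + (13 - 0.4)) / 3.0;
}

/* c. SJF with future knowledge (CPU idle until time 1):
   P3 done at 2, P2 at 6, P1 at 14. */
double future_sjf_avg(void)
{
    return ((2 - 1) + (6 - 0.4) + (14 - 0)) / 3.0;
}
```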
Q 5.9 Why is it important for the scheduler to distinguish I/O programs from CPU bound programs?
I/O-bound programs use very little CPU time and spend most of their time waiting for I/O, while CPU-bound processes consume the CPU whenever the I/O-bound processes are blocked. Hence, to give the I/O-bound programs a fair share of the CPU and keep the I/O devices busy, the scheduler must distinguish them so that it can give them higher priority when they request the CPU.
Q 5.12
a. Gantt charts (all processes arrive at time 0, in the order P1 to P5):
FCFS: P1 (0-10), P2 (10-11), P3 (11-13), P4 (13-14), P5 (14-19)
SJF: P2 (0-1), P4 (1-2), P3 (2-4), P5 (4-9), P1 (9-19)
Non-preemptive priority: P2 (0-1), P5 (1-6), P1 (6-16), P3 (16-18), P4 (18-19)
RR (quantum = 1): P1 P2 P3 P4 P5 P1 P3 P5 P1 P5 P1 P5 P1 P5 P1 P1 P1 P1 P1 (one time unit each, 0-19)
b. Turnaround time:

Process  FCFS  SJF  NPP  RR
P1       10    19   16   19
P2       11     1    1    2
P3       13     4   18    7
P4       14     2   19    4
P5       19     9    6   14
c. Waiting time:

Process  FCFS  SJF  NPP  RR
P1        0     9    6    9
P2       10     0    0    1
P3       11     2   16    5
P4       13     1   18    3
P5       14     4    1    9
Average   9.6   3.2  8.2  5.4

d. Shortest Job First results in the minimum average waiting time (3.2 units).
Q 5.14
a. The process gets scheduled twice as often and completes roughly twice as fast; its turnaround time and waiting time are reduced. Of course, if this were done for all processes, it would have no net effect.
b. Advantage: it provides a different mechanism for finishing selected tasks earlier without increasing their priority, and it does not increase the risk of starvation. Disadvantage: other processes are delayed, and when there are no other jobs to be scheduled it adds unnecessary overhead, since context switches occur even with only one process present.
c. Implement adaptive time quanta instead, so that processes which need faster service are given longer quanta while the other "regular" processes are given shorter ones. This achieves the same effect while saving the overhead discussed above.
Q 5.21
P1 = (40 / 2) + 60 = 80
P2 = (18 / 2) + 60 = 69
P3 = (10 / 2) + 60 = 65
Since a larger number means a lower priority, the priority of CPU-bound processes (those with high recent CPU usage) is reduced the most.
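The recomputation above is a one-liner; the function name recompute_priority is mine. New priority = (recent CPU usage) / 2 + base, with a base of 60 here.

```c
/* Larger result = lower priority, so heavy CPU users sink in priority. */
int recompute_priority(int recent_cpu, int base)
{
    return recent_cpu / 2 + base;
}
```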