The Critical-Section Problem

SUB: OS- 630004
UNIT – 1
PGI MOTIDAU MEHSANA
Q1. Explain Operating system with objectives and functions.
Ans:
OPERATING SYSTEM:
• A program, executed by the processor, that frequently relinquishes control and must depend on the processor to regain (take back) control.
• A program that mediates between application programs and the hardware.
• A set of procedures that enable a group of people to use a computer system.
• A program that controls the execution of application programs.
• An interface between applications and hardware.
The common idea behind these definitions:
Operating systems control and support the usage of computer systems.
Concepts to be discussed:
• Usage
• Computer system
• Control
• Support
a. Usage
Users of a computer system:
• Programs – use memory, CPU time, and I/O devices.
• Human users:
  — Programmers – use program development tools such as debuggers and editors.
  — End users – use application programs, e.g. Internet Explorer.
b. Computer system = hardware + software
OS is a part of the computer software, it is a program. It is a very special program,
that is the first to be executed when the computer is switched on, and is supposed to
control and support the execution of other programs and the overall usage of the
computer system.
c. Control
DATE : 20 Sep. 2013
DESIGN BY: ASST.PROF.VIKAS KATARIYA 8980936828
The operating system controls the usage of the computer resources – hardware devices and software utilities. We can think of an operating system as a Resource Manager. Here are some of the resources managed by the OS:
• Processors
• Main memory
• Secondary memory
• Peripheral devices
• Information
d. Support
The operating system provides a number of services to assist the users of the computer system:
• For programmers: utilities – debuggers, editors, file management, etc.
• For end users: provides the interface to the application programs.
• For programs: loads instructions and data into memory, prepares I/O devices for usage, handles interrupts and error conditions.
The hierarchical view of the computer system illustrates how the operating system
interacts with the users of the computer system:
Main Objectives in OS Design:
• Convenience – makes the computer user-friendly.
• Efficiency – allows the computer system to use resources efficiently.
• Ability to evolve – constructed in a way that permits effective development, testing, and introduction of new functions without interfering with service.
---------------------------------------------------------------------------------------------------------------
Q. Explain various input/output techniques in computer organization.
Ans:
• Programmed I/O
• Interrupt-driven I/O
• Direct Memory Access (DMA)
Programmed I/O
• CPU has direct control over I/O
  — Sensing status
  — Read/write commands
  — Transferring data
• CPU waits for I/O module to complete operation
• Wastes CPU time
• CPU requests I/O operation
• I/O module performs operation
• I/O module sets status bits
• CPU checks status bits periodically
• I/O module does not inform CPU directly
• I/O module does not interrupt CPU
• CPU may wait or come back later
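The polling loop at the heart of programmed I/O can be sketched as follows. This is an illustrative model, not a real driver API: the hypothetical device exposes a status register the CPU must poll and a data register that becomes valid once the status reads ready.

```python
# Hypothetical device model for illustration: a status register polled
# by the CPU, plus a data register valid once status == READY.
READY, BUSY = "ready", "busy"

class FakeIOModule:
    def __init__(self, ticks_until_ready, data):
        self._countdown = ticks_until_ready
        self._data = data

    def status(self):
        # Each call represents one status check by the CPU.
        self._countdown -= 1
        return READY if self._countdown <= 0 else BUSY

    def read_data(self):
        return self._data

def programmed_read(device):
    """Busy-wait until the I/O module sets its status bits, then transfer."""
    polls = 0
    while device.status() != READY:   # CPU checks status bits periodically
        polls += 1                    # ... wasting CPU time on each check
    return device.read_data(), polls

data, wasted_polls = programmed_read(FakeIOModule(ticks_until_ready=5, data=0x41))
```

The `wasted_polls` counter makes the cost visible: every iteration of the loop is CPU time the processor could have spent running another program.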
Interrupt-Driven I/O
• Overcomes CPU waiting
• No repeated CPU checking of device
• I/O module interrupts when ready
Basic Operation
• CPU issues read command
• I/O module gets data from peripheral while CPU does other work
• I/O module interrupts CPU
• CPU requests data
• I/O module transfers data
Direct Memory Access
• Interrupt-driven and programmed I/O require active CPU intervention
• Transfer rate is limited
• CPU is tied up
• Additional module (hardware) on bus
• DMA controller takes over from CPU for I/O
Typical DMA Module Diagram
DMA Operation
• CPU tells DMA controller:
  — Read/write
  — Device address
  — Starting address of memory block for data
  — Amount of data to be transferred
• CPU carries on with other work
• DMA controller deals with transfer
• DMA controller sends interrupt when finished
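The CPU/DMA handshake above can be sketched as a toy model. The "register" names (read/write flag, device data, memory start address, word count) follow the bullet list; the classes themselves are purely illustrative and do not correspond to any real controller.

```python
from dataclasses import dataclass

@dataclass
class DMAController:
    memory: list
    interrupt_raised: bool = False

    def program(self, op, device_data, mem_start, count):
        # CPU tells the DMA controller what to do, then carries on
        # with other work.
        self.op, self.device_data = op, device_data
        self.mem_start, self.count = mem_start, count

    def run_transfer(self):
        # The DMA controller handles the transfer word by word,
        # without involving the CPU.
        for i in range(self.count):
            self.memory[self.mem_start + i] = self.device_data[i]
        self.interrupt_raised = True   # interrupt CPU when finished

memory = [0] * 8
dma = DMAController(memory)
dma.program(op="read", device_data=[7, 8, 9], mem_start=2, count=3)
dma.run_transfer()
```

After `run_transfer()`, three words have landed in memory starting at address 2 and the (simulated) completion interrupt is pending, mirroring the sequence in the bullets.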
DMA Configurations
Q. Describe the Evolution of Operating Systems?
Ans:
Serial Processing – 1940s–1950s: the programmer interacted directly with the hardware. There was no operating system.
Problems:
• Scheduling – users signed up for machine time; computing time was wasted.
• Setup time – setup included loading the compiler and source program, saving the compiled program, and loading and linking. If an error occurred, the programmer had to start over.
Simple Batch Systems
• Improve the utilization of computers.
Jobs were submitted on cards or tape to an operator, who batched jobs together sequentially. The program that controlled the execution of the jobs was called the monitor – a simple version of an operating system. The interface to the monitor was accomplished through Job Control Language (JCL). For example, a JCL request could be to run the compiler for a particular programming language, then to link and load the program, then to run the user program.
Hardware features:
• Memory protection: do not allow the memory area containing the monitor to be altered.
• Timer: prevents a job from monopolizing the system.
Problems:
• Poor utilization of CPU time – the processor stays idle while I/O devices are in use.
Multiprogrammed Batch Systems
New features:
• Memory management – to have several jobs ready to run, they must be kept in main memory.
• Job scheduling – the processor must decide which program to run.
Time-Sharing Systems
Multiprogramming systems: several programs use the computer system.
Time-sharing systems: several (human) users use the computer system interactively.
Characteristics:
• Using multiprogramming to handle multiple interactive jobs
• Processor's time is shared among multiple users
• Multiple users simultaneously access the system through terminals
                                 Batch Multiprogramming           Time Sharing
Principal objective              Maximize processor use           Minimize response time
Source of directives to          Job control language commands    Commands entered at the
operating system                 provided with the job            terminal

Time sharing is multiprogramming. The key differences between time-sharing systems and batch multiprogramming systems are given in the table above.
3. Major Achievements
Five major theoretical advances in OS development:
• Processes
• Memory management
• Information protection and security
• Scheduling and resource management
• System structure
-------------------------------------------------------------------------------------------------------
Q. What is a process? Describe the elements of the PCB in detail.
Q. Discuss the five-state process model. Also explain the various queues developed for the different states.
Q. What is process management? Write a short note on UNIX SVR4 process management.
Ans :
Process
• A program in execution
• An instance of a program running on a computer
• The entity that can be assigned to and executed on a processor
• A unit of activity characterized by the execution of a sequence of instructions, a current state, and an associated set of system resources
A process is a program in execution. A process is more than the program code, which is sometimes known as the text section.
Modern operating systems manage multiple processes on a computer. Each of these runs in
its own address space and can be independently scheduled for execution.
Process Elements (PCB Details)
While the process is running it has a number of elements, including:
• Identifier
• State
• Priority
• Program counter
• Memory pointers
• Context data
• I/O status information
• Accounting information
Process Control Block
Process State: The state may be new, ready, running, waiting or halted.
Process Number: each process is identified by its process number, called process ID.
Program Counter: The counter indicates the address of the next instruction to be executed
for this process.
CPU Registers: The registers vary in number and type, depending on the concrete microprocessor architecture. They include accumulators, index registers, stack pointers, and general-purpose registers, plus any condition-code information.
Memory Management Information: This information may include base and limit
registers or page table or the segment table depending on the memory system used by the
operating system.
Accounting Information: this information includes the amount of CPU and real time used,
time limits, account numbers, job or process numbers and so on.
I/O status information: This information includes the list of I/O devices allocated to the
process, a list of open files and so on.
CPU scheduling information: This information includes process priority, pointers to
scheduling queues and any other scheduling parameters.
In brief, the PCB simply serves as the repository for any information that may vary from
process to process.
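The PCB fields listed above can be sketched as a small data structure. This is a minimal illustration only; the field names mirror the text, while real kernels keep far more state (and in C structs, not Python objects).

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Sketch of a Process Control Block holding the elements above."""
    pid: int                        # process number / identifier
    state: str = "new"              # new, ready, running, waiting, halted
    priority: int = 0               # CPU scheduling information
    program_counter: int = 0        # address of the next instruction
    registers: dict = field(default_factory=dict)    # context data
    memory_info: dict = field(default_factory=dict)  # base/limit, page table
    open_files: list = field(default_factory=list)   # I/O status information
    cpu_time_used: float = 0.0      # accounting information

pcb = PCB(pid=42, priority=3)
pcb.state = "ready"   # the scheduler updates the state as the process moves
```

Because everything that varies from process to process lives in one record, a context switch only has to save into, and restore from, this structure.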
Trace of the Process
• The behavior of an individual process is shown by listing the sequence of instructions that are executed.
• This list is called a trace.
• The dispatcher is a small program which switches the processor from one process to another.
Process Execution
Trace from the Processor's Point of View
Process management
Process management is an integral part of any modern day operating system (OS). The OS
must allocate resources to processes, enable processes to share and exchange information,
protect the resources of each process from other processes and enable synchronization
among processes. To meet these requirements, the OS must maintain a data structure for
each process, which describes the state and resource ownership of that process, and which
enables the OS to exert control over each process.
Five-State Process Model / Process States:
New: the process is being created.
Ready: the process is waiting to be assigned to a processor.
Running: instructions are being executed.
Waiting/Suspended/Blocked: the process is waiting for some event to occur (such as an I/O completion or the reception of a signal).
Terminated/Halted: the process has finished execution.
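The five-state model can be sketched as an explicit transition table: only the arrows drawn in the usual state diagram are legal moves. The transition names in the comments are the conventional ones and are added for illustration.

```python
# Allowed transitions of the five-state process model.
TRANSITIONS = {
    ("new", "ready"),           # admit
    ("ready", "running"),       # dispatch
    ("running", "ready"),       # timeout / preemption
    ("running", "waiting"),     # event wait (e.g. I/O request)
    ("waiting", "ready"),       # event occurs (e.g. I/O completion)
    ("running", "terminated"),  # exit
}

def move(state, new_state):
    """Apply one transition, rejecting moves the model does not allow."""
    if (state, new_state) not in TRANSITIONS:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

s = "new"
for nxt in ["ready", "running", "waiting", "ready", "running", "terminated"]:
    s = move(s, nxt)
```

Note that there is no `waiting -> running` pair: a blocked process must pass through the ready state (and its ready queue) before being dispatched again.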
Using Two Queues
Multiple Blocked Queues
One Suspend State
UNIX SVR4 PROCESS MANAGEMENT
UNIX System V makes use of a simple but powerful process facility that is highly visible to
the user. UNIX follows the model of Figure 3.15b, in which most of the OS executes within
the environment of a user process. UNIX uses two categories of processes: system
processes and user processes. System processes run in kernel mode and execute operating
system code to perform administrative and housekeeping functions, such as allocation of
memory and process swapping. User processes operate in user mode to execute user
programs and utilities and in kernel mode to execute instructions that belong to the
kernel.A user process enters kernel mode by issuing a system call, when an exception
(fault) is generated, or when an interrupt occurs
Q.5 Indicate the various types of threads in computer systems. Explain the Seven-state Process Model, mentioning all its transitions.
Ans:
Thread:
A thread is a sequence of execution within a process. It does not have its own
address space but uses the memory and other resources of the process in which it executes.
There may be several threads in one process.
Threads, sometimes called lightweight processes (LWPs), are independently scheduled parts of a single program. We say that a task is multithreaded if it is composed of several independent sub-processes which do work on common data, and if each of those pieces could run in parallel.
If we write a program which uses threads there is only one program, one executable file,
one task in the normal sense. Threads simply enable us to split up that program into
logically separate pieces, and have the pieces run independently of one another, until they
need to communicate.
Threads can be assigned priorities – a higher-priority thread will be put at the front of the queue. Let's define heavyweight and lightweight processes with the help of a table.
Object                            Resources
Thread (lightweight process)      Stack, set of CPU registers, CPU time
Task (heavyweight process)        1 thread, PCB, program code, memory segment
Multithreaded task                n threads, PCB, program code, memory segment
Relationship between Processes and Threads
TYPES OF THREADS:
1:1 (Kernel-level threading)
Threads created by the user are in 1:1 correspondence with schedulable entities in the kernel. This is the simplest possible threading implementation. Win32 used this approach from the start. On Linux, the usual C library implements this approach (via NPTL or the older LinuxThreads). The same approach is used by Solaris, NetBSD, and FreeBSD.
N:1 (User-level threading)
An N:1 model implies that all application-level threads map to a single kernel-level
scheduled entity; the kernel has no knowledge of the application threads. With this
approach, context switching can be done very quickly and, in addition, it can be
implemented even on simple kernels which do not support threading. One of the major drawbacks, however, is that it cannot benefit from hardware acceleration on multithreaded processors or multiprocessor computers: there is never more than one thread being scheduled at the same time. For example, if one of the threads needs to execute an I/O request, the whole process is blocked and the threading advantage cannot be utilized. GNU Portable Threads uses user-level threading, as does State Threads.
M:N (Hybrid threading)
M:N maps some M number of application threads onto some N number of kernel entities, or
"virtual processors." This is a compromise between kernel-level ("1:1") and user-level
("N:1") threading. In general, "M:N" threading systems are more complex to implement
than either kernel or user threads, because changes to both kernel and user-space code are
required. In the M:N implementation, the threading library is responsible for scheduling
user threads on the available schedulable entities; this makes context switching of threads
very fast, as it avoids system calls. However, this increases complexity and the likelihood of
priority inversion, as well as suboptimal scheduling without extensive (and expensive)
coordination between the userland scheduler and the kernel scheduler.
VARIOUS TRANSITIONS OVER PROCESS STATES:
The states that they represent are found on all systems. Four transitions are possible among the states.
Transition 1 occurs when a process discovers that it cannot continue. In order to get into the blocked state, some systems must execute a system call such as block. In other systems, when a process reads from a pipe or special file and there is no input available, the process is blocked automatically.
Transition 2 occurs when the scheduler decides that the running process has run long
enough, and it is time to let another process have some CPU time.
Transition 3 occurs when all other processes have had their share and it is time for the
first process to run again.
Transition 4 occurs when the external event for which a process was waiting happens. If no other process is running at that instant, transition 3 will be triggered immediately, and the process will start running. Otherwise it may have to wait in the ready state for a little while until the CPU is available.
Using the process Model, it becomes easier to think about what is going on inside the
system. There are many processes like user processes, disk processes, terminal processes,
and so on, which may be blocked when they are waiting for something to happen. When the
disk block has been read or the character typed, the process waiting for it is unblocked and
is ready to run again.
The process model, an integral part of an operating system, can be summarized as follows.
The lowest level of the operating system is the scheduler with a number of processes on
top level of it. All the process handling, such as starting and stopping processes are done by
the scheduler.
Why use Threads?
Using threads it is possible to organize the execution of a program in such a way that
something is always being done, whenever the scheduler gives the heavyweight process
CPU time.
Threads allow a programmer to switch between lightweight processes when it is best for
the program. (The programmer has control).
A process which uses threads does not get more CPU time than an ordinary process, but the CPU time it gets is used to do work on the threads. It is possible to write a more efficient program by making use of threads.
Inside a heavyweight process, threads are scheduled on an FCFS basis, unless the program decides to force certain threads to wait for other threads. If there is only one CPU, then only one thread can be running at a time.
Threads context switch without any need to involve the kernel: the switching is performed by a user-level library. Time is saved because the kernel does not need to know about the threads.
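The key practical point above – that threads of one process share its address space – can be illustrated with a short sketch using Python's `threading` module (chosen here for illustration; the document's discussion is language-neutral). Both threads update one shared dictionary directly, with no inter-process communication.

```python
import threading

# Shared state: all threads of the process see the same `counts` dict.
counts = {}
lock = threading.Lock()

def count_words(words):
    for w in words:
        with lock:                      # serialize updates to shared data
            counts[w] = counts.get(w, 0) + 1

chunks = [["a", "b", "a"], ["b", "b", "c"]]
threads = [threading.Thread(target=count_words, args=(c,)) for c in chunks]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Whatever order the scheduler interleaves the two threads, the final tallies are the same, because each increment is protected by the lock and the increments commute.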
System Calls for Process Management: System calls are typically provided by the kernel of a multiprogramming operating system for process management. System calls provide the interface between a process and the operating system. These system calls are the routine services of the operating system.
Q.12 What is a semaphore? Write down the semWait and semSignal procedures for a binary semaphore.
Q.15 Write a short note on the binary semaphore with primitives; also differentiate strong and weak semaphores.
Q.3 What is a semaphore? Give and explain the algorithm for the bounded-buffer producer/consumer problem using a general semaphore.
Ans:
Semaphore
• Semaphore: an integer value used for signalling among processes.
• Only three operations may be performed on a semaphore, all of which are atomic: initialize, decrement (semWait), and increment (semSignal).
1. A semaphore may be initialized to a nonnegative integer value.
2. The semWait operation decrements the semaphore value.
  • If the value becomes negative, then the process executing the semWait is blocked.
  • Otherwise, the process continues execution.
3. The semSignal operation increments the semaphore value.
  • If the resulting value is less than or equal to zero, then a process blocked by a semWait operation, if any, is unblocked.
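The three rules above can be sketched as a single-threaded simulation in which "blocking" is modeled by parking a process name on a FIFO queue. This is a teaching sketch of the semWait/semSignal semantics, not a usable synchronization primitive (a real kernel would suspend the caller and invoke the scheduler).

```python
from collections import deque

class Semaphore:
    def __init__(self, value):
        self.count = value           # rule 1: nonnegative initial value
        self.queue = deque()         # processes blocked on this semaphore

    def semWait(self, proc):
        self.count -= 1              # rule 2: decrement
        if self.count < 0:
            self.queue.append(proc)  # value negative: the caller blocks
            return "blocked"
        return "running"

    def semSignal(self):
        self.count += 1              # rule 3: increment
        if self.count <= 0 and self.queue:
            return self.queue.popleft()  # unblock the oldest waiter
        return None

s = Semaphore(1)        # binary use: initialized to 1
a = s.semWait("A")      # A proceeds, count goes to 0
b = s.semWait("B")      # B blocks,   count goes to -1
woken = s.semSignal()   # B is unblocked, count back to 0
```

Because waiters leave the queue in FIFO order, this sketch also behaves as a strong semaphore in the sense defined below.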
Semaphore Primitives
Strong/Weak Semaphores
• A queue is used to hold processes waiting on the semaphore.
  — In what order are processes removed from the queue?
• Strong semaphores use FIFO.
• Weak semaphores don't specify the order of removal from the queue.
Example of Strong Semaphore Mechanism
Processes A, B, and C depend on a result from process D.
Initially (1), A is running; B, C, and D are ready; and the semaphore count is 1, indicating that one of D's results is available.
• When A issues a semWait instruction on semaphore s, the semaphore decrements to 0, and A can continue to execute; subsequently it rejoins the ready queue.
• Then B runs (2), eventually issues a semWait instruction, and is blocked, allowing D to run (3).
• When D completes a new result, it issues a semSignal instruction, which allows B to move to the ready queue (4).
• D rejoins the ready queue and C begins to run (5) but is blocked when it issues a semWait instruction.
• Similarly, A and B run and are blocked on the semaphore, allowing D to resume execution (6). When D has a result, it issues a semSignal, which transfers C to the ready queue. Later cycles of D will release A and B from the Blocked state.
Processes Using Semaphore
Producer/Consumer Problem:
• General situation:
  — One or more producers are generating data and placing these in a buffer.
  — A single consumer is taking items out of the buffer one at a time.
  — Only one producer or consumer may access the buffer at any one time.
• The problem:
  — Ensure that the producer can't add data to a full buffer and that the consumer can't remove data from an empty buffer.
The general statement is this:
• There are one or more producers generating some type of data (records, characters) and placing these in a buffer.
• There is a single consumer that is taking items out of the buffer one at a time.
• The system is to be constrained to prevent the overlap of buffer operations. That is, only one agent (producer or consumer) may access the buffer at any one time.
The problem is to make sure that the producer won’t try to add data into the buffer if it’s
full and that the consumer won’t try to remove data from an empty buffer.
We will look at a number of solutions to this problem to illustrate both the power and the
pitfalls of semaphores.
Functions
• Assume an infinite buffer b with a linear array of elements.

Producer:
    while (true) {
        /* produce item v */
        b[in] = v;
        in++;
    }

Consumer:
    while (true) {
        while (in <= out)
            /* do nothing */;
        w = b[out];
        out++;
        /* consume item w */
    }
Buffer
Incorrect Solution
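For contrast with the incorrect attempt, here is a working bounded-buffer producer/consumer sketched with Python's `threading` semaphores (used here for illustration). The mapping of the three semaphores onto the problem – `mutex` for mutual exclusion on the buffer, `items` counting filled slots, `slots` counting empty slots – is our own labelling of the classic general-semaphore solution.

```python
import threading
from collections import deque

N = 4                            # buffer capacity
buffer = deque()
mutex = threading.Semaphore(1)   # mutual exclusion on the buffer
items = threading.Semaphore(0)   # counts filled slots
slots = threading.Semaphore(N)   # counts empty slots
consumed = []

def producer(values):
    for v in values:
        slots.acquire()          # semWait(slots): wait for an empty slot
        with mutex:
            buffer.append(v)
        items.release()          # semSignal(items): one more item available

def consumer(n):
    for _ in range(n):
        items.acquire()          # semWait(items): wait for an item
        with mutex:
            consumed.append(buffer.popleft())
        slots.release()          # semSignal(slots): one more empty slot

p = threading.Thread(target=producer, args=(list(range(10)),))
c = threading.Thread(target=consumer, args=(10,))
p.start(); c.start()
p.join(); c.join()
```

The producer can never overrun the buffer (it blocks on `slots` when the buffer holds N items) and the consumer can never read an empty buffer (it blocks on `items`), which is exactly the constraint stated in the problem.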
Monitors
• The monitor is a programming-language construct that provides functionality equivalent to that of semaphores and is easier to control.
• Implemented in a number of programming languages, including Concurrent Pascal, Pascal-Plus, Modula-2, Modula-3, and Java.
• Local data variables are accessible only by the monitor.
• A process enters the monitor by invoking one of its procedures.
• Only one process may be executing in the monitor at a time.
The chief characteristics of a monitor are the following:
1. The local data variables are accessible only by the monitor’s procedures and not
by any external procedure.
2. A process enters the monitor by invoking one of its procedures.
3. Only one process may be executing in the monitor at a time; any other processes
that have invoked the monitor are blocked, waiting for the monitor to become
available.
Synchronization
• Synchronization is achieved by condition variables within a monitor – accessible only by the monitor.
• Monitor functions:
  — cwait(c): suspend execution of the calling process on condition c.
  — csignal(c): resume execution of some process blocked after a cwait on the same condition.
A monitor supports synchronization by the use of condition variables that are contained
within the monitor and accessible only within the monitor.
cwait(c): Suspend execution of the calling process on condition c.
The monitor is now available for use by another process.
csignal(c): Resume execution of some process blocked after a cwait on the same condition.
If there are several such processes, choose one of them; if there is no such process, do
nothing.
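The cwait/csignal discipline can be sketched with Python's `Lock` and `Condition` objects, whose `wait`/`notify` play the roles of cwait/csignal (this is an analogy for illustration, not a claim that Python literally implements monitors). The lock enforces the one-process-in-the-monitor rule; the condition variable is accessible only inside the class.

```python
import threading

class Counter:
    """Monitor-style counter: decrement blocks while the value is zero."""
    def __init__(self):
        self._lock = threading.Lock()               # one process at a time
        self._nonzero = threading.Condition(self._lock)  # condition variable
        self._value = 0

    def increment(self):
        with self._lock:            # enter the monitor
            self._value += 1
            self._nonzero.notify()  # csignal(nonzero): wake one waiter

    def decrement(self):
        with self._lock:            # enter the monitor
            while self._value == 0:
                self._nonzero.wait()  # cwait(nonzero): release monitor, block
            self._value -= 1
            return self._value

ctr = Counter()
ctr.increment()
ctr.increment()
left = ctr.decrement()
```

Note the `while` loop around `wait()`: when a waiter resumes, it re-checks the condition before proceeding, which is the safe pattern when the signaller does not hand over the monitor atomically.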
Structure of a Monitor
Although a process can enter the monitor by invoking any of its procedures, we can think of
the monitor as having a single entry point that is guarded so that only one process may be
in the monitor at a time.
• Other processes that attempt to enter the monitor join a queue of processes
blocked waiting for monitor availability.
Once a process is in the monitor, it may temporarily block itself on condition x by issuing cwait(x); it is then placed in a queue of processes waiting to re-enter the monitor when the condition changes, and resumes execution at the point in its program following the cwait(x) call.
If a process that is executing in the monitor detects a change in the condition variable x, it
issues csignal(x),
which alerts the corresponding condition queue that the condition has changed.
*****************************************************************************************
Q.4 Describe the Readers and Writers Problem.
Ans:
• A data area is shared among many processes
– Some processes only read the data area, some only write to the area
• Conditions to satisfy:
– Multiple readers may read the file at once.
– Only one writer at a time may write
– If a writer is writing to the file, no reader may read it.
The readers/writers problem is:
• There is a data area shared among a number of processes.
• The data area could be a file, a block of main memory, or even a bank of processor registers.
• There are a number of processes that only read the data area (readers) and a
number that only write to the data area (writers).
The conditions that must be satisfied are as follows:
1. Any number of readers may simultaneously read the file.
2. Only one writer at a time may write to the file.
3. If a writer is writing to the file, no reader may read it.
Readers have Priority
This solution uses semaphores, showing one instance each of a reader and a writer; the
solution does not change for multiple readers and writers.
Once a single reader has begun to access the data area, it is possible for readers to retain
control of the data area as long as there is at least one reader in the act of reading.
Therefore, writers are subject to starvation.
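The readers-have-priority scheme can be sketched with two semaphores, following the classic solution: `wsem` serializes writers, `x` protects the shared `readcount`, the first reader locks writers out, and the last reader lets them back in. Python's `threading` is used for illustration; the thread names and the `log` list are scaffolding for the demo.

```python
import threading

wsem = threading.Semaphore(1)   # writers (and the first reader) wait here
x = threading.Semaphore(1)      # protects readcount
readcount = 0
log = []

def reader(name):
    global readcount
    with x:
        readcount += 1
        if readcount == 1:
            wsem.acquire()        # first reader blocks out writers
    log.append((name, "read"))    # many readers may be here concurrently
    with x:
        readcount -= 1
        if readcount == 0:
            wsem.release()        # last reader releases the writers

def writer(name):
    with wsem:                    # exclusive: no readers, no other writer
        log.append((name, "write"))

threads = [threading.Thread(target=reader, args=(f"r{i}",)) for i in range(3)]
threads.append(threading.Thread(target=writer, args=("w0",)))
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The starvation risk is visible in the structure: as long as `readcount` never drops to zero, `wsem` is never released and the writer waits indefinitely.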
***************************************************************************************
Q.4 Describe the necessary conditions for deadlock occurrence. Discuss deadlock avoidance using the Banker's algorithm. Also discuss the data structures for implementing this algorithm.
Q.6 Define the term deadlock. Discuss the necessary and sufficient conditions for a deadlock to occur. State the general approaches to dealing with a deadlock situation.
Q.7 What is a race condition? What is mutual exclusion? Define semaphore, the permissible operations on a semaphore, and how they are used to achieve mutual exclusion.
Q.8 What is deadlock? State the necessary conditions for deadlock to occur. Explain the Banker's algorithm for deadlock avoidance.
Q.13 Write a short note on the Banker's Algorithm with a suitable example.
Deadlock
• A set of processes is deadlocked when each process in the set is blocked awaiting an event that can only be triggered by another blocked process in the set.
  — Typically involves processes competing for the same set of resources.
• No efficient solution exists.
A set of processes is deadlocked when each process in the set is blocked awaiting an event that can only be triggered by another blocked process in the set.
• Typically, processes are waiting for the freeing up of some requested resource.
• Deadlock is permanent because none of the events is ever triggered.
• Unlike other problems in concurrent process management, there is no efficient solution in the general case.
Actual Deadlock
Resource Allocation Graphs
• A directed graph that depicts a state of the system of resources and processes.
• A graph edge directed from a process to a resource indicates a resource that has been requested by the process but not yet granted.
• Within a resource node, a dot is shown for each instance of that resource.
• A graph edge directed from a reusable resource node dot to a process indicates a request that has been granted.
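A resource allocation graph can be sketched as adjacency lists, with a depth-first search to look for a cycle. When every resource has a single instance, a cycle in this graph means deadlock; the process/resource names below are made up for the example.

```python
def has_cycle(graph):
    """DFS cycle detection over a directed graph given as adjacency lists."""
    WHITE, GRAY, BLACK = 0, 1, 2    # unvisited / on current path / done
    color = {n: WHITE for n in graph}

    def dfs(n):
        color[n] = GRAY
        for m in graph.get(n, []):
            if color[m] == GRAY:          # back edge: cycle found
                return True
            if color[m] == WHITE and dfs(m):
                return True
        color[n] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in graph)

# P1 holds R1 and requests R2; P2 holds R2 and requests R1: circular wait.
deadlocked = {"P1": ["R2"], "R2": ["P2"], "P2": ["R1"], "R1": ["P1"]}
# Same graph but P2 requests nothing: no cycle, no deadlock.
ok = {"P1": ["R2"], "R2": ["P2"], "P2": [], "R1": ["P1"]}
```

An edge from a process to a resource is an ungranted request; an edge from a resource to a process is a granted one, matching the bullet descriptions above.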
Conditions for Possible Deadlock
• Mutual exclusion
  — Only one process may use a resource at a time.
• Hold-and-wait
  — A process may hold allocated resources while awaiting assignment of others.
• No preemption
  — No resource can be forcibly removed from a process holding it.
All previous three conditions plus:
• Circular wait
  — A closed chain of processes exists, such that each process holds at least one resource needed by the next process in the chain.
This is actually a potential consequence of the first three. Given that the first three conditions exist, a sequence of events may occur that leads to an unresolvable circular wait. The unresolvable circular wait is in fact the definition of deadlock.
• The circular wait listed as condition 4 is unresolvable because the first three conditions hold.
• Thus, the four conditions, taken together, constitute necessary and sufficient conditions for deadlock.
Resource Allocation Graphs of deadlock
Three general approaches exist for dealing with deadlock:
• Prevent deadlock
• Avoid deadlock
• Detect deadlock
Deadlock Prevention Strategy
• Design the system in such a way that the possibility of deadlock is excluded.
• Two main methods:
  — Indirect – prevent one of the three necessary conditions from occurring
  — Direct – prevent circular waits
Deadlock prevention is a strategy that simply designs the system in such a way that the possibility of deadlock is excluded. We can view deadlock prevention methods as falling into two classes:
• An indirect method of deadlock prevention prevents the occurrence of one of the three necessary conditions listed previously (items 1 through 3).
• A direct method of deadlock prevention prevents the occurrence of a circular wait (item 4).
We now examine techniques related to each of the four conditions.
• Mutual exclusion
  — Must be supported by the OS.
• Hold-and-wait
  — Require a process to request all of its required resources at one time.
Mutual Exclusion
The first of the four listed conditions cannot be disallowed (in general).
• If access to a resource requires mutual exclusion, then mutual exclusion must be supported by the OS.
• Some resources, such as files, may allow multiple accesses for reads but only exclusive access for writes.
• Even in this case, deadlock can occur if more than one process requires write permission.
Hold an Wait
Can be prevented by requiring that a process request all of its required
resources at one time and blocking the process until all requests can be granted
simultaneously.
This approach is inefficient in two ways:
1) A process may be held up for a long time waiting for all of its resource
requests to be filled, when in fact it could have proceeded with only some of the
resources.
2) Resources allocated to a process may remain unused for a considerable
period, during which time they are denied to other processes.
Another problem is that a process may not know in advance all of the resources
that it will require.
There is also the practical problem created by the use of modular programming
or a multithreaded structure for an application.
An application would need to be aware of all resources that will be requested at
all levels or in all modules to make the simultaneous request.
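The all-at-once request rule can be sketched with Python's `threading` locks (the helper name `acquire_all` is illustrative, not from the text): either a process obtains every resource it asked for, or it keeps nothing and must retry later, so it never holds some resources while waiting for others.

```python
import threading

def acquire_all(locks):
    """Request every resource at once: either we get all of the
    locks, or we release what we grabbed and report failure, so a
    process never holds resources while waiting for others."""
    held = []
    for lock in locks:
        if lock.acquire(blocking=False):
            held.append(lock)
        else:
            # Could not get everything: back out completely.
            for h in held:
                h.release()
            return False
    return True

r1, r2 = threading.Lock(), threading.Lock()
assert acquire_all([r1, r2])         # both free: request granted
assert not acquire_all([r1, r2])     # already held: nothing is kept
```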
• No Preemption
– A process must release its resources and request them again.
– The OS may preempt a process to require it to release its resources.
• Circular Wait
– Define a linear ordering of resource types.
No Preemption
The no-preemption condition can be prevented in several ways:
1) If a process holding certain resources is denied a further request, that
process must release its original resources and, if necessary, request them
again together with the additional resource.
2) If a process requests a resource that is currently held by another process, the
OS may preempt the second process and require it to release its resources.
This latter scheme would prevent deadlock only if no two processes possessed the same
priority.
This approach is practical only with resources whose state can be easily saved and restored
later, as is the case with a processor.
Circular Wait Can be prevented by defining a linear ordering of resource types.
As with hold-and-wait prevention, circular-wait prevention may be inefficient, slowing
down processes and denying resource access unnecessarily.
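The linear-ordering rule can be sketched in Python, assuming each resource type is given a fixed numeric index and every process acquires locks only in ascending index order (the helper names are illustrative):

```python
import threading

# Assign each resource type a fixed index; processes may only
# request resources in increasing index order, so no cycle of
# waits (and hence no circular wait) can ever form.
LOCKS = {0: threading.Lock(), 1: threading.Lock(), 2: threading.Lock()}

def acquire_in_order(indices):
    """Acquire the requested locks in ascending index order."""
    ordered = sorted(indices)
    for i in ordered:
        LOCKS[i].acquire()
    return ordered

def release_all(indices):
    for i in indices:
        LOCKS[i].release()

order = acquire_in_order([2, 0])   # requested out of order...
assert order == [0, 2]             # ...but acquired as 0, then 2
release_all(order)
```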
Deadlock Avoidance
Deadlock avoidance allows the first three necessary conditions
but makes judicious choices to assure that the deadlock point is never reached.
Avoidance allows more concurrency than prevention.
With deadlock avoidance, a decision is made dynamically whether the current resource
allocation request will, if granted, potentially lead to a deadlock.
Deadlock avoidance requires knowledge of future process resource requests.
• Referred to as the banker's algorithm – a strategy of resource allocation denial.
• Consider a system with a fixed number of resources:
– The state of the system is the current allocation of resources to processes.
– A safe state is one in which there is at least one sequence of allocations that does not result in deadlock.
– An unsafe state is a state that is not safe.
Banker's algorithm
Theory:
The Banker's algorithm is run by the operating system whenever a process requests
resources. The algorithm prevents deadlock by denying or postponing the request if it
determines that accepting the request could put the system in an unsafe state (one where
deadlock could occur). When a new process enters the system, it must declare the maximum
number of instances of each resource type that it may need; this maximum may not exceed
the total number of resources in the system. Also, when a process gets all its requested
resources, it must return them in a finite amount of time.
There are two algorithms to be executed:
1. Resource allocation algorithm.
2. Safety algorithm.
Safety Algorithm (part of the Banker's Algorithm):
Let m = number of resource types, and n = number of processes
1. Let work and finish be vectors of length m and n respectively.
   Initialize: work = available, and finish[i] = 0 for i = 1, 2, ..., n.
   Think of the work vector as a "dynamic available vector".
2. Find an i such that both
   finish[i] == 0 AND
   needi <= work (needi is the ith row of need[i,j]).
   If no such i exists, go to step 4.
3. work = work + allocationi (allocationi is the ith row of allocation[i,j]),
   finish[i] = 1,
   go to step 2.
4. If finish[i] == 1 for all i, then the system is in a safe state;
   else it is not safe.
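The four steps above can be sketched directly in Python; `is_safe` is a hypothetical helper name, and the vectors are plain lists indexed from 0:

```python
def is_safe(available, allocation, need):
    """Safety algorithm: `work` starts as the available vector and
    finish[i] marks whether process i can run to completion.
    Returns a safe sequence, or None if the state is unsafe."""
    n, m = len(allocation), len(available)
    work = list(available)          # step 1: the "dynamic available vector"
    finish = [False] * n
    sequence = []
    while len(sequence) < n:
        # Step 2: find an unfinished process whose need fits in work.
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                # Step 3: pretend it finishes and returns its allocation.
                work = [work[j] + allocation[i][j] for j in range(m)]
                finish[i] = True
                sequence.append(i)
                break
        else:
            return None             # step 4: some process can never finish
    return sequence
```

Running this on the worked example later in this section yields the safe sequence &lt;p1, p3, p0, p2, p4&gt;; as the text notes, safe sequences are not unique, and a different scan order can produce a different one.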
Banker’s Algorithm (or Resource-Request Algorithm) – the “main” Algorithm
Let requesti be the request vector for process Pi .
If requesti[j] = k, then process Pi wants k instances of resource type Rj
When a request for resources is made by Pi , the following actions are taken:
1. If requesti <= needi (the old or original needi), go to step 2. Otherwise,
   raise an error condition, since the process has exceeded its maximum
   claim.
2. If requesti <= available (i.e., the old or original available), go to step 3.
   Otherwise, Pi must wait, since the resources are not available (all
   requested instances must be available to go on).
3. Have the system pretend to have allocated the requested resources to
   Pi by modifying the state as follows:
   available = available - requesti;
   allocationi = allocationi + requesti;
   needi = needi - requesti;
Now, using the safety algorithm, check the "new" state for safety. If the
resulting resource-allocation state is safe, the transaction is completed and
process Pi is allocated its resources. If the new state is unsafe, then Pi must
wait for requesti and the old resource-allocation state is restored.
NOTE that steps 1 and 2 are a preliminary “sanity” or consistency check,
and not part of the main “loop”
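The full resource-request check can be sketched in Python (the function names `request_resources` and `_is_safe` are illustrative); it performs the two sanity checks, tentatively grants the request, and rolls back if the resulting state is unsafe:

```python
def request_resources(pid, request, available, allocation, need):
    """Banker's resource-request check for process `pid`.
    Commits the allocation and returns True if the resulting state
    is safe; otherwise leaves the state untouched and returns False.
    Here need[i] = max[i] - allocation[i], as plain lists."""
    m = len(available)
    # Step 1: sanity check against the declared maximum claim.
    if any(request[j] > need[pid][j] for j in range(m)):
        raise ValueError("process exceeded its maximum claim")
    # Step 2: all requested instances must currently be available.
    if any(request[j] > available[j] for j in range(m)):
        return False                       # Pi must wait
    # Step 3: pretend to allocate, then test the new state for safety.
    for j in range(m):
        available[j] -= request[j]
        allocation[pid][j] += request[j]
        need[pid][j] -= request[j]
    if _is_safe(available, allocation, need):
        return True
    # Unsafe: restore the old state; Pi must wait.
    for j in range(m):
        available[j] += request[j]
        allocation[pid][j] -= request[j]
        need[pid][j] += request[j]
    return False

def _is_safe(available, allocation, need):
    """Safety algorithm, as in the previous subsection."""
    n, m = len(allocation), len(available)
    work, finish = list(available), [False] * n
    for _ in range(n):
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                work = [work[j] + allocation[i][j] for j in range(m)]
                finish[i] = True
                break
    return all(finish)
```

For instance, starting from the state in the worked example below, a request of [1, 0, 2] by p1 passes all three steps and is granted, while a request of [3, 3, 0] by p4 leaves the state unsafe and is rolled back.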
Banker’s Algorithm - Example
Given: 5 processes: p0, p1, p2, p3, p4
3 resource types: A, B, C
A has 10 instances
B has 5 instances
C has 7 instances
Total Resource Vector = TRV = [10, 5, 7]
Resource-Allocation State (Need = Max - Allocation):

Process    Allocation    Max      Need
           A B C         A B C    A B C
p0         0 1 0         7 5 3    7 4 3
p1         2 0 0         3 2 2    1 2 2
p2         3 0 2         9 0 2    6 0 0
p3         2 1 1         2 2 2    0 1 1
p4         0 0 2         4 3 3    4 3 1

Total Allocation = [7, 2, 5]
Available = TRV - Total Allocation = [10, 5, 7] - [7, 2, 5] = [3, 3, 2]
Show that this is a Safe State using the safety algorithm:
Initialization:
work = available = [3 3 2]
finish = 0 0 0 0 0
Notation: “>” means “NOT <=”
Search for a safe sequence:
p0: need0 = 7 4 3 > work = 3 3 2, doesn’t work - try later, finish = 0 0 0 0 0
p1: need1 = 1 2 2 <= work = 3 3 2, finish = 0 1 0 0 0, work = 3 3 2 + 2 0 0 = 5 3 2
p2: need2 = 6 0 0 > work = 5 3 2, doesn’t work - try later, finish = 0 1 0 0 0
p3: need3 = 0 1 1 <= work = 5 3 2, finish = 0 1 0 1 0, work = 5 3 2 + 2 1 1 = 7 4 3
p4: need4 = 4 3 1 <= work = 7 4 3, finish = 0 1 0 1 1, work = 7 4 3 + 0 0 2 = 7 4 5
p2: need2 = 6 0 0 <= work = 7 4 5, finish = 0 1 1 1 1, work = 7 4 5 + 3 0 2 = 10, 4, 7
p0: need0 = 7 4 3 <= work = 10,4,7 finish = 1 1 1 1 1, work = 10 4 7 + 0 1 0 = 10, 5, 7
State is Safe:
sequence is <p1, p3, p4, p2, p0>
Solution is not unique:
Alternate Safe Sequence: <p1, p3, p4, p0, p2> ... swap p0 and p2
Race Condition
• A race condition is a situation where several processes access (read/write)
shared data concurrently and the final value of the shared data depends upon which
process finishes last.
• The actions performed by concurrent processes will then depend on the
order in which their execution is interleaved.
• To prevent race conditions, concurrent processes must be coordinated or
synchronized.
• It means that neither process will proceed beyond a certain point in the
computation until both have reached their respective synchronization point.
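The race on shared data can be illustrated in Python: several threads perform a read-modify-write on a shared counter, and a lock around that critical section makes each increment atomic, so no updates are lost:

```python
import threading

counter = 0
lock = threading.Lock()

def add(n):
    """Increment the shared counter n times. Without the lock,
    concurrent read-modify-write interleavings could lose updates
    (a race condition); with it, the final value is deterministic."""
    global counter
    for _ in range(n):
        with lock:           # entry section
            counter += 1     # critical section; exit on leaving `with`

threads = [threading.Thread(target=add, args=(10_000,)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
assert counter == 40_000     # synchronized: no updates lost
```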
Critical Section/Region
1. Consider a system consisting of n processes all competing to use some shared data.
2. Each process has a code segment, called critical section, in which the shared data is
accessed.
Example: Race condition updating a variable
The Critical-Section Problem
1. The critical-section problem is to design a protocol that the processes can
cooperate. The protocol must ensure that when one process is executing in its
critical section, no other process is allowed to execute in its critical section.
2. The critical section problem is to design a protocol that the processes can use so
that their action will not depend on the order in which their execution is interleaved
(possibly on many processors).
Solution to Critical Section Problem - Requirements
• A solution to the critical-section problem must satisfy the following three
requirements:
1. Mutual Exclusion. If process Pi is executing in its critical section, then no
other processes can be executing in their critical sections.
 Implications:
 Critical sections better be focused and short.
 Better not get into an infinite loop in there.
 If a process somehow halts/waits in its critical section, it must
not interfere with other processes.
2. Progress. If no process is executing in its critical section and there exist
some processes that wish to enter their critical section, then the selection of
the processes that will enter the critical section next cannot be postponed
indefinitely.
DATE : 20 Sep. 2013
DESIGN BY: ASST.PROF.VIKAS KATARIYA 8980936828
SUB: OS- 630004
UNIT – 1
PGI MOTIDAU MEHSANA
 If only one process wants to enter, it should be able to.
 If two or more want to enter, one of them should succeed.
3. Bounded Waiting. A bound must exist on the number of times that other
processes are allowed to enter their critical sections after a process has made
a request to enter its critical section and before that request is granted.
 Assume that each process executes at a nonzero speed
 No assumption concerning relative speed of the n processes.
Types of Solutions
• Software solutions
– Algorithms whose correctness does not rely on any other assumptions.
• Hardware solutions
– Rely on some special machine instructions.
• Operating System solutions
– Provide some functions and data structures to the programmer through
system/library calls.
• Programming Language solutions
– Linguistic constructs provided as part of a language.
do {
    entry section
    critical section
    exit section
    remainder section
} while (1);
• Processes may share some common variables to synchronize/coordinate their
actions.
Algorithm 1
Shared variables:
int turn; //initially turn = 0
// turn = i, Pi can enter its critical section
Process Pi
DATE : 20 Sep. 2013
DESIGN BY: ASST.PROF.VIKAS KATARIYA 8980936828
SUB: OS- 630004
UNIT – 1
PGI MOTIDAU MEHSANA
do {
    while (turn != i);   // busy-wait until it is Pi's turn
    critical section
    turn = j;            // hand the turn to Pj
    remainder section
} while (1);
• Satisfies mutual exclusion, but not progress.
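The turn-based protocol can be simulated step by step in Python (a single-threaded sketch; `try_enter` and `leave` are illustrative helper names, not part of the algorithm as stated):

```python
# Strict alternation between P0 and P1 via the shared `turn`
# variable, simulated step by step rather than with real threads.
turn = 0

def try_enter(i):
    """Pi's entry test: it may enter only when turn == i."""
    return turn == i

def leave(i):
    """Pi's exit section hands the turn to the other process."""
    global turn
    turn = 1 - i

assert try_enter(0) and not try_enter(1)   # mutual exclusion holds
leave(0)
assert try_enter(1) and not try_enter(0)
# Progress fails: if P1 never wants the critical section, the turn
# stays at 1 and P0 can never enter again until P1 passes through.
leave(1)
assert try_enter(0)
```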
Types of Semaphores
• Counting semaphore – integer value can range over an unrestricted domain.
• Binary semaphore – integer value can range only between 0 and 1; can be
simpler to implement.
• A counting semaphore S can be implemented using binary semaphores.
Implementing S as a Binary Semaphore
Data structures:
binary-semaphore S1, S2;
int C;
Initialization:
S1 = 1
S2 = 0
C = initial value of semaphore S
Implementing S
wait operation
wait(S1);
C--;
if (C < 0) {
signal(S1);
wait(S2);
}
signal(S1);
signal operation
wait(S1);
C ++;
if (C <= 0)
signal(S2);
else
signal(S1);
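The construction above can be transcribed into Python, using `threading.Semaphore` objects initialized to 1 and 0 as the two binary semaphores (the class name is illustrative):

```python
import threading

class CountingSemaphore:
    """Counting semaphore S built from two binary semaphores,
    following the wait/signal pseudocode above: S1 protects the
    counter C, and S2 queues waiters when C goes negative."""
    def __init__(self, initial):
        self.S1 = threading.Semaphore(1)   # mutex on C
        self.S2 = threading.Semaphore(0)   # blocked waiters queue
        self.C = initial

    def wait(self):
        self.S1.acquire()
        self.C -= 1
        if self.C < 0:
            self.S1.release()
            self.S2.acquire()   # block until a signal arrives
        self.S1.release()

    def signal(self):
        self.S1.acquire()
        self.C += 1
        if self.C <= 0:
            self.S2.release()   # wake one waiter; it releases S1
        else:
            self.S1.release()
```

Note that, as in the pseudocode, when a blocked waiter is woken, the `S1` mutex is handed from the signaler to the woken process, which performs the final `signal(S1)`.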
Classical Problems of Synchronization
• Bounded-Buffer Problem
• Readers and Writers Problem
• Dining-Philosophers Problem
Dining philosophers problem
In computer science, the dining philosophers problem is an example problem often
used in concurrent algorithm design to illustrate synchronization issues and
techniques for resolving them.
It was originally formulated in 1965 by Edsger Dijkstra as a student exam exercise, in
terms of computers competing for access to tape drive peripherals. Soon after, Tony
Hoare gave the problem its present formulation.
Problem statement
Illustration of the dining philosophers problem
Five silent philosophers sit at a table around a bowl of spaghetti. A fork is placed
between each pair of adjacent philosophers.
Each philosopher must alternately think and eat. However, a philosopher can only eat
spaghetti when he has both left and right forks. Each fork can be held by only one
philosopher, and so a philosopher can use a fork only if it is not being used by
another philosopher. After he finishes eating, he needs to put down both forks so
they become available to others. A philosopher can grab the fork on his right or the
one on his left as they become available, but cannot start eating before getting both of
them.
• Eating is not limited by the amount of spaghetti left: assume an infinite supply. An
alternative problem formulation uses rice and chopsticks instead of spaghetti and
forks.
• The problem is how to design a discipline of behavior (a concurrent algorithm) such
that no philosopher will starve, i.e. each can forever continue to alternate between
eating and thinking, assuming that no philosopher can know when others may
want to eat or think.
Issues
• The problem was designed to illustrate the problem of avoiding deadlock, a system
state in which no progress is possible.
• One idea is to instruct each philosopher to behave as follows:
– think until the left fork is available and, when it is, pick it up
– think until the right fork is available and, when it is, pick it up
– eat for a fixed amount of time
– put the right fork down
– put the left fork down
– repeat from the beginning
• This attempt at a solution fails: it allows the system to reach a deadlock state in
which each philosopher has picked up the fork to the left, waiting for the fork to the
right to be put down, which never happens, because A) each right fork is another
philosopher's left fork, and no philosopher will put down that fork until s/he eats,
and B) no philosopher can eat until s/he acquires the fork to his/her own right,
which has already been picked up by the philosopher to his/her right as described
above.[4]
• Resource starvation might also occur independently of deadlock if a particular
philosopher is unable to acquire both forks because of a timing problem. For
example, there might be a rule that the philosophers put down a fork after waiting
ten minutes for the other fork to become available and wait a further ten minutes
before making their next attempt. This scheme eliminates the possibility of deadlock
(the system can always advance to a different state) but still suffers from the
problem of livelock: if all five philosophers appear in the dining room at exactly the
same time and each picks up the left fork at the same time, the philosophers will wait
ten minutes until they all put their forks down and then wait a further ten minutes
before they all pick them up again.
• Mutual exclusion is the core idea of the problem; the dining philosophers create a
generic and abstract scenario useful for explaining issues of this type. The failures
these philosophers may experience are analogous to the difficulties that arise in real
computer programming when multiple programs need exclusive access to shared
resources. These issues are studied in the branch of concurrent programming. The
original problems of Dijkstra were related to external devices like tape drives.
However, the difficulties studied in the dining philosophers problem arise far more
often when multiple processes access sets of data that are being updated. Systems
that must deal with a large number of parallel processes, such as operating system
kernels, use thousands of locks and synchronizations that require strict adherence
to methods and protocols if such problems as deadlock, starvation, or data
corruption are to be avoided.
Solutions
Conductor solution
• A relatively simple solution is achieved by introducing a waiter at the table.
Philosophers must ask his permission before taking up any forks. Because the
waiter is aware of how many forks are in use, he is able to arbitrate and prevent
deadlock. When four of the forks are in use, the next philosopher to request one has
to wait for the waiter's permission, which is not given until a fork has been released.
The logic is kept simple by specifying that philosophers always seek to pick up their
left-hand fork before their right-hand fork (or vice versa). The waiter acts as a
semaphore, a concept introduced by Dijkstra in 1965.[5]
• To illustrate how this works, consider that the philosophers are labelled clockwise
from A to E. If A and C are eating, four forks are in use. B sits between A and C, so has
neither fork available, whereas D and E have one unused fork between them.
Suppose D wants to eat. If he takes up the fifth fork, deadlock becomes
likely. If instead he asks the waiter and is told to wait, we can be sure that the next time
two forks are released there will certainly be at least one philosopher who could
successfully request a pair of forks. Therefore deadlock cannot happen.
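The waiter idea can be sketched in Python, with the waiter as a counting semaphore initialized to N - 1 so that at most four philosophers compete for forks at once (names and the fixed round count are illustrative):

```python
import threading

N = 5
forks = [threading.Lock() for _ in range(N)]
# The waiter: at most N - 1 philosophers may try to pick up forks
# at the same time, so at least one of them always gets both forks
# and deadlock is impossible.
waiter = threading.Semaphore(N - 1)
meals = [0] * N

def philosopher(i, rounds):
    for _ in range(rounds):
        with waiter:                       # ask the waiter first
            with forks[i]:                 # left fork
                with forks[(i + 1) % N]:   # right fork
                    meals[i] += 1          # eat
        # both forks are put down on leaving the `with` blocks; think

threads = [threading.Thread(target=philosopher, args=(i, 50))
           for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
assert meals == [50] * N   # every philosopher finished all rounds
```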
Monitor solution
• A monitor-based solution does not represent the forks explicitly. Philosophers can
eat only if neither of their neighbors is eating. This is comparable to a
system where philosophers that cannot get the second fork must put down the first
fork before they try again.
• In the absence of locks associated with the forks, philosophers must ensure that the
decision to begin eating is not based on stale information about the state of their
neighbors. For example, if philosopher B sees that A is not eating and then turns to
look at C, A could begin eating while B looks at C. The monitor solution avoids this
problem by using a single mutual exclusion lock. This lock is not associated with the
forks but with the decision procedures that can change the states of the
philosophers. This is ensured by the monitor. The procedures test, pickup and
putdown are local to the monitor and share a mutual exclusion lock. Notice that
philosophers wanting to eat do not hold a fork. When the monitor allows a
philosopher who wants to eat to continue, the philosopher will reacquire the first
fork before picking up the now-available second fork. When done eating, the
philosopher will signal to the monitor that both forks are now available.
• Notice that this example does not tackle the starvation problem. For example,
philosopher B can wait forever if the eating periods of philosophers A and C always
overlap.
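The monitor idea can be sketched in Python, using one `threading.Lock` as the monitor's mutual exclusion lock and one condition variable per philosopher; the names `pickup`, `putdown` and `_test` follow the procedures mentioned above:

```python
import threading

N = 5
THINKING, HUNGRY, EATING = 0, 1, 2
state = [THINKING] * N
monitor = threading.Lock()   # the monitor's single mutual exclusion lock
can_eat = [threading.Condition(monitor) for _ in range(N)]

def _test(i):
    """Inside the monitor: let philosopher i start eating if he is
    hungry and neither neighbor is currently eating."""
    if (state[i] == HUNGRY
            and state[(i - 1) % N] != EATING
            and state[(i + 1) % N] != EATING):
        state[i] = EATING
        can_eat[i].notify()

def pickup(i):
    with monitor:
        state[i] = HUNGRY
        _test(i)
        while state[i] != EATING:   # wait until a neighbor's putdown
            can_eat[i].wait()       # re-runs _test(i) and wakes us

def putdown(i):
    with monitor:
        state[i] = THINKING
        _test((i - 1) % N)   # a neighbor may now be able to eat
        _test((i + 1) % N)
```

As the text notes, this sketch does not address starvation; the Starving-state refinement described below would extend `_test` with an extra check on the neighbors' states.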
• To also guarantee that no philosopher starves, one could keep track of the number
of times a hungry philosopher cannot eat when his neighbors put down their forks.
If this number exceeds some limit, the state of the philosopher could change to
Starving, and the decision procedure to pick up forks could be augmented to require
that none of the neighbors are starving.
• A philosopher who cannot pick up forks because a neighbor is starving is effectively
waiting for the neighbor's neighbor to finish eating. This additional dependency
reduces concurrency. Raising the threshold for transition to the Starving state
reduces this effect.
Readers-writers problem
• In computer science, the first and second readers-writers problems are examples of
a common computing problem in concurrency. The two problems deal with
situations in which many threads must access the same shared memory at one time,
some reading and some writing, with the natural constraint that no process may
access the share for reading or writing while another process is in the act of writing
to it. (In particular, it is allowed for two or more readers to access the share at the
same time.) A readers-writer lock is a data structure that solves one or more of the
readers-writers problems.
The first readers-writers problem
• Suppose we have a shared memory area with the constraints detailed above. It is
possible to protect the shared data behind a mutual exclusion mutex, in which case
no two threads can access the data at the same time. However, this solution is
suboptimal, because it is possible that a reader R1 might have the lock, and then
another reader R2 requests access. It would be foolish for R2 to wait until R1 was
done before starting its own read operation; instead, R2 should start right away.
This is the motivation for the first readers-writers problem, in which the constraint
is added that no reader shall be kept waiting if the share is currently opened for
reading. This is also called readers-preference.
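The readers-preference scheme can be sketched with two locks: a `mutex` protecting the reader count and a `room` lock that the first reader takes on behalf of all readers (the class and attribute names are illustrative):

```python
import threading

class RWLock:
    """First readers-writers problem (readers preference): readers
    share the resource, a writer needs exclusive access, and a
    reader never waits just because other readers are inside."""
    def __init__(self):
        self.readers = 0
        self.mutex = threading.Lock()   # protects the reader count
        self.room = threading.Lock()    # held while anyone writes

    def acquire_read(self):
        with self.mutex:
            self.readers += 1
            if self.readers == 1:
                self.room.acquire()     # first reader locks out writers

    def release_read(self):
        with self.mutex:
            self.readers -= 1
            if self.readers == 0:
                self.room.release()     # last reader lets writers in

    def acquire_write(self):
        self.room.acquire()             # exclusive access

    def release_write(self):
        self.room.release()
```

Because only the first reader takes the `room` lock, any number of later readers enter without waiting, while a writer can proceed only once the last reader has left; a steady stream of readers can therefore starve writers, which is exactly the known weakness of readers-preference.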