
Operating System Notes

1. Operating System
There is no single, specific definition of an operating system; many authors suggest different definitions. The basic definitions of operating systems are as under:
- An operating system is system software that acts as an interface between the user of a computer and the computer hardware and controls the execution of all kinds of application programs.
- It is system software that works as an interface between a user of a computer and the computer hardware.
- An operating system is system software that manages computer hardware and software resources and provides common services for computer programs.
- An operating system is system software that acts as an intermediary between the hardware and the user. It also acts as a resource manager which manages the system resources, and it provides an environment in which the user can execute application programs.
Explanation: The user interacts with the operating system, and the operating system in turn controls the computer system hardware.
Is it necessary to use an operating system? Can we access the computer system without an operating system? The answer is yes, but only the few people who have the knowledge to access the hardware directly can do so, and this is neither convenient nor efficient. A general user is unable to access the computer system without an operating system. The operating system is nothing but system software and an interface between the computer and the user. The operating system also acts as a resource manager which manages resources like the CPU, memory, registers, etc. The operating system is responsible for the following:
- It acts as an interface between the computer and the user.
- It manages system resources to accomplish user tasks.
- It provides an environment in which the user can execute application programs.
2. Types of Operating system
- Batch Operating System
- Multitasking / Time Sharing Operating System
- Multiprocessing Operating System
- Real Time Operating System
- Distributed Operating System
- Network Operating System
Batch Operating System.
The users of a batch operating system do not interact with the computer directly. Each user
prepares his job on an off-line device like punch cards and submits it to the computer operator. To
speed up processing, jobs with similar needs are batched together and run as a group. The
programmers leave their programs with the operator and the operator then sorts the programs with
similar requirements into batches.
Examples of Batch based Operating System: Payroll System, Bank Statements etc.
Advantages of Batch Operating System:
- Processors of the batch systems know how long the job would be when it is in the queue.
- Multiple users can share the batch systems.
- The idle time for a batch system is very low.
- It is easy to manage large work repeatedly in batch systems.
Disadvantages of Batch Operating System:
- It is very difficult to guess or know the time required for any job to complete.
- The computer operators should be well acquainted with batch systems.
- Batch systems are hard to debug.
- They are sometimes costly.
- The other jobs will have to wait for an unknown time if any job fails.
Multitasking / Time Sharing Operating System.
Time-sharing is a technique which enables many people, located at various
terminals, to use a particular computer system at the same time. Time-sharing or
multitasking is a logical extension of multiprogramming. Processor's time which is shared
among multiple users simultaneously is termed as time-sharing.
Examples of Time-Sharing OSs are: Multics, Unix etc.
Advantages of Time-sharing operating systems:
- Provides the advantage of quick response.
- Avoids duplication of software.
- Reduces CPU idle time.
Disadvantages of Time-sharing operating systems:
- Problem of reliability.
- Question of security and integrity of user programs and data.
- Problem of data communication.
Multiprocessing Operating System.
Multiprocessor Operating System refers to the use of two or more central
processing units (CPU) within a single computer system. These multiple CPUs are in a
close communication sharing the computer bus, memory and other peripheral devices.
These types of systems are used when very high speed is required to process a large
volume of data. These systems are generally used in environment like satellite control,
weather forecasting etc.
Advantages of Multiprocessing Operating System:
- Multiprocessor systems can save money by sharing power supplies, housings, and peripherals.
- They can execute programs more quickly and can have increased reliability.
Disadvantages of Multiprocessing Operating System:
- Multiprocessor systems are more complex in both hardware and software.
Real Time Operating System (RTOS).
A real-time system is defined as a data processing system in which the time interval required to process and respond to inputs is so small that it controls the environment. The time taken by the system to respond to an input and display the required updated information is termed the response time. So in this method, the response time is very small compared to online processing.
Real-time systems are used when there are rigid time requirements on the operation
of a processor or the flow of data and real-time systems can be used as a control device in a
dedicated application. A real-time operating system must have well-defined, fixed time
constraints, otherwise the system will fail.
For example, Scientific experiments, medical imaging systems, industrial control
systems, weapon systems, robots, air traffic control systems, etc.
There are two types of real-time operating systems.
Hard real-time systems
Hard real-time systems guarantee that critical tasks complete on time. In hard real-time systems, secondary storage is limited or missing and the data is stored in ROM. In these systems, virtual memory is almost never found.
Soft real-time systems
Soft real-time systems are less restrictive. A critical real-time task gets priority over other tasks and retains the priority until it completes. Soft real-time systems have more limited utility than hard real-time systems. Examples include multimedia, virtual reality, and advanced scientific projects like undersea exploration and planetary rovers.
Advantages of RTOS:
- Maximum consumption: maximum utilization of devices and the system, thus more output from all the resources.
- Task shifting: the time assigned for shifting tasks in these systems is very small. For example, in older systems it takes about 10 microseconds to shift from one task to another, and in the latest systems it takes 3 microseconds.
- Focus on application: focus on running applications and less importance to applications which are in the queue.
- Real-time operating systems in embedded systems: since the size of programs is small, an RTOS can also be used in embedded systems such as transport and others.
- Error free: these types of systems are error free.
- Memory allocation: memory allocation is best managed in these types of systems.
Disadvantages of RTOS:
- Limited tasks: very few tasks run at the same time, and concentration is kept on very few applications to avoid errors.
- Heavy use of system resources: sometimes the system resources are not so good, and they are expensive as well.
- Complex algorithms: the algorithms are very complex and difficult for the designer to write.
- Device drivers and interrupt signals: it needs specific device drivers and interrupt signals so that it can respond to interrupts as early as possible.
- Thread priority: it is not good to set thread priority as these systems are very less prone to switching tasks.
Distributed Operating System.
Distributed operating systems are a recent advancement in the world of computer technology and are being widely accepted all over the world, at a great pace. Various autonomous interconnected computers communicate with each other over a shared communication network. Independent systems possess their own memory unit and CPU; these are referred to as loosely coupled systems or distributed systems. The processors of these systems may differ in size and function. The major benefit of working with these types of operating system is that a user can always access files or software which are not actually present on his own system but on some other system connected within the network, i.e., remote access is enabled within the devices connected in that network.
Examples of Distributed Operating Systems are LOCUS, etc.
Advantages of Distributed Operating System:
- Failure of one system will not affect the other network communication, as all systems are independent of each other.
- Electronic mail increases the data exchange speed.
- Since resources are being shared, computation is highly fast and durable.
- Load on the host computer reduces.
- These systems are easily scalable as many systems can easily be added to the network.
- Delay in data processing reduces.
Disadvantages of Distributed Operating System:
- Failure of the main network will stop the entire communication.
- The languages used to establish distributed systems are not well defined yet.
- These types of systems are not readily available as they are very expensive. Not only that, the underlying software is highly complex and not understood well yet.
Network Operating System.
A Network Operating System runs on a server and provides the server the capability
to manage data, users, groups, security, applications, and other networking functions. The
primary purpose of the network operating system is to allow shared file and printer access
among multiple computers in a network, typically a local area network (LAN), a private
network or to other networks.
Examples of network operating systems include Microsoft Windows Server 2003,
Microsoft Windows Server 2008, UNIX, Linux, Mac OS X, Novell NetWare, and BSD.
Advantages of network operating systems:
- Centralized servers are highly stable.
- Security is server managed.
- Upgrades to new technologies and hardware can be easily integrated into the system.
- Remote access to servers is possible from different locations and types of systems.
Disadvantages of network operating systems:
- High cost of buying and running a server.
- Dependency on a central location for most operations.
- Regular maintenance and updates are required.
3. System Components of Operating system
- Process Management
- Main Memory Management
- File Management
- I/O System Management
- Secondary Storage Management
- Protection System
Process Management
A process is a program in execution. A process needs certain resources, including
CPU time, memory, files, and I/O devices, to accomplish its task.
The operating system is responsible for the following activities in connection with process management:
- Process creation and deletion.
- Process suspension and resumption.
- Provision of mechanisms for:
  - process synchronization
  - process communication
Main Memory Management
Memory is a large array of words or bytes, each with its own address. It is a
repository of quickly accessible data shared by the CPU and I/O devices. Main memory is a
volatile storage device. It loses its contents in the case of system failure.
The operating system is responsible for the following activities in connection with memory management:
- Keep track of which parts of memory are currently being used and by whom.
- Decide which processes to load when memory space becomes available.
- Allocate and deallocate memory space as needed.
File Management
A file is a collection of related information defined by its creator. Commonly, files represent programs (both source and object forms) and data.
The operating system is responsible for the following activities in connection with file management:
- File creation and deletion.
- Directory creation and deletion.
- Support of primitives for manipulating files and directories.
- Mapping files onto secondary storage.
- File backup on stable (nonvolatile) storage media.
I/O System Management
One of the important uses of an operating system is to hide the peculiarities of specific hardware devices from the user.
The operating system is responsible for the following activities in connection with I/O system management:
- It offers a buffer-caching system.
- It provides general device-driver code.
- It provides drivers for particular hardware devices.
- The I/O subsystem knows the individualities of a specific device.
Secondary Storage Management
The most important task of a computer system is to execute programs. These programs, along with the data they access, must be in main memory during execution.
Since the main memory of the computer is too small to store all data and programs permanently, the computer system provides secondary storage to back up main memory. Today, modern computers use hard drives/SSDs as the primary storage for both programs and data. However, secondary storage management also works with storage devices such as USB flash drives and CD/DVD drives.
Programs such as assemblers and compilers are stored on the disk until they are loaded into memory, and they then use the disk as both the source and destination of their processing.
Responsibilities of secondary storage management in an OS:
- Storage allocation
- Free space management
- Disk scheduling
Protection System
Protection refers to a mechanism for controlling access by programs, processes, or users to both system and user resources. The protection mechanism must:
- distinguish between authorized and unauthorized usage;
- specify the controls to be imposed;
- provide a means of enforcement.
4. Services provided by Operating System
An Operating System provides services to both the users and to the programs.
- It provides programs an environment to execute.
- It provides users the services to execute the programs in a convenient manner.
Following are a few common services provided by an operating system −
- Program execution
- I/O operations
- File System manipulation
- Communication
- Error Detection
- Resource Allocation
- Protection
Program execution
Operating systems handle many kinds of activities from user programs to system programs
like printer spooler, name servers, file server, etc. Each of these activities is encapsulated as a
process.
A process includes the complete execution context (code to execute, data to manipulate,
registers, OS resources in use). Following are the major activities of an operating system with
respect to program management −
- Loads a program into memory.
- Executes the program.
- Handles the program's execution.
- Provides a mechanism for process synchronization.
- Provides a mechanism for process communication.
- Provides a mechanism for deadlock handling.
I/O Operation
An I/O subsystem comprises I/O devices and their corresponding driver software. Drivers hide the peculiarities of specific hardware devices from the users.
An Operating System manages the communication between the user and the device drivers.
- An I/O operation means a read or write operation with any file or any specific I/O device.
- The operating system provides access to the required I/O device when required.
File system manipulation
A file represents a collection of related information. Computers can store files on
the disk (secondary storage), for long-term storage purpose. Examples of storage media
include magnetic tape, magnetic disk and optical disk drives like CD, DVD. Each of these
media has its own properties like speed, capacity, data transfer rate and data access
methods.
A file system is normally organized into directories for easy navigation and usage. These directories may contain files and other directories. Following are the major activities of an operating system with respect to file management −
- A program needs to read a file or write a file.
- The operating system gives the program permission to operate on a file. Permission varies from read-only, read-write, denied, and so on.
- The operating system provides an interface to the user to create/delete files.
- The operating system provides an interface to the user to create/delete directories.
- The operating system provides an interface to create a backup of the file system.
Communication
In case of distributed systems which are a collection of processors that do not share
memory, peripheral devices, or a clock, the operating system manages communications
between all the processes. Multiple processes communicate with one another through
communication lines in the network.
The OS handles routing and connection strategies, and the problems of contention and
security. Following are the major activities of an operating system with respect to
communication −
- Two processes often require data to be transferred between them.
- Both processes can be on one computer or on different computers connected through a computer network.
- Communication may be implemented by two methods: either by shared memory or by message passing.
Error handling
Errors can occur anytime and anywhere. An error may occur in CPU, in I/O devices
or in the memory hardware. Following are the major activities of an operating system with
respect to error handling −
- The OS constantly checks for possible errors.
- The OS takes an appropriate action to ensure correct and consistent computing.
Resource Allocation
In case of multi-user or multi-tasking environment, resources such as main memory,
CPU cycles and files storage are to be allocated to each user or job. Following are the
major activities of an operating system with respect to resource management −
- The OS manages all kinds of resources using schedulers.
- CPU scheduling algorithms are used for better utilization of the CPU.
Protection
Considering a computer system having multiple users and concurrent execution of
multiple processes, the various processes must be protected from each other's activities.
Protection refers to a mechanism or a way to control the access of programs, processes,
or users to the resources defined by a computer system. Following are the major activities
of an operating system with respect to protection −
- The OS ensures that all access to system resources is controlled.
- The OS ensures that external I/O devices are protected from invalid access attempts.
- The OS provides authentication features for each user by means of passwords.
5. System Call
The interface between a process and an operating system is provided by system
calls. In general, system calls are available as assembly language instructions. They are also
included in the manuals used by the assembly level programmers. System calls are usually made
when a process in user mode requires access to a resource. Then it requests the kernel to provide the
resource via a system call.
In general, system calls are required in the following situations −
- If a file system requires the creation or deletion of files. Reading and writing from files also require a system call.
- Creation and management of new processes.
- Network connections also require system calls. This includes sending and receiving packets.
- Access to hardware devices such as a printer, scanner, etc. requires a system call.
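To make this concrete, here is a minimal C sketch (for a POSIX system; the file names input.txt and copy.txt are hypothetical) that copies one file to another using only the open(), read(), write() and close() system calls:

#include <fcntl.h>
#include <unistd.h>

int main(void) {
    char buf[4096];
    ssize_t n;

    /* open() traps into the kernel to obtain file descriptors */
    int in  = open("input.txt", O_RDONLY);
    int out = open("copy.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (in < 0 || out < 0)
        return 1;

    /* each read()/write() is a separate system call */
    while ((n = read(in, buf, sizeof buf)) > 0)
        write(out, buf, n);

    close(in);
    close(out);
    return 0;
}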
Types of System Calls
There are mainly five types of system calls. These are explained in detail as follows
Process Control
These system calls deal with processes such as process creation, process termination etc.
File Management
These system calls are responsible for file manipulation such as creating a file,
reading a file, writing into a file etc.
Device Management
These system calls are responsible for device manipulation such as reading from
device buffers, writing into device buffers etc.
Information Maintenance
These system calls handle information and its transfer between the operating system
and the user program.
Communication
These system calls are useful for inter process communication. They also deal with
creating and deleting a communication connection.
Examples of Windows and Linux system calls:
Process Control
- Windows: CreateProcess(), ExitProcess(), WaitForSingleObject()
- Linux: fork(), exit(), wait()
File Management
- Windows: CreateFile(), ReadFile(), WriteFile(), CloseHandle()
- Linux: open(), read(), write(), close()
Device Management
- Windows: SetConsoleMode(), ReadConsole(), WriteConsole()
- Linux: ioctl(), read(), write()
Information Maintenance
- Windows: GetCurrentProcessID(), SetTimer(), Sleep()
- Linux: getpid(), alarm(), sleep()
Communication
- Windows: CreatePipe(), CreateFileMapping(), MapViewOfFile()
- Linux: pipe(), shmget(), mmap()
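As a hedged illustration of the process-control calls in the Linux column, the following C sketch has the parent fork() a child, the child replace itself with another program via execl() (here /bin/ls, an arbitrary choice), and the parent wait() for it:

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();              /* process-creation system call */
    if (pid == 0) {
        /* child: replace its image with another program */
        execl("/bin/ls", "ls", "-l", (char *)NULL);
        _exit(1);                    /* reached only if execl fails */
    } else if (pid > 0) {
        int status;
        wait(&status);               /* parent blocks until the child terminates */
        printf("child %d finished\n", (int)pid);
    }
    return 0;
}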
6. System Programs
System programs provide an environment where programs can be developed and
executed. In the simplest sense, system programs also provide a bridge between the user
interface and system calls. In reality, they are much more complex. For example, a
compiler is a complex system program.
The system program serves as a part of the operating system. It traditionally lies
between the user interface and the system calls. The user view of the system is actually
defined by system programs and not system calls because that is what they interact with
and system programs are closer to the user interface.
In the operating system hierarchy, system programs as well as application programs form a bridge between the user interface and the system calls. So, from the user's view, the operating system that is observed is actually the system programs and not the system calls.
Types of System Programs
System programs can be divided into seven parts. These are given as follows:
Status Information
The status information system programs provide required data on the current or past
status of the system. This may include the system date, system time, available memory in
system, disk space, logged in users etc.
Communications
These system programs are needed for system communications such as web
browsers. Web browsers allow systems to communicate and access information from the
network as required.
File Manipulation
These system programs are used to manipulate system files. This can be done using
various commands like create, delete, copy, rename, print etc. These commands can create
files, delete files, copy the contents of one file into another, rename files, print them etc.
Program Loading and Execution
The system programs that deal with program loading and execution make sure that
programs can be loaded into memory and executed correctly. Loaders and Linkers are a
prime example of this type of system programs.
File Modification
System programs that are used for file modification basically change the data in the
file or modify it in some other way. Text editors are a big example of file modification
system programs.
Application Programs
Application programs can perform a wide range of services as per the needs of the
users. These include programs for database systems, word processors, plotting tools,
spreadsheets, games, scientific applications etc.
Programming Language Support
These system programs provide additional support features for different programming languages. Some examples are compilers and debuggers, which compile a program and help ensure it is error free, respectively.
7. Structure of Operating system
An operating system is a construct that allows the user application programs to
interact with the system hardware. Since the operating system is such a complex structure,
it should be created with utmost care so it can be used and modified easily. An easy way to
do this is to create the operating system in parts. Each of these parts should be well defined
with clear inputs, outputs and functions.
Simple Structure
There are many operating systems that have a rather simple structure. These started as small systems and rapidly expanded much further than their original scope. A common example of this is MS-DOS, which was designed for a small group of people; there was no indication that it would become so popular.
An image to illustrate the structure of MS-DOS is as follows:
It is better that operating systems have a modular structure, unlike MS-DOS. That
would lead to greater control over the computer system and its various applications. The
modular structure would also allow the programmers to hide information as required and
implement internal routines as they see fit without changing the outer specifications.
Layered Structure
One way to achieve modularity in the operating system is the layered approach. In
this, the bottom layer is the hardware and the topmost layer is the user interface.
In the layered approach, each upper layer is built on the layers below it, and every layer hides some data structures, operations, etc. from the layers above it.
One problem with the layered structure is that each layer needs to be carefully
defined. This is necessary because the upper layers can only use the functionalities of the
layers below them.
Virtual Machine
The layered approach of operating systems is taken to its logical conclusion in the
concept of virtual machine. The fundamental idea behind a virtual machine is to abstract
the hardware of a single computer (the CPU, Memory, Disk drives, Network Interface
Cards, and so forth) into several different execution environments and thereby creating the
illusion that each separate execution environment is running its own private computer. By
using CPU Scheduling and Virtual Memory techniques, an operating system can create the
illusion that a process has its own processor with its own (virtual) memory. Normally a
process has additional features, such as system calls and a file system, which are not
provided by the hardware. The Virtual machine approach does not provide any such
additional functionality but rather an interface that is identical to the underlying bare
hardware. Each process is provided with a (virtual) copy of the underlying computer.
Hardware Virtual machine
The original meaning of virtual machine, sometimes called a hardware virtual
machine, is that of a number of discrete identical execution environments on a single
computer, each of which runs an operating system (OS). This can allow applications
written for one OS to be executed on a machine which runs a different OS, or provide
execution “sandboxes” which provide a greater level of isolation between processes than is
achieved when running multiple processes on the same instance of an OS. One use is to
provide multiple users the illusion of having an entire computer, one that is their “private”
machine, isolated from other users, all on a single physical machine. Another advantage is
that booting and restarting a virtual machine can be much faster than with a physical
machine, since it may be possible to skip tasks such as hardware initialization.
Such software is now often referred to with the terms virtualization and virtual
servers. The host software which provides this capability is often referred to as a virtual
machine
Application virtual machine: Another meaning of virtual machine is a piece of
computer software that isolates the application being used by the user from the computer.
Because versions of the virtual machine are written for various computer platforms, any
application written for the virtual machine can be operated on any of the platforms, instead
of having to produce separate versions of the application for each computer and operating
system. The application is run on the computer using an interpreter or Just in Time
compilation. One of the best known examples of an application virtual machine is Sun
Microsystem’s Java Virtual Machine.
8. System Design and Implementation
An operating system is a construct that allows the user application programs to interact
with the system hardware. Operating system by itself does not provide any function but it
provides an atmosphere in which different applications and programs can do useful work.
There are many problems that can occur while designing and implementing an
operating system. These are covered in operating system design and implementation.
Operating System Design Goals
It is quite complicated to define all the goals and specifications of the operating system while designing it. The design changes depending on the type of operating system, i.e., whether it is a batch system, time-shared system, single-user system, multi-user system, distributed system, etc.
There are basically two types of goals while designing an operating system. These are:
User Goals
The operating system should be convenient, easy to use, reliable, safe and fast
according to the users. However, these specifications are not very useful as there is no set
method to achieve these goals.
System Goals
The operating system should be easy to design, implement and maintain. These are specifications required by those who create, maintain and operate the operating system. But there is no specific method to achieve these goals either.
Operating System Mechanisms and Policies
There is no specific way to design an operating system as it is a highly creative task.
However, there are general software principles that are applicable to all operating systems.
A subtle difference between mechanism and policy is that mechanism shows how to
do something and policy shows what to do. Policies may change over time and this would
lead to changes in mechanism. So, it is better to have a general mechanism that would
require few changes even when a policy change occurs.
For example - If the mechanism and policy are independent, then few changes are
required in mechanism if policy changes. If a policy favours I/O intensive processes over
CPU intensive processes, then a policy change to preference of CPU intensive processes
will not change the mechanism.
Operating System Implementation
The operating system needs to be implemented after it is designed. Earlier they
were written in assembly language but now higher level languages are used. The first
system not written in assembly language was the Master Control Program (MCP) for
Burroughs Computers.
Advantages of Higher Level Language
There are multiple advantages to implementing an operating system using a higher level language: the code can be written faster, it is more compact, and it is easier to debug and understand. Also, the operating system can be easily ported from one hardware platform to another if it is written in a high level language.
Disadvantages of Higher Level Language
Using high level language for implementing an operating system leads to a loss in
speed and increase in storage requirements. However in modern systems only a small
amount of code is needed for high performance, such as the CPU scheduler and memory
manager. Also, the bottleneck routines in the system can be replaced by assembly language
equivalents if required.
9. System Generations
Operating Systems have evolved over the years. So, their evolution through the
years can be mapped using generations of operating systems. There are four generations of
operating systems. These can be described as follows:
The First Generation ( 1945 - 1955 ): Vacuum Tubes and Plugboards
Digital computers were not constructed until the second world war. Calculating
engines with mechanical relays were built at that time. However, the mechanical relays
were very slow and were later replaced with vacuum tubes. These machines were
enormous but were still very slow.
These early computers were designed, built and maintained by a single group of
people. Programming languages were unknown and there were no operating systems so all
the programming was done in machine language. All the problems were simple numerical
calculations.
By the 1950’s punch cards were introduced and this improved the computer system.
Instead of using plugboards, programs were written on cards and read into the system.
The Second Generation ( 1955 - 1965 ): Transistors and Batch Systems
Transistors led to the development of the computer systems that could be
manufactured and sold to paying customers. These machines were known as mainframes
and were locked in air-conditioned computer rooms with staff to operate them.
The Batch System was introduced to reduce the wasted time in the computer. A tray
full of jobs was collected in the input room and read into the magnetic tape. After that, the
tape was rewound and mounted on a tape drive. Then the batch operating system was
loaded in which read the first job from the tape and ran it. The output was written on the
second tape. After the whole batch was done, the input and output tapes were removed and
the output tape was printed.
The Third Generation ( 1965 - 1980 ): Integrated Circuits and
Multiprogramming
Until the 1960’s, there were two types of computer systems i.e the scientific and the
commercial computers. These were combined by IBM in the System/360. This used
integrated circuits and provided a major price and performance advantage over the second
generation systems.
The third generation operating systems also introduced multiprogramming. This
meant that the processor was not idle while a job was completing its I/O operation. Another
job was scheduled on the processor so that its time would not be wasted.
The Fourth Generation ( 1980 - Present ): Personal Computers
Personal Computers were easy to create with the development of large-scale
integrated circuits. These were chips containing thousands of transistors on a square
centimeter of silicon. Because of these, microcomputers were much cheaper than
minicomputers and that made it possible for a single individual to own one of them.
The advent of personal computers also led to the growth of networks. This created
network operating systems and distributed operating systems. The users were aware of a
network while using a network operating system and could log in to remote machines and
copy files from one machine to another.
UNIT –II
2. Process
A process is basically a program in execution. The execution of a process must progress in a
sequential fashion.
A process is defined as an entity which represents the basic unit of work to be
implemented in the system.
When a program is loaded into the memory and it becomes a process, it can be divided into
four sections ─ stack, heap, text and data. The following image shows a simplified layout of a
process inside main memory −
S.N. Component & Description
1. Stack: The process stack contains temporary data such as method/function parameters, the return address, and local variables.
2. Heap: This is dynamically allocated memory given to a process during its run time.
3. Text: This includes the current activity represented by the value of the Program Counter and the contents of the processor's registers.
4. Data: This section contains the global and static variables.
2.10 Scheduling Criteria
2.12 Multiprocessing scheduling
In multiprocessor scheduling, there are multiple CPUs which share the load so that various processes run simultaneously. In general, multiprocessor scheduling is more complex than single-processor scheduling. In multiprocessor scheduling, there are many processors, they are identical, and we can run any process at any time.
The multiple CPUs in the system are in close communication and share a common bus, memory, and other peripheral devices, so we can say that the system is tightly coupled. These systems are used when we want to process a bulk amount of data, and they are mainly used in areas such as satellite control and weather forecasting.
Multiprocessing systems work on the concept of the symmetric multiprocessing model. In this model, each processor runs an identical copy of the operating system and these copies communicate with each other. With the help of such a system we can save money, because peripherals, power supplies, and other devices are shared.
The most important thing is that we can do more work in a shorter period of time. If one processor fails in a multiprocessor system, the whole system does not halt; only the speed of the system slows down.
The whole performance of the multiprocessing system is managed by the operating system, which assigns different tasks to the different processors in the system. In a multiprocessing system, a process is broken into threads which can run independently.
These types of systems allow the threads to run on more than one processor simultaneously, so the various processes run in parallel; this is called parallel processing. Parallel processing is the ability of the CPU to run various processes simultaneously. In a multiprocessing system, there is dynamic sharing of resources among the various processors.
A multiprocessor operating system is a kind of regular OS which handles many system calls at the same time, does memory management, and provides file management as well as management of the input-output devices.
2.11 Real Time Scheduling
2.12 Scheduling Algorithm
Types of CPU scheduling Algorithm
There are mainly six types of process scheduling algorithms:
1. First Come First Serve (FCFS)
2. Shortest-Job-First (SJF) Scheduling
3. Shortest Remaining Time
4. Priority Scheduling
5. Round Robin Scheduling
6. Multilevel Queue Scheduling
First Come First Serve
First Come First Serve is the full form of FCFS. It is the easiest and most simple CPU
scheduling algorithm. In this type of algorithm, the process which requests the CPU gets the CPU
allocation first. This scheduling method can be managed with a FIFO queue.
Characteristics of the FCFS method:
- It is a non-preemptive scheduling algorithm.
- Jobs are always executed on a first-come, first-served basis.
- It is easy to implement and use.
- However, this method is poor in performance, and the general wait time is quite high.
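As an illustration only, the following C sketch computes FCFS waiting and turnaround times, assuming all processes arrive at time 0 and using made-up burst times:

#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3};            /* hypothetical CPU burst times */
    int n = 3, wait = 0, totalWait = 0, totalTat = 0;

    for (int i = 0; i < n; i++) {
        int tat = wait + burst[i];       /* turnaround = waiting + burst */
        printf("P%d: waiting=%d turnaround=%d\n", i + 1, wait, tat);
        totalWait += wait;
        totalTat  += tat;
        wait += burst[i];                /* next process starts after this one */
    }
    printf("avg waiting=%.2f avg turnaround=%.2f\n",
           (double)totalWait / n, (double)totalTat / n);
    return 0;
}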
Shortest Remaining Time
The full form of SRT is Shortest remaining time. It is also known as SJF preemptive
scheduling. In this method, the process will be allocated to the task, which is closest to its
completion. This method prevents a newer ready state process from holding the completion of an
older process.
Characteristics of the SRT scheduling method:
- This method is mostly applied in batch environments where short jobs are required to be given preference.
- It is not an ideal method to implement in a shared system where the required CPU time is unknown.
- Each process is associated with the length of its next CPU burst, and the operating system uses these lengths to schedule the process with the shortest possible time.
Priority Based Scheduling
Priority scheduling is a method of scheduling processes based on priority. In this method,
the scheduler selects the tasks to work as per the priority.
Priority scheduling also helps the OS to involve priority assignments. The processes with higher priority should be carried out first, whereas jobs with equal priorities are carried out on a round-robin or FCFS basis. Priority can be decided based on memory requirements, time requirements, etc.
Round-Robin Scheduling
Round robin is the oldest, simplest scheduling algorithm. The name of this algorithm comes
from the round-robin principle, where each person gets an equal share of something in turn. It is
mostly used for scheduling algorithms in multitasking. This algorithm method helps for starvation
free execution of processes.
Characteristics of Round-Robin Scheduling:
- Round robin is a hybrid model which is clock-driven.
- The time slice should be the minimum that is assigned for a specific task to be processed; however, it may vary for different processes.
- It is a real-time system which responds to events within a specific time limit.
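A rough C sketch of round-robin scheduling is shown below; the burst times and the time quantum of 4 are arbitrary, and all processes are assumed to arrive at time 0:

#include <stdio.h>

int main(void) {
    int burst[]  = {10, 5, 8};       /* hypothetical burst times */
    int remain[] = {10, 5, 8};
    int n = 3, quantum = 4, time = 0, done = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (remain[i] == 0)
                continue;
            int slice = remain[i] < quantum ? remain[i] : quantum;
            time += slice;           /* the process runs for one time slice */
            remain[i] -= slice;
            if (remain[i] == 0) {
                done++;
                /* arrival assumed 0, so turnaround = finish time,
                   waiting = turnaround - burst */
                printf("P%d: turnaround=%d waiting=%d\n",
                       i + 1, time, time - burst[i]);
            }
        }
    }
    return 0;
}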
Shortest Job First
SJF is a full form of (Shortest job first) is a scheduling algorithm in which the process with the
shortest execution time should be selected for execution next. This scheduling method can be
preemptive or non-preemptive. It significantly reduces the average waiting time for other processes
awaiting execution.
Characteristics of SJF Scheduling:
- Each job is associated with a unit of time in which to complete.
- In this method, when the CPU is available, the next process or job with the shortest completion time is executed first.
- It can be implemented with a non-preemptive policy.
- This algorithm method is useful for batch-type processing, where waiting for jobs to complete is not critical.
- It improves job output by running shorter jobs first, which mostly have a shorter turnaround time.
Multiple-Level Queues Scheduling
This algorithm separates the ready queue into various separate queues. In this method,
processes are assigned to a queue based on a specific property of the process, like the process
priority, size of the memory, etc.
Characteristics of Multiple-Level Queues Scheduling:
- Multiple queues should be maintained for processes with some characteristics.
- Every queue may have its own separate scheduling algorithm.
3. Introduction to Process Synchronization
 What is process synchronization?
Process synchronization is a technique used to coordinate processes that use shared data. There are two types of processes in operating systems:
Independent Process –
A process that does not affect and is not affected by any other process during its execution is called an independent process. Example: a process that does not share any variable, database, file, etc.
Cooperating Process –
A process that affects or is affected by another process during its execution is called a cooperating process. Example: processes that share a file, variable, database, etc. are cooperating processes.
Process Synchronization is the task of coordinating the execution of processes in a way
that no two processes can have access to the same shared data and resources.
It is specially needed in a multi-process system when multiple processes are running together and more than one process tries to gain access to the same shared resource or data at the same time.
This can lead to inconsistency of shared data, because a change made by one process is not necessarily reflected when other processes access the same shared data. To avoid this type of data inconsistency, the processes need to be synchronized with each other.
Race Condition
A race condition is the situation where several processes try to access and modify shared data concurrently and the outcome depends on the particular order of execution, which leads to data inconsistency. This condition can be avoided using the technique called synchronization, or process synchronization, in which we allow only one process at a time to enter and manipulate the shared data in the critical section.
Why we need process synchronization
Process synchronization needs to be implemented to prevent data inconsistency among processes, to prevent process deadlocks, and to prevent race conditions, which occur when two or more operations are executed at the same time, are not scheduled in the proper sequence, and do not exit the critical section correctly.
How Process Synchronization Works?
For example, process A is changing the data in a memory location while another process B is trying to read the data from the same memory location. There is a high probability that the data read by the second process will be erroneous.
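This can be demonstrated with a small C program (a sketch, assuming POSIX threads are available) in which two threads increment the same counter without synchronization; the iteration count is arbitrary:

#include <stdio.h>
#include <pthread.h>

long counter = 0;                    /* shared data */

void *increment(void *arg) {
    for (int i = 0; i < 1000000; i++)
        counter++;                   /* read-modify-write: not atomic */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* expected 2000000, but the race usually produces a smaller value */
    printf("counter = %ld\n", counter);
    return 0;
}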
 Critical Section
The critical section is the part of a program which tries to access shared resources. The critical section cannot be executed by more than one process at the same time; the operating system faces difficulty in allowing and disallowing processes from entering the critical section.
The critical section is a code segment that can be accessed by only one process at a time. It contains shared variables which need to be synchronized to maintain the consistency of data.
This setup can be divided into various sections:
Entry Section
It is the part of the process which decides the entry of a particular process into the critical section, out of many other processes.
Critical Section
It is the part in which only one process is allowed to enter and modify the shared variable. This part of the process ensures that no other process can access the resource or shared data.
Exit Section
This section allows the other processes that are waiting in the entry section to enter the critical section. It also ensures that a process which has finished execution in the critical section is removed through this exit section.
Remainder Section
The parts of the code other than the entry section, critical section, and exit section are known as the remainder section.
 Critical Section Problem
A Critical Section is a code segment that accesses shared variables and has to be executed as an
atomic action. It means that in a group of cooperating processes, at a given point of time, only
one process must be executing its critical section.
Any solution to the critical section problem must satisfy three requirements:
Mutual Exclusion: If a process is executing in its critical section, then no other process
is allowed to execute in the critical section.
Progress: If no process is executing in the critical section and other processes are waiting
outside the critical section, then only those processes that are not executing in their
remainder section can participate in deciding which will enter in the critical section next,
and the selection cannot be postponed indefinitely.
Bounded Wait: A bound must exist on the number of times that other processes are
allowed to enter their critical sections after a process has made a request to enter its
critical section and before that request is granted.
 Synchronization Hardware
Many systems provide hardware support for critical section code. The critical section problem could be solved easily in a single-processor environment if we could prevent interrupts from occurring while a shared variable or resource is being modified.
In this way, we could be sure that the current sequence of instructions would be allowed to execute in order without pre-emption. Unfortunately, this solution is not feasible in a multiprocessor environment.
Disabling interrupts in a multiprocessor environment can be time consuming, as the message has to be passed to all the processors. This message transmission lag delays the entry of threads into the critical section, and system efficiency decreases.
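On most hardware this support takes the form of an atomic test-and-set or compare-and-swap instruction. A minimal sketch using C11's atomic_flag, which compilers typically map onto such an instruction (the acquire/release function names are our own):

#include <stdatomic.h>

atomic_flag lock = ATOMIC_FLAG_INIT;

void acquire(void) {
    /* atomically sets the flag and returns its old value;
       spin while some other thread already holds the lock */
    while (atomic_flag_test_and_set(&lock))
        ;                            /* busy wait */
}

void release(void) {
    atomic_flag_clear(&lock);        /* open the lock again */
}

int main(void) {
    acquire();
    /* critical section */
    release();
    return 0;
}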
Mutex Locks
As the synchronization hardware solution is not easy to implement for everyone, a strict
software approach called Mutex Locks was introduced. In this approach, in the entry
section of code, a LOCK is acquired over the critical resources modified and used inside
critical section, and in the exit section that LOCK is released.
Because the resource is locked while a process executes its critical section, no other process can access it.
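A minimal sketch of this pattern using POSIX mutexes (the worker function and shared variable are hypothetical):

#include <pthread.h>

pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
int shared = 0;                      /* resource used inside the critical section */

void *worker(void *arg) {
    pthread_mutex_lock(&m);          /* entry section: acquire the LOCK */
    shared++;                        /* critical section */
    pthread_mutex_unlock(&m);        /* exit section: release the LOCK */
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}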
 Semaphores
In 1965, Dijkstra proposed a new and very significant technique for managing concurrent processes by using the value of a simple integer variable to synchronize the progress of interacting processes. This integer variable is called a semaphore. It is basically a synchronizing tool and is accessed only through two standard atomic operations, wait and signal, designated by P(S) and V(S) respectively.
In very simple words, semaphore is a variable which can hold only a non-negative
Integer value, shared between all the threads, with operations wait and signal, which
work as follow:
P(S): if S ≥ 1 then S := S - 1
else <block and enqueue the process>;
V(S): if <some process is blocked on the queue>
then <unblock a process>
else S := S + 1;
The classical definitions of wait and signal are:
Wait: decrements the value of its argument S as long as the result remains non-negative (i.e., S is greater than or equal to 1); otherwise the process blocks.
Signal: increments the value of its argument S if no process is blocked on the queue; otherwise it unblocks one of the waiting processes.
Properties of Semaphores
1. It is simple and always has a non-negative integer value.
2. It works with many processes.
3. There can be many different critical sections with different semaphores.
4. Each critical section has unique access semaphores.
5. It can permit multiple processes into the critical section at once, if desirable.
Types of Semaphores
Semaphores are mainly of two types:
Binary Semaphore:
It is a special form of semaphore used for implementing mutual exclusion, hence
it is often called a Mutex. A binary semaphore is initialized to 1 and only takes
the values 0 and 1 during execution of a program.
Counting Semaphores:
These are used to implement bounded concurrency.
Example of Use
Here is a simple step-wise implementation involving the declaration and usage of a semaphore.
shared var mutex: semaphore = 1;
Process i
begin
  ...
  P(mutex);
  execute CS;
  V(mutex);
  ...
end;
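The same structure can be written with POSIX semaphores, where sem_wait() plays the role of P and sem_post() the role of V; this is only a sketch of one possible mapping:

#include <semaphore.h>
#include <pthread.h>

sem_t mutex;                         /* binary semaphore, initialised to 1 */

void *process_i(void *arg) {
    sem_wait(&mutex);                /* P(mutex): block if the value is 0 */
    /* ... execute critical section ... */
    sem_post(&mutex);                /* V(mutex): wake one waiter or set value to 1 */
    return NULL;
}

int main(void) {
    sem_init(&mutex, 0, 1);          /* shared between threads, initial value 1 */
    pthread_t t;
    pthread_create(&t, NULL, process_i, NULL);
    pthread_join(t, NULL);
    sem_destroy(&mutex);
    return 0;
}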
Limitations of Semaphores
1. Priority Inversion is a big limitation of semaphores.
2. Their use is not enforced, but is by convention only.
3. With improper use, a process may block indefinitely. Such a situation is called a deadlock. We will study deadlocks in detail in coming lessons.
 Classical Problems of Synchronization
Semaphores can be used in synchronization problems other than mutual exclusion. Below are some of the classical problems depicting flaws of process synchronization in systems where cooperating processes are present.
We will discuss the following three problems:
1. Bounded Buffer (Producer-Consumer) Problem
2. Dining Philosophers Problem
3. The Readers Writers Problem
Bounded Buffer Problem
 This problem is generalized in terms of the Producer Consumer problem, where a finite buffer
pool is used to exchange messages between producer and consumer processes.
Because the buffer pool has a maximum size, this problem is often called the Bounded
buffer problem.
 Solution to this problem is, creating two counting semaphores "full" and "empty" to keep track
of the current number of full and empty buffers respectively.
Dining Philosophers Problem
 The dining philosopher's problem involves the allocation of limited resources to a group of
processes in a deadlock-free and starvation-free manner.
 There are five philosophers sitting around a table, with five chopsticks/forks kept beside them and a bowl of rice in the centre. When a philosopher wants to eat, he uses two chopsticks - one from his left and one from his right. When a philosopher wants to think, he puts both chopsticks down at their original place.
The Readers Writers Problem
 In this problem there are some processes(called readers) that only read the shared data, and
never change it, and there are other processes(called writers) who may change the data in
addition to reading, or instead of reading it.
 There are various type of readers-writers problem, most centred on relative priorities of readers
and writers.
Bounded Buffer Problem
Bounded buffer problem, which is also called producer consumer problem, is one of
the classic problems of synchronization. Let's start by understanding the problem here,
before moving on to the solution and program code.
What is the Problem Statement?
There is a buffer of n slots and each slot is capable of storing one unit of data. There are
two processes running, namely, producer and consumer, which are operating on the
buffer.
A producer tries to insert data into an empty slot of the buffer. A consumer tries to
remove data from a filled slot in the buffer. As you might have guessed by now, those
two processes won't produce the expected output if they are being executed concurrently.
There needs to be a way to make the producer and consumer work in an independent
manner.
Here's a Solution
One solution of this problem is to use semaphores. The semaphores which will be used
here are:
 mutex, a binary semaphore which is used to acquire and release the lock.
 empty, a counting semaphore whose initial value is the number of slots in the
buffer, since, initially all slots are empty.
 full, a counting semaphore whose initial value is 0.
At any instant, the current value of empty represents the number of empty slots in
the buffer and full represents the number of occupied slots in the buffer.
The Producer Operation
The pseudocode of the producer function looks like this:
do
{
// wait until empty > 0 and then decrement 'empty'
wait(empty);
// acquire lock
wait(mutex);
/* perform the insert operation in a slot */
// release lock
signal(mutex);
// increment 'full'
signal(full);
}
while(TRUE);
i. Looking at the above code for a producer, we can see that a producer first waits until there is at least one empty slot.
ii. Then it decrements the empty semaphore because there will now be one less empty slot, since the producer is going to insert data into one of those slots.
iii. Then, it acquires a lock on the buffer, so that the consumer cannot access the buffer until the producer completes its operation.
iv. After performing the insert operation, the lock is released and the value of full is incremented because the producer has just filled a slot in the buffer.
The Consumer Operation
The pseudocode for the consumer function looks like this:
do
{
// wait until full > 0 and then decrement 'full'
wait(full);
// acquire the lock
wait(mutex);
/* perform the remove operation in a slot */
// release the lock
signal(mutex);
// increment 'empty'
signal(empty);
}
while(TRUE);
i. The consumer waits until there is at least one full slot in the buffer.
ii. Then it decrements the full semaphore because the number of occupied slots will be decreased by one after the consumer completes its operation.
iii. After that, the consumer acquires a lock on the buffer.
iv. Following that, the consumer completes the removal operation so that the data from one of the full slots is removed.
v. Then, the consumer releases the lock.
vi. Finally, the empty semaphore is incremented by 1, because the consumer has just removed data from an occupied slot, thus making it empty.
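Putting the two halves together, a runnable C sketch using POSIX threads and semaphores might look as follows; the buffer size of 5 and the 10 items produced are arbitrary choices:

#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

#define N 5                          /* buffer of n = 5 slots (arbitrary) */

int buffer[N];
int in = 0, out = 0;
sem_t empty, full;                   /* counting semaphores */
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

void *producer(void *arg) {
    for (int item = 0; item < 10; item++) {
        sem_wait(&empty);            /* wait until empty > 0, then decrement */
        pthread_mutex_lock(&mutex);  /* acquire lock */
        buffer[in] = item;           /* insert into a slot */
        in = (in + 1) % N;
        pthread_mutex_unlock(&mutex);/* release lock */
        sem_post(&full);             /* increment 'full' */
    }
    return NULL;
}

void *consumer(void *arg) {
    for (int i = 0; i < 10; i++) {
        sem_wait(&full);             /* wait until full > 0, then decrement */
        pthread_mutex_lock(&mutex);
        int item = buffer[out];      /* remove from a slot */
        out = (out + 1) % N;
        pthread_mutex_unlock(&mutex);
        sem_post(&empty);            /* increment 'empty' */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    sem_init(&empty, 0, N);          /* initially all N slots are empty */
    sem_init(&full, 0, 0);           /* initially no slot is occupied */
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}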
Dining Philosophers Problem
The dining philosophers problem is another classic synchronization problem which is
used to evaluate situations where there is a need of allocating multiple resources to
multiple processes.
What is the Problem Statement?
Consider there are five philosophers sitting around a circular dining table. The dining
table has five chopsticks and a bowl of rice in the middle as shown in the below figure.
At any instant, a philosopher is either eating or thinking. When a philosopher wants to
eat, he uses two chopsticks - one from their left and one from their right. When a
philosopher wants to think, he keeps down both chopsticks at their original place.
Here's the Solution
From the problem statement, it is clear that a philosopher can think for an indefinite
amount of time. But when a philosopher starts eating, he has to stop at some point of
time. The philosopher is in an endless cycle of thinking and eating.
The solution uses an array of five semaphores, stick[5], one for each of the five chopsticks.
The code for each philosopher looks like:
while(TRUE)
{
wait(stick[i]);
/*
mod is used because if i=4, the next
chopstick is 0 (the dining table is circular)
*/
wait(stick[(i+1) % 5]);
/* eat */
signal(stick[i]);
signal(stick[(i+1) % 5]);
/* think */
}
When a philosopher wants to eat the rice, he waits for the chopstick on his left and picks it up. Then he waits for the right chopstick to be available and picks it up too. After eating, he puts both chopsticks down.
But if all five philosophers are hungry simultaneously and each of them picks up one chopstick, then a deadlock situation occurs because they will be waiting for another chopstick forever. The possible solutions for this are:
i. A philosopher must be allowed to pick up the chopsticks only if both the left and right chopsticks are available.
ii. Allow only four philosophers to sit at the table. That way, if all four philosophers pick up four chopsticks, there will be one chopstick left on the table. So, one philosopher can start eating and eventually two chopsticks will be available. In this way, deadlocks can be avoided.
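One common way to realize solution (i) in C, shown here only as a sketch, is to make the last philosopher pick up the right chopstick first so that the circular wait is broken:

#include <pthread.h>
#include <semaphore.h>

#define PHIL 5

sem_t stick[PHIL];                   /* one semaphore per chopstick */

void *philosopher(void *arg) {
    int i = *(int *)arg;
    int first = i, second = (i + 1) % PHIL;
    /* break the circular wait: the last philosopher picks the
       right chopstick first, everyone else the left one first */
    if (i == PHIL - 1) { first = (i + 1) % PHIL; second = i; }

    sem_wait(&stick[first]);
    sem_wait(&stick[second]);
    /* eat */
    sem_post(&stick[first]);
    sem_post(&stick[second]);
    /* think */
    return NULL;
}

int main(void) {
    pthread_t t[PHIL];
    int id[PHIL];
    for (int i = 0; i < PHIL; i++)
        sem_init(&stick[i], 0, 1);   /* each chopstick starts free */
    for (int i = 0; i < PHIL; i++) {
        id[i] = i;
        pthread_create(&t[i], NULL, philosopher, &id[i]);
    }
    for (int i = 0; i < PHIL; i++)
        pthread_join(t[i], NULL);
    return 0;
}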
What is Readers Writer Problem?
Readers writer problem is another example of a classic synchronization problem. There
are many variants of this problem, one of which is examined below.
The Problem Statement
There is a shared resource which should be accessed by multiple processes. There are two
types of processes in this context. They are reader and writer. Any number of readers
can read from the shared resource simultaneously, but only one writer can write to the
shared resource. When a writer is writing data to the resource, no other process can
access the resource. A writer cannot write to the resource if there are non zero number of
readers accessing the resource at that time.
The Solution
From the above problem statement, it is evident that readers have higher priority than
writer. If a writer wants to write to the resource, it must wait until there are no readers
currently accessing that resource.
Here, we use one mutex m and a semaphore w. An integer variable read_count is used
to maintain the number of readers currently accessing the resource. The variable
read_count is initialized to 0. A value of 1 is given initially to m and w.
Instead of making each reader acquire a lock on the shared resource itself, we use the mutex m
to acquire and release a lock whenever a process updates the read_count variable.
The code for the writer process looks like this:
while(TRUE)
{
wait(w);
/* perform the write operation */
signal(w);
}
And, the code for the reader process looks like this:
while(TRUE)
{
//acquire lock
wait(m);
read_count++;
if(read_count == 1)
wait(w);
//release lock
signal(m);
/* perform the reading operation */
// acquire lock
wait(m);
read_count--;
if(read_count == 0)
signal(w);
// release lock
signal(m);
}
Here is how the code works (explained):
i. As seen above in the code for the writer, the writer simply waits on the w semaphore until it
gets a chance to write to the resource.
ii. After performing the write operation, it signals w so that the next writer (or a reader) can
access the resource.
iii. In the code for the reader, the lock on m is acquired whenever read_count is updated by a
process.
iv. When a reader wants to access the resource, it first increments the read_count value, then
accesses the resource, and then decrements the read_count value.
v. The semaphore w is used only by the first reader which enters the critical section and the
last reader which exits the critical section.
vi. The reason for this is that when the first reader enters the critical section, the writer is
blocked from the resource; only other readers can access the resource at that point. Similarly,
when the last reader exits the critical section, it signals the writer using the w semaphore,
because there are zero readers now and a writer gets the chance to access the resource.
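A minimal runnable sketch of the reader/writer pseudocode above, using POSIX threads and semaphores (the thread counts and the shared_data variable are illustrative assumptions, not part of the original notes):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t w;                                         /* semaphore w, initialized to 1 */
pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;   /* mutex m, protects read_count */
int read_count = 0;
int shared_data = 0;

void *writer(void *arg)
{
    sem_wait(&w);                 /* wait(w) */
    shared_data++;                /* perform the write operation */
    printf("writer wrote %d\n", shared_data);
    sem_post(&w);                 /* signal(w) */
    return NULL;
}

void *reader(void *arg)
{
    pthread_mutex_lock(&m);       /* acquire lock on read_count */
    read_count++;
    if (read_count == 1)
        sem_wait(&w);             /* first reader blocks writers */
    pthread_mutex_unlock(&m);     /* release lock */

    printf("reader saw %d\n", shared_data);   /* perform the reading operation */

    pthread_mutex_lock(&m);       /* acquire lock */
    read_count--;
    if (read_count == 0)
        sem_post(&w);             /* last reader lets writers in again */
    pthread_mutex_unlock(&m);     /* release lock */
    return NULL;
}

int main(void)
{
    pthread_t r[3], wr;
    sem_init(&w, 0, 1);
    pthread_create(&wr, NULL, writer, NULL);
    for (int i = 0; i < 3; i++)
        pthread_create(&r[i], NULL, reader, NULL);
    pthread_join(wr, NULL);
    for (int i = 0; i < 3; i++)
        pthread_join(r[i], NULL);
    return 0;
}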
 Monitors in Process Synchronization
A monitor is one of the ways to achieve process synchronization. Monitors are
supported by programming languages to achieve mutual exclusion between processes;
for example, Java synchronized methods, together with the wait() and notify()
constructs.
i. It is a collection of condition variables and procedures combined together in a
special kind of module or package.
ii. Processes running outside the monitor cannot access the internal variables of the
monitor, but they can call the procedures of the monitor.
iii. Only one process at a time can execute code inside the monitor.
Syntax: a monitor is written as a named module that groups its shared variables, its condition variables, and the procedures that operate on them, together with initialization code; processes use it only by calling its procedures.
Condition Variables:
Two different operations are performed on the condition variables of the monitor.
wait
signal
Let us say we have two condition variables:
condition x, y; // Declaring variable
Wait operation
x.wait(): A process performing a wait operation on a condition variable is suspended.
The suspended process is placed in the blocked queue of that condition variable.
Note: Each condition variable has its own blocked queue.
Signal operation
x.signal(): When a process performs a signal operation on a condition variable, one of the
blocked processes is given a chance to resume.
If (x block queue empty)
// Ignore signal
else
// Resume a process from block queue.
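Monitors are a language feature, but the x.wait()/x.signal() behaviour can be sketched in C with a pthread mutex (playing the role of the monitor lock) and a pthread condition variable. The names ready, waiter, and signaller below are illustrative assumptions used only for this sketch.

#include <pthread.h>
#include <stdio.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;  /* the "monitor" lock */
pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;   /* condition variable with its own queue */
int ready = 0;                                     /* state protected by the monitor */

void *waiter(void *arg)
{
    pthread_mutex_lock(&lock);            /* enter the "monitor" */
    while (!ready)
        pthread_cond_wait(&cond, &lock);  /* x.wait(): suspend and release the lock */
    printf("waiter: condition became true\n");
    pthread_mutex_unlock(&lock);          /* leave the "monitor" */
    return NULL;
}

void *signaller(void *arg)
{
    pthread_mutex_lock(&lock);
    ready = 1;
    pthread_cond_signal(&cond);           /* x.signal(): resume one blocked thread, if any */
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, waiter, NULL);
    pthread_create(&b, NULL, signaller, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}

If no thread is blocked on cond, the signal is simply lost, which matches the "ignore signal" case in the pseudocode above.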
Advantages of Monitor:
Monitors have the advantage of making parallel programming easier and less error prone
than using techniques such as semaphores.
Disadvantages of Monitor:
Monitors have to be implemented as part of the programming language. The compiler
must generate code for them. This gives the compiler the additional burden of having to
know what operating system facilities are available to control access to critical sections in
concurrent processes. Some languages that do support monitors are Java, C#, Visual
Basic, Ada, and Concurrent Euclid.
Peterson’s Solution
Peterson’s Solution is a classical software based solution to the critical section problem.
In Peterson’s solution, we have two shared variables:
 boolean flag[2]: initialized to FALSE; initially no process is interested in entering the critical
section.
 int turn: the process whose turn it is to enter the critical section.
Peterson’s Solution preserves all three conditions :
 Mutual Exclusion is assured as only one process can access the critical section at any
time.
 Progress is also assured, as a process outside the critical section does not block other
processes from entering the critical section.
 Bounded Waiting is preserved as every process gets a fair chance.
Disadvantages of Peterson’s Solution
o It involves busy waiting.
o It is limited to 2 processes.
Solutions To The Critical Section
In process synchronization, the critical section plays the main role, so the critical-section problem
must be solved.
Here are some widely used methods to solve the critical section problem.
Peterson Solution
Peterson's solution is a widely used solution to critical-section problems. The algorithm was
developed by the computer scientist Peterson, which is why it is named Peterson's solution.
In this solution, when one process is executing in its critical section, the other process executes
only the rest of its code, and vice versa. This method ensures that only a single process runs
in the critical section at a specific time.
Example
PROCESS Pi:
FLAG[i] = true                       // Pi announces it wants to enter the CS
while (TURN != i AND CS is not free)
    wait;                            // busy wait until it is Pi's turn or the CS is free
/* CRITICAL SECTION */
FLAG[i] = false                      // Pi is finished with the CS
turn = j;                            // choose another process j to go to the CS
i. Assume there are N processes (P1, P2, ... PN) and every process at some point of
time requires to enter the critical section.
ii. A FLAG[] array of size N is maintained, which is false by default. So, whenever a
process requires to enter the critical section, it has to set its flag to true. For
example, if Pi wants to enter, it will set FLAG[i] = TRUE.
iii. Another variable called TURN indicates the process number that is currently
waiting to enter the CS.
iv. The process which enters the critical section changes TURN, while exiting, to
another number from the list of ready processes.
v. Example: if TURN is 2, then P2 enters the critical section and, while exiting, sets
TURN to the next ready process (for example, TURN = 3). A runnable two-process
version is sketched below.
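For the classic two-process case, a minimal C sketch of Peterson's algorithm is shown below. C11 atomics with the default sequentially consistent ordering are used so the flag/turn accesses are not reordered; the counter variable and the iteration count are illustrative assumptions, not part of the original notes.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

atomic_int flag[2];          /* flag[i] = 1: process i wants to enter the CS */
atomic_int turn;             /* whose turn it is to yield */
long counter = 0;            /* shared data protected by the critical section */

void *proc(void *arg)
{
    int i = *(int *)arg, j = 1 - i;
    for (int k = 0; k < 100000; k++) {
        atomic_store(&flag[i], 1);           /* entry section: I am interested */
        atomic_store(&turn, j);              /* give the other process priority */
        while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
            ;                                /* busy wait */
        counter++;                           /* critical section */
        atomic_store(&flag[i], 0);           /* exit section */
    }
    return NULL;
}

int main(void)
{
    pthread_t t0, t1;
    int id0 = 0, id1 = 1;
    pthread_create(&t0, NULL, proc, &id0);
    pthread_create(&t1, NULL, proc, &id1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("counter = %ld (expected 200000)\n", counter);
    return 0;
}

The busy-wait loop is exactly the disadvantage listed above: a waiting process burns CPU cycles until the other process leaves its critical section.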
Semaphore Solution
A semaphore is simply a non-negative variable that is shared between threads.
It is another algorithm, or solution, to the critical-section problem. It is a signaling
mechanism: a thread that is waiting on a semaphore can be signaled by another thread.
It uses two atomic operations, 1) wait and 2) signal, for process synchronization.
Example
WAIT ( S ):
while ( S <= 0 );
S = S - 1;
SIGNAL ( S ):
S = S + 1;
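On POSIX systems, the wait and signal operations correspond to sem_wait() and sem_post(). A minimal sketch that uses a binary semaphore to protect a shared counter (the counter and the thread/iteration counts are illustrative assumptions):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t s;                /* binary semaphore, initialized to 1 */
long counter = 0;

void *worker(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        sem_wait(&s);   /* WAIT(S): decrement, block if S == 0 */
        counter++;      /* critical section */
        sem_post(&s);   /* SIGNAL(S): increment, wake a waiter */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    sem_init(&s, 0, 1);
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld (expected 200000)\n", counter);
    return 0;
}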
Summary:
 Process synchronization is the task of coordinating the execution of processes in such a way that
no two processes access the same shared data and resources at the same time.
 Four elements of the critical section are: 1) Entry section 2) Critical section 3) Exit section 4)
Remainder section.
 A critical section is a segment of code which can be accessed by only a single process at a specific
point of time.
 Three rules which must be enforced by any critical-section solution are: 1) Mutual Exclusion
2) Progress 3) Bounded Waiting.
 Mutual exclusion is commonly enforced with a mutex, a special type of binary semaphore used
for controlling access to the shared resource.
 Progress means that if no process is in the critical section and some process wants to enter it,
that process must be allowed to enter without indefinite delay.
 Bounded waiting means that after a process makes a request to enter its critical section,
there is a limit on how many other processes can enter their critical sections before it does.
 Peterson's solution is a widely used solution to critical-section problems.
 Problems of the critical section can also be resolved by synchronization hardware.
 Synchronization hardware is not a simple method to implement for everyone, so the software
method known as mutex locks was also introduced.
 Semaphores are another solution to the critical-section problem.
UNIT-IV
Deadlocks
Deadlock – In Real World (illustrative figures)
1.2–1.3
Deadlock – In OS (illustrative figure)
1.4
Deadlock and Starvation
 Deadlock – two or more processes are waiting indefinitely for
an event that can be caused by only one of the waiting
processes
 Let S and Q be two semaphores initialized to 1
       P0                    P1
     wait(S);              wait(Q);
     wait(Q);              wait(S);
     ...                   ...
     signal(S);            signal(Q);
     signal(Q);            signal(S);
 Starvation – indefinite blocking

A process may never be removed from the semaphore queue in which
it is suspended
 Priority Inversion – Scheduling problem when lower-priority
process holds a lock needed by higher-priority process
1.5
System Model
 System consists of resources
 Resource types R1, R2, . . ., Rm
CPU cycles, memory space, I/O devices
 Each resource type Ri has Wi instances.
 Each process utilizes a resource as follows:

request

use

release
1.6
Deadlock Characterization
Deadlock can arise if four conditions hold
simultaneously.
 Mutual exclusion:

Resources can be sharable or non-sharable

e.g. Printer as a non sharable resource

e.g. Ethernet/Internet connection as a sharable resource

only one process at a time can use a resource i.e. mutual
exclusion
 Hold and wait:
a process holding at least one resource is
waiting to acquire additional resources held by other
processes
 No preemption:
a resource can be released only voluntarily
by the process holding it, after that process has completed
its task
 Circular wait:
there exists a set {P0, P1, …, Pn} of waiting
processes such that P0 is waiting for a resource that is held
by P1, P1 is waiting for a resource that is held by P2, …, Pn–1
is waiting for a resource that is held by Pn, and Pn is
waiting for a resource that is held by P0.
See Video
1.7
Deadlock Characterization Cont..
 Circular wait (example): there exists a set {P1, P2, P3} of waiting
processes such that P1 is waiting for a resource that is held
by P2, P2 is waiting for a resource that is held by P3, and P3
is waiting for a resource that is held by P1 (the cycle P1 → P2 → P3 → P1).
1.8
Resource-Allocation Graph
A set of vertices V and a set of edges E.
 V is partitioned into two types:

P = {P1, P2, …, Pn}, the set consisting of all the
processes in the system

R = {R1, R2, …, Rm}, the set consisting of all
resource types in the system
 request edge – directed edge Pi → Rj
 assignment edge – directed edge Rj → Pi
1.9
Resource-Allocation Graph (Cont.)
 Process (node Pi)
 Resource type with 4 instances (node Rj)
 Pi requests an instance of Rj: edge Pi → Rj
 Pi is holding an instance of Rj: edge Rj → Pi
1.10
Example of a Resource Allocation Graph
1.11
Resource Allocation Graph With A Deadlock
1.12
Graph With A Cycle But No Deadlock
1.13
Basic Facts
 If graph contains no cycles ⇒ no deadlock
 If graph contains a cycle ⇒

if only one instance per resource type, then deadlock

if several instances per resource type, possibility of deadlock
1.14
Methods for Handling Deadlocks
 Ensure that the system will never enter a deadlock state:

Deadlock prevention

Deadlock avoidance
 Allow the system to enter a deadlock state and then recover
 Ignore the problem and pretend that deadlocks never occur in
the system; used by most operating systems, including UNIX
1.15
Deadlock Prevention
Restrain the ways request can be made
 Mutual Exclusion – not required for sharable resources
(e.g., read-only files); must hold for non-sharable
resources (hardware issues).
See Video
 Hold and Wait –

Conservative approach: a process starts only if all the
resources it requires are available.

Do not hold: must guarantee that whenever a process
requests a resource, it does not hold any other
resources.

Wait timeout: place a maximum time a process can hold a
resource; after that, it releases the resource.
See Video
1.16
Deadlock Prevention (Cont.)
 No Preemption –

Allow a process to forcefully preempt a resource held
by another process.

If a process that is holding some resources requests
another resource that cannot be immediately allocated to
it, then all resources currently being held are released.

Preempted resources are added to the list of resources for
which the process is waiting.

The process will be restarted only when it can regain its old
resources, as well as the new ones that it is requesting.
See Video
 Circular Wait – impose a total ordering of all resource
types, and require that each process requests resources in an
increasing order of enumeration.
See Video
1.17
Deadlock Avoidance
Requires that the system has some additional a priori information
available
 Simplest and most useful model requires that each
process declare the maximum number of resources of
each type that it may need
 The deadlock-avoidance algorithm dynamically examines
the resource-allocation state to ensure that there
can never be a circular-wait condition
 Resource-allocation state is defined by the number of
available and allocated resources, and the maximum
demands of the processes
1.18
Safe State
 When a process requests an available resource, system must
decide if immediate allocation leaves the system in a safe
state
 System is in safe state if there exists a sequence <P1, P2,
…, Pn> of ALL the processes in the system such that for
each Pi, the resources that Pi can still request can be
satisfied by the currently available resources + the resources held
by all the Pj, with j < i
 That is:

If Pi resource needs are not immediately available, then
Pi can wait until all Pj have finished

When Pj is finished, Pi can obtain needed resources,
execute, return allocated resources, and terminate

When Pi terminates, Pi+1 can obtain its needed resources, and so on
1.19
Basic Facts
 If a system is in safe state ⇒ no deadlocks
 If a system is in unsafe state ⇒ possibility of deadlock
 Avoidance ⇒ ensure that a system will never enter an
unsafe state.
1.20
Safe, Unsafe, Deadlock State
1.21
Avoidance Algorithms
 Single instance of a resource type

Use a resource-allocation graph
 Multiple instances of a resource type

Use the banker’s algorithm
1.22
Resource-Allocation Graph Scheme
 Claim edge Pi → Rj indicates that process Pi may request
resource Rj; represented by a dashed line
 Claim edge converts to a request edge when the process
requests the resource
 Request edge converts to an assignment edge when the
resource is allocated to the process
 When a resource is released by a process, the assignment
edge reconverts to a claim edge
 Resources must be claimed a priori in the system
1.23
Resource-Allocation Graph for deadlock avoidance (figure: claim edges and request edges)
1.24
Unsafe State In Resource-Allocation Graph (figure: assignment edges)
1.25
Resource-Allocation Graph Algorithm
 Suppose that process Pi requests a resource Rj
 The request can be granted only if converting
the request edge to an assignment edge does not
result in the formation of a cycle in the
resource allocation graph
1.26
Banker’s Algorithm
 Multiple instances
 Each process must a priori claim maximum use
 When a process requests a resource it may have to wait
 When a process gets all its resources it must return
them in a finite amount of time
See Video
1.27
Data Structures for the Banker’s Algorithm
Let n = number of processes, and m = number of resources types.
 Available:
Vector (array) of length m. If available [j] =
k, there are k instances of resource Rj available
 Max: n x m matrix.
If Max [i,j] = k, then process Pi may
request at most k instances of resource type Rj
 Allocation:
n x m matrix. If Allocation[i,j] = k then Pi
is currently allocated k instances of Rj
 Need:
n x m matrix. If Need[i,j] = k, then Pi may need k
more instances of Rj to complete its task
Need [i,j] = Max[i,j] – Allocation [i,j]
1.28
Safety Algorithm
1. Let Work and Finish be vectors of length m and n,
respectively. Initialize:
Work = Available
Finish [i] = false for i = 0, 1, …, n- 1
2. Find an i such that both:
(a) Finish[i] = false
(b) Needi ≤ Work
If no such i exists, go to step 4
3. Work = Work + Allocationi
Finish[i] = true
go to step 2
4. If Finish[i] == true for all i, then the system is in a safe state
1.29
Example of Banker’s Algorithm
 5 processes P0 through P4;
3 resource types:
A (10 instances), B (5 instances), and C (7
instances)
 Snapshot at time T0:
          Allocation   Max      Available
          A B C        A B C    A B C
     P0   0 1 0        7 5 3    3 3 2
     P1   2 0 0        3 2 2
     P2   3 0 2        9 0 2
     P3   2 1 1        2 2 2
     P4   0 0 2        4 3 3
1.30
Example (Cont.)
 The content of the matrix Need is defined to be Max –
Allocation
          Need
          A B C
     P0   7 4 3
     P1   1 2 2
     P2   6 0 0
     P3   0 1 1
     P4   4 3 1
 The system is in a safe state since the sequence < P1, P3, P4,
P2, P0> satisfies safety criteria
1.31
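A minimal C sketch of the safety algorithm, hard-coded with the snapshot above (5 processes, 3 resource types A, B, C); it prints a safe sequence if one exists. The array and variable names are illustrative.

#include <stdio.h>
#include <stdbool.h>

#define N 5   /* processes */
#define M 3   /* resource types */

int main(void)
{
    int available[M] = {3, 3, 2};
    int max[N][M]   = {{7,5,3},{3,2,2},{9,0,2},{2,2,2},{4,3,3}};
    int alloc[N][M] = {{0,1,0},{2,0,0},{3,0,2},{2,1,1},{0,0,2}};
    int need[N][M], work[M], seq[N], count = 0;
    bool finish[N] = {false};

    for (int i = 0; i < N; i++)
        for (int j = 0; j < M; j++)
            need[i][j] = max[i][j] - alloc[i][j];   /* Need = Max - Allocation */
    for (int j = 0; j < M; j++)
        work[j] = available[j];                     /* Work = Available */

    while (count < N) {
        bool found = false;
        for (int i = 0; i < N; i++) {
            if (finish[i]) continue;
            bool ok = true;
            for (int j = 0; j < M; j++)
                if (need[i][j] > work[j]) { ok = false; break; }
            if (ok) {                               /* Need_i <= Work: Pi can finish */
                for (int j = 0; j < M; j++)
                    work[j] += alloc[i][j];         /* Work = Work + Allocation_i */
                finish[i] = true;
                seq[count++] = i;
                found = true;
            }
        }
        if (!found) { printf("System is NOT in a safe state\n"); return 0; }
    }
    printf("System is in a safe state. Safe sequence: ");
    for (int i = 0; i < N; i++) printf("P%d ", seq[i]);
    printf("\n");
    return 0;
}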
Resource-Request Algorithm for Process Pi
Requesti = request vector for process Pi. If Requesti [j] =
k then process Pi wants k instances of resource type Rj
1. If Requesti ≤ Needi go to step 2. Otherwise, raise an error
condition, since the process has exceeded its maximum claim
2. If Requesti ≤ Available, go to step 3. Otherwise Pi must
wait, since the resources are not available
3. Pretend to allocate the requested resources to Pi by modifying
the state as follows:
   Available = Available – Requesti;
   Allocationi = Allocationi + Requesti;
   Needi = Needi – Requesti;
 If safe ⇒ the resources are allocated to Pi
 If unsafe ⇒ Pi must wait, and the old resource-allocation state is restored
1.32
Example: P1 Request (1,0,2)
 Check that Requesti ≤ Available (that is, (1,0,2) ≤ (3,3,2)) ⇒ true
          Allocation   Need     Available
          A B C        A B C    A B C
     P0   0 1 0        7 4 3    2 3 0
     P1   3 0 2        0 2 0
     P2   3 0 2        6 0 0
     P3   2 1 1        0 1 1
     P4   0 0 2        4 3 1
 Executing safety algorithm shows that sequence < P1, P3, P4, P0,
P2> satisfies safety requirement
 Can request for (3,3,0) by P4 be granted?
 Can request for (0,2,0) by P0 be granted?
1.33
Deadlock Detection
 Allow system to enter deadlock state
 Detection algorithm
 Recovery scheme
1.34
Single Instance of Each Resource Type
 Maintain wait-for graph

Nodes are processes

Pi  Pj if Pi is waiting for Pj
 Periodically invoke an algorithm that searches for a cycle in
the graph. If there is a cycle, there exists a deadlock
 An algorithm to detect a cycle in a graph requires an order
of n2 operations, where n is the number of vertices in the
graph
1.35
Resource-Allocation Graph and Wait-for Graph
Resource-Allocation Graph
1.36
Corresponding wait-for graph
Several Instances of a Resource Type
 Available:
A vector of length m indicates the number of
available resources of each type
 Allocation:
An n x m matrix defines the number of
resources of each type currently allocated to each
process
 Request:
An n x m matrix indicates the current request
of each process. If Request [i][j] = k, then process Pi
is requesting k more instances of resource type Rj.
1.37
Detection Algorithm
1. Let Work and Finish be vectors of length m and n, respectively
Initialize:
(a) Work = Available
(b) For i = 1, 2, …, n, if Allocationi ≠ 0, then
Finish[i] = false; otherwise, Finish[i] = true
2. Find an index i such that both:
(a) Finish[i] == false
(b) Requesti ≤ Work
If no such i exists, go to step 4
1.38
Detection Algorithm (Cont.)
3. Work = Work + Allocationi
Finish[i] = true
go to step 2
4. If Finish[i] == false for some i, 1 ≤ i ≤ n, then the
system is in a deadlock state. Moreover, if Finish[i] ==
false, then Pi is deadlocked
The algorithm requires on the order of O(m x n2) operations to detect
whether the system is in a deadlocked state
1.39
Example of Detection Algorithm
 Five processes P0 through P4; three resource types
A (7 instances), B (2 instances), and C (6 instances)
 Snapshot at time T0:
          Allocation   Request   Available
          A B C        A B C     A B C
     P0   0 1 0        0 0 0     0 0 0
     P1   2 0 0        2 0 2
     P2   3 0 3        0 0 0
     P3   2 1 1        1 0 0
     P4   0 0 2        0 0 2
 Sequence <P0, P2, P3, P1, P4> will result in Finish[i] = true for
all i
1.40
Example (Cont.)
 P2 requests an additional instance of type C
          Request
          A B C
     P0   0 0 0
     P1   2 0 2
     P2   0 0 1
     P3   1 0 0
     P4   0 0 2
 State of system?

Can reclaim the resources held by process P0, but there are insufficient
resources to fulfill the other processes' requests

Deadlock exists, consisting of processes P1, P2, P3, and P4
1.41
Detection-Algorithm Usage
 When, and how often, to invoke depends on:

How often a deadlock is likely to occur?

How many processes will need to be rolled back?

one for each disjoint cycle
 If detection algorithm is invoked arbitrarily, there may
be many cycles in the resource graph and so we would not
be able to tell which of the many deadlocked processes
“caused” the deadlock.
1.42
Recovery from Deadlock: Process Termination
 Abort all deadlocked processes
 Abort one process at a time until the deadlock cycle is
eliminated
 In which order should we choose to abort?
1. Priority of the process
2. How long the process has computed, and how much longer until completion
3. Resources the process has used
4. Resources the process needs to complete
5. How many processes will need to be terminated
6. Is the process interactive or batch?
1.43
Recovery from Deadlock: Resource Preemption
 Selecting a victim – minimize cost
 Rollback – return to some safe state, restart process
for that state
 Starvation – the same process may always be picked as the
victim; include the number of rollbacks in the cost factor
1.44
End
UNIT-V
Part 1: Memory Management
Background
 Program must be brought (from disk) into memory and placed within
a process for it to be run
 Main memory and registers are only storage CPU can access directly
 Memory unit only sees a stream of addresses + read requests, or
address + data and write requests
 Register access in one CPU clock (or less)
 Cache sits between main memory and CPU registers
 Protection of memory required to ensure correct operation
1.2
Multistep Processing of a User Program
1.3
Binding of Instructions and Data to Memory
 Address binding of instructions and data to memory addresses
can happen at three different stages

Compile time: If memory location known a priori, absolute
code can be generated; must recompile code if starting
location changes

Load time: Must generate relocatable code if memory
location is not known at compile time

Execution time: Binding delayed until run time if the
process can be moved during its execution from one memory
segment to another

Need hardware support for address maps (e.g., base and
limit registers)
1.4
Base and Limit Registers
 A pair of registers, the base (or relocation) register and the limit register, define
the logical address space
 CPU must check every memory access generated in user mode to
be sure it is between base and limit for that user
1.5
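A small illustrative sketch of the check the hardware performs on every user-mode access with a relocation (base) register and a limit register; the register values and the trap() function are assumptions made only for this example.

#include <stdio.h>

/* Hypothetical register contents, for illustration only. */
unsigned int base  = 300040;   /* relocation register */
unsigned int limit = 120900;   /* limit register */

void trap(const char *msg) { printf("TRAP to OS: %s\n", msg); }

/* Every user-mode access to a logical address is checked like this. */
void access_memory(unsigned int logical_addr)
{
    if (logical_addr >= limit) {
        trap("addressing error: logical address outside limit");
        return;
    }
    unsigned int physical = base + logical_addr;   /* relocation by the base register */
    printf("logical %u -> physical %u\n", logical_addr, physical);
}

int main(void)
{
    access_memory(1000);     /* legal access */
    access_memory(200000);   /* illegal: >= limit, traps to the OS */
    return 0;
}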
Logical vs. Physical Address Space
 Logical address – generated by the CPU; also referred to as virtual
address
 Physical address – address seen by the memory unit
 Logical address space is the set of all logical addresses generated
by a program
 Physical address space is the set of all physical addresses
corresponding to these logical addresses
1.6
Dynamic Loading (figure: dynamic relocation using a relocation register)

Routine is not loaded until it is called

All routines kept on disk in
relocatable load format

Useful when large amounts of code
are needed to handle infrequently
occurring cases

No special support from the operating
system is required

Implemented through program design

OS can help by providing libraries to
implement dynamic loading
See Video
1.7
Swapping
 A process can be swapped temporarily out of memory to a backing
store, and then brought back into memory for continued execution

Total physical memory space of processes can exceed physical
memory
 Backing store – fast disk large enough to accommodate copies of
all memory images for all users; must provide direct access to these
memory images
 Roll out, roll in – swapping variant used for priority-based
scheduling algorithms; lower-priority process is swapped out so
higher-priority process can be loaded and executed
 Major part of swap time is transfer time; total transfer time is directly
proportional to the amount of memory swapped
 System maintains a ready queue of ready-to-run processes which
have memory images on disk
1.8
Schematic View of Swapping
1.9
Contiguous Allocation
 Contiguous allocation is one early method
 Main memory usually into two partitions:

Resident operating system, usually held in low memory with
interrupt vector

User processes then held in high memory

Each process contained in single contiguous section of
memory
 Leads to External Fragmentation: total memory space exists to
satisfy a request, but it is not contiguous.
   Example (figure): free holes of 3KB, 4KB, and 3KB are scattered in memory.
 Can a process with size = 5KB be allocated?
See Video
1.10
Fixed Size-partition allocation
This scheme divides memory into a number of separate fixed-size areas.

Each memory area can hold one process.
See Video
1.11
Multiple-partition allocation
 Multiple-partition allocation

Degree of multiprogramming limited by number of partitions

Variable-partition sizes for efficiency (sized to a given process’ needs)

Hole – block of available memory; holes of various size are scattered
throughout memory

When a process arrives, it is allocated memory from a hole large enough to
accommodate it

Process exiting frees its partition, adjacent free partitions combined

Operating system maintains information about:
a) allocated partitions b) free partitions (hole)
See Video
1.12
Dynamic Storage-Allocation Problem
How to satisfy a request of size n from a list of free holes?
 First-fit: Allocate the first hole that is big enough
 Best-fit: Allocate the smallest hole that is big enough; must
search entire list, unless ordered by size
 Produces the smallest leftover hole
 Worst-fit: Allocate the largest hole; must also search entire list

Produces the largest leftover hole
1.13
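A small sketch comparing the three placement strategies over an assumed list of free holes; the hole sizes and the request size are illustrative values, not from the notes.

#include <stdio.h>

/* Each function returns the index of the chosen hole, or -1 if the request cannot be satisfied. */
int first_fit(int holes[], int n, int request)
{
    for (int i = 0; i < n; i++)
        if (holes[i] >= request) return i;          /* first hole big enough */
    return -1;
}

int best_fit(int holes[], int n, int request)
{
    int best = -1;
    for (int i = 0; i < n; i++)
        if (holes[i] >= request && (best == -1 || holes[i] < holes[best]))
            best = i;                               /* smallest hole big enough */
    return best;
}

int worst_fit(int holes[], int n, int request)
{
    int worst = -1;
    for (int i = 0; i < n; i++)
        if (holes[i] >= request && (worst == -1 || holes[i] > holes[worst]))
            worst = i;                              /* largest hole big enough */
    return worst;
}

int main(void)
{
    int holes[] = {100, 500, 200, 300, 600};        /* free holes, in KB */
    int request = 212;
    printf("first-fit  -> hole %d\n", first_fit(holes, 5, request));   /* the 500 KB hole */
    printf("best-fit   -> hole %d\n", best_fit(holes, 5, request));    /* the 300 KB hole */
    printf("worst-fit  -> hole %d\n", worst_fit(holes, 5, request));   /* the 600 KB hole */
    return 0;
}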
Fragmentation
 External Fragmentation – total memory space exists to
satisfy a request, but it is not contiguous
 Internal Fragmentation – allocated memory may be slightly
larger than requested memory; this size difference is memory
internal to a partition, but not being used
 First fit analysis reveals that given N blocks allocated, 0.5 N
blocks lost to fragmentation

1/3 may be unusable -> 50-percent rule
1.14
Segmentation Architecture
 Logical address consists of a two tuple:
<segment-number, offset> i.e. < S , d >
 Segment table – maps two-dimensional physical addresses; each
table entry has:

base – contains the starting physical address where the
segments reside in memory

limit – specifies the length of the segment
 Segment-table base register (STBR) points to the segment
table’s location in memory
 Segment-table length register (STLR) indicates number of
segments used by a program;
segment number s is legal if s < STLR
See Video
1.15
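A minimal sketch of the segment-table lookup described above; the base/limit values in the table and the translate() helper are made-up for illustration.

#include <stdio.h>

struct segment { unsigned int base, limit; };

/* Hypothetical segment table for one process (values are illustrative). */
struct segment seg_table[] = {
    {1400, 1000},   /* segment 0 */
    {6300,  400},   /* segment 1 */
    {4300,  400},   /* segment 2 */
};
const int STLR = 3;   /* number of segments used by the program */

void translate(int s, unsigned int d)
{
    if (s >= STLR) { printf("trap: illegal segment number %d\n", s); return; }
    if (d >= seg_table[s].limit) { printf("trap: offset %u exceeds segment limit\n", d); return; }
    printf("<%d, %u> -> physical %u\n", s, d, seg_table[s].base + d);
}

int main(void)
{
    translate(2, 53);    /* legal: 4300 + 53 = 4353 */
    translate(0, 1222);  /* illegal: offset beyond segment 0's limit, traps */
    return 0;
}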
Segmentation Hardware
1.16
Segmentation Hardware
1.17
Paging
 Physical address space of a process can be noncontiguous; process is
allocated physical memory whenever the latter is available

Avoids external fragmentation

Avoids problem of varying sized memory chunks
 Divide physical memory into fixed-sized blocks called frames

Size is power of 2, between 512 bytes and 16 Mbytes
 Divide logical memory into blocks of same size called pages
 Keep track of all free frames
 To run a program of size N pages, need to find N free frames and load
program
 Set up a page table to translate logical to physical addresses
 Backing store likewise split into pages
 Still have Internal fragmentation
See Video
1.18
Address Translation Scheme
 Address generated by CPU is divided into:

Page number (p) – used as an index into a page table which
contains base address of each page in physical memory

Page offset (d) – combined with base address to define the
physical memory address that is sent to the memory unit
The logical address is split into a page number p (the high-order m − n bits) and a
page offset d (the low-order n bits), for a given logical address space of size 2^m and page size 2^n.
1.19
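A short sketch of splitting a logical address into page number and offset with bit operations, using the n = 2, m = 4 example that follows; the page-to-frame mapping in page_table is an illustrative assumption.

#include <stdio.h>

int main(void)
{
    const unsigned int n = 2;              /* page size = 2^n = 4 bytes */
    const unsigned int m = 4;              /* logical address space = 2^m = 16 bytes */
    int page_table[4] = {5, 6, 1, 2};      /* page -> frame mapping (illustrative) */

    for (unsigned int logical = 0; logical < (1u << m); logical += 5) {
        unsigned int p = logical >> n;                 /* page number: high m-n bits */
        unsigned int d = logical & ((1u << n) - 1);    /* page offset: low n bits */
        unsigned int physical = ((unsigned int)page_table[p] << n) | d;
        printf("logical %2u = page %u offset %u -> physical %2u\n",
               logical, p, d, physical);
    }
    return 0;
}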
Paging Hardware
1.20
Paging Model of Logical and Physical Memory
1.21
Paging Example
n=2 and m=4 32-byte memory and 4-byte pages
1.22
End of Part 1
UNIT-V
Part 2: Virtual Memory
Virtual Memory
 Code needs to be in memory to execute, but entire program rarely used

Error code, unusual routines, large data structures
 Entire program code not needed at same time
 Consider ability to execute partially-loaded program

Program no longer constrained by limits of physical memory

Each program takes less memory while running -> more programs
run at the same time


Increased CPU utilization and throughput with no increase in
response time or turnaround time
Less I/O needed to load or swap programs into memory -> each user
program runs faster
- Virtual memory is commonly implemented by demand paging
- Demand segmentation can also be used to provide virtual memory.
1.2
Memory Management Unit (MMU)
 The MMU is built into the hardware.
 Its job is to translate virtual addresses into physical addresses.
1.3
Demand Paging
 Quite similar to a paging system with swapping
 processes reside in secondary memory and pages are loaded only on
demand, not in advance
1.4
Page Fault
 While executing a program, if the program references a page which is not
available in the main memory because it was swapped out a little while ago, the
processor treats this invalid memory reference as a page fault.
 It then transfers control from the program to the operating system, which
demands the page back into the memory.
1.5
Page Replacement Algorithm
 It is a technique by which an operating system decides which memory
pages to swap out (write to disk) when a new page of memory needs to be
allocated.
 Reference String

If we have a reference to a page p, then any immediately following
references to page p will never cause a page fault.

For example, consider the following sequence of addresses −
123,215,600,1234,76,96
If page size is 100, then the reference string is 1,2,6,12,0,0
1.6
First In First Out (FIFO) algorithm
 Oldest page in main memory is the one which will be selected for
replacement.
1.7
Optimal Page algorithm
 Also called OPT or MIN & has the lowest page-fault rate of all algorithms
 Replace the page that will not be used for the longest period of time.
Use the time when a page is to be used.
1.8
Least Recently Used (LRU) algorithm
 Page which has not been used for the longest time in main memory is the
one which will be selected for replacement.
 keep a list, replace pages by looking back into time.
1.9
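A small FIFO page-replacement simulation in C that counts page faults; the reference string and the number of frames are illustrative assumptions chosen only for this sketch.

#include <stdio.h>
#include <stdbool.h>

#define FRAMES 3

int main(void)
{
    int ref[] = {7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2};   /* reference string */
    int n = sizeof(ref) / sizeof(ref[0]);
    int frame[FRAMES];
    int next = 0, faults = 0;              /* next: index of the oldest (first-in) frame */

    for (int i = 0; i < FRAMES; i++) frame[i] = -1;   /* all frames start empty */

    for (int i = 0; i < n; i++) {
        bool hit = false;
        for (int j = 0; j < FRAMES; j++)
            if (frame[j] == ref[i]) { hit = true; break; }
        if (!hit) {
            frame[next] = ref[i];          /* replace the oldest page (FIFO) */
            next = (next + 1) % FRAMES;
            faults++;
        }
    }
    printf("FIFO page faults: %d out of %d references\n", faults, n);
    return 0;
}

The same driver loop can be reused to compare LRU or the optimal algorithm by changing only the victim-selection rule.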
End of Part 2