
OS Unit 1 Notes

SRM Institute of Science and Technology, Ramapuram
18CSC205J – Operating Systems
Department & Semester: Computer Science and Engineering & IV
Unit 1
An operating system is a program that acts as an intermediary between a user of a computer and the
computer hardware, and controls the execution of all kinds of programs. An operating system is the program that,
after being initially loaded into the computer by a boot program, manages all the other programs in the computer.
OBJECTIVES OF OS:
1. Convenience: An OS makes a computer more convenient to use.
2. Efficiency: An OS allows the computer system resources to be used in an efficient manner.
3. Ability to evolve: An OS should be constructed in such a way as to permit the effective development, testing,
and introduction of new system functions without interfering with service.
FUNCTIONS OF AN OPERATING SYSTEM ARE AS FOLLOWS:
Memory Management: Memory management refers to the management of primary (main) memory. An
operating system does the following activities for memory management: the OS keeps track of primary memory,
i.e., which parts are in use and by whom, and which parts are free. In multiprogramming, the OS decides which
process will get memory, when, and how much. The OS allocates memory when a process requests it and
de-allocates the memory when the process no longer needs it or has terminated.
Processor Management: In a multiprogramming environment, the OS decides which process gets the processor,
when, and for how long. This function is called process scheduling. An operating system does the following
activities for processor management: the OS keeps track of the processor and the status of processes, allocates the
processor (CPU) to a process, and de-allocates the processor when the process no longer requires it.
Device Management: An operating system manages device communication via the devices' respective drivers. It does
the following activities for device management: it keeps track of all devices (the program responsible for this task
is known as the I/O controller), decides which process gets a device, when, and for how long, and allocates and
de-allocates devices in the most efficient way.
File Management: A file system is normally organized into directories for easy navigation and usage. These
directories may contain files and other directories. An operating system does the following activities for file
management: it keeps track of information, location, uses, status, etc. (the collective facilities are often known as the file
system), decides who gets the resources, allocates the resources, and de-allocates them when they are no
longer needed.
Security: OS prevents unauthorized access to programs and data. For shared or public systems, the OS controls
access to the system as a whole and to specific system resources.
Control over system performance: The OS collects usage statistics for various resources and monitors
performance parameters such as response time, the delay between a request for a service and the response
from the system.
Job accounting: The OS keeps track of the time and resources used by various jobs and users. On any system, this
information is useful for anticipating the need for future enhancements, for tuning the system to improve
performance, and for job accounting purposes.
Error detection & Response: A variety of errors can occur while a computer system is running. These include
internal and external hardware errors, such as a memory error or a device failure or malfunction, and various
software errors. In each case, the OS must provide a response that clears the error condition with the least impact
on running applications. The response may range from ending the program that caused the error, to retrying the
operation, to simply reporting the error to the application. The OS also produces dumps, traces, error messages, and
other debugging and error-detecting aids.
Booting the computer: Booting is the process of starting or restarting the computer. If the computer is switched off
completely and then turned on, it is a cold boot; if the computer is restarted, it is a warm boot. Booting of
the computer is done by the OS.
Coordination between other software and users: An OS enables coordination of hardware components,
coordination and assignment of compilers, interpreters, assemblers and other software to the various users of the
computer systems.
A computer system has many resources (hardware and software) that may be required to complete a
task. The commonly required resources are input/output devices, memory, file storage space, the CPU, etc. The
operating system acts as the manager of these resources and allocates them to specific programs and users
as necessary for their tasks. The operating system is therefore a resource manager: it manages the resources
of a computer system internally. The resources are the processor, memory, files, and I/O devices.
TWO VIEWS OF OPERATING SYSTEM:
1. User's View: The user view of the computer refers to the interface being used. Such systems are designed for
one user to monopolize its resources, to maximize the work that the user is performing. In these cases, the
operating system is designed mostly for ease of use, with some attention paid to performance, and none paid to
resource utilization.
2. System View: Operating system can be viewed as a resource allocator also. A computer system consists of
many resources like - hardware and software - that must be managed efficiently. The operating system acts as
the manager of the resources, decides between conflicting requests, controls execution of programs etc.
OPERATING SYSTEM EVOLUTION:
The first computers used batch operating systems, in which the computer ran batches of jobs without
stop. Programs were punched into cards that were usually copied to tape for processing. When the computer
finished one job, it would immediately start the next one on the tape. Professional operators, not the users,
interacted with the machine. Users dropped jobs off, then returned to pick up the results after their jobs had run.
This was inconvenient for the users, but the expensive computer was kept busy with a steady stream of jobs.
In the 1960s, time-shared operating systems began replacing batch systems. Users interacted directly
with the computer via a printing terminal such as the Western Electric Teletype.
Several users shared the computer at the same time, and it spent a fraction of a second on each one's
job before moving on to the next. A fast computer could work on many users' jobs at the same time, while creating
the illusion that they were receiving its full attention. Printing terminals required that programs had character or
command-line user interfaces (CLI), in which the user typed responses to prompts or typed commands. The
interaction scrolled down a roll of paper.
Printing terminals were later replaced by video terminals that could only display fixed size characters.
Some could be used to create forms on the screen, but many simply scrolled like a "glass Teletype". Personal
computers became affordable in the mid-1970s.
The Altair 8800 was the first commercially viable personal computer marketed to individuals.
Beginning in January 1975, the Altair was sold to hobbyists in kit form.
The Altair did not have an operating system, since it had only toggle switches and light-emitting diodes for
input and output. People soon connected terminals and floppy disk drives to Altairs. In 1976, Digital Research
introduced the CP/M operating system for the Altair and computers like it. CP/M and later DOS had CLIs that
were similar to those of the time-shared operating systems, but the computer was dedicated to a single user, not
shared.
As hardware prices fell, personal computers with bit-mapped displays that could control individual pixels
were developed. These made personal computers with graphical user interfaces (GUIs) possible. The first
commercial success was the Apple Macintosh which was introduced in 1984.
The initial Macintosh pushed the state of the hardware art, and was restricted to a small, monochrome
display. As hardware continued to evolve, larger, color Macs were developed and Microsoft introduced Windows,
their GUI operating system. The Macintosh operating system was based on decades of research on graphically
oriented personal computer operating systems and applications. Ivan Sutherland's pioneering program Sketchpad,
from the early 1960s, foreshadowed many of the characteristics of a modern GUI, but the hardware cost millions of
dollars and filled a room.
After many generations of research projects on large computers and improvement in hardware, the
Macintosh became economically feasible. Research prototypes like Sketchpad are still being developed at
universities and in research labs. They will form the basis of future products.
TYPES OF OPERATING SYSTEM:
1. Serial Processing Operating System
2. Batch Processing Operating System
3. Multiprogramming Operating System
4. Time-Sharing Operating System
5. Embedded Operating System
6. Network Operating System
7. Multiprocessing Operating System
8. Real-Time Operating System
9. Distributed Operating System
Figure: Evolution of Operating System
SERIAL PROCESSING
The history of operating systems begins around 1950. Before 1950, programmers interacted directly with the
hardware; there was no operating system at that time. If a programmer wished to execute a program in those days,
the following serial steps were necessary:
• Punch the program onto cards.
• Load the punched cards into the card reader.
• Submit the job to the computing machine; any errors were indicated by lights.
• Examine the registers and main memory to identify the cause of an error.
• Take the output from the printers.
• Then the programmer was ready for the next program.
Drawback:
This type of processing is difficult for users and takes much time; each program must wait for the
completion of the previous one. Because programs are submitted to the machine one after another, the method
is said to be serial processing.
BATCH PROCESSING
Before 1960, it was difficult to execute a program because the computer occupied three
different rooms: one room for the card reader, one room for executing the program, and another room for printing
the result. The user or machine operator had to run between the three rooms to complete a job. Batch processing
solves this problem: jobs of the same type are batched together and executed at a time, and a carrier carries the
group of jobs from one room to another.
Therefore, the operator need not run between these three rooms several times.
MULTIPROGRAMMING
Multiprogramming is a technique to execute a number of programs concurrently on a single processor. In
multiprogramming, a number of processes reside in main memory at a time, and the OS (operating system) picks and
begins to execute one of the jobs in main memory. For example, main memory might hold five jobs at a time while
the CPU executes them one by one.
In a non-multiprogramming system, the CPU can execute only one program at a time; if the running program is
waiting for an I/O device, the CPU becomes idle, which hurts CPU performance. But in a
multiprogramming environment, if an I/O wait happens in a process, the CPU switches from that job to
another job in the job pool. So, the CPU is never idle.
Advantages:
• Efficient memory utilization.
• The CPU is never idle, so the performance of the CPU increases.
• The throughput of the CPU may also increase.
• In a non-multiprogramming environment, the user/program has to wait a long time for the CPU; in
multiprogramming, waiting time is limited.
TIME SHARING SYSTEM
Time-sharing or multitasking is a logical extension of multiprogramming. Multiple jobs are executed by the CPU
switching between them. The CPU scheduler selects a job from the ready queue and switches the CPU to that
job. When the time slot expires, the CPU switches from this job to another.
In this method, the CPU time is shared by different processes. So, it is said to be "Time-Sharing System".
Generally, time slots are defined by the operating system.
Advantages:
• The main advantage of the time-sharing system is efficient CPU utilization. It was developed to provide interactive
use of a computer system at a reasonable cost.
• A time-shared OS uses CPU scheduling and multiprogramming to provide each user with a small portion of a
time-shared computer.
• Another advantage of the time-sharing system over the batch processing system is that the user can interact with
the job while it is executing, which is not possible in batch systems.
PARALLEL SYSTEM
There is a trend toward multiprocessor systems; such systems have more than one processor in close communication,
sharing the computer bus, the clock, and sometimes memory and peripheral devices. These systems are referred
to as "tightly coupled" systems, and such a system is called a parallel system. In a parallel system, a number of
processors execute their jobs in parallel.
Advantages:
• It increases the throughput.
• By increasing the number of processors (CPUs), more work gets done in a shorter period of time.
DISTRIBUTED SYSTEM
In a distributed operating system, the processors do not share memory or a clock; each processor has its own
local memory. The processors communicate with one another through various communication lines, such as
high-speed buses. These systems are referred to as "loosely coupled" systems.
Advantages:
• If a number of sites are connected by high-speed communication lines, it is possible to share resources from one
site to another. For example, let s1 and s2 be two sites connected by communication lines, where site s1 has a
printer and site s2 does not. Users at s2 can then print at s1 without moving. Therefore, resource sharing is
possible in a distributed operating system.
• A big computation can be partitioned into a number of sub-computations that run concurrently on the distributed
system.
• If a resource or a system fails at one site due to technical problems, the systems/resources at other sites can be
used. So, reliability increases in a distributed system.
MAJOR ACHIEVEMENTS OF OS:
An operating system manages computer hardware and software resources. The major achievements of operating
systems are given as follows:
1. Process
2. Memory Management
3. Information Protection & Security
4. System Structure
PROCESS:
A process is a program at the time of execution. The term 'process' was first used by Daley and Dennis around 1960.
During the development of multiprogramming, time-sharing, and real-time systems, problems arose due
to timing and synchronization.
MEMORY MANAGEMENT:
Here memory means main memory (RAM), and the term memory management specifies how to utilize the
memory efficiently. So, the main tasks of memory management are efficient memory utilization and efficient
processor utilization.
The responsibilities of memory management are given as follows:
(i) Process isolation: controlling how one process interacts with the data and memory of another process, so that
processes do not interfere with each other.
(ii) Automatic allocation and management: memory should be allocated dynamically based on the priorities of
the processes. Otherwise, process waiting times will increase, which decreases CPU utilization and memory
utilization.
(iii) Protection and access control: protection techniques and access control need not be applied to all processes;
it is better to apply them only to important applications, which saves execution time.
(iv) Long-term storage: long-term storage of processes reduces memory utilization.
INFORMATION PROTECTION AND SECURITY:
Here the term protection means securing the resources and information from unauthorized persons. The
operating system follows a variety of methods for protection and security.
Generally, the protection mechanism is mainly of three types:
(i) Access control: The operating system provides access permissions to the users about important files and
applications.
(ii) Information flow control: The operating system regulates the flow of data within the system.
(iii) Certification: The operating system assigns priorities and hierarchies to the resources, which can then be
used to control unauthorized processes.
SYSTEM STRUCTURE:
In the early days, an operating system contained very little code. Later, more and more features were added
to operating systems, and the amount of operating-system code grew accordingly.
OS DESIGN CONSIDERATIONS FOR MULTIPROCESSOR AND MULTICORE
Multicore and multiprocessor systems both serve to accelerate the computing process. A multicore contains
multiple cores or processing units in a single CPU. A multiprocessor is made up of several CPUs. A multicore
processor does not need complex configuration like a multiprocessor. In contrast, a multiprocessor is more
reliable and capable of running many programs. This section describes multiprocessor and
multicore systems in the operating system, with their advantages and disadvantages.
WHAT IS A MULTIPROCESSOR SYSTEM?
A multiprocessor has multiple CPUs or processors in the system. Multiple instructions are executed
simultaneously by these systems. As a result, throughput is increased. If one CPU fails, the other processors will
continue to work normally. So, multiprocessors are more reliable.
Shared memory or distributed memory can be used in multiprocessor systems. Each processor in a shared
memory multiprocessor shares main memory and peripherals to execute instructions concurrently. In these
systems, all CPUs access the main memory over the same bus. Most CPUs will be idle as the bus traffic
increases. This type of multiprocessor is also known as the symmetric multiprocessor. It provides a single
memory space for all processors.
Each CPU in a distributed memory multiprocessor has its own private memory. Each processor can use local data
to accomplish the computational tasks. The processor may use the bus to communicate with other processors or
access the main memory if remote data is required.
ADVANTAGES AND DISADVANTAGES OF MULTIPROCESSOR SYSTEM:
There are various advantages and disadvantages of the multiprocessor system. Some advantages and
disadvantages of the multiprocessor system are as follows:
Advantages:
There are various advantages of the multiprocessor system. Some advantages of the multiprocessor system are
as follows:
• It is a very reliable system because multiple processors may share their work between the systems, and the work is
completed with collaboration.
• Parallel processing is achieved via multiprocessing.
• If multiple processors work at the same time, the throughput may increase.
• Multiple processors can execute multiple processes at the same time.
Disadvantages:
There are various disadvantages of the multiprocessor system. Some disadvantages of the multiprocessor
system are as follows:
• Multiprocessors work as separate systems, so each processor requires its own memory space.
• If one of the processors fails, its work must be shared among the remaining processors.
• These types of systems are very expensive.
• If one processor is already utilizing an I/O device, additional processors may not utilize the same I/O device, which
can create deadlock.
• The operating system implementation is complicated because multiple processors must communicate with each other.
WHAT IS A MULTICORE SYSTEM?
A single computing component with multiple cores (independent processing units) is known as a
multicore processor. It denotes the presence of a single CPU with several cores in the system. Individually, these
cores may read and run computer instructions. They work in such a way that the computer system appears to
have several processors, although they are cores, not separate processors. These cores may execute normal
processor instructions, including add, move data, and branch.
A single processor in a multicore system may run many instructions simultaneously, increasing the overall
speed of the system's program execution. It decreases the amount of heat generated by the CPU while
enhancing the speed with which instructions are executed. Multicore processors are used in various applications,
including general-purpose, embedded, network, and graphics processing (GPU).
The software techniques used to implement the cores in a multicore system are responsible for the
system's performance. Extra focus has been put on developing software that can execute in parallel, because
parallel execution is exactly what the many cores are there to provide.
ADVANTAGES AND DISADVANTAGES OF MULTICORE SYSTEM
There are various advantages and disadvantages of the multicore system. Some advantages and disadvantages
of the multicore system are as follows:
Advantages
There are various advantages of the multicore system. Some advantages of the multicore system are as follows:
• Multicore processors may execute more data than single-core processors.
• When you are using multicore processors, the PCB (printed circuit board) requires less space.
• Bus traffic is lower.
• Multiple cores are often integrated onto a single integrated-circuit die, or onto numerous dies packaged as a single
chip. As a result, cache coherency is improved.
• These systems are energy efficient because they provide increased performance while using less energy.
Disadvantages
There are various disadvantages of the multicore system. Some disadvantages of the multicore system are as
follows:
• Some operating systems are still designed for single-core processors.
• Multicore systems are more difficult to manage than single-core processors.
• These systems consume more electricity.
• Multicore systems become hot while doing their work.
• They are more expensive than single-core processors.
• Operating systems designed for multicore processors will run slightly slower on single-core processors.
MAIN DIFFERENCES BETWEEN THE MULTIPROCESSOR AND MULTICORE SYSTEM
Various differences between the Multiprocessor and Multicore system are as follows:
• A multiprocessor system has multiple CPUs that allow programs to be processed simultaneously. On the other hand,
a multicore system is a single processor with multiple independent processing units, called cores, that may read
and execute program instructions.
• Multiprocessor systems outperform multicore systems in terms of reliability: if one processor in a multiprocessor
fails, the other processors are not affected.
• Multiprocessors run multiple programs faster than a multicore system. On the other hand, a multicore system
executes a single program faster.
• Multicore systems have less traffic than multiprocessor systems because the cores are integrated into a single
chip.
• Multiprocessors require complex configuration. On the other hand, a multicore system doesn't need to be
configured.
• Multiprocessors are expensive as compared to multicore systems; multicore systems are cheaper.
HEAD-TO-HEAD COMPARISON BETWEEN THE MULTIPROCESSORS AND MULTICORE SYSTEMS
The main differences between the Multiprocessors and Multicore systems are as follows:
• Definition — Multiprocessors: a system with multiple CPUs that allows programs to be processed simultaneously.
Multicore: a single processor that contains multiple independent processing units, known as cores, that may read
and execute program instructions.
• Execution — Multiprocessors: run multiple programs faster than a multicore system. Multicore: executes a single
program faster.
• Reliability — Multiprocessors: more reliable; if one processor fails, the other processors are not affected.
Multicore: less reliable than a multiprocessor.
• Traffic — Multiprocessors: higher traffic than a multicore system. Multicore: less traffic than a multiprocessor.
• Cost — Multiprocessors: more expensive than a multicore system. Multicore: cheaper than a multiprocessor system.
• Configuration — Multiprocessors: require complex configuration. Multicore: doesn't need to be configured.
PROCESS CONCEPT
A question that arises in discussing operating systems involves what to call all the CPU activities. A batch
system executes jobs, whereas a time-shared system has user programs, or tasks. Even on a single-user system
such as Microsoft Windows, a user may be able to run several programs at one time: a word processor, a web
browser, and an e-mail package. Even if the user can execute only one program at a time, the operating system
may need to support its own internal programmed activities, such as memory management. In many respects, all
these activities are similar, so we call all of them processes. The terms job and process are used almost
interchangeably. Although we personally prefer the term process, much of operating-system theory and
terminology was developed during a time when the major activity of operating systems was job processing. It
would be misleading to avoid the use of commonly accepted terms that include the word job (such as job
scheduling) simply because process has superseded job.
THE PROCESS
Informally, as mentioned earlier, a process is a program in execution. A process is more than the
program code, which is sometimes known as the text section. It also includes the current activity, as represented
by the value of the program counter and the contents of the processor's registers. A process generally also
includes the process stack, which contains temporary data (such as function parameters, return addresses, and
local variables), and a data section, which contains global variables.
A process may also include a heap, which is memory that is dynamically allocated during process run
time. We emphasize that a program by itself is not a process; a program is a passive entity, such as a file
containing a list of instructions stored on disk (often called an executable file), whereas a process is an active
entity, with a program counter specifying the next instruction to execute and a set of associated resources. A
program becomes a process when an executable file is loaded into memory. Two common techniques for
loading executable files are double-clicking an icon representing the executable file and entering the name of the
executable file on the command line (as in prog.exe or a.out). Although two processes may be associated with
the same program, they are nevertheless considered two separate execution sequences. For instance, several
users may be running different copies of the mail program, or the same user may invoke many copies of the web
browser program. Each of these is a separate process; and although the text sections are equivalent, the data,
heap, and stack sections vary. It is also common to have a process that spawns many processes as it runs.
PROCESS STATE
As a process executes, it changes state. The state of a process is defined in part by the current activity of
that process. Each process may be in one of the following states:
• New. The process is being created.
• Running. Instructions are being executed.
• Waiting. The process is waiting for some event to occur (such as an I/O completion or reception of a signal).
• Ready. The process is waiting to be assigned to a processor.
• Terminated. The process has finished execution.
PROCESS CONTROL BLOCK:
A Process Control Block (PCB) is a data structure that contains the information related to a process. The
process control block is also known as a task control block or an entry in the process table. It is very important for
process management, as the data structuring for processes is done in terms of the PCB. It also defines the current
state of the operating system.
STRUCTURE OF THE PROCESS CONTROL BLOCK:
The process control block stores many data items that are needed for efficient process management.
The following are the data items:
Process State
This specifies the process state i.e. new, ready, running, waiting or terminated.
Process Number
This shows the number of the particular process.
Program Counter
This contains the address of the next instruction that needs to be executed in the process.
Registers
This specifies the registers that are used by the process. They may include accumulators, index registers, stack
pointers, general purpose registers etc.
List of Open Files
These are the different files that are associated with the process.
CPU Scheduling Information
The process priority, pointers to scheduling queues etc. is the CPU scheduling information that is contained in the
PCB. This may also include any other scheduling parameters.
Memory Management Information
The memory management information includes the page tables or the segment tables depending on the memory
system used. It also contains the value of the base registers, limit registers etc.
I/O Status Information
This information includes the list of I/O devices used by the process, the list of files etc.
Accounting information
The time limits, account numbers, amount of CPU used, process numbers etc. are all a part of the PCB
accounting information.
Location of the Process Control Block
The process control block is kept in a memory area that is protected from the normal user access. This is done
because it contains important process information. Some of the operating systems place the PCB at the beginning
of the kernel stack for the process as it is a safe location.
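As a rough illustration of the fields listed above, a PCB can be pictured as a C structure. This is a minimal sketch with illustrative field names, not the layout of any real kernel:

#include <stdint.h>

typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

typedef struct pcb {
    int           pid;             /* process number */
    proc_state_t  state;           /* current process state */
    uintptr_t     program_counter; /* address of the next instruction */
    uintptr_t     registers[16];   /* saved register set */
    int           priority;        /* CPU-scheduling information */
    uintptr_t     page_table;      /* memory-management information */
    int           open_files[32];  /* list of open files (descriptors) */
    uint64_t      cpu_time_used;   /* accounting information */
    struct pcb   *next;            /* link for the scheduling queues */
} pcb_t;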
THREADS IN OPERATING SYSTEM:
A thread is a sequential flow of tasks within a process. Each thread has its own set of registers and stack
space. There can be multiple threads in a single process having the same or different functionality. Threads are
also termed lightweight processes.
Let us take the example of a human body. A human body has different parts with different
functionalities that work in parallel (e.g., eyes, ears, hands). Similarly, in computers, a single process
might have multiple functionalities running in parallel, where each functionality can be considered a thread.
WHAT IS THREAD IN OS?
Thread is a sequential flow of tasks within a process. Threads in OS can be of the same or different
types. Threads are used to increase the performance of the applications. Each thread has its own program
counter, stack, and set of registers. But the threads of a single process might share the same code and data/file.
Threads are also termed as lightweight processes as they share common resources.
E.g.: while playing a movie on a device, the audio and video are controlled by different threads in the background.
Figure: A single-threaded process versus a multithreaded process, showing the resources shared among the threads
of a multithreaded process.
COMPONENTS OF THREAD
A thread has the following three components:
• Program counter
• Register set
• Stack space
WHY DO WE NEED THREADS?
Threads in the operating system provide multiple benefits and improve the overall performance of the system.
Some of the reasons threads are needed in the operating system are listed here; a minimal pthread example follows
the list.
• Since threads use the same data and code, the operational cost of communication between threads is low.
• Creating and terminating a thread is faster than creating or terminating a process.
• Context switching is faster between threads than between processes.
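A minimal sketch of thread creation with POSIX threads (pthreads); the worker function and its output are illustrative:

#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg) {
    int id = *(int *)arg;
    printf("thread %d running\n", id);  /* threads share the process's code and data */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    int id1 = 1, id2 = 2;
    pthread_create(&t1, NULL, worker, &id1);
    pthread_create(&t2, NULL, worker, &id2);
    pthread_join(t1, NULL);             /* wait for each thread to finish */
    pthread_join(t2, NULL);
    return 0;
}

Compile with: gcc threads.c -pthread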
WHY MULTITHREADING?
In Multithreading, the idea is to divide a single process into multiple threads instead of creating a whole new
process. Multithreading is done to achieve parallelism and to improve the performance of the applications as it is
faster in many ways which were discussed above. The other advantages of multithreading are mentioned below.
• Resource Sharing: Threads of a single process share the same resources, such as code and data/files.
• Responsiveness: A multithreaded program can keep running even if part of it is blocked or executing a lengthy
operation, thus increasing responsiveness to the user.
• Economy: It is more economical to use threads, as they share the resources of a single process; creating
processes, by contrast, is expensive.
PROCESS VS THREAD
Process simply means any program in execution while the thread is a segment of a process. The main differences
between process and thread are mentioned below:
• Processes use more resources and hence are termed heavyweight; threads share resources and hence are termed
lightweight.
• Creation and termination of processes are slower; creation and termination of threads are faster.
• Processes have their own code and data/files; threads share code and data/files within a process.
• Communication between processes is slower; communication between threads is faster.
• Context switching between processes is slower; context switching between threads is faster.
• Processes are independent of each other; threads are interdependent (i.e., they can read, write, or change another
thread's data).
• Example: opening two different browsers creates two processes; opening two tabs in the same browser creates two
threads.
PROCESS SCHEDULING IN OS:
Process Scheduling is an OS task that schedules processes of different states like ready, waiting, and running.
Process scheduling allows the OS to allocate a time interval of CPU execution for each process. Another important
reason for using a process scheduling system is that it keeps the CPU busy all the time. This allows you to get the
minimum response time for programs.
PROCESS SCHEDULING QUEUES:
Process scheduling queues maintain a distinct queue for each process state; the PCBs of all processes in the same
execution state are placed in the same queue. Therefore, whenever the state of a process changes, its PCB is
unlinked from its current queue and moved to the queue of its new state.
Three types of operating system queues are:
1. Job queue – It helps you to store all the processes in the system.
2. Ready queue – This queue holds every process residing in main memory that is ready and waiting to execute.
3. Device queues – These queues hold processes that are blocked waiting for an I/O device; there is a separate
queue for each device.
In a queueing diagram of process scheduling:
• A rectangle represents a queue.
• A circle denotes a resource.
• An arrow indicates the flow of the process.
1. Every new process is first put in the ready queue, where it waits until it is selected for execution (dispatched).
2. One of the processes is allocated the CPU and begins executing.
3. If the executing process issues an I/O request, it is placed in the I/O queue.
4. If the process creates a new subprocess, it may wait for the subprocess's termination.
5. If the process is removed forcibly from the CPU as a result of an interrupt, it is put back in the ready queue
once the interrupt is handled.
TWO STATE PROCESS MODEL
Two-state process models are:
1. Running State
2. Not Running State
RUNNING
In the operating system, whenever a new process is built, it enters the system; the process currently being
executed on the CPU is in the running state.
NOT RUNNING
Processes that are not running are kept in a queue, waiting for their turn to execute. Each entry in the
queue is a pointer to a specific process.
SCHEDULING OBJECTIVES
Here are the important objectives of process scheduling:
• Maximize the number of interactive users within acceptable response times.
• Achieve a balance between response and utilization.
• Avoid indefinite postponement and enforce priorities.
• Give preference to processes holding key resources.
TYPE OF PROCESS SCHEDULERS
A scheduler is a type of system software that allows you to handle process scheduling. There are mainly three
types of Process Schedulers:
• Long-term scheduler
• Short-term scheduler
• Medium-term scheduler
Long Term Scheduler
The long-term scheduler is also known as the job scheduler. It selects processes from the job queue and loads
them into memory for execution, and it regulates the degree of multiprogramming. The main goal of this type of
scheduler is to offer a balanced mix of jobs (processor-bound and I/O-bound), which keeps multiprogramming
manageable.
Medium Term Scheduler
Medium-term scheduling is an important part of swapping; it handles the swapped-out processes.
A running process that makes an I/O request can become suspended, and a suspended process cannot make any
progress towards completion. In order to remove such a process from memory and make space for other processes,
the suspended process is moved to secondary storage.
Short Term Scheduler
The short-term scheduler is also known as the CPU scheduler. The main goal of this scheduler is to boost system
performance according to a chosen set of criteria. It selects one process from the group of processes that are
ready to execute and allocates the CPU to it. The dispatcher then gives control of the CPU to the process selected
by the short-term scheduler.
DIFFERENCE BETWEEN SCHEDULERS
• Alias — Long-term: also known as the job scheduler. Short-term: also known as the CPU scheduler. Medium-term:
also called the swapping scheduler.
• Presence — Long-term: either absent or minimal in a time-sharing system. Short-term: insignificant in a
time-sharing system. Medium-term: an element of time-sharing systems.
• Speed — Long-term: slower than the short-term scheduler. Short-term: the fastest of the three. Medium-term:
medium speed.
• Selection — Long-term: selects processes from the job pool and loads them into memory. Short-term: selects only
processes that are in the ready state. Medium-term: sends swapped-out processes back into memory.
• Control — Long-term: offers full control. Short-term: offers less control. Medium-term: reduces the level of
multiprogramming.
CONTEXT SWITCHING:
In computing, a context switch is the process of storing the state of a process or thread, so that it can be
restored and resume execution at a later point, and then restoring a different, previously saved, state. This allows
multiple processes to share a single central processing unit (CPU), and is an essential feature of a multitasking
operating system.
The precise meaning of the phrase "context switch" varies. In a multitasking context, it refers to the
process of storing the system state for one task, so that task can be paused and another task resumed. A context
switch can also occur as the result of an interrupt, such as when a task needs to access disk storage, freeing up
CPU time for other tasks. Some operating systems also require a context switch to move between user mode and
kernel mode tasks. The process of context switching can have a negative impact on system performance.
STEPS FOLLOWED IN CONTEXT SWITCHING
1. The state of the currently executing process must be saved so it can be restored when rescheduled for execution.
2. The process state includes all the registers that the process may be using, especially the program counter, plus
any other operating system specific data that may be necessary. This is usually stored in a data structure called a
process control block (PCB) or switchframe.
3. The PCB might be stored on a per-process stack in kernel memory (as opposed to the user-mode call stack), or
there may be some specific operating system-defined data structure for this information. A handle to the PCB is
added to a queue of processes that are ready to run, often called the ready queue.
4. Since the operating system has effectively suspended the execution of one process, it can then switch context by
choosing a process from the ready queue and restoring its PCB. In doing so, the program counter from the PCB is
loaded, and thus execution can continue in the chosen process. Process and thread priority can influence which
process is chosen from the ready queue (i.e., it may be a priority queue).
OPERATIONS ON PROCESS:
Process creation:
Processes are created with the fork system call (so the operation of creating a new process is sometimes
called forking a process). The child process created by fork is a copy of the original parent process, except that it
has its own process ID. Each process is named by a process ID number. A unique process ID is allocated to each
process when it is created. The lifetime of a process ends when its termination is reported to its parent process; at
that time, all of the process's resources, including its process ID, are released.
After forking a child process, both the parent and child processes continue to execute normally. If the programmer
wants the program to wait for a child process to finish executing before continuing, the programmer must do this
explicitly after the fork operation, by calling wait or waitpid. A newly forked child process continues to execute the
same program as its parent process, at the point where the fork call returns. The return value can be used from
fork to tell whether the program is running in the parent process or the child.
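A minimal sketch of this pattern, using fork's return value to distinguish parent from child and wait to pause the parent; error handling is trimmed for brevity:

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                    /* create a child process */
    if (pid < 0) {
        perror("fork");                    /* fork failed */
        exit(1);
    } else if (pid == 0) {
        printf("child: pid=%d\n", (int)getpid()); /* return value 0: the child */
        exit(0);
    } else {
        wait(NULL);                        /* parent waits explicitly for the child */
        printf("parent: child %d finished\n", (int)pid);
    }
    return 0;
}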
The creation of a process requires the following steps. The order in which they are carried out is not necessarily
the same in all cases.
1. Name. The name of the program which is to run as the new process must be known.
2. Process ID and Process Control Block. The system creates a new process control block, or locates an unused
block in an array. This block is used to follow the execution of the program through its course, keeping track of its
resources and priority. Each process control block is labeled by its PID or process identifier.
3. Locate the program to be executed on disk and allocate memory for the code segment in RAM.
4. Load the program into the code segment and initialize the registers of the PCB with the start address of the
program and appropriate starting values for resources.
5. Priority. A priority must be computed for the process, using a default for the type of process and any 'nice' value
specified by the user.
6. Schedule the process for execution.
Process termination:
Processes can be terminated in one of two ways:
1. Normal Termination occurs by a return from main or when requested by an explicit call to exit or _exit.
2. Abnormal Termination occurs as the default action of a signal or when requested by abort.
On receiving a signal, a process looks for a signal-handling function. Failure to find a signal-handling
function forces the process to call exit, and therefore to terminate. The functions _exit, exit and abort terminate a
process with the same effects except that abort makes available to wait or waitpid the status of a process
terminated by the signal SIGABRT. As a process terminates, it can set an eight-bit exit status code available to its
parent. Usually, this code indicates success (zero) or failure (non-zero).
If a signal terminated the process, the system first tries to dump an image of core, and then modifies the
exit code to indicate which signal terminated the process and whether core was dumped. This is provided that the
signal is one that produces a core dump. Next, all signals are set to be ignored, and resources owned by the
process are released, including open files and the working directory.
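A hedged sketch of a parent inspecting a child's termination status; the exit code 42 is illustrative:

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();
    if (pid == 0)
        exit(42);                      /* child: normal termination via exit */

    int status;
    waitpid(pid, &status, 0);          /* parent: collect the termination status */
    if (WIFEXITED(status))             /* normal termination (return/exit) */
        printf("exit code: %d\n", WEXITSTATUS(status));
    else if (WIFSIGNALED(status))      /* abnormal termination by a signal */
        printf("killed by signal %d\n", WTERMSIG(status));
    return 0;
}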
INTER PROCESS COMMUNICATION (IPC)
Inter-process communication (IPC) is the way by which multiple processes can communicate with each
other. Shared memory, message queues, FIFOs, etc., are some of the ways to achieve IPC in an OS. A system
can have two types of processes:
1. Independent Process
2. Cooperating Process
Cooperating processes affect each other and may share data and information among themselves. Interprocess
Communication or IPC provides a mechanism to exchange data and information across multiple processes, which
might be on single or multiple computers connected by a network.
Why Inter Process Communication (IPC) is Required?
IPC helps achieve the following:
1. Computational speedup
2. Modularity
3. Information and data sharing
4. Privilege separation
5. Processes can communicate with each other and synchronize their actions.
Approaches for Inter Process Communication:
Different Ways to Implement Inter Process Communication (IPC) are:
Pipes
It is a half-duplex method (or one-way communication) used for IPC between two related processes. It is
like a scenario like filling the water with a tap into a bucket. The filling process is writing into the pipe and the
reading process is retrieved from the pipe.
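A minimal sketch of one-way communication through a pipe between two related processes (a parent and its child); error handling is omitted for brevity:

#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    int fd[2];
    pipe(fd);                        /* fd[0]: read end, fd[1]: write end */

    if (fork() == 0) {               /* child: the reader */
        char buf[32];
        close(fd[1]);                /* close the unused write end */
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
        buf[n] = '\0';
        printf("child read: %s\n", buf);
    } else {                         /* parent: the writer */
        close(fd[0]);                /* close the unused read end */
        write(fd[1], "hello", strlen("hello"));
        close(fd[1]);
    }
    return 0;
}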
Shared Memory
Multiple processes can access a common shared memory region and communicate through it: one process
makes changes at a time, and the others then view the change. Once the region is set up, data exchange through
shared memory does not involve the kernel.
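A hedged sketch using POSIX shared memory; the region name "/demo_shm" is illustrative, and on some systems the program must be linked with -lrt:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
    ftruncate(fd, 4096);                       /* size the shared region */
    char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, 0);         /* map it into our address space */
    strcpy(p, "hello");        /* another process mapping "/demo_shm" sees this */
    printf("wrote: %s\n", p);
    munmap(p, 4096);
    shm_unlink("/demo_shm");                   /* remove the name when done */
    return 0;
}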
Message Passing
1. In IPC, this is used by a process for communication and synchronization.
2. Processes can communicate without any shared variables, therefore it can be used in a distributed environment on
a network.
3. It is slower than the shared memory technique.
4. It has two operations: sending a (fixed-size) message and receiving a message.
Message Queues
The kernel maintains a linked list to store messages, and a message queue is identified using a
"message queue identifier".
Direct Communication
In direct communication, processes that want to communicate must explicitly name the sender or receiver.
• A pair of communicating processes must have a link between them.
• A link (generally bi-directional) is established between every pair of communicating processes.
Indirect Communication
• Pairs of communicating processes have shared mailboxes.
• A link (uni-directional or bi-directional) is established between pairs of processes.
• The sender process puts the message in the port or mailbox of a receiver process, and the receiver process takes
out (or deletes) the data from the mailbox.
FIFO
• Used to communicate between two processes that are not related.
• Full-duplex method: process P1 is able to communicate with process P2, and vice versa (see the sketch below).
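A minimal sketch of the writer side of a FIFO; the path /tmp/demo_fifo is illustrative, and a separate, unrelated process would open the same path for reading:

#include <fcntl.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    mkfifo("/tmp/demo_fifo", 0600);            /* create the named pipe in the filesystem */
    int fd = open("/tmp/demo_fifo", O_WRONLY); /* blocks until a reader opens the FIFO */
    write(fd, "hello", strlen("hello"));
    close(fd);
    unlink("/tmp/demo_fifo");                  /* remove the FIFO name when done */
    return 0;
}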
PROCESS SYNCHRONIZATION:
When two or more processes cooperate with each other, their order of execution must be preserved;
otherwise there can be conflicts in their execution and incorrect outputs can be produced. A cooperative
process is one that can affect the execution of another process or be affected by the execution of another
process. Such processes need to be synchronized so that their order of execution can be guaranteed. The
procedure involved in preserving the appropriate order of execution of cooperative processes is known
as Process Synchronization. There are various synchronization mechanisms that are used to synchronize
processes.
Race Condition
A race condition typically occurs when two or more threads read, write, and possibly make decisions
based on memory that they are accessing concurrently; the result then depends on the timing of those accesses.
Critical Section
The regions of a program that try to access shared resources and may cause race conditions are called
critical sections. To avoid race conditions among processes, we need to ensure that only one process at a time
can execute within its critical section.
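A minimal sketch of a race condition: two threads increment a shared counter with no synchronization, so read-modify-write updates can be lost:

#include <pthread.h>
#include <stdio.h>

static long counter = 0;                /* shared data accessed by both threads */

static void *increment(void *arg) {
    for (int i = 0; i < 1000000; i++)
        counter++;                      /* read-modify-write: not atomic */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* Expected 2000000, but interleaved updates typically lose increments. */
    printf("counter = %ld\n", counter);
    return 0;
}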
CRITICAL SECTION PROBLEM
The critical section problem is one of the classic problems in Operating Systems. In operating systems,
there are processes called cooperative processes that share and access a single resource. In these kinds of
processes, the problem of synchronization occurs. The critical section problem is a problem that deals with this
synchronization.
A critical section is the part of a program that tries to access shared resources. The resource may be any
resource in a computer, such as a memory location, a data structure, the CPU, or any I/O device. The critical section
cannot be executed by more than one process at the same time, so the operating system faces difficulty in allowing
and disallowing processes from entering the critical section. The critical section problem is used to design a set of
protocols that ensure a race condition among the processes never arises. In order to synchronize
the cooperative processes, our main task is to solve the critical section problem: we need to provide a solution
that satisfies the conditions listed in the solutions section below.
What is the Critical Section in OS?
• Critical Section refers to the segment of code that tries to access or modify the value of the variables in a
shared resource.
• The section above the critical section is called the Entry Section; a process entering the critical section
must pass through the entry section.
• The section below the critical section is called the Exit Section.
• The section below the exit section is called the Remainder Section, which contains the remaining code that is
left after execution. (The overall structure is sketched below.)
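A minimal sketch of this structure in C, using a pthread mutex as the entry/exit protocol; any mechanism satisfying the properties discussed below could play the same role. With the lock in place, the lost-update race shown earlier disappears:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long counter = 0;               /* the shared resource */

static void *task(void *arg) {
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);     /* entry section */
        counter++;                     /* critical section */
        pthread_mutex_unlock(&lock);   /* exit section */
        /* remainder section: code that does not touch shared data */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, task, NULL);
    pthread_create(&t2, NULL, task, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter); /* now reliably 2000000 */
    return 0;
}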
What is the Critical Section Problem in OS?
When there is more than one process accessing or modifying a shared resource at the same time, then
the value of that resource will be determined by the last process. This is called the race condition.
Consider an example of two processes, p1 and p2, and let value = 3 be a variable present in the shared resource.
Suppose the two processes perform the following actions:

value = value + 3   // process p1: value becomes 6
value = value - 3   // process p2: value becomes 3

After p1 runs, value should be 6, but because process p2 also modifies the shared variable, the value is changed
back to 3. This is the problem of synchronization.
The critical section problem is to make sure that only one process should be in a critical section at a time. When a
process is in the critical section, no other processes are allowed to enter the critical section. This solves the race
condition.
Example of Critical Section Problem
Let us consider a classic bank example, this example is very similar to the example we have seen above.
• Consider a scenario where money is withdrawn from the bank both by the cashier (through a cheque) and from the
ATM at the same time.
• Suppose an account has a balance of ₹10,000, and when the cashier withdraws money it takes 2 seconds for the
balance to be updated in the account.
• It is then possible to withdraw ₹7,000 from the cashier and, within the 2-second balance-update window, also
withdraw ₹6,000 from the ATM.
• Thus, the total money withdrawn becomes greater than the balance of the bank account.
• This happened because two withdrawals occurred at the same time. If the balance update were treated as a
critical section, only one withdrawal at a time would be possible, which solves this problem.
Solutions to the Critical Section Problem
A solution developed for the critical section should have the following properties:
• If a process enters the critical section, then no other process should be allowed to enter the critical section at
the same time. This is called mutual exclusion.
• If no process is in the critical section and other processes wish to enter, then the selection of the process that
enters next cannot be postponed indefinitely. This is called progress.
• If a process wants to enter the critical section, there should be a bound on how long it can be made to wait.
This property is called bounded waiting.
• The solution should be independent of the system's architecture. This is called neutrality.
Some of the software-based solutions for the critical section problem are Peterson's solution, semaphores, and
monitors. Some of the hardware-based solutions involve atomic instructions such as TestAndSet, compare-and-swap,
and Unlock and Lock.
The above mentioned solutions for the Critical Section Problem will be explained in UNIT-II.