
OS2 E-book of Operating System Building Blocks (1)

Edition 1
Year of Publication: 2016
© Confidentiality & Proprietary Information
This is a confidential document prepared by iNurture. This document, or any portion
thereof, should not be made available to any persons other than the authorised and
designated staff of the company/institution/vendor to which it has been submitted.
No part of this document may be reproduced, stored in a retrieval system, or transmitted, in
any form or by any means, electronic, mechanical, photocopying, recording, or otherwise,
without the prior written permission of iNurture.
How to use the Self Learning Material
The pedagogy of this course is designed to help you assimilate its concepts and
processes with ease. The course is divided into Modules, and each module is divided
into Chapters. Each chapter consists of the following elements:
Table of Contents: Every chapter begins with a well-defined table of contents.
For example: “1.1.8.(i)” should be read as “Module 1, Chapter 1, Topic 8, Sub-topic (i)”,
and “1.2.8.(ii)” should be read as “Module 1, Chapter 2, Topic 8, Sub-topic (ii)”.
Aim: ‘Aim’ refers to the overall goal to be achieved by going through the chapter.
Instructional Objectives: ‘Instructional Objectives’ defines what the chapter
intends to deliver.
Learning Outcomes: ‘Learning Outcomes’ refers to what you will be able to
accomplish by going through the chapter.
Advantages: ‘Advantages’ describes the positive aspects of that particular method,
theory or practice.
Disadvantages: ‘Disadvantages’ describes the drawbacks of the particular method,
theory or practice.
Summary: ‘Summary’ contains the main points of the entire chapter.
Self-assessment: ‘Self-assessment’ contains a set of questions to answer at the end
of each topic.
e-References: ‘e-References’ is a list of online resources that have been used while
designing the chapter.
External Resources: ‘External Resources’ is a list of scholarly books for additional
source of knowledge.
Video Links: ‘Video Links’ contain links to online videos that will help you to
understand the concepts better.
Did you know?: ‘Did you know’ is an interesting fact that helps improve your
knowledge about the topic.
Activity: ‘Activity’ is used to demonstrate the application of a concept. Activities can
be online and offline.
Operating Systems Building Blocks
Course Description
The main goal of studying Operating Systems is to gain an overview of their concepts,
capabilities, and limitations. The course also shows how an operating system manages
multiple tasks at the same time, and it addresses the problems of memory management
and the concerns of file systems.
By the end of this course, students will be able to explain various concepts of Operating
Systems, such as Processes and Threads, Scheduling, Synchronisation, Memory Management,
File Systems, Disk Management, and Security.
iNurture’s Operating Systems Building Blocks course is designed to serve as a stepping
stone towards a career as a System Software Engineer, Trainer, or Service Engineer, a few
of the fields in which one can explore many opportunities.
The Operating Systems Building Blocks Course contains Five Modules.
MODULE 1: INTRODUCTION TO OPERATING SYSTEM
Introduction, Objectives and Functions of OS, Evolution of OS, OS Structures, OS
Components, OS Services, System calls, System programs, Virtual Machines.
MODULE 2: PROCESS MANAGEMENT
Processes: Process concept, Process scheduling, Co-operating processes, Operations
on processes, Inter process communication, Communication in client-server systems;
Threads: Introduction to Threads, Single and Multi-threaded processes and its
benefits, User and Kernel threads, Multithreading models, Threading issues; CPU
Scheduling: Basic concepts, Scheduling criteria, Scheduling Algorithms, Multiple
Processor Scheduling, Real-time Scheduling, Algorithm Evaluation, Process
Scheduling Models; Process Synchronisation: Mutual Exclusion, Critical – section
problem,
Synchronisation
synchronisation,
Critical
hardware,
Regions,
Semaphores,
Monitors,
OS
Classic
problems
Synchronisation,
of
Atomic
Transactions; Deadlocks: System Model, Deadlock characterisation, Methods for
handling Deadlocks, Deadlock prevention, Deadlock Avoidance, Deadlock Detection,
Recovery from Deadlock.
MODULE 3: STORAGE MANAGEMENT
Memory Management: Logical and physical Address Space, Swapping, Contiguous
Memory Allocation, Paging, Segmentation with Paging; Virtual Memory Management:
Demand paging, Process creation, Page Replacement Algorithms, Allocation of
Frames, Thrashing, Operating System Examples, Page size and other considerations,
Demand segmentation; File-System Interface: File concept, Access Methods,
Directory structure, File-system Mounting, File sharing, Protection and consistency
semantics; File-System Implementation: File-System structure, File-System
Implementation, Directory Implementation, Allocation Methods, Free-space
Management, Efficiency and Performance, Recovery; Disk Management: Disk
Structure, Disk Scheduling, Disk Management, Swap-Space Management, Disk
Attachment, stable-storage Implementation.
MODULE 4: PROTECTION AND SECURITY
Protection: Goals of Protection, Domain of Protection, Access Matrix,
Implementation of Access Matrix, Revocation of Access Rights, Capability-Based
Systems, Language-Based Protection; Security: The Security Problem, User
Authentication, One-Time Passwords, Program Threats, System Threats,
Cryptography, Computer-Security Classifications.
Table of Contents
MODULE 1
Introduction to Operating System
Chapter 1.1 Overview of Operating System ........................................................................ 1
Chapter 1.2 Operating System Services and System Calls............................................... 31
MODULE 2
Process Management
Chapter 2.1 Process, Threads and CPU Scheduling ........................................................ 57
Chapter 2.2 Process Synchronisation and Deadlocks ................................................... 105
MODULE 3
Storage Management
Chapter 3.1 Memory Management and Virtual Memory Management ..................... 149
Chapter 3.2 File System Interface, Implementation and Disk Management. ............. 181
MODULE 4
Protection and Security
Chapter 4.1 Protection ....................................................................................................... 219
Chapter 4.2 Security ........................................................................................................... 249
MODULE - I
Introduction to
Operating Systems
MODULE 1
Introduction to Operating Systems
Module Description
The main goal of studying Introduction to Operating Systems is to understand what an
Operating System is and how it has evolved. This module also covers the basic concepts
of Operating Systems.
By the end of this module, students will learn about the Operating System and its evolution,
the Objectives and Functions of an OS, OS Structures, OS Components, OS Services, System
Calls, System Programs, Virtual Machines, the System Boot process, and OS Design and
Installation.
Students will be able to discuss the Operating System and its evolution, recall the various
objectives of an Operating System, summarise its constituents, discuss Operating System
structures and distinguish their types, describe the services an Operating System provides,
explain the working of system calls, demonstrate the installation and customisation of an
Operating System, and summarise its boot process.
Chapter 1.1
Overview of Operating Systems
Chapter 1.2
Operating System Services and System Calls
Chapter Table of Contents
Chapter 1.1
Overview of Operating Systems
Aim ......................................................................................................................................................... 1
Instructional Objectives....................................................................................................................... 1
Learning Outcomes .............................................................................................................................. 1
1.1.1 Introduction to Operating System ........................................................................................... 2
(i) What is an Operating System? ............................................................................................ 2
(ii) Evolution of Operating System ......................................................................................... 3
(iii) Objectives of Operating System ....................................................................................... 5
(iv) Functions of Operating System ........................................................................................ 6
Self-assessment Questions ........................................................................................................ 8
1.1.2 Components of Operating System ........................................................................................... 8
(i) Process Management ........................................................................................................... 8
(ii) Memory Management ........................................................................................................ 9
(iii) Files and Disk Management ............................................................................................. 9
(iv) Networks ........................................................................................................................... 10
(v) Security ............................................................................................................................... 11
Self-assessment Questions ...................................................................................................... 12
1.1.3 Operating System Structure .................................................................................................... 12
Self-assessment Questions ...................................................................................................... 15
1.1.4 Types of Operating System Structure .................................................................................... 16
(i) Monolithic Approach ........................................................................................................ 16
(ii) Layered Approach ............................................................................................................. 16
(iii) Microkernels ..................................................................................................................... 17
(iv) Client-Server Model ......................................................................................................... 18
(v) Virtual Machines ............................................................................................................... 18
(vi) Exokernels ......................................................................................................................... 18
Self-assessment Questions ...................................................................................................... 20
Summary ............................................................................................................................................. 21
Terminal Questions............................................................................................................................ 23
Answer Keys........................................................................................................................................ 23
Activity................................................................................................................................................. 24
Case Study ........................................................................................................................................... 24
Bibliography ........................................................................................................................................ 28
e-References ........................................................................................................................................ 28
External Resources ............................................................................................................................. 28
Video Links ......................................................................................................................................... 29
Aim
To acquaint the students with the knowledge of the basics of Operating System
Instructional Objectives
After completing this chapter, you should be able to:
• Describe the concept of Operating System
• Explain the components of Operating System
• Explain the Operating System Structure
• Elaborate on the various types of Operating System Structure
Learning Outcomes
At the end of this chapter, you are expected to:
• Summarise the concept of Operating System
• Outline the components of Operating System
• Outline the Operating System Structure
• List the various types of Operating System Structure
1.1.1 Introduction to Operating System
A computer uses hardware and software resources to complete its tasks. These resources
include input-output (I/O) devices, memory, storage, and other software that supports the
functioning of the system. The operating system acts as the resource manager: it manages
all of these resources internally and assigns them to particular programs as each task
requires.
An operating system is a computer program that manages the computer hardware. It
provides a base for application programs and acts as an intermediary between the user and
the computer hardware. A notable aspect of operating systems is how differently they
handle these tasks. Mainframe operating systems optimise the utilisation of hardware. A
personal computer (PC) operating system supports complex games, business applications,
and so on. On handheld computers, operating systems create an environment in which a
user can readily interact with the computer to run programs. Thus, operating systems are
designed to be convenient, efficient, or some combination of both.
(i) What is an Operating System?
An operating system (OS) is system software that manages computer hardware and
software resources and provides common services for computer programs. All computer
programs, excluding firmware, require an operating system to run.
A boot program loads the operating system when the computer is switched on. The
operating system then controls all the programs on the computer. Application programs
request services from the operating system through a specified Application Program
Interface (API). Through a command line or a Graphical User Interface (GUI), users can
interact directly with the operating system. Without an operating system, a computer and
its software programs would be useless. One popular example is Microsoft Windows XP;
a few others are later versions of Windows, Google Android, iOS, Ubuntu Linux, and
Apple Mac OS.
An operating system is similar to a government. Like a government, an operating system
performs no useful function by itself; it provides an environment in which other programs
can do useful work.
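The idea of applications requesting services through the system-call API can be made concrete with a minimal sketch in C. On a POSIX system, `write()` is a thin library wrapper around the kernel's write system call; the program hands the OS a buffer and a file descriptor, and the kernel does the actual I/O. This is only an illustrative example, not from the text itself.

```c
/* Minimal sketch: an application asking the OS for a service
   through the POSIX system-call API. write() is a thin wrapper
   around the kernel's write system call. */
#include <unistd.h>
#include <string.h>

/* Ask the OS to copy `msg` to standard output (fd 1).
   Returns the number of bytes the kernel reports written, or -1. */
long print_via_syscall(const char *msg)
{
    return (long)write(STDOUT_FILENO, msg, strlen(msg));
}
```

Calling `print_via_syscall("hello\n")` asks the kernel, not the C library's buffered I/O, to perform the output, which is exactly the application-to-OS request described above.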
(ii)
Evolution of Operating System
The following timeline covers roughly the past 60 years of the evolution of operating
systems. This evolution has depended directly on the development of computer systems
and on how users use them.
Early Evolution:
• 1945 – ENIAC
• 1949 – EDSAC and EDVAC
• 1949 – BINAC, a successor to the ENIAC
• 1951 – UNIVAC
• 1952 – IBM 701
• 1956 – The interrupt
• 1954-1957 – The development of FORTRAN
Operating systems by the late 1950s:
By the end of the 1950s, operating systems had improved to support the following:
• Single-stream batch processing.
• Common, standardised I/O routines for device access.
• Program transition capabilities to reduce the overhead of starting a new job.
• Error recovery to clean up after a job terminated abnormally.
• Job control languages (JCL) that let users specify the job definition and the
resources it required.
Operating systems in the 1960s:
• 1961 – The dawn of minicomputers
• 1962 – CTSS (Compatible Time-Sharing System) from MIT
• 1964 – IBM System/360
• The 1960s – Disks become mainstream
• 1966 – Minicomputers get cheaper
• 1967-1968 – The mouse
• 1964 and onwards – Multics
• 1969 – The UNIX Time-Sharing System from Bell Telephone Laboratories
OS features supported by the 1970s and onwards:
• The introduction of multi-user and multitasking systems.
• Virtual machines and dynamic address translation hardware came into the limelight.
• The introduction of personal, interactive systems and modular architectures.
• 1971 – Intel announced the microprocessor
• 1972 – IBM came out with the virtual machine operating system
• 1973 – UNIX 4th Edition was published
• 1973 – The era of the personal computer began
• 1974 – Gates and Allen wrote BASIC for the Altair
• 1977 – Apple II
• August 12, 1981 – IBM introduced the IBM PC
• 1983 – Microsoft started working on MS-Windows
• 1984 – The Apple Macintosh came out
• 1990 – Microsoft Windows 3.0 came out
• 1991 – GNU/Linux
• 1992 – The first Windows virus came out
• 1993 – Windows NT
• 2007 – iOS came out
• 2008 – Android OS was introduced
• 2009-2016 – Multiple versions of Android were released.
Research and development continue to produce new operating systems and to upgrade
existing ones. This will improve the overall user experience and make operating systems
faster and more efficient than ever before.
(iii) Objectives of Operating System
A few objectives of an Operating System are:
1. Convenience: An operating system makes a computer more convenient to use. It
aims to wrap the underlying hardware resources and deliver services to end users in
a systematic way. The two types of services are:
• Services directly available to end users through I/O devices
• Services for application programs
To end users, a computer system is a collection of useful applications. An application
is created by programmers with the help of a programming language, and higher-level
utilities make it easier and more comfortable for programmers to write such programs.
2. Efficiency: An operating system helps the computer system's resources to be used
effectively. The OS exploits the hardware to provide easily accessible interfaces; that
is, it manages the hardware resources and also manages the entities that use its
services.
Using the hardware facilities, the OS may set up different processes to run at
different moments, swapping program data and instructions between external
storage devices and main memory.
3. Ability to evolve: An OS should be designed so that new system functions can be
effectively developed, tested, and introduced without interfering with its service.
An OS evolves over time due to:
• New or upgraded hardware: As hardware technologies develop, the OS needs
to be updated to exploit the new mechanisms introduced by the new hardware.
• New services: An OS may also be extended to offer more services as users
demand them.
4. Fixes: No software is perfect; a program invariably has a few defects or bugs, which
need to be fixed from time to time. For example, Microsoft releases various Windows
patches that fix bugs reported by users.
OS designers need to build an operating system so that it can be easily maintained and
optimised. All the usual software design techniques apply, such as modularisation: with
modularisation, the OS can be divided into multiple modules with clearly specified
interfaces between them.
(iv) Functions of Operating System
An operating system has several functions. The main goal of an operating system is to provide
the link between the user and the hardware.
• OS as a Resource Manager: An operating system is a resource manager because it
looks after all the system resources, such as memory, the processor, and I/O devices.
The operating system decides when the CPU runs each program and how memory is
used. It also helps respond to the right input devices as per the user's request.
• Booting: Booting is the process of starting the computer so that the operating
system can run. Booting tests the system and then lets the OS take over.
• Memory Management: Regulating memory is not possible without an operating
system. At any single time, various data and programs reside in memory. Without
an operating system, the programs might interfere with each other, and the system
would not run properly.
• Loading and Execution: Before executing a program, the OS loads it into memory.
The operating system provides the facility to load programs into memory and run
them to completion.
• Data Security: Data is an important part of a computer system. The operating
system protects the data stored on the computer from deletion, modification, or
illegal use.
• Disk Management: The operating system maintains the disk space, keeping the
stored files and folders organised.
• Process Management: The CPU can perform one task at a time. If there are many
tasks, the operating system decides the order in which the CPU executes them.
• Device Controlling: An operating system also manages all the devices of the
computer; device drivers regulate the hardware devices.
• Printing Controlling: The operating system also manages printing. If a user issues
two print commands at a time, it does not combine the data of the two files but
prints them separately.
• Providing Interface: The user interface determines how someone inputs data or
instructions and how information is displayed on the screen. The operating system
offers two types of interface to the user:
• Graphical User Interface (GUI): Provides a visual environment for
interacting with the computer. It uses windows, icons, menus, and other
graphical objects to issue commands.
• Command Line Interface (CLI): Provides an interface for interacting with
the computer by typing commands.
• Extended Machine: The operating system also acts as an extended machine,
allowing files to be shared between multiple users and providing a graphical
environment and various languages for communication.
• Mastermind: The operating system performs many functions behind the scenes. It
manages booting and provides the facility to enhance the logical memory of the
computer using the physical memory of the computer system; NTFS and FAT are
two file-system formats it manages.
• Error detection and response: Different kinds of errors can occur while a computer
system is running. These include internal and external hardware errors, such as a
memory error or a device failure or malfunction, and software errors, such as
division by zero, an attempt to access a forbidden memory location, or the inability
of the OS to grant an application's request. In each case, the OS must respond in a
way that clears the error condition with the least effect on the running applications.
The response may range from ending the program that caused the error, to retrying
the operation, to simply reporting the error to the application.
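The last option, reporting the error to the application, can be sketched with a few lines of C. When a POSIX system call fails, the kernel returns -1 and records the reason in `errno`, which the program can inspect and act on. The file path used below is purely hypothetical, chosen only so the call fails.

```c
/* Sketch: how the OS "describes the error to the application".
   A failed system call returns -1 and the kernel sets errno,
   which the program can then inspect. */
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

/* Try to open a file read-only; on failure, return the errno
   value the OS reported, or 0 on success. */
int open_or_errno(const char *path)
{
    int fd = open(path, O_RDONLY);
    if (fd == -1)
        return errno;   /* e.g., ENOENT if the path does not exist */
    close(fd);
    return 0;
}
```

Opening a nonexistent path yields `ENOENT` ("no such file or directory"), letting the application decide whether to retry, report, or abort, rather than the OS terminating it.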
Self-assessment Questions
1) Booting tests the system and lets the OS work. State True or False?
a) True
b) False
2) Operating System provides the amenity to enhance the logical memory of the
computer by adopting the physical memory of the computer system. State True or
False?
a) True
b) False
3) Which component creates a platform for application programs and plays as an
emissary between the user and the computer hardware?
a) OS
b) Virtual Machine
c) Memory
d) User Interface
4) Division by Zero is a hardware error. State True or False?
a) True
b) False
1.1.2 Components of Operating System
(i) Process Management
Processes may be submitted by users or may be the system's own processes. The operating
system assigns priorities to them and plays an eminent role in executing them. It can also
create child processes, dividing a large process into smaller ones.
The roles that the operating system plays here are:
• It keeps track of the processor and the status of each process. The program
responsible for this task is known as the traffic controller.
• It assigns the processor (CPU) to a process.
• It deallocates the processor when a process no longer needs it.
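The creation of a child process can be illustrated with a minimal POSIX sketch: `fork()` asks the OS to create the child, and the parent collects the child's exit status with `waitpid()`. The function name `run_child` is an illustrative choice, not from the text.

```c
/* Sketch: creating a child process with fork(), as described above.
   The parent waits for the child and collects its exit status. */
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Fork a child that exits with `code`; return the exit status the
   parent observes, or -1 on failure. */
int run_child(int code)
{
    pid_t pid = fork();
    if (pid < 0)
        return -1;            /* fork failed */
    if (pid == 0)
        _exit(code);          /* child: terminate immediately */
    int status = 0;
    waitpid(pid, &status, 0); /* parent: wait for the child */
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

Both the scheduling of the two processes and the delivery of the exit status are OS responsibilities; the program merely requests them.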
(ii) Memory Management
The operating system allocates and deallocates memory for processes. Main memory
provides fast storage that the CPU can access directly; for a program to execute, it must be
in main memory.
The following are the activities of an operating system for memory management:
• It keeps track of primary memory, i.e., which parts are in use and by whom, and
which parts are free.
• In multiprogramming, the operating system decides which process will get memory,
when it will get it, and how much it will get.
• It allocates memory when a process requests it.
• It deallocates memory when a process no longer needs it or has terminated.
(iii) Files and Disk Management
All operating systems include support for numerous file systems. Modern file systems are
built around a hierarchy of directories. The concept is the same across all the common file
systems, but a few differences exist in their implementation; two prominent examples are
case sensitivity and the character used to separate directories.
UNIX separates path components with a slash (/). Because MS-DOS had already adopted
the CP/M convention of using slashes for additional options to commands, it uses a
backslash (\) to separate path components. Versions of Mac OS before OS X use a colon (:)
as the path separator, and RISC OS uses a period (.).
UNIX allows any character in a file name other than the slash (/), and names are case
sensitive. File names in Microsoft Windows, by contrast, are not case sensitive.
File systems can offer journaling, which allows safe recovery after a system crash. A
journaled file system writes information twice: first to the journal, which is a log of
file-system operations, and then to its proper place in the ordinary file system. After a
crash, the system recovers to a consistent state by replaying a portion of that journal. In
contrast, a non-journaled file system needs to be checked by a utility such as chkdsk or
fsck. An alternative to journaling is soft updates, which prevent inconsistencies by
carefully ordering the update operations. ZFS and log-structured file systems also differ
from traditional journaled file systems in that they avoid inconsistencies by always writing
new copies of the data, avoiding in-place updates.
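The write-twice discipline of journaling can be sketched as a toy user-space routine: record the intent in a journal file and flush it before touching the real file, then delete the journal as the "commit". The file names are hypothetical, and real file systems do this inside the kernel at the block level; this only illustrates the ordering idea.

```c
/* Toy sketch of the journaling idea described above: record the
   intended change in a journal (and flush it) before updating the
   real file, so a crash between the two steps is recoverable. */
#include <stdio.h>
#include <string.h>

int journaled_write(const char *journal_path, const char *data_path,
                    const char *payload)
{
    FILE *j = fopen(journal_path, "w");
    if (!j) return -1;
    fprintf(j, "intent: write %zu bytes\n", strlen(payload));
    fflush(j);                  /* journal record reaches the OS first */
    fclose(j);

    FILE *d = fopen(data_path, "w");
    if (!d) return -1;
    fputs(payload, d);          /* then the in-place update */
    fclose(d);

    remove(journal_path);       /* commit: intent no longer needed */
    return 0;
}
```

If a crash occurs after the journal write but before the data write, recovery can replay the logged intent; if it occurs before the journal is flushed, neither change is visible, so the file system stays consistent either way.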
Most Linux systems support some or all of GFS, GFS2, NILFS, ext2, ext3, OCFS, OCFS2,
ReiserFS, and Reiser4. Linux also has full support for JFS and XFS, along with the NTFS
and FAT file systems.
Microsoft Windows supports NTFS, FAT12, FAT16, and FAT32. Of these four file
systems, NTFS is the most reliable and efficient, and Windows Vista can be installed only
on an NTFS volume. Windows Embedded CE 6.0 introduced a file system called exFAT,
which is suitable for flash drives. Mac OS X supports various file systems, of which HFS+
is the primary one; others include ZFS, NTFS, FAT16, and FAT32.
FAT12 is the most common file system found on floppy disks. Universal Disk Format and
ISO 9660 are the two common formats targeting DVDs and Compact Discs respectively.
(iv) Networks
Most present-day operating systems can use the TCP/IP networking protocols, which
means that one system can appear on a network to another and share resources such as
files, scanners, and printers through wired or wireless connections. Numerous operating
systems also support multiple vendor-specific legacy networking protocols, for example,
DECnet on systems from Digital Equipment Corporation, SNA on IBM systems, and
Microsoft-specific protocols on Windows. Protocols for specific tasks, such as NFS for file
access, may also be supported.
10
Introduction to Operating Systems | Operating Systems Building Blocks
Overview of Operating Systems
(v) Security
Most of the operating systems incorporate security, which is based on:
The operating system grants access to few resources, directly or indirectly, such as privileged
system calls, files on a local disk, personal information about users, and the services that are
offered by the programs operating on the system:
•
The operating system can differentiate between few requesters of those resources –
those who are allowed to access the resource and others who are forbidden. While few
systems may solely distinguish between these two kinds of users, few systems have a
username which acts as a requester identity. The category of the requesters are:

Internal security: It is an existing running program. In few systems, a
program running has no limitations, but the program comprises of an identity
which is used to check all the requests made for the resources.

External security: It is a new request that comes from outside the computer,
such as a network connection or login at a connected console. The
authentication process is done to verify the identity, usually, a username and a
password. Few other means of authentication, such as biometric data or
magnetic cards can be used instead. In few cases, particularly the connections
from the network, the resources are accessed without authentication.
•
In addition to the above feature, a system with a high security also provides auditing
options, which allows the tracking of requests to access a resource.
•
Security in OS has always been a concern due to the highly sensitive data being held
on the computers, both military, and commercial nature. The United States of
America Department of Defense (DoD) designed the TCSEC (Trusted Computer
System Evaluation Criteria), which provides a standard basic requirement for
assessing the effectiveness of security. This became important to the makers of the OS
because the TCSEC helps in evaluating, classifying, and selecting the computer
systems for the processing, storing, and retrieving of classified information.
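The grant-or-deny decision described above can be sketched as a lookup of the requester's identity against a per-resource access list. The sketch below is purely illustrative (the `acl` table, usernames, and resource names are all invented for the example); real operating systems perform this check inside the kernel.

```python
# Hypothetical sketch of an OS-style access check: each resource has a
# set of identities that are allowed to use it; everyone else is denied.
def check_access(acl, requester, resource):
    """Return True if `requester` may access `resource`."""
    return requester in acl.get(resource, set())

# Invented example table: usernames and resources are illustrative only.
acl = {
    "/dev/printer": {"alice", "bob"},  # both users may print
    "payroll.db":   {"alice"},         # only alice may read payroll data
}

print(check_access(acl, "bob", "/dev/printer"))  # True
print(check_access(acl, "bob", "payroll.db"))    # False
```

An unknown requester (one not in any list) is denied by default, which mirrors the usual fail-closed design of OS permission checks.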
Self-assessment Questions
5) The role of the operating system is
a) Holds the process
b) Starts the process
c) Shares the process
d) None of the above
6) Memory management is used to
a) Allocate memory
b) De-allocate memory
c) Both a & b
d) None of the above
1.1.3 Operating System Structure
An OS provides an environment for the execution of programs. Internally, operating
systems vary in their makeup, because they are organised along different lines. However,
there are several commonalities, which we discuss here.
One of the most important aspects of an OS is the ability to multiprogram. A single user
generally cannot keep either the I/O devices or the CPU busy all the time.
Multiprogramming increases CPU utilisation by organising jobs so that the CPU always has
one to execute.
The idea is as follows: the OS keeps several jobs in memory simultaneously, as shown in
Fig. 1.1.1. This set of jobs can be a subset of the jobs in the job pool (which contains all the
jobs that enter the system), because the number of jobs that can be kept simultaneously in
memory is usually smaller than the number that can be kept in the job pool. The OS picks
one of the jobs in memory and begins executing it. Eventually, the job may have to wait for
some task, such as an I/O operation, to complete. In a non-multiprogrammed system, the
CPU would sit idle. In a multiprogrammed system, the OS simply switches to another job
and executes it. When that job needs to wait, the CPU switches to yet another job, and so
on. As long as at least one job needs to execute, the CPU is never idle.
Figure 1.1.1: Memory Layout for a Multiprogramming System (the operating system occupies the bottom of memory at address 0, with Jobs 1 to 4 filling the rest up to 512 M)
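The effect of multiprogramming on CPU utilisation can be illustrated with a toy simulation (not from the text; the burst and wait lengths are invented). Each job repeatedly runs a CPU burst and then waits for I/O: with one job in memory the CPU idles during every I/O wait, while with two jobs the OS switches to the other job instead.

```python
def busy_ticks(jobs, ticks):
    """Simulate `ticks` clock ticks. `jobs` is a list of
    (cpu_burst, io_wait) pairs. Returns how many ticks the CPU was busy."""
    state = [{"left": b, "wait": 0, "burst": b, "io": w} for b, w in jobs]
    busy = 0
    for _ in range(ticks):
        ready = None
        for j in state:
            if j["wait"] > 0:          # job is blocked on I/O this tick
                j["wait"] -= 1
                if j["wait"] == 0:     # I/O done: next burst becomes ready
                    j["left"] = j["burst"]
            elif ready is None:        # first runnable job gets the CPU
                ready = j
        if ready is not None:
            busy += 1
            ready["left"] -= 1
            if ready["left"] == 0:     # burst finished: start the I/O wait
                ready["wait"] = ready["io"]
    return busy

print(busy_ticks([(2, 4)], 12))           # one job: CPU busy 4 of 12 ticks
print(busy_ticks([(2, 4), (2, 4)], 12))   # two jobs: CPU busy 8 of 12 ticks
```

Doubling the number of jobs in memory doubles the CPU utilisation here, exactly the effect Fig. 1.1.1's memory layout is meant to enable.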
Multiprogrammed systems provide an environment in which the various system resources
are used effectively, but they do not provide for user interaction with the computer system.
Time-sharing (or multitasking) is a logical extension of multiprogramming: the CPU still
executes multiple jobs by switching among them, but the switches occur so frequently that
users can interact with each program while it is running.
Time-sharing requires an interactive computer system that provides direct communication
between the system and the user. The user gives instructions to a program or to the OS
directly, using an input device such as a mouse or keyboard, and waits for immediate results
on the output device. The response time is usually less than one second.
A time-shared OS permits several users to share the computer simultaneously. Since each
command or action in a time-shared system is short, each user requires only a little CPU
time. As the system switches rapidly from one user to the next, each user gets the impression
that the entire system is dedicated to their use, even though it is being shared among many
users.
A time-shared operating system uses CPU scheduling and multiprogramming to give each
user a small portion of a time-shared computer. Each user has at least one separate program
in memory; a program loaded into memory and executing is called a process. When a
process executes, it typically runs for only a short time before it either finishes or needs to
perform I/O. The I/O can be interactive; that is, input comes from a user's keyboard, mouse,
or another device, and output goes to the display for the user. Since interactive I/O typically
runs at "people speeds," it may take a long time to complete. Input, for instance, may be
bounded by the user's typing speed; seven characters per second is fast for a human but far
too slow for a computer. Rather than let the CPU sit idle while this interactive input takes
place, the OS rapidly switches the CPU to the program of some other user.
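The switching behaviour of a time-shared system is commonly implemented as round-robin scheduling: each process gets the CPU for one short time slice (a quantum), then goes to the back of the queue. The following is a minimal sketch; the quantum and burst lengths are invented for illustration.

```python
from collections import deque

def round_robin(burst_times, quantum):
    """Return the order in which process ids receive the CPU when each
    process i needs burst_times[i] units and slices are `quantum` long."""
    queue = deque(enumerate(burst_times))  # (process id, remaining time)
    order = []
    while queue:
        pid, remaining = queue.popleft()
        order.append(pid)                  # pid runs for one quantum
        if remaining > quantum:            # not finished: back of the queue
            queue.append((pid, remaining - quantum))
    return order

# Three processes needing 3, 5, and 2 time units, with a quantum of 2:
print(round_robin([3, 5, 2], quantum=2))   # [0, 1, 2, 0, 1, 1]
```

With a quantum short enough (tens of milliseconds on real systems), every user's process reaches the front of the queue often enough that the system feels dedicated to each user.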
Multiprogramming and time-sharing require several jobs to be kept in memory
concurrently. Since main memory is usually too small to accommodate all of them, the jobs
are initially kept on disk in the job pool, which consists of all processes residing on disk
awaiting allocation of main memory. If several jobs are ready to be brought into memory
and there is not enough space for all of them, the system must choose among them; making
this decision is known as job scheduling. When the OS selects a job from the job pool, it
loads that job into memory for execution. Holding several programs in memory at the same
time requires memory management. In addition, if several jobs are ready to run at the same
time, the system must choose among them; making this decision is known as CPU
scheduling.
In a time-sharing system, the OS must ensure a reasonable response time, which can be
achieved through swapping, where processes are swapped in and out of main memory to the
disk. Another way to achieve this goal is virtual memory, a technique that allows the
execution of a process that is not entirely in memory. The benefit of virtual memory is that
it lets users run programs larger than the actual physical memory. Moreover, it abstracts
main memory into a large, uniform array of storage, separating logical memory from
physical memory.
Time-sharing systems also need to provide a file system, which resides on a collection of
disks; therefore, disk management is essential as well. Time-sharing systems also protect
resources from inappropriate use. The system must provide mechanisms for
communication and job synchronisation, to ensure orderly execution and to ensure that
jobs do not become stuck in a deadlock.
The structure shown in Fig. 1.1.2 is a simple structure: in MS-DOS, applications can bypass
the operating system and access the hardware directly. Operating systems like UNIX and
MS-DOS began without well-defined structures. Because there is no separation of execution
modes (no distinct user and kernel mode), errors in application programs can cause the
whole system to crash.
Figure 1.1.2: Simple Structure (MS-DOS layering, top to bottom: application program, resident system program, MS-DOS device drivers, ROM BIOS device drivers)
Self-assessment Questions
7) The CPU and the I/O devices can be kept busy all the time in
a) Multiprogramming
b) Single programming
c) Multiprocessing
d) Single processing
8) An extension of multiprogramming is ________
a) Resource sharing
b) Task sharing
c) Time sharing
d) None of the above
1.1.4 Types of Operating System Structure
(i) Monolithic Approach
In the monolithic approach, the functionality of the OS is invoked through simple function
calls within the kernel, which is one large program. Device drivers are loaded into the
running kernel and become part of it. Fig. 1.1.3 shows the structure of a monolithic kernel,
as used in UNIX and Linux systems.
Figure 1.1.3: Monolithic Kernel
(ii) Layered Approach
In the layered approach, components are grouped into layers in a hierarchical structure, so
that the functions and services of the lower layers support the functions and services of the
higher layers. Breaking the system into layers, as shown in Fig. 1.1.4, helps developers
improve modularity. It also gives developers the freedom to alter a layer's inner workings, as
long as its external interface does not change. The lowest layer is the hardware, and the top
layer is the user interface (UI).
Figure 1.1.4: Layered Approach
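The discipline that each layer uses only the services of the layer directly beneath it can be sketched as a chain of functions. This is an illustration only; the layer names and the data they return are invented.

```python
# Bottom layer: the "hardware" hands back raw bytes.
def hardware_read():
    return b"raw-bytes"

# Device-driver layer: uses only the hardware layer below it.
def driver_read():
    return hardware_read().decode("ascii")

# File-system layer: uses only the driver layer below it.
def filesystem_read():
    return "file<" + driver_read() + ">"

# Top layer (user interface): uses only the file-system layer below it.
def ui_show():
    return "display:" + filesystem_read()

print(ui_show())  # display:file<raw-bytes>
```

Because `ui_show` never touches `hardware_read` directly, the hardware layer's internals could change freely without affecting the UI, which is exactly the modularity benefit described above.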
(iii) Microkernels
A microkernel is a piece of software that contains only the minimum functions and
mechanisms needed to run an operating system: enough to implement the most basic
functions of a system. Because it imposes few policies, it permits the other parts of the OS to
be implemented conveniently outside the kernel, maximising implementation flexibility.
Fig. 1.1.5 shows what a microkernel structure looks like.
Figure 1.1.5: Microkernel Structure
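A defining microkernel mechanism is message passing: services such as the file system run outside the kernel, and the kernel merely delivers messages between them. The sketch below is a single-process toy (the service name and message format are invented), not a real kernel.

```python
from queue import Queue

# The "kernel" provides only one mechanism: delivering messages.
mailbox = Queue()

def send(dest, msg):
    """Kernel primitive: queue a message for a user-space service."""
    mailbox.put((dest, msg))

def deliver(services):
    """Kernel loop: hand each queued message to its destination service."""
    while not mailbox.empty():
        dest, msg = mailbox.get()
        services[dest](msg)

# User-space services registered with the kernel (names invented).
handled = []
services = {"fs": lambda m: handled.append("fs handled: " + m)}

send("fs", "read /etc/hosts")   # an application asks the file-system service
deliver(services)
print(handled)  # ['fs handled: read /etc/hosts']
```

All policy (what the file-system service does with the request) lives outside the "kernel", which only moves messages; that is the flexibility the text attributes to microkernels.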
(iv) Client-Server Model
The client-server model is a distributed application structure that partitions tasks or
workloads between the providers of a resource or service, known as servers, and the service
requesters, known as clients. Clients and servers often communicate over a computer
network on separate hardware, but both client and server may also reside on the same
system. Fig. 1.1.6 shows what a client-server model looks like.
Figure 1.1.6: Client-Server model
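As noted above, a client and a server can live on the same machine and talk over a socket. The following sketch runs the server in a thread and has the client send one request; the uppercase-echo "protocol" is invented purely for the example.

```python
import socket
import threading

def serve(sock):
    """Toy server: answer one request with its uppercase form."""
    request = sock.recv(1024)
    sock.sendall(request.upper())
    sock.close()

# socketpair() gives two connected endpoints on the same system.
server_end, client_end = socket.socketpair()
server = threading.Thread(target=serve, args=(server_end,))
server.start()

client_end.sendall(b"hello server")   # client sends a request
reply = client_end.recv(1024)         # ...and waits for the reply
server.join()
client_end.close()

print(reply)  # b'HELLO SERVER'
```

Replacing `socketpair()` with a TCP socket bound to another host is all it would take to move the server to separate hardware; the request/reply structure stays the same.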
(v) Virtual Machines
A virtual machine is often described as a software computer: like a physical computer, it
runs an operating system and applications. A virtual machine is defined by a set of
specification and configuration files and is backed by the physical resources of the host.
Each virtual machine has virtual devices that provide the same functionality as physical
hardware, with additional benefits in flexibility, manageability, and security.
(vi) Exokernels
Exokernel is an operating system kernel developed by the MIT Parallel and Distributed
Operating Systems group. Conventional operating systems present hardware resources to
applications through high-level abstractions such as the file system.
Figure 1.1.7: Exokernel Example
Consider the example shown in Fig. 1.1.7: the typical abstraction of a file. Files, as the
application sees them, do not exist on disk; the disk contains only disk sectors. The OS
abstracts the reality of the disk to create the illusion of files and a filesystem. Normally,
security is an additional service applied at this level, combined with the abstraction.
On an exokernel, security is applied at the unabstracted hardware level: in this example, to
the disk sectors themselves.
Self-assessment Questions
9) If various jobs are ready to be brought into the memory and if there is no space to
accommodate all of them, then the system needs to choose among them. This
decision-making is known as ________
a) Job Scheduling
b) CPU Scheduling
c) System Process
d) Job Pool
10) For any program to execute, it should be in the main memory. State True or False?
a.) True
b.) False
11) Which of the following is a piece of software or even code that contains minimum
functions and aspects needed to run an operating system?
a.) Virtual Machine
b.) Microkernel
c.) Client-Server
d.) Exokernel
Summary
o An operating system acts as a resource manager: it internally manages all the
resources and assigns them to particular programs as the task requires.
o An operating system (OS) is system software that manages computer hardware
and software resources and provides common services for computer programs.
o A boot program loads the operating system when the computer is switched on.
The operating system then controls all the programs on the computer.
o Application programs request services from the operating system through a
specified Application Programming Interface (API).
o With the help of a command line or a Graphical User Interface (GUI), users can
interact directly with the operating system.
o The main goal of an operating system is to provide the link between the user and
the hardware.
o An operating system is a resource manager because it manages all the system
resources, such as memory, processor, and I/O devices.
o Booting is the process of starting the computer and loading the operating system
so the computer is ready to run. Booting tests the system and hands control to
the OS.
o Before executing a program, the OS loads it into memory. The operating system
provides the facility for loading programs into memory and executing them.
o A Graphical User Interface (GUI) provides a visual environment for interacting
with the computer. It uses windows, icons, menus, and other graphical objects to
issue commands.
o A Command-Line Interface (CLI) provides an interface for interacting with the
computer by typing commands.
o An operating system also acts as an extended machine, allowing files to be
shared between multiple users and providing graphical environments and
various languages for communication.
o The various components of an OS are process management, memory
management, file and disk management, networking, and security.
o Most present-day operating systems are capable of using the TCP/IP networking
protocols, which means that one system can appear to another on a network and
share resources such as files, scanners, and printers through wireless or wired
connections.
o In some systems, a running program has no limitations, but usually a program
carries an identity that is used to check all the requests it makes for resources.
o Multiprogramming increases CPU utilisation by organising jobs so that the CPU
always has one to execute.
Terminal Questions
1. What is an Operating System? Explain the functions of an Operating System.
2. List the various components of an Operating System.
3. List the various types of OS Structures.
Answer Keys
Self-assessment Questions
Question No.    Answer
1               a
2               a
3               a
4               b
5               b
6               b
7               a
8               c
9               a
10              a
11              b
Activity
Activity Type: Offline/Online
Duration: 30 Minutes
Description:
Chart the evolution of OS using a timeline. Also chart the evolution of windows OS.
Case Study
A smartphone operating system is the operating system that runs a smartphone, tablet,
PDA, or other digital mobile device. Modern mobile operating systems combine the
features of a personal-computer operating system with other features, including a touch
screen, cellular connectivity, Bluetooth, WiFi, GPS navigation, camera, video camera,
speech recognition, voice recording, music playback, near-field communication, and an
infrared blaster.
The Smartphone operating system (OS) movement has grown to include competitors such as
Google, Microsoft, Apple, Symbian, and Palm. Although these operating system platforms
have come a long way since their inception, none of these companies provides an OS that is
ideal for all users. Each claims that its platform performs best in all endeavours and will
certainly not advertise any weakness in its system. This makes it difficult for end users to
know which platform is best suited to their needs.
ANDROID
Android is intended to revolutionise the mobile market by bringing the internet to the cell
phone and allowing its use in the same way as on the PC. The term "Android" has its origin
in the Greek word andr-, meaning "man or male", and the suffix -eides, meaning "alike or of
the species". Together, these mean roughly "being human".
Android is a comprehensive operating environment based on the Linux kernel, and it is also
a layered system. The applications layer is the site of all Android applications, including an
email client, SMS program, maps, browser, contacts, and others. All applications are written
in the Java programming language. The application framework layer defines the Android
application framework.
All Android applications are built on the application framework. The Android application
framework includes:
•
A rich and extensible set of Views that can be used to build an application with
beautiful user interface, including lists, grids, text boxes, buttons, and even an
embeddable web browser.
•
A set of Content Providers that enable applications to access data from other
applications (such as Contacts), or to share their own data.
•
A Resource Manager that provides access to non-code resources such as localised
strings, graphics, and layout files.
•
A Notification Manager that enables all applications to display custom alerts in the
status bar.
•
An Activity Manager that manages the lifecycle of applications and provides a
common navigation back stack.
IPHONE OS
The iPhone OS is a derivative of Darwin, the open-source, POSIX-compliant operating
system developed by Apple Inc. The current version (v2.2.1) is used in Apple-only hardware
products, including the iPhone and iPod Touch. Though a relatively recent entrant to the
mobile market compared with other mobile OSs, the iPhone OS has seen a rapid rise in
popularity and garnered a large, dedicated user base, primarily due to its innovative user
interface and the availability of third-party applications.
SYMBIAN
The Symbian OS was designed specifically for mobile devices. It has a very small memory
footprint and low power consumption. It is an open OS, enabling third-party developers to
write and install applications independently of the device manufacturers. An extensive C++
API is provided, which allows access to services such as telephony and messaging, in
addition to basic OS functionality. The Symbian OS was designed so that applications can
run for years without losing user data, and the OS can run on more than one hardware
platform.
WINDOWS MOBILE
This platform is based on Windows CE (WinCE), a compact OS designed specifically for
pervasive devices. It focuses on providing a consistent interface for applications across
various hardware platforms and emphasises portability by offering the Win32 API. These
hardware platforms include Pocket PCs, smartphones (as discussed here), Portable Media
Centres, and even onboard computers in automobiles. The Windows Mobile platform was
designed for flexibility and with the developer in mind. It supports pre-emptive
multitasking, with 256 priority levels for threads and up to 32 processes, and all of the
standard mutual-exclusion and synchronisation methods you would expect from a desktop
PC. This functionality makes it well suited to a smartphone, because users typically demand
multitasking and want to be as productive as possible.
PALM OS
Palm OS Garnet (v5.4.x) is a proprietary operating system originally developed by Palm Inc.
In its early versions (pre-Garnet), the Palm OS was primarily used in Palm-developed
Personal Digital Assistant (PDA) mobile hardware units. At one point, Palm PDAs running
the Palm OS held 85% of the market share in the mobile device market. In recent years,
however, Palm's market share has declined, mostly due to the stagnant nature of the OS's
development, and Palm has yielded the leading position to Symbian.
Why Android
Andy Rubin, Google's director of mobile platforms, commented, "There should be nothing
that users can access on their desktop that they can't access on their cell phone." With this
vision, the popularity of smartphones running Google's Android operating system has risen
continuously in the 21st century. Some of the advantages of Android over other
smartphone operating systems are listed below.
•
The ability to run tens of thousands of apps, just like the iPhone, but with a choice of
phone models: with or without a physical keyboard, and in a range of shapes, colours,
phone sizes, screen sizes, manufacturers, features, and carriers. No more monopoly by
one company on one carrier.
•
Android allows developers and programmers to develop apps (applications) in what is
known as "applications without borders".
•
Android is beginner-friendly and supremely customisable; the more you use Google's
services, the more Android shines. Android has the majority of the market, and the user
experience is improving quickly.
•
Google Now on Android checks your location and calendar to automatically show you
relevant information, e.g., traffic to work, cafes, and flight details, and lets you search
with natural voice commands, replying with natural speech.
•
Android is open source. This means that it is free and anyone can use it; anyone can
modify and improve the software, making it more effective and personalised.
Applications are freely made and designed for Android by numerous app developers all
over the world, and these apps are offered for free on the Android market place. This
openness has also attracted mobile phone producers to manufacture phones using the
Android OS.
•
Android is not just an operating system designed for individuals; it also fulfils business
needs. The Android market place offers numerous apps specially designed to manage a
business, so you can keep a closer eye on your business processes on the go.
•
Android also offers an OS for tablets, ending the monopoly of Apple's iPad in the
market. Tablets from different manufacturers now run the Android OS, giving stiff
competition to the iPad.
Conclusion
The increasing use of smartphones by individuals of all ages has brought stiff competition
between the different smartphone OSs. Recent research and reports reveal that Android has
outshone its competitors and become the most widely used OS among smartphone and
tablet users. It is the mobile OS from Google that has turned heads around the globe;
Android has come to dominate in the last few years, and future forecasts look ever more
exciting for the Android OS.
Discussion Questions:
1. Analyse the case and determine which operating system is most effective for a mobile device.
2. Explain why Android has outshone its competitors.
Bibliography
e-References
•
Operating Systems. Retrieved 28 Sep, 2016 from https://it325blog.files.wordpress.com/2012/09/operating-system-concepts-7-th-edition.pdf
•
Evolution of Operating System. Retrieved 28 Sep, 2016 from http://www.studytonight.com/operating-system/evolution-of-os
•
Operating System Structures. Retrieved 28 Sep, 2016 from http://faculty.salina.k-state.edu/tim/ossg/Introduction/struct.html
Image Credits
•
Figure 1.1.1: http://pheryozzone.blogspot.in/2011/04/memory-layout-for-multiprogrammed.html
•
Figure 1.1.2: https://www.cs.uic.edu/~jbell/CourseNotes/OperatingSystems/2_Structures.html
•
Figure 1.1.3: http://faculty.salina.k-state.edu/tim/ossg/Introduction/struct.html
•
Figure 1.1.4: https://jan.newmarch.name/OS/l7_2.html
•
Figure 1.1.5: http://faculty.salina.k-state.edu/tim/ossg/Introduction/struct.html
•
Figure 1.1.6: https://jan.newmarch.name/OS/l7_2.html
•
Figure 1.1.7: http://wiki.osdev.org/Exokernel
External Resources
•
Silberschatz, A., Galvin, P. B., & Gagne, G. (2005). Operating System Concepts. Hoboken, NJ: J. Wiley & Sons.
•
S. (2011). Operating System. New Delhi: Global Vision Publishing.
•
Stuart, M. (1974). Operating System. London: McGraw-Hill International Book Company.
Video Links
Introduction to OS: https://www.youtube.com/watch?v=5AjReRMoG3Y
History of OS: https://www.youtube.com/watch?v=BTQ6HtCkSBQ
Microsoft Windows History: https://www.youtube.com/watch?v=hAJm6RYTIro
Operating System Services and System Calls
Chapter Table of Contents
Chapter 1.2
Operating System Services and System Calls
Aim .................................................................................................................................................. 31
Instructional Objectives..................................................................................................................... 31
Learning Outcomes ............................................................................................................................ 31
1.2.1 Operating System Services ...................................................................................................... 32
Self-assessment Questions ....................................................................................................... 36
1.2.2 System Calls .............................................................................................................................. 36
(i) System Calls for Process Management ............................................................................. 37
(ii) System Calls for File Management ................................................................................... 38
(iii) System Calls for Device Management ............................................................................ 38
(iv) System Calls for Information Management ................................................................... 39
(v) System Calls for Directory Management......................................................................... 40
(vi) Miscellaneous System Calls.............................................................................................. 40
Self-assessment Questions ....................................................................................................... 41
1.2.3 System Programs ...................................................................................................................... 41
Self-assessment Questions ....................................................................................................... 43
1.2.4 Operating System Design and Implementation ................................................................... 43
Self-assessment Questions ....................................................................................................... 44
1.2.5 System Boots ............................................................................................................................. 45
Self-assessment Questions ....................................................................................................... 46
Summary ............................................................................................................................................. 47
Terminal Questions............................................................................................................................ 48
Answer Keys........................................................................................................................................ 49
Activity................................................................................................................................................. 50
Case Study ........................................................................................................................................... 51
Bibliography ........................................................................................................................................ 55
e-References ........................................................................................................................................ 55
External Resources ............................................................................................................................. 55
Video Links ......................................................................................................................................... 55
Aim
To acquaint students with the knowledge of Operating System Services and System
Calls
Instructional Objectives
After completing this chapter, you should be able to:
•
Describe the services of an Operating System
•
Explain the working of System calls
•
Explain how an operating system is installed and customised
•
Elaborate the system boot process
Learning Outcomes
At the end of this chapter, you are expected to:
•
Summarise the services provided by an Operating System
•
Outline the working of System calls
•
Identify how an operating system is installed and customised
•
Outline the system boot process
1.2.1. Operating System Services
An operating system renders its services to both users and programs.
•
It provides programs with an environment in which to execute.
•
It provides users with the means to run programs in a convenient and efficient
way.
The following are a few common services, shown in Fig. 1.2.1, that an operating system
provides:
Figure 1.2.1: Operating System Services
User Interface: Almost all operating systems have a user interface, which can take various
forms.
•
The Command-Line Interface (CLI) is one form of user interface. It uses text
commands and a method for entering them.
•
The Batch Interface is another form of user interface. Here, commands, and
directives to control them, are entered into files, and those files are executed.
Program Execution: The purpose of a computer system is to allow users to execute
programs efficiently. The operating system provides an environment in which the user can
comfortably run such programs. The user need not worry about memory allocation or
de-allocation, because the OS is fully responsible for these tasks.
Running a program requires first loading it into RAM (Random Access Memory) and
assigning it CPU time for execution. The OS also accomplishes other essential jobs, such as
allocation and de-allocation of memory and CPU scheduling.
Following are the important activities of an operating system concerning program
management:
•
Loads a program into memory
•
Executes the program
•
Manages the execution of the program
•
Provides a mechanism for process synchronisation
•
Provides a mechanism for process communication
•
Provides a mechanism for deadlock handling
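The loading and execution activities listed above can be observed from user space with Python's subprocess module, which asks the OS to load a separate program, schedule it, and report its exit status. The child program here is a one-liner chosen purely for illustration.

```python
import subprocess
import sys

# Ask the OS to load and execute a separate program (a Python one-liner),
# then wait for it to finish and collect its output and exit status.
result = subprocess.run(
    [sys.executable, "-c", "print('child finished')"],
    capture_output=True,
    text=True,
)

print(result.stdout.strip())  # child finished
print(result.returncode)      # 0 (normal termination)
```

Behind this call, the OS allocates memory for the child, loads the program image, gives it CPU time, and reclaims its resources when it exits, exactly the activities listed above.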
I/O Operation: Every program needs input and, after processing the input given by the
user, generates output. This process makes use of I/O devices. The operating system hides
from the user the details of the underlying I/O hardware and provides the I/O functions
needed to execute programs. User-level programs cannot perform I/O on their own; the
operating system must provide it, and it handles the communication between user
programs and device drivers.
•
An I/O operation means a read or write operation on a specific I/O device or file.
•
The operating system gives access to an I/O device when it is required.
File System Manipulation: While using a computer, a user usually needs to perform various file operations such as opening, saving, copying, and deleting files on the storage disk.
Thus the operating system provides the file-system manipulation service to user programs so they can finish these tasks easily. This service is carried out by the 'Secondary Storage Management' component of the operating system.
Following are the important activities of an operating system related to file management:
• A program may need to read a file or write a file
• The operating system permits certain operations on the file
• Permissions differ: read-only, read-write, denied, and so on
• An operating system provides an interface to the user to create/delete files
• An operating system provides an interface to the user to create/delete directories
• An operating system provides an interface to create a backup of the file system
Communication: The operating system enables processes to communicate, for example through shared memory. In a multitasking environment, processes need to interact with each other and exchange information. These processes form a hierarchical structure: the main process is the parent process, and the processes created under it are the child processes.
Following are the important activities of an operating system related to communication:
• Two processes may need to exchange data between them.
• The processes can reside on one computer or on several computers connected over a network.
• Communication happens either by Message Passing or by Shared Memory.
Error Detection: The operating system also deals with hardware problems. To avoid them, it constantly monitors the system to identify errors and resolves them. An important function of an operating system is to detect errors related to the I/O devices. After identifying an error, the operating system takes an appropriate action to ensure consistent computing.
User programs cannot manage error detection and correction because it involves observing the entire computing process. A user program can invoke the corresponding operation of the operating system only when given authorisation.
The important activities of an operating system concerning error handling are:
• The OS continuously checks for possible errors.
• The operating system takes the right action to ensure accurate and consistent computing.
Resource Allocation: During multitasking, an operating system plays a vital role in allocating the required resources to each process for better utilisation. The resources include the CPU, main memory, tape drives, secondary storage, etc. For this purpose, algorithms such as process scheduling, disk scheduling, and CPU scheduling are employed.
The important activities of an operating system concerning resource management are:
• The OS manages the various resources using schedulers.
• For better CPU utilisation, it uses CPU scheduling algorithms.
Accounting: The operating system maintains the accounts for all the resources that each
process or user accesses. In a multitasking process, accounting increases the efficiency of the
system with the distribution of resources to each process.
Protection System: If a computer system has multiple users and permits the concurrent execution of various processes, then the processes must be protected from each other's activities.
The prominent activities of an operating system concerning protection are:
• The OS ensures that all access to system resources is controlled.
• The OS ensures that external I/O devices are protected from invalid access attempts.
• The OS provides authentication for each user, for example using passwords.
Self-assessment Questions
1) The OS assures that the external I/O devices are kept secure from invalid access
attempts. State True or False?
a) True
b) False
2) In a multitasking process, accounting decreases the efficiency of the system with the
distribution of resources to each process. State True or False?
a) True
b) False
3) What is the full form of CLI?
a) Command Line Interface
b) Common Line Interface
c) Common Language Interface
d) Command Language Interface
4) In which form does the operating system interact with various types of processes?
a) Virtual Memory
b) GUI
c) Shared Memory
d) Secondary Memory
5) Where is the program loaded to make it run?
a) CPU
b) RAM
c) Niche Memory
d) ROM
1.2.2. System Calls
To perform an operation, a program needs to request a service from the system. Such a request is made through a special call known as a System Call. A System Call is the mechanism by which a computer program asks for a service from the kernel of the operating system on which it executes, as shown in Fig. 1.2.2. This may involve hardware-related services, the creation and execution of new processes, and communication with integral kernel services such as process scheduling. System Calls provide an essential interface between a process and the operating system: the System Call acts as the interface to the operating system's services.
When a user opens the system for the first time, the system is in user mode. When the user requests a service, the mode switches to Kernel Mode, which listens to the user's request, processes it, and returns the results.
Figure 1.2.2: System Calls
(i) System Calls for Process Management
A running program must be able to stop its execution either normally or abnormally. When the execution of a program ends abnormally, one can take a memory dump and examine it using a debugger.
• fork(): Creates a new process (spawns a child process)
• exec(): Runs a new program, replacing the current process image
• wait(): Makes the parent suspend execution until a child terminates
• exit(): Terminates the process
• getpid(): Returns the unique process id
• getppid(): Returns the parent process id
• nice(): Biases the scheduling priority of the currently running process
Almost all operating systems provide System Calls for process management. Some of these operations (for example, raising a process's priority) cannot be executed by a normal user; superuser status is necessary for them.
A few System Calls for process management in Linux are:
• getpriority(): Gets the highest priority (lowest nice value) among a group of processes
• setpriority(): Sets the priority for a group of processes
• sched_yield(): Relinquishes the processor voluntarily, without blocking
• sched_rr_get_interval(): Gets the time quantum for the RR (Round Robin) scheduling policy
(ii) System Calls for File Management
System Calls for file management include operations such as create, delete, read, write, reposition, and close a file. There is also a need to manipulate file attributes, i.e., get and set file attributes. Most of the time, the OS provides an Application Program Interface (API) for these System Calls.
For Windows, a few examples are:
• CreateFile(): Creates or opens a file
• ReadFile(): Reads from a file
• WriteFile(): Writes to a file
• CloseHandle(): Closes and invalidates the specified object handle
For Unix, a few examples are:
• open(): Opens a file, possibly creating the file or device
• read(): Reads from a file
• write(): Writes to a file
• close(): Closes a file descriptor so that it no longer refers to any file and may be reused
(iii) System Calls for Device Management
A process may require several resources to execute, which the OS grants if they are available. These resources can be regarded as devices: some are abstract (for example, a file) and some are physical (for example, a video card). User programs request a device and release it after use. As with files, we can read from, write to, and reposition a device.
For Windows, a few examples are:
• SetConsoleMode(): Sets the input mode of a console's input buffer or the output mode of a console screen buffer
• ReadConsole(): Reads keyboard input from a console's input buffer
• WriteConsole(): Writes a character string to a console screen buffer, beginning at the current cursor location
For Unix, a few examples are:
• ioctl(): Manipulates the underlying device parameters of special files
• read(): Reads data in bytes
• write(): Writes data from a buffer
(iv) System Calls for Information Management
A few System Calls transfer information between the operating system and the user program, for example the date or time. The operating system also maintains information about all its processes and provides System Calls to report this information.
For Windows, a few examples are:
• GetCurrentProcessId(): Retrieves the process identifier of the calling process
• SetTimer(): Creates a timer with the specified time-out value
• Sleep(): Suspends the execution of the current thread until the time-out interval elapses
For Unix, a few examples are:
• getpid(): Gets the process identifier
• alarm(): Sets an alarm clock for delivery of a signal
• sleep(): Suspends the program for a specified time
(v) System Calls for Directory Management
Given below are a few system calls for directory management:
• mkdir(name, mode): Creates a new directory. The mode has three octal digits: the first represents the owner, the second the group, and the third other users. The digit 7 grants all three permissions (read, write, and execute), 6 is read and write, 5 is read and execute, 4 is read only, 3 is write and execute, 2 is write only, 1 is execute only, and 0 is no permission.
• rmdir(name): Removes an empty directory
• link(name1, name2): Creates a new directory entry, name2, pointing to name1
• unlink(name): Removes a directory entry
• mount(special, name, flag): Mounts a file system. A call to mount() performs one of a number of general types of operation, depending on the bits specified in the mount flags.
• umount(special): Unmounts a file system
(vi) Miscellaneous System Calls
Given below are a few miscellaneous system calls:
• chdir(dirname): Changes the working directory
• chmod(name, mode): Changes a file's protection bits. On Linux and other Unix-like operating systems, each file has a set of rules defining who can access it and how; these rules are called file permissions or file modes.
• kill(pid, signal): Sends a signal to a process
• seconds = time(&seconds): Gets the elapsed time since Jan 1, 1970
Self-assessment Questions
6) What is kill (pid, signal) System Call used for?
a) Change the working directory
b) Send a signal to a process
c) Unmount a file system
d) Changes a file’s protection bits
7) getppid() helps to find the unique process id. State True or False?
a) True
b) False
8) Which of the following is a system call for Directory management?
a) getpid()
b) getppid()
c) kill(pid, signal)
d) mount(special, name, flag)
9) A normal user can execute calls relating to process management. State True or False?
a) True
b) False
10) When a user opens the system for the first time, then the system is in the user mode.
State True or False?
a) True
b) False
11) What is API?
a) Application Processing ID
b) Application Program ID
c) Application Prime ID
d) Application Program Interface
1.2.3. System Programs
System Programs provide a convenient environment for program development and execution. Some of these programs are simply user interfaces to System Calls; others are considerably more complex.
The various categories of System Programs are:
1. File management: These programs perform operations such as:
a. Create
b. Delete
c. Copy
d. Rename
e. Print
f. Dump
g. List
All of these are used to manage files and directories.
• Status information: Some programs ask the system for the date, time, amount of available memory or disk space, number of users, or similar status information. Others are more complex, providing detailed performance, logging, and debugging information. Typically, these programs format and print their output to the terminal, to files, or to other output devices, or display it in a window of the GUI. Some systems also support a registry, which is used to store and retrieve configuration information.
• File modification: Several text editors may be available to create and modify the content of files stored on the hard disk or other storage devices. There may also be special commands to search the contents of files or perform transformations of the text.
• Programming-language support: Compilers, assemblers, debuggers, and interpreters for common programming languages such as C, C++, Java, Visual Basic, and PERL are often provided along with the operating system.
• Program loading and execution: Once a program is compiled and assembled, it must be loaded into memory to be executed. The system may provide absolute loaders, relocatable loaders, linkage editors, and overlay loaders. Debugging systems for either machine language or higher-level languages are needed as well.
• Communication: These programs provide the mechanism for creating virtual connections among users, processes, and computer systems. They allow users to send messages to one another's screens, browse web pages, send electronic-mail messages, log in remotely, or transfer files from one machine to another.
• Application programs: Apart from system programs, most operating systems come with programs that are useful for solving common problems or performing common operations. These include web browsers, word processors and text formatters, spreadsheets, database systems, compilers, plotting and statistical-analysis packages, and games. These programs are referred to as application programs or system services.
Self-assessment Questions
12) The file management operations include
a) Create
b) Print
c) List
d) All of the above
13) The programs under system services are
a) Word processors
b) Paging
c) OS Behaviour
d) Booting
1.2.4. Operating System Design and Implementation
Design goals: At the highest level, the design of the system is affected by the choice of hardware and by the type of system: batch, time-shared, single-user, multiuser, distributed, real-time, or general purpose. Beyond this highest level, the requirements are much harder to specify. One can, however, divide them into two basic groups: user goals and system goals. To the user, the system should be convenient, easy to learn and to use, reliable, safe, and fast.
A similar set of requirements can be defined by the people who design, create, manage, and operate the system: it should be easy to design, implement, and maintain, as well as flexible, reliable, error-free, and efficient. These requirements are imprecise and can be interpreted in various ways.
Specifying and designing an operating system is a highly creative and difficult task. Although there is no clear recipe for doing it, we can adapt some general principles from the field of software engineering.
Implementation:
After designing the operating system, we come to its implementation:
• Early operating systems were written in assembly language, but now C/C++ is commonly used. The first system not written in assembly language was probably the Master Control Program (MCP) for Burroughs computers, written in a variant of ALGOL. MULTICS, developed at MIT, was written mainly in PL/1. Most of the Windows and Linux kernels are written largely in C.
• Only small blocks of assembly code remain, for low-level I/O functions in device drivers, for turning interrupts on and off, and for Test-and-Set instructions used to implement synchronisation facilities.
• Using a higher-level language makes the code faster to write and easier to understand, and makes the OS easier to port to different hardware platforms.
• To identify bottlenecks, the system's performance must be monitored. Code must be written to compute and display measures of system behaviour. In many systems, the operating system does this by producing trace listings of system behaviour. These traces also help to find errors in OS behaviour.
Self-assessment Questions
14) Operating systems are created in
a) Assembly
b) Loader
c) Linker
d) None of the above
15) The operating system requirement is categorised into
a) User goals
b) System goals
c) Both a & b
d) None of the above
1.2.5. System Boots
The process of starting a computer by loading the kernel is known as booting the system. In most computer systems, a small piece of code known as the bootstrap program or bootstrap loader locates the kernel, loads it into main memory, and starts its execution. Some computer systems, such as PCs, use a two-step process in which a simple bootstrap loader fetches a more complex boot program from disk, which in turn loads the kernel.
The initial bootstrap program resides at a predefined memory location in read-only memory (ROM), because at system start-up the RAM is in an unknown state. ROM is convenient because it needs no initialisation and cannot be infected by a computer virus. The invention of various types of ROM allowed computers to ship with a start-up program that could not be erased, and growth in ROM capacity has allowed more elaborate start-up procedures to be stored there.
The bootstrap program can perform various tasks. One task is to run diagnostics to determine the state of the machine. If the diagnostics pass, the program continues with the booting procedure. It can also initialise all aspects of the system, from CPU registers to device controllers and the contents of main memory. Then it starts the operating system.
A problem with this scheme is that changing the bootstrap code requires changing the ROM hardware chips. Some systems solve this problem by using Erasable Programmable Read-Only Memory (EPROM), which is read-only except when explicitly given a command to be written. All forms of ROM are also known as firmware. A drawback of firmware is that code executes more slowly there than in RAM.
For a large operating system, or for a system that changes frequently, the bootstrap loader is stored in firmware and the operating system on disk. In this case, the bootstrap runs diagnostics and contains a piece of code that can read a single block at a fixed location on disk into memory and execute the code from that boot block.
The full operating system and the disk-bound bootstrap can easily be replaced by writing new versions to disk. A disk that has a boot partition is called a boot disk or system disk.
Self-assessment Questions
16) Erasable Programmable Read-Only Memory (EPROM), can be read-only except when
clearly given a command that can be written. State True or False?
a) True
b) False
17) Most of the Windows and Linux OS are written in Java Language. State True or False?
a) True
b) False
18) What is the process of starting a computer by loading the kernel called?
a) Virtual Allocation
b) Paging
c) OS Behaviour
d) Booting
19) What detects the kernel, loads it into the main memory and starts its execution?
a) Bootstrap Loader
b) RAM
c) EPROM
d) CPU
Summary
o The user interface can take various forms. One of them is a CLI (command-line interface), which makes use of text commands and a method for entering them (say, a program that allows the user to enter and edit commands).
o Another form of UI is a batch interface, in which commands and directives to control those commands are entered into files, and those files are executed.
o The purpose of the computer system is to allow the users to execute a program
in an efficient manner.
o Running a program requires the loading of the program into the RAM
(Random Access Memory) first and assigning CPU time for its execution.
o Every program needs input, and after processing the input that is given by the
user, it generates output. This process makes use of certain I/O devices.
o The operating system enables processes to communicate, for example through shared memory.
o To avoid the hardware problems the operating system constantly observes the
system for identifying the errors and solving those errors.
o User programs cannot manage the error detection and error correction because
it involves observing the entire computing process.
o During multi-tasking, an operating system plays a vital role in the allocation of
the required resources to each and every process for its better usage.
o The operating system maintains the accounts for all the resources that each
process or user accesses
o In a multitasking process, accounting increases the efficiency of the system with
the distribution of resources to each process.
o If a computer system has multiple users and permits the simultaneous carrying
out of various processes, then the processes must be conserved from each other's
activities.
o A system call is a process in which a computer program asks for a service from the
kernel of the operating system on which the execution takes place.
o When a user opens the system for the first time, then the system is in the user
mode. When the user requests for a service, then the user mode turns to the
Kernel Mode, which just listens to the request of the user, processes that request,
and displays the results.
o When the execution of a program ends abnormally, one can take the memory
dump and examine it using a debugger.
o Once compiled and assembled, a program must be loaded into memory to be executed.
o The process of starting a computer by loading the kernel is known as booting the
system. In most of the computer systems, a small piece of code is present and is
known as the bootstrap program or bootstrap loader.
o A problem with this scheme is that changing the bootstrap code requires changing the ROM hardware chips. Some systems solve this problem through Erasable Programmable Read-Only Memory (EPROM), which is read-only except when explicitly given a command to be written.
Terminal Questions
1. Describe the various services of the Operating System.
2. List the various functions of system calls with examples.
3. Explain the concept of System boot.
Answer Keys
Self-assessment Questions
Question No. : Answer
1: a    2: b    3: a    4: c    5: b
6: b    7: b    8: d    9: b    10: a
11: d   12: d   13: a   14: a   15: c
16: a   17: b   18: d   19: a
Activity
Activity Type: Online/Offline
Duration: 30 Minutes
Description:
Research on the System Calls used in Windows and UNIX for the commands used for:
1. Process Control
2. File Manipulation
3. Device Manipulation
4. Information Maintenance
5. Communication
6. Protection
Prepare a snapshot based presentation of what you have done.
Case Study
Booting in Windows
The booting process in Windows is straightforward and user-friendly. There are four basic steps in booting the Windows operating system. The first process that starts when you turn on the computer is the BIOS, i.e., Basic Input Output System.
POST → MBR → Other Partition and OS Detection → Other Booting Supporting Files
• Step 01: In the first step, POST (Power-On Self-Test) runs. This is the initial inventory step and a very critical one: hardware configuration and detection are performed, and errors are reported through diagnostics such as error codes, beep codes, and numeric codes.
• Step 02: Now the BIOS looks at the boot priority to determine from where the computer will load the operating system, for example from a hard disk, floppy disk, or network. The relevant record in the booting process is the MBR (Master Boot Record), stored in the first sector of the hard disk, from where the operating system starts loading into primary memory. It also provides the boot-partition information.
• Step 03: Now that the BIOS knows the MBR and the boot partition, the operating system loads the small piece of system software called the KERNEL into main memory. NTLDR (NT Loader) loads the partitions in the computer system and tries to identify partitions other than the primary partition. The boot.ini file is also checked for the existence of other operating systems, so that the user can select which operating system to load into memory.
• Step 04: Other important booting support files like win.sys, NTOSKRNL.exe, HAL.DLL, system.ini, sysedit.exe, config.sys, autoexec.bat, and MSCONFIG.exe help in installing different system software so that the application-oriented environment can be implemented; Microsoft Corporation has standardised some of the configuration done while booting the system.
Booting in Linux
The booting process in Linux is quite different from Windows, except for the BIOS and MBR stages. There are four basic steps in booting the Linux operating system. The first process that starts when you turn on your computer is the BIOS, i.e., Basic Input Output System.
POST → MBR → GRUB → Other Booting Supporting Files
• Step 01: In the first step, POST (Power-On Self-Test) runs. This is the initial inventory step and a very critical one: hardware configuration and detection are performed, and errors are reported through diagnostics such as error codes, beep codes, and numeric codes. It also identifies the Master Boot Record to initialise the booting process.
• Step 02: Now the BIOS looks at the boot priority to determine from where the computer will load the operating system. The relevant record in the booting process is the MBR (Master Boot Record), stored in the first sector of the hard disk (or sometimes on a floppy, bootable CD, or flash drive), from where the operating system starts loading into primary memory. It also provides the boot-partition information from which the operating system starts loading on the computer.
• Step 03: After identifying the primary partition, the first bootstrap loader loaded into memory is GRUB (Grand Unified Boot Loader) or LILO (Linux Loader), which loads the operating system from the supporting resources. The initial loading file is initrd (Initial RAM Disk), which initialises a RAM disk so that it is ready for loading the operating system. One of the most important functions of GRUB is to load the kernel into memory during booting, by loading linuxrc (Linux Run Command) to initialise the basic hardware. Finally, GRUB completes its part by mounting the initial file system through the init root directory, so that the other supporting files of the Linux operating system can be loaded.
• Step 04: Other important booting support files include /sbin/init, /etc/inittab, /etc/rc.local, and the runlevel scripts. These files help in installing different system software so that the application-oriented environment can be implemented. They also help create the GUI or console-based environment for the user to use the operating system.
Comparative chart of the Windows and Linux booting processes

OS Feature       | Windows                     | Linux
BIOS             | Yes                         | Yes
POST             | Yes                         | Yes
Boot Loader      | NTLDR                       | GRUB or LILO
Kernel           | NTOSKRNL                    | init, initrd
Supporting Files | win.sys, HAL.DLL,           | /sbin/init, /etc/inittab,
                 | system.ini, sysedit.exe,    | /etc/rc.local,
                 | config.sys, autoexec.bat,   | runlevel scripts
                 | MSCONFIG.exe                |
Discussion Questions:
1. Analyse the case and decide which OS has the better booting process.
Bibliography
e-References
• Operating Systems Services. Retrieved 06 Oct, 2016 from http://ecomputernotes.com/fundamental/disk-operating-system/operating-system-services
• OS Design and Installation. Retrieved 06 Oct, 2016 from https://it325blog.files.wordpress.com/2012/09/operating-system-concepts-7-th-edition.pdf
• System Boots. Retrieved 06 Oct, 2016 from https://it325blog.files.wordpress.com/2012/09/operating-system-concepts-7-th-edition.pdf
Image Credits
• Figure 1.2.1: http://www.cs.odu.edu/~cs471w/spring10/lectures/OSStructures.htm
• Figure 1.2.2: http://www.cs.odu.edu/~cs471w/spring10/lectures/OSStructures.htm
External Resources
• Silberschatz, A., Galvin, P. B., & Gagne, G. (2005). Operating System Concepts. Hoboken, NJ: J. Wiley & Sons.
• Ahmad, A. (2010). Operating System. New Delhi: Knowledge Book Distributors.
• Stallings, W. (1992). Operating System. New York: Maxwell Macmillan Canada.
Video Links
• Linux Boot Process: https://www.youtube.com/watch?v=ZtVpz5VWjAs
• Fork() system call tutorial: https://www.youtube.com/watch?v=xVSPv-9x3gk
• How a Linux system call works: https://www.youtube.com/watch?v=FkIWDAtVIUM
MODULE 2
Process Management
Module Description
The main goal of studying Process Management is to be able to understand the concept of
Process, Threads, and CPU Scheduling in the context of an operating system. This module
also helps you in understanding the concept of Process Synchronisation and Deadlocks.
By the end of this module, students will learn about processes and their scheduling, and inter-process communication; threads (single, multi, user, and kernel) and their benefits; CPU scheduling and its criteria; process synchronisation and its problems; and deadlocks, including their characteristics, handling methods, and recovery process.
By the end of this module, students will be able to outline the concept of Process, summarise
the concept of Threads in Operating System, outline the Multi-Threading Model, list the
issues of Threading, outline the concept of CPU Scheduling, outline the CPU Scheduling
algorithm, identify the criteria for selecting a CPU scheduling algorithm, outline the concept
of Process Synchronisation, identify the critical section problem, outline the classic problems
of synchronisation, outline the fundamentals of Deadlock, classify various methods to handle
deadlocks, and identify the ways to recover from deadlocks.
Chapter 2.1
Process, Threads and CPU Scheduling
Chapter 2.2
Process Synchronisation and Deadlocks
Process, Threads and CPU Scheduling
Chapter Table of Contents
Chapter 2.1
Process, Threads and CPU Scheduling
Aim ....................................................................................................................................................... 57
Instructional Objectives..................................................................................................................... 57
Learning Outcomes ............................................................................................................................ 57
2.1.1 Process ....................................................................................................................................... 58
(i) Introduction ......................................................................................................................... 58
(ii) Process Scheduling ............................................................................................................. 62
(iii) Cooperating Process ......................................................................................................... 65
(iv) Operations on Process ...................................................................................................... 66
(v) Inter Process Communication .......................................................................................... 68
(vi) Communication in Client-Server System ...................................................................... 70
Self-assessment Questions ....................................................................................................... 80
2.1.2 Threads ...................................................................................................................................... 81
(i) Introduction ......................................................................................................................... 81
(ii) Process Scheduling ............................................................................................................. 81
(iii) Multi Thread Process and Benefits ................................................................................. 81
(iv) User and Kernel Threads.................................................................................................. 82
(v) Multi-Threading Model..................................................................................................... 84
(vi) Threading issue.................................................................................................................. 85
Self-assessment Questions ....................................................................................................... 87
2.1.3 CPU Scheduling ....................................................................................................................... 87
(i) Basic Concepts ..................................................................................................................... 87
(ii) Scheduling Criteria ............................................................................................................ 88
(iii) Scheduling Algorithm ...................................................................................................... 89
(iv) Multiple Processor Scheduling and Benefits ................................................................. 93
(v) Real Time Scheduling ........................................................................................................ 94
Self-assessment Questions ....................................................................................................... 95
Summary ............................................................................................................................................. 96
Terminal Questions............................................................................................................................ 98
Answer Keys........................................................................................................................................ 98
Activity................................................................................................................................................. 99
Case Study ......................................................................................................................................... 100
Bibliography ...................................................................................................................................... 102
e-References ...................................................................................................................................... 102
External Resources ........................................................................................................................... 103
Video Links ....................................................................................................................................... 103
Aim
To familiarise the students with the basics of Process, Threads, and CPU Scheduling

Instructional Objectives
After completing this chapter, you should be able to:
• Explain the concept of process in Operating Systems
• Describe the concept of Threads in Operating Systems
• Explain the concept of CPU scheduling
• Explain the CPU Scheduling algorithm
• Describe the criteria for selecting a CPU scheduling algorithm

Learning Outcomes
At the end of this chapter, you are expected to:
• Outline the concept of Process
• Summarise the concept of Threads in OS
• Outline the Multi-Threading Model
• List the issues of Threading
• Outline the concept of CPU Scheduling
• Outline the CPU Scheduling algorithm
• Identify the criteria for selecting a CPU scheduling algorithm
2.1.1 Process
Early computer systems allowed only one program to be executed at a time. That program had complete control of the system and access to all of the system's resources. In contrast, current-day computer systems allow multiple programs to be loaded into memory and executed concurrently. This evolution required firmer control and more compartmentalisation of the various programs, and these needs resulted in the notion of a process, which is a program in execution. A process is the unit of work in a modern time-sharing system.
A process is a program in execution; it is the fundamental unit of work in the system, and its instructions execute in a sequential manner. A system also needs processes to carry out various tasks that fall outside the kernel itself. A system therefore consists of a collection of processes: operating-system processes executing system code and user processes executing user code. All these processes can execute concurrently, with the CPU multiplexed among them. By switching the CPU between processes, the operating system can make the computer more productive.
(i) Introduction
An operating system executes a variety of programs. A batch system executes jobs, whereas a time-shared system executes user programs or tasks. Much of operating-system theory and terminology evolved during a time when job processing was the major activity of operating systems, so the terms job and process are often used interchangeably; today we commonly say process rather than job, because process has largely superseded it.
1. Process: A process is a program in execution. A process is more than the program code, which is sometimes known as the text section. It also includes the current activity, as represented by the value of the program counter and the contents of the processor's registers.
Figure 2.1.1: Simplified layout of a process inside main memory

A process generally also includes the process stack, which contains temporary data (such as return addresses, function parameters, and local variables), and a data section, which contains global variables. A process may also include a heap, which is memory that is dynamically allocated while the process runs. A program by itself is a passive entity, such as a file containing instructions stored on disk (often called an executable file), whereas a process is an active entity. A program becomes a process when an executable file is loaded into memory. Two common techniques for loading executable files are double-clicking an icon representing the file and entering the name of the executable file on the command line.
2. Process state: As a process executes, it changes state. The state of a process is defined by the current activity of that process. A process may be in one of the following states:
a. New: The process is being created.
b. Running: Instructions are being executed.
c. Waiting: The process is waiting for some event to occur, such as an I/O completion.
d. Terminated: The process has finished execution.
e. Ready: The process is waiting to be assigned to a processor.
These state names vary across operating systems, but the states they describe are found on all systems. Some operating systems also delineate process states more finely. It is important to recognise that only one process can be running on any processor at any instant; many processes, however, may be ready and waiting.
Figure 2.1.2: Process State
3. Process control block: In an operating system, each process is represented by a process control block (PCB), also known as a task control block. It contains many pieces of information associated with a specific process, including these:
a. Process state: The state may be new, ready, running, waiting, halted, and so on.
b. Program counter: The program counter indicates the address of the next instruction to be executed for this process.
c. CPU registers: The registers vary in number and type, depending on the computer architecture. They include accumulators, index registers, stack pointers, and general-purpose registers, plus any condition-code information. Along with the program counter, this state information must be saved when an interrupt occurs, to allow the process to continue correctly afterward.
d. CPU scheduling information: This information includes a pointer to scheduling queues, the process priority, and any other scheduling parameters.
e. Memory management information: This information may include the values of the base and limit registers and the page tables or segment tables, depending on the memory system used by the operating system.
f. Accounting information: This information includes the amount of CPU and real time used, time limits, account numbers, job or process numbers, and so on.
g. I/O status information: This information includes the list of I/O devices allocated to the process, a list of open files, and so on.
In brief, the PCB serves as the repository for any information that may vary from process to process.
Figure 2.1.3: Process Control Block
• Threads: As discussed earlier, a traditional process executes a single thread of control. For instance, when a process is running a word-processor program, a single thread of instructions is being executed. This single thread of control allows the process to perform only one task at a time: the user cannot simultaneously type in characters and run the spell checker within the same process.
(ii) Process Scheduling
The process scheduling is defined as the action of the process manager which manages the
replacement of the functioning process from the CPU and the choosing of another process by
a specific strategy. Process scheduling is an important part of multiprogramming operating
systems. Such operating systems provide more than one process to be installed into the
executable memory at a time, and the installed process shares the CPU using time
multiplexing. The main goal of multiprogramming is to execute some process at all times to
maximise CPU utilisation. The objective of time sharing is to exchange the CPU among
processes regularly so that users can communicate with each program while it is operating.
To achieve these objectives, the process scheduler chooses an available process (possibly from
a set of several available processes) for a program running on the CPU.
• Scheduling Queues: The operating system maintains all PCBs in process scheduling queues. The OS keeps a separate queue for each of the process states, and the PCBs of all processes in the same execution state are placed in the same queue. When the state of a process changes, its PCB is unlinked from its current queue and moved to its new state queue.
The following important process scheduling queues are maintained by the operating system:
- Job queue: This queue holds all the processes in the system.
- Ready queue: This queue holds the set of all processes residing in main memory, ready and waiting to execute. A newly admitted process is always put in this queue.
- Device queue: The processes that are blocked waiting for an I/O device constitute this queue.
The operating system can use different policies to manage each queue (FIFO, priority, round robin, etc.). The operating-system scheduler determines how to move processes between the ready and run queues, where the run queue can hold only one entry per processor core on the system.
• Schedulers: Schedulers are special system software that handle process scheduling in various ways. Their main task is to select the jobs to be submitted to the system and to decide which process to run.
Schedulers are of three types:
- Long-term scheduler: It is also known as the job scheduler. A long-term scheduler determines which programs are admitted to the system for processing. It selects processes from the queue and loads them into memory for execution; once in memory, a process becomes a candidate for CPU scheduling.
The main aim of the job scheduler is to provide a balanced mix of jobs, such as processor-bound and I/O-bound ones. It also controls the degree of multiprogramming. If the degree of multiprogramming is to remain stable, the average rate of process creation must be equal to the average rate of departure of processes leaving the system.
- Short-term scheduler: It is also known as the CPU scheduler. The main objective of this scheduler is to increase system performance in accordance with the chosen set of criteria. It carries out the transition of a process from the ready state to the running state: the CPU scheduler selects a process from among the processes that are ready to execute and allocates the CPU to it.
Short-term schedulers, also known as dispatchers, make the decision of which process to execute next. Short-term schedulers run far more frequently than long-term schedulers and must therefore be fast.
- Medium-term scheduler: Medium-term scheduling is part of swapping. This scheduler removes processes from memory, thereby reducing the degree of multiprogramming. It is also responsible for handling the swapped-out processes.
A running process may become suspended if it makes an I/O request; a suspended process cannot make any progress towards completion. In this situation, to remove the process from memory and make space for other processes, the suspended process is moved to secondary storage. This procedure is called swapping, and the process is said to be swapped out or rolled out. Swapping may be necessary to improve the process mix.
• Context Switch: A context switch is the mechanism of saving and restoring the state, or context, of a CPU in the process control block, so that the execution of a process can be resumed from the same point at a later time. Using this mechanism, a context switcher enables multiple processes to share a single CPU. Context switching is an essential feature of a multitasking operating system.
When the scheduler switches the CPU from one executing process to another, the state of the currently running process is saved into its process control block. Then the state of the process to run next is loaded from its own PCB and used to set the program counter, registers, and so on. At this point, the second process can start executing.

Figure 2.1.4: Context Switch

Context switches are computationally expensive, since register and memory state must be saved and then restored. To avoid the cost of context-switch time, some hardware systems provide two or more sets of processor registers. The following information is stored for later use when the process is switched out:
• Scheduling information
• Currently used registers
• Base and limit register values
• I/O state information
• Program counter
• Changed state
• Accounting information
(iii) Cooperating Process
In the operating system, processes executing concurrently may be either independent processes or cooperating processes. A process is independent if it cannot affect, and cannot be affected by, the other processes executing in the system; no data is shared between an independent process and any other process.
A process is a cooperating process if it can affect, or be affected by, the other processes executing in the system; cooperating processes share data with one another.
There are several reasons for providing an environment that allows process cooperation:
• Information sharing: Since several users may be interested in the same piece of information (for instance, a shared file), we must provide an environment that allows concurrent access to such information.
• Computation speedup: If we want a particular task to run faster, we must break it into subtasks, each of which can execute in parallel with the others. Note that such a speedup can be achieved only if the computer has multiple processing elements, such as CPUs or I/O channels.
• Modularity: We may want to construct the system in a modular fashion, dividing the system functions into separate processes or threads. The solution to a problem is divided into parts with well-defined interfaces, and the parts execute in parallel.
• Convenience: An individual user may run multiple processes to accomplish a single goal, or a utility may employ multiple components connected via a pipe structure that joins the stdout of one stage to the stdin of the next, and so on.
(iv) Operations on Process
The processes in most systems can execute concurrently, and they may be created and deleted dynamically. Thus, these systems must provide a mechanism for process creation and termination.
• Process Creation: Process creation is an important process operation. The events that cause processes to be created include system initialisation, a user request to create a new process, execution of a process-creation system call by a running process, and initiation of a batch job. Processes may be foreground processes, which interact with users, or background processes.
A process may create several new processes through a create-process system call during the course of its execution. The creating process is called the parent process, and the new processes are called the children of that process. Each of these new processes may in turn create other processes, forming a tree of processes.
In general, a process needs certain resources to accomplish its task. When a process creates a sub-process, that sub-process may obtain its resources directly from the operating system, or it may be constrained to a subset of the resources of the parent process. The parent may have to partition its resources among its children. In addition, the various physical and logical resources that a process obtains when it is created may be passed along by the parent process to the child process.
When a process creates a new process, two possibilities exist in terms of execution:
- The parent continues to execute concurrently with its children.
- The parent waits until some or all of its children have terminated.
There are also two possibilities in terms of the address space of the new process:
- The child process is a duplicate of the parent process; it has the same program and data as the parent.
- The child process has a new program loaded into it.
• Process Termination: Every process is created to serve some purpose; once that purpose is accomplished, the process terminates. The goal of a process is to execute the instructions
provided by a program. When the process finishes executing its final statement, it terminates; this is known as process termination. After executing the final statement, the process asks the operating system to delete it by using a system call named exit(). At that point, the process returns a status value, typically an integer, to its parent process, which collects it via a system call named wait(). All the resources of the process, such as open files, physical and virtual memory, and I/O buffers, are then deallocated and returned to the system, and the memory occupied by the PCB (process control block) is returned to the free memory pool.
Termination can occur in other circumstances as well. A process can cause the termination of another process via a system call such as TerminateProcess(). Usually, such a call can be invoked only by the parent of the process that is to be terminated; this prevents users from arbitrarily killing each other's jobs. For this, the parent must know the identities of its children, so when one process creates a new process, the identity of the new process is passed to the parent.
A parent may terminate the execution of one of its children for various reasons, such as:
- The child has exceeded its usage of some of the resources that it has been allocated. (To determine this, the parent must have a mechanism to inspect the resource usage of its children.)
- The task assigned to the child is no longer required.
- The parent is exiting, and the operating system does not allow a child to continue after its parent has terminated.
In some systems, such as VMS, a child is not allowed to exist once its parent has terminated. In such systems, if a parent process terminates, normally or abnormally, then all its child processes will also be terminated. This is referred to as cascading termination.
To illustrate process execution and termination, consider the UNIX system, where a process may terminate by using the exit() system call, and a parent process may wait for the termination of a child process by using the wait() system call. This
system call returns the process identifier of the terminated child, so that the parent can tell which of its children has terminated.
Note: A process may also terminate when it has no remaining threads.
(v) Inter-Process Communication
Processes executing concurrently can be either cooperating processes or independent processes.
A process is cooperating if it can affect, or be affected by, the other processes in the system; cooperating processes share data and information with each other.
A process is independent if it cannot affect, and cannot be affected by, the other processes in the system; independent processes do not share data or information with each other.
The reasons for providing a process cooperation environment are:
• Computation speedup: For a task to complete quickly, it must be broken into subtasks that can be executed in parallel. Such a speedup is possible only if the system has multiple processors.
• Convenience: A single user may work on multiple tasks at the same time; for example, a user may be editing, printing, and compiling in parallel.
• Information sharing: Multiple users may be interested in the same piece of information, so concurrent access to such information must be provided.
• Modularity: A modular system can be constructed by dividing its functions into separate processes or threads.
Cooperating processes need an IPC (inter-process communication) mechanism to exchange data and information. IPC facilitates data communication by allowing processes to use shared segments, semaphores, and other techniques for sharing memory and information, and it enables efficient message transfer between processes. One well-known IPC package is based on the Task Control Architecture (TCA); it is a flexible technique that can transfer variable-length arrays, data structures, and lists, and it supports publish/subscribe and
client/server data-transfer paradigms across a wide range of operating systems and languages.
The two fundamental models of inter-process communication are:
• Shared memory: In this model, a region of memory is established that is shared by the cooperating processes. All the processes attached to that shared region can exchange information by reading and writing data in the shared region.
• Message passing: In this model, communication takes place by means of messages exchanged between the cooperating processes.
The difference between shared memory and message passing is shown in Figure 2.1.5.
Most operating systems implement both models. The message-passing model is useful mainly for exchanging small amounts of data, and it is easier to implement than the shared-memory model.
The shared-memory model offers higher speed and greater convenience of communication. It is faster than the message-passing model because message passing relies on system calls, which are time-consuming owing to kernel intervention. In the shared-memory model, system calls are needed only to establish the shared region; once the shared memory has been established, no further kernel involvement is required.
Figure 2.1.5: Inter Process Communication
(vi) Communication in Client-Server System
Previously, we discussed how processes can communicate using message passing and shared memory. Both of these techniques can be used for communication in client-server systems as well. Here, we will learn about three other strategies for communication in client-server systems: sockets, RPC (remote procedure calls), and Java's RMI (remote method invocation).
• Sockets
A socket is defined as an endpoint for communication. A pair of processes communicating over a network employs a pair of sockets, one for each process. A socket is identified by an IP address concatenated with a port number. In general, sockets use a client-server architecture: the server waits for incoming client requests by listening to a specified port. Once a request is received, the server accepts a connection from the client socket to complete the connection. Servers implementing specific services (such as HTTP, telnet, and FTP) listen to well-known ports (an FTP server listens to port 21, a telnet server to port 23, and an HTTP, or web, server to port 80). All ports below 1024 are considered well known and are used to implement standard services.
When a client process initiates a request for a connection, it is assigned a port by its host computer. This port is some arbitrary number greater than 1024. For example, if a client on host X with IP address 146.86.5.20 wishes to establish a connection with a web server (which is listening on port 80) at address 161.25.19.8, host X may be assigned port 1625. The connection will consist of a pair of sockets: (146.86.5.20:1625) on host X and (161.25.19.8:80) on the web server. This situation is shown in Figure 2.1.6. The packets travelling between the hosts are delivered to the appropriate process, based on the destination port number.
Figure 2.1.6: Communication using Sockets
All connections must be unique. Therefore, if another process on host X wished to establish a connection with the same web server, it would be assigned a port number greater than 1024 and other than 1625, which ensures that all connections consist of a unique pair of sockets.
Although most of the examples in this text use C, we will illustrate sockets using Java, as it provides a much simpler interface to sockets and has a rich library of networking utilities.
Java provides three distinct types of sockets. Connection-oriented (TCP) sockets are implemented with the Socket class. Connectionless (UDP) sockets use the DatagramSocket class. Finally, the MulticastSocket class is a subclass of the DatagramSocket class; a multicast socket allows data to be sent to multiple recipients. Our example describes a date server that uses connection-oriented TCP sockets. The operation allows clients to request the current date and time from the server. The server listens to port 6013, although the port could be any arbitrary number greater than 1024. When a connection is received, the server responds with the date and time to the client.
import java.net.*;
import java.io.*;

public class DateServer
{
    public static void main(String[] args) {
        try {
            ServerSocket sock = new ServerSocket(6013);

            // now listen for connections
            while (true) {
                Socket client = sock.accept();

                PrintWriter pout = new
                    PrintWriter(client.getOutputStream(), true);

                // write the date to the socket
                pout.println(new java.util.Date().toString());

                // close the socket and resume
                // listening for connections
                client.close();
            }
        }
        catch (IOException ioe) {
            System.err.println(ioe);
        }
    }
}
The above code shows the operation of the date server. The server creates a ServerSocket
that specifies it will listen on port 6013. The server then begins listening on the port with
the accept() method. The server blocks in accept(), waiting for a client to request a
connection. When a connection request is received, accept() returns a socket that the
server can use to communicate with the client.
The details of how the server communicates with the socket are as follows. The server
first creates a PrintWriter object that it will use to communicate with the client. A
PrintWriter object allows the server to write to the socket using the print() and println()
methods for output. The server process sends the date to the client by calling the method
println(). Once the date has been written to the socket, the server closes the socket to the
client and resumes listening for more requests.
A client communicates with the server by creating a socket and connecting to the port on
which the server is listening. Such a client is implemented in the Java program shown
below. The client creates a Socket and requests a connection with the server at IP address
127.0.0.1 on port 6013. Once the connection is established, the client can read from the
socket using normal stream I/O statements. After it has received the date from the server,
the client closes the socket and exits. The IP address 127.0.0.1 is a special address known
as the loopback. When a computer refers to 127.0.0.1, it is referring to itself. This
mechanism allows a client and server on the same host to communicate using the TCP/IP
protocol. The IP address 127.0.0.1 could be replaced with the IP address of another host
running the date server. An actual host name can also be used instead of an IP address.
import java.net.*;
import java.io.*;

public class DateClient
{
    public static void main(String[] args) {
        try {
            // make connection to server socket
            Socket sock = new Socket("127.0.0.1", 6013);
            InputStream in = sock.getInputStream();
            BufferedReader bin = new
                BufferedReader(new InputStreamReader(in));

            // read the date from the socket
            String line;
            while ((line = bin.readLine()) != null)
                System.out.println(line);

            // close the socket connection
            sock.close();
        }
        catch (IOException ioe) {
            System.err.println(ioe);
        }
    }
}
Although communication using sockets is common and efficient, it is considered a
low-level form of communication between distributed processes. One reason is that
sockets allow only an unstructured stream of bytes to be exchanged between the
communicating threads. It is the responsibility of the client or server application to
impose a structure on the data.
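As a small hedged sketch of what "imposing a structure" can mean (this example is not from the original text; the FramingDemo class, its method names, and the id/text message layout are invented for illustration), the following Java code uses DataOutputStream and DataInputStream to frame an integer and a string on top of a raw byte stream, exactly the kind of convention a client and server would have to agree on:

```java
import java.io.*;

public class FramingDemo {
    // Encode a structured message (an int id plus a text field) onto raw bytes.
    public static byte[] encode(int id, String text) {
        try {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(buf);
            out.writeInt(id);    // 4 bytes, most significant byte first
            out.writeUTF(text);  // length-prefixed modified UTF-8 string
            out.flush();
            return buf.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Decode the same structure back out of the raw bytes.
    public static String decode(byte[] data) {
        try {
            DataInputStream in =
                new DataInputStream(new ByteArrayInputStream(data));
            int id = in.readInt();
            String text = in.readUTF();
            return id + ":" + text;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        byte[] wire = encode(42, "hello");  // what would travel over a socket
        System.out.println(decode(wire));   // prints 42:hello
    }
}
```

In a real application, the byte arrays produced here would be written to and read from a socket's streams; the framing convention itself is what the application layer must supply.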
• Remote Procedure Calls
One of the most common forms of remote service is the Remote Procedure Call (RPC)
paradigm. RPC was designed as a way to abstract the procedure-call mechanism for use
between systems with network connections. It is similar in many respects to the IPC
mechanism, and it is usually built on top of such a system. Because we are dealing with
an environment in which the processes are executing on separate systems, we must use a
message-based communication scheme to provide remote service. In contrast to IPC
messages, the messages exchanged in RPC communication are well structured. Each
message is addressed to an RPC daemon listening to a port on the remote system, and
each contains an identifier of the function to execute and the parameters to pass to that
function. The function is then executed as requested, and any output is sent back to the
requester in a separate message.
A port is simply a number included at the start of a message packet. Whereas a system
normally has one network address, it can have many ports within that address to
differentiate the many network services it supports. If a remote process needs a service,
it addresses a message to the proper port. For instance, if a system wished to allow other
systems to list its current users, it would have a daemon supporting such an RPC
attached to a port, say port 3027. Any remote system could obtain the needed
information by sending an RPC message to port 3027 on the server; the data would be
received in a reply message.
The semantics of RPC allows a client to invoke a procedure on a remote host as it would
invoke a procedure locally. The RPC system hides the details that allow communication
to take place by providing a stub on the client side. Typically, a separate stub exists for
each separate remote procedure. When the client invokes a remote procedure, the RPC
system calls the appropriate stub, passing it the parameters provided to the remote
procedure. This stub locates the port on the server and marshals the parameters.
Parameter marshalling involves packaging the parameters into a form that can be
transmitted over a network. The stub then transmits a message to the server using
message passing. A similar stub on the server side receives this message and invokes the
procedure on the server. If necessary, return values are passed back to the client using
the same technique.
One issue that must be dealt with concerns differences in data representation on the
client and server machines. Consider the representation of a 32-bit integer. Some
systems (known as big-endian) store the most significant byte first, at the lowest memory
address, while other systems (little-endian) store the least significant byte first. To
resolve differences like this, many RPC systems define a machine-independent
representation of data. One such representation is known as XDR (external data
representation). On the client side, parameter marshalling involves converting the
machine-dependent data into XDR before they are sent to the server. On the server side,
the XDR data are unmarshalled and converted to the machine-dependent representation
for the server.
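As an illustrative aside (not from the original text; the EndianDemo class and its method name are invented), the following sketch uses Java's ByteBuffer to marshal a 32-bit integer in an explicitly chosen byte order, independent of the host machine's native order, much as an XDR layer does; XDR itself specifies big-endian encoding:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.util.Arrays;

public class EndianDemo {
    // Marshal a 32-bit integer using an explicit byte order,
    // regardless of the machine this code runs on.
    public static byte[] marshalInt(int value, ByteOrder order) {
        ByteBuffer buf = ByteBuffer.allocate(4).order(order);
        buf.putInt(value);
        return buf.array();
    }

    public static void main(String[] args) {
        // Big-endian (the XDR/network order): most significant byte first.
        System.out.println(Arrays.toString(
            marshalInt(1, ByteOrder.BIG_ENDIAN)));    // [0, 0, 0, 1]
        // Little-endian: least significant byte first.
        System.out.println(Arrays.toString(
            marshalInt(1, ByteOrder.LITTLE_ENDIAN))); // [1, 0, 0, 0]
    }
}
```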
Another important issue concerns the semantics of a call. Whereas local procedure calls
fail only under extreme circumstances, RPCs can fail, or be duplicated and executed
more than once, as a result of common network errors. One way to address this problem
is for the operating system to ensure that messages are acted on exactly once. Most local
procedure calls have the "exactly once" characteristic, but it is difficult to implement.
Let us first examine "at most once." This semantic can be ensured by attaching a
timestamp to each message. The server must keep a history of all the timestamps of
messages it has already processed, or a history large enough to ensure that repeated
messages are detected. Incoming messages that have a timestamp already in the history
are ignored. The client can then send a message one or more times and be assured that it
executes only once.
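A minimal sketch of this duplicate-detection idea follows (it is not from the original text; the AtMostOnceServer class, its method names, and the use of a plain set of message identifiers are invented for illustration; a real server would prune old timestamps from the history):

```java
import java.util.HashSet;
import java.util.Set;

public class AtMostOnceServer {
    // History of identifiers (e.g. timestamps) of messages already processed.
    private final Set<Long> seen = new HashSet<>();
    private int executions = 0;

    // Execute the request only if its identifier has not been seen before.
    public boolean handle(long messageId) {
        if (!seen.add(messageId)) {
            return false;      // duplicate delivery: ignore it
        }
        executions++;          // stand-in for executing the remote procedure
        return true;
    }

    public int executionCount() { return executions; }

    public static void main(String[] args) {
        AtMostOnceServer server = new AtMostOnceServer();
        server.handle(1001L);  // first delivery: executed
        server.handle(1001L);  // client retransmission: ignored
        server.handle(1002L);  // new message: executed
        System.out.println(server.executionCount());   // prints 2
    }
}
```

The client is thus free to retransmit message 1001 as often as it likes; the procedure still runs at most once.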
For "exactly once," we must additionally remove the risk that the server never receives
the request. To accomplish this, the server must implement the "at most once" protocol
described above and must also acknowledge to the client that the RPC call was received
and executed. Such ACK messages are common throughout networking. The client must
resend each RPC call periodically until it receives the ACK for that call.
Yet another issue concerns the communication between a client and a server. With
standard procedure calls, some form of binding takes place during link, load, or
execution time, so that a procedure call's name is replaced by the memory address of the
procedure. The RPC scheme requires a similar binding of the client to the server port,
but how does the client know the port numbers on the server? Neither system has full
information about the other, because they do not share memory.
Two approaches are common. First, the binding information may be predetermined, in
the form of fixed port addresses. At compile time, an RPC call has a fixed port number
associated with it. Once a program is compiled, the server cannot change the port
number of the requested service. Second, binding can be done dynamically by a
rendezvous mechanism. Typically, an operating system provides a rendezvous
(matchmaker) daemon on a fixed RPC port. A client then sends a message containing
the name of the RPC to the rendezvous daemon, requesting the port address of the RPC
it needs to execute. The port number is returned, and RPC calls can be sent to that port
until the process terminates (or the server crashes). This second method requires the
extra overhead of the initial request but is more flexible than the first approach.
Figure 2.1.7 shows a sample interaction.
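The matchmaker idea can be sketched with a simple name-to-port registry (a hypothetical illustration, not from the original text; the Matchmaker class, its method names, the "list_users" service name, and the -1 "unknown service" convention are all invented for the example):

```java
import java.util.HashMap;
import java.util.Map;

public class Matchmaker {
    // The rendezvous daemon's table mapping RPC names to port numbers.
    private final Map<String, Integer> registry = new HashMap<>();

    // A server registers the port on which a named RPC is available.
    public void register(String rpcName, int port) {
        registry.put(rpcName, port);
    }

    // A client asks the matchmaker which port to send its RPC to;
    // -1 stands in for "unknown service" in this sketch.
    public int lookup(String rpcName) {
        return registry.getOrDefault(rpcName, -1);
    }

    public static void main(String[] args) {
        Matchmaker daemon = new Matchmaker();
        daemon.register("list_users", 3027);              // server side
        System.out.println(daemon.lookup("list_users"));  // prints 3027
        System.out.println(daemon.lookup("shutdown"));    // prints -1
    }
}
```

In a real system the matchmaker itself listens on a well-known fixed port, and the lookup is performed as an RPC to that port rather than a local method call.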
Figure 2.1.7: Execution of an RPC (Remote Procedure Call)
The RPC scheme is useful in implementing a distributed file system (DFS). Such a
system can be implemented as a set of RPC daemons and clients. The messages are
addressed to the DFS port on a server on which a file operation is to take place. The
message contains the disk operation to be performed, such as read, write, rename,
delete, or status, corresponding to the usual file-related system calls. The return message
contains any data resulting from that call, which is executed by the DFS daemon on
behalf of the client. For instance, a message might contain a request to transfer a whole
file to a client, or it might be limited to a simple block request; in the latter case, several
requests may be needed if a whole file is to be transferred.
• Remote Method Invocation
RMI (Remote Method Invocation) is a Java feature similar to Remote Procedure Calls.
RMI allows a thread to invoke a method on a remote object. Objects are considered
remote if they reside in a different JVM (Java virtual machine). Therefore, the remote
object may be on a remote host connected by a network or in a different Java virtual
machine on the same computer. This situation is represented in Figure 2.1.8.
RMI and RPC differ in two fundamental ways. First, RPC supports procedural
programming, where only remote procedures or functions can be called. In contrast,
RMI is object-based: it supports invocation of methods on remote objects. Second, the
parameters to remote procedures are ordinary data structures in RPC; with RMI, it is
possible to pass objects as parameters to remote methods. By allowing a Java program to
invoke methods on remote objects, RMI makes it possible for users to develop Java
applications that are distributed across a network.
To make remote methods transparent to both the client and the server, RMI implements
the remote object using stubs and skeletons. A stub is a proxy for the remote object; it
resides with the client. When the client invokes a remote method, the stub for the
remote object is called. This client-side stub is responsible for creating a parcel
consisting of the name of the method to be invoked on the server and the marshalled
parameters for that method. The stub then sends this parcel to the server, where the
skeleton for the remote object receives it. The skeleton is responsible for unmarshalling
the parameters and invoking the desired method on the server. The skeleton then
marshals the return value into a parcel and returns this parcel to the client. The stub
unmarshals the return value and passes it to the client.
Let us look more closely at how this process works. Assume that a client wishes to invoke
a method on a remote object server with a signature someMethod(Object, Object) that
returns a boolean value. The client executes the statement:
boolean val = server.someMethod(A, B);
The call to someMethod() with the parameters A and B invokes the stub for the remote
object. The stub marshals into a parcel the parameters A and B and the name of the
method that is to be invoked on the server, then sends this parcel to the server. The
skeleton on the server unmarshals the parameters and invokes the method
someMethod(). The actual implementation of someMethod() resides on the server. Once
the method is completed, the skeleton marshals the boolean value returned from
someMethod() and sends this value back to the client. The stub unmarshals this return
value and passes it to the client. This process is shown in Figure 2.1.9.
Figure 2.1.8: Remote Method Invocation
Figure 2.1.9: Marshalling Parameters
Fortunately, the level of abstraction that RMI provides makes the stubs and skeletons
transparent, allowing Java developers to write programs that invoke distributed methods
just as they would invoke local methods. It is crucial, however, to understand a few rules
about the behaviour of parameter passing.
• If the marshalled parameters are local (non-remote) objects, they are passed by copy
using a technique known as object serialisation. However, if the parameters are also
remote objects, they are passed by reference. In the previous example, if A is a local
object and B a remote object, A is serialised and passed by copy, and B is passed by
reference. This in turn allows the server to invoke methods on B remotely.
• Local objects that are passed as parameters to remote objects must implement the
interface java.io.Serializable. Many objects in the core Java API implement this
interface, allowing them to be used with RMI. Object serialisation allows the state of
an object to be written to a byte stream.
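As a hedged illustration of object serialisation (this example is not from the original text; the SerialDemo and Message classes and their method names are invented), the following writes a Serializable object's state to a byte stream and reconstructs an equivalent copy from it, which is exactly what "pass by copy" relies on:

```java
import java.io.*;

public class SerialDemo {
    // A local object that can be passed by copy: it implements Serializable.
    static class Message implements Serializable {
        final String text;
        Message(String text) { this.text = text; }
    }

    // Write the object's state to a byte stream.
    public static byte[] serialize(Message m) {
        try {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            ObjectOutputStream out = new ObjectOutputStream(buf);
            out.writeObject(m);
            out.flush();
            return buf.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Rebuild an equivalent object (a copy) from the byte stream.
    public static Message deserialize(byte[] data) {
        try (ObjectInputStream in =
                 new ObjectInputStream(new ByteArrayInputStream(data))) {
            return (Message) in.readObject();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        } catch (ClassNotFoundException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        Message copy = deserialize(serialize(new Message("hello")));
        System.out.println(copy.text);   // prints hello
    }
}
```

RMI performs this serialisation automatically when a local object is passed as a parameter; the byte stream shown here is what actually crosses the network.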
Self-assessment Questions
1) What happens when the process issues an I/O Request?
a) It is placed in an I/O Queue
b) It is placed in a waiting Queue
c) It is placed in the ready Queue
d) It is placed in the job Queue
2) If all the processes are I/O bound, the ready queue will almost always be ______, and
the short-term scheduler will have ______ to do.
a) Full, Little
b) Full, Lot
c) Empty, Little
d) Empty, Lot
3) In a time-sharing operating system, when the time slot given to a process is
completed, the process goes from a running state to ____________ state?
a) Blocked
b) Ready
c) Suspended
d) Terminated
4) Which of the following maintains the Port identities and capabilities?
a) Object Oriented OS
b) Kernel Service
c) Kernel
d) MicroKernel
5) A batch system executes jobs, whereas a time-shared system executes tasks or user
programs. State True or False?
a) True
b) False
6) In an operating system, each process is represented by a process control block (PCB),
also called a Program Control Block. State True or False?
a) True
b) False
7) The Operating System manages all PCBs in Process Scheduling Queues. State True or
False?
a) True
b) False
8) A long-term scheduler decides which programs are suitable for the system for
processing. State True or False?
a) True
b) False
2.1.2 Threads
(i) Introduction
A thread is the smallest unit of processing that can be performed in an OS. In most modern
operating systems, a thread exists within a process; that is, a single process may contain
multiple threads. A thread shares the code section, data section, and other operating-system
resources with the other threads of the same process. By default, a process has a single
thread; if a process has multiple threads, it can perform more than one task at a time.
Threads make it practical to share the address space within a process because the cost of
communication between threads is low: they use the same code section, data section, and
OS resources. For this reason, a thread is also known as a lightweight process. Threads play
a prominent role in improving application performance through parallelism. They provide
a software approach to enhancing operating-system performance by reducing the overhead
associated with the classical (heavyweight) process.
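As a simple sketch of threads sharing a process's data (this example is not from the original text; the TwoThreads class, the counter, and the iteration count are invented for illustration), the following Java program starts two threads in one process; both update the same shared counter because threads share the data section of their process:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class TwoThreads {
    // Shared data: both threads of this process see the same counter.
    static final AtomicInteger counter = new AtomicInteger(0);

    public static void main(String[] args) {
        Runnable work = () -> {
            for (int i = 0; i < 1000; i++) {
                counter.incrementAndGet();   // atomic update of shared state
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start();
        t2.start();
        try {
            t1.join();                       // wait for both threads to finish
            t2.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        System.out.println(counter.get());   // prints 2000
    }
}
```

Note that no explicit communication channel is needed between the two threads: low-cost sharing through the common address space is precisely what makes threads "lightweight."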
(ii) Single Thread Process
In computer programming, single threading is the execution of only one thread at a time,
whereas multithreading is the execution of multiple threads at a time. Although it has been
argued that the term single threading is misleading, it is widely accepted within the
functional programming community. A single-threaded server process can accept only one
client at a time and can keep other clients requesting service waiting for a long time.
The benefits of a single-threaded process are:
• It is not a complex process.
• It adds less threading overhead to an application.
(iii) Multi Thread Process and Benefits
Multithreading is the ability of a CPU (Central Processing Unit) to handle several processes
or threads concurrently. The operating system uses this concept to support concurrent use
by multiple users, and to handle multiple requests made by a single user to a single
program.
There are four benefits of multithreading:
1. Responsiveness: Multithreading an interactive application may allow a program to
continue running even if part of it is blocked or is performing a lengthy operation,
thereby increasing responsiveness to the user. For instance, a multithreaded web
browser can still allow user interaction in one thread while an image is being loaded
in another thread.
2. Resource sharing: By default, threads share the memory and the resources of the
process to which they belong. The benefit of sharing code and data is that it allows an
application to have several different threads of activity within the same address
space.
3. Economy: Allocating memory and resources for process creation is costly. Because
threads share the resources of the process to which they belong, it is more economical
to create and context-switch threads. Empirically measuring the difference in
overhead can be difficult, but in general it is much more time-consuming to create
and manage processes than threads. In Solaris, for example, creating a process is
about thirty times slower than creating a thread, and context switching is about five
times slower.
4. Utilisation of multiprocessor architectures: The benefits of multithreading can be
greatly increased in a multiprocessor architecture, where threads can run in parallel
on different processors. A single-threaded process can run on only one CPU, no
matter how many are available. Multithreading on a multi-CPU machine increases
concurrency.
(iv) User and Kernel Threads
• User threads: With user threads, the kernel is not aware of the existence of threads;
thread management is done entirely by a thread library in user space. The thread library
contains code for creating and destroying threads, for scheduling thread execution, for
passing messages and data between threads, and for saving and restoring thread
contexts. The application starts with a single thread.
The advantages include:
• Thread switching does not require kernel-mode privileges.
• User-level threads can run on any operating system.
• Scheduling can be application-specific for user-level threads.
The disadvantages are:
• In a typical operating system, most system calls are blocking, so a blocking call made
by one thread blocks the entire process.
• A multithreaded application cannot take advantage of multiprocessing.
• Kernel threads: With kernel threads, thread management is done by the kernel. There is
no thread-management code in the application area; kernel threads are supported
directly by the operating system. Any application can be programmed to be
multithreaded, and all of the threads within an application are supported within a single
process. The kernel maintains context information for the process as a whole and for the
individual threads within the process. Scheduling by the kernel is done on a thread
basis, and the kernel performs thread creation, scheduling, and management in kernel
space. Kernel threads are generally slower to create and manage than user threads.
The advantages of kernel threads are:
• The kernel can simultaneously schedule multiple threads from the same process on
multiple processors.
• If one thread in a process is blocked, the kernel can schedule another thread of the
same process.
• Kernel routines themselves can be multithreaded.
The disadvantages of kernel threads are:
• Kernel threads are generally slower to create and manage than user threads.
• Transfer of control from one thread to another within the same process requires a
mode switch to the kernel.
(v) Multi-Threading Model
Some operating systems provide a combined approach in which both user-level and
kernel-level threads are supported; Solaris is an example. In a combined system, multiple
threads within the same application can run in parallel on multiple processors, and a
blocking system call need not block the entire process.
There are three types of multithreading models:
• Many-to-many model: The many-to-many model multiplexes many user-level threads
onto a smaller or equal number of kernel threads. The number of kernel threads may be
specific to either a particular application or a particular machine; an application may be
allocated more kernel threads on a multiprocessor than on a uniprocessor. Developers
can create as many user threads as necessary, and the corresponding kernel threads can
run in parallel on a multiprocessor. Also, when a thread performs a blocking system
call, the kernel can schedule another thread for execution. One variation still
multiplexes many user-level threads onto a smaller or equal number of kernel threads
but also allows a user-level thread to be bound to a kernel thread.
• Many-to-one model: The many-to-one model maps many user-level threads to a single
kernel thread. Thread management is done by the thread library in user space, so it is
efficient, but the entire process will block if a thread makes a blocking system call. Also,
because only one thread can access the kernel at a time, multiple threads are unable to
run in parallel on multiprocessors.
• One-to-one model: The one-to-one model maps each user thread to a kernel thread. It
provides more concurrency than the many-to-one model by allowing another thread to
run when a thread makes a blocking system call; it also allows multiple threads to run
in parallel on multiprocessors. The drawback of this model is that creating a user
thread requires creating the corresponding kernel thread, and the overhead of creating
kernel threads can burden the performance of an application. Most implementations of
this model therefore restrict the number of threads supported by the system.
(vi) Threading Issues
• The fork() and exec() system calls: The fork() system call is used to create a separate,
duplicate process. The semantics of the fork() and exec() system calls change in a
multithreaded program.
If one thread in a program calls fork(), does the new process duplicate all threads, or is
the new process single-threaded? Some UNIX systems have chosen to have two versions
of fork(): one that duplicates all threads and another that duplicates only the thread
that invoked the fork() system call. The exec() system call typically works in the same
way as usual: if a thread invokes exec(), the program specified in the parameter to
exec() will replace the entire process, including all threads.
Which of the two versions of fork() to use depends on the application. If exec() is called
immediately after forking, then duplicating all threads is unnecessary, as the program
specified in the parameters to exec() will replace the process. In this instance,
duplicating only the calling thread is appropriate. If, however, the separate process does
not call exec() after forking, the separate process should duplicate all threads.
• Cancellation: Thread cancellation is the task of terminating a thread before it has
completed. For example, if multiple threads are concurrently searching through a
database and one thread returns the result, the remaining threads might be cancelled.
Another situation might occur when a user presses a button on a web browser that
stops a web page from loading any further. Often, a web page is loaded using several
threads, with each image loaded in a separate thread. When a user presses the stop
button on the browser, all threads loading the page are cancelled.
A thread that is to be cancelled is often referred to as the target thread. Cancellation of
a target thread may occur in two different scenarios:
• Asynchronous cancellation: One thread immediately terminates the target thread.
• Deferred cancellation: The target thread periodically checks whether it should
terminate, giving it an opportunity to terminate itself in an orderly fashion.
With deferred cancellation, in contrast, one thread indicates that a target thread is to be
cancelled, but cancellation occurs only after the target thread has checked a flag to
determine whether it should be cancelled. This allows a thread to check whether it
should be cancelled at a point where it can be cancelled safely.
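A hedged sketch of deferred cancellation follows (it is not from the original text; the DeferredCancel class, the exitedCleanly flag, and the trivial "unit of work" are invented for illustration). The target thread below periodically checks Java's interrupt status as its cancellation flag and exits cleanly once it is set:

```java
public class DeferredCancel {
    static volatile boolean exitedCleanly = false;

    public static void main(String[] args) {
        Thread target = new Thread(() -> {
            // Deferred cancellation: keep working until the flag is observed
            // at a point where stopping is safe.
            while (!Thread.currentThread().isInterrupted()) {
                Thread.yield();        // stands in for one unit of work
            }
            exitedCleanly = true;      // orderly cleanup would happen here
        });
        target.start();
        target.interrupt();            // request cancellation (sets the flag)
        try {
            target.join();             // wait for the orderly exit
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        System.out.println("exited cleanly: " + exitedCleanly);
    }
}
```

Contrast this with asynchronous cancellation, where the target would be killed immediately and would get no chance to release resources or leave shared data consistent.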
• Signal handling: A signal is used in UNIX systems to notify a process that a particular
event has occurred. A signal may be received either synchronously or asynchronously,
depending on the source of and the reason for the event being signalled. All signals
follow the same pattern:
• A signal is generated by the occurrence of a particular event.
• A generated signal is delivered to a process.
• Once delivered, the signal must be handled.
Every signal may be handled by one of two possible handlers:
• A default signal handler
• A user-defined signal handler
Every signal has a default signal handler that is run by the kernel when handling that
signal. This default action can be overridden by a user-defined signal handler that is
called to handle the signal. Signals can be handled in different ways. Some signals, such
as one indicating that a window has been resized, may simply be ignored; others, such
as an illegal memory access, may be handled by terminating the program.
Self-assessment Questions
9) What happens when the thread is blocked?
a) Thread moves to the ready queue
b) Thread remains blocked
c) Thread completes
d) A new thread is provided
10) Which of the following is terminated when the termination of process takes place?
a) First thread of the process
b) First two threads of the process
c) All threads within the process
d) No Thread within the process
11) Which of the following is not a valid state of the thread?
a) Running
b) Parsing
c) Ready
d) Blocked
2.1.3 CPU Scheduling
(i) Basic Concepts
CPU scheduling is the task of selecting a process to use the CPU while the execution of
another process is on hold (in a waiting state) due to the unavailability of a resource such
as I/O, thereby making full use of the CPU. The aim of CPU scheduling is to make the
system efficient, fast, and fair.
• CPU-I/O burst cycle: The success of CPU scheduling depends on an observed property
of processes: process execution consists of a cycle of CPU execution and I/O wait.
Processes alternate between these two states. Process execution begins with a CPU
burst. That is followed by an I/O burst, which is followed by another CPU burst, then
another I/O burst, and so on. Eventually, the final CPU burst ends with a system
request to terminate execution.
The durations of CPU bursts have been measured extensively. Although they vary
greatly from process to process and from computer to computer, they tend to have a
frequency curve that is generally characterised as exponential or hyper-exponential,
with a large number of short CPU bursts and a small number of long CPU bursts.
• CPU scheduler: Whenever the CPU becomes idle, the operating system must select one
of the processes in the ready queue to be executed. The selection is carried out by the
short-term scheduler, or CPU scheduler. The scheduler picks a process from among the
processes in memory that are ready to execute and allocates the CPU to that process.
The ready queue is not necessarily a first-in, first-out (FIFO) queue. As we shall see
when we consider the various scheduling algorithms, a ready queue can be
implemented as a FIFO queue, a tree, a priority queue, or simply an unordered linked
list. Conceptually, however, all the processes in the ready queue are lined up waiting to
run on the CPU. The records in the queues are generally the process control blocks
(PCBs) of the processes.
• Preemptive scheduling: CPU-scheduling decisions may take place under the following
four circumstances:
1. When a process switches from the running state to the waiting state (for example,
as the result of an I/O request or an invocation of wait for the termination of a
child process)
2. When a process switches from the running state to the ready state (for example,
when an interrupt occurs)
3. When a process switches from the waiting state to the ready state (for example, at
completion of I/O)
4. When a process terminates
When scheduling takes place only under circumstances 1 and 4, we say that the
scheduling scheme is non-preemptive or cooperative; otherwise, it is preemptive. Under
non-preemptive scheduling, once the CPU has been allocated to a process, the process
keeps the CPU until it releases it either by terminating or by switching to the waiting
state.
(ii) Scheduling Criteria
Many criteria have been suggested for comparing CPU-scheduling algorithms:
•	CPU utilisation: We want to keep the CPU as busy as possible. Conceptually, CPU
utilisation can range from 0 to 100 percent. In a real system, it should range from about 40
percent (for a lightly loaded system) to 90 percent (for a heavily used system).
•	Throughput: If the CPU is busy executing processes, then work is being done. One
measure of work is throughput: the number of processes completed per time unit. For
long processes, this rate may be one process per hour; for short transactions, it may be
ten processes per second.
•	Turnaround time: From the point of view of a particular process, the important criterion
is how long it takes to execute that process. Turnaround time is the interval from the time
of submission of a process to the time of its completion. It is the sum of the periods spent
waiting to get into memory, waiting in the ready queue, executing on the CPU, and
doing I/O.
•	Waiting time: The CPU-scheduling algorithm does not affect the amount of time during
which a process executes or does I/O; it affects only the amount of time that a process
spends waiting in the ready queue. Waiting time is the sum of the periods spent waiting
in the ready queue.
•	Response time: In an interactive system, turnaround time may not be the best criterion.
Often, a process can produce some output fairly early and can continue computing new
results while previous results are being output to the user. Thus, another measure is the
time from the submission of a request until the first response is produced. This measure,
called response time, is the time it takes to start responding, not the time it takes to
output the full response. The turnaround time is generally limited by the speed of the
output device.
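These criteria can be computed directly from a process's arrival, burst, and completion times. The following Python sketch is illustrative only (the numbers are hypothetical, not from the text) and simply encodes the two definitions above:

```python
# Turnaround time = completion - arrival: total time from submission to finish.
def turnaround_time(arrival, completion):
    return completion - arrival

# Waiting time = turnaround - burst: time spent ready but not running.
def waiting_time(arrival, burst, completion):
    return turnaround_time(arrival, completion) - burst

# Illustrative process: arrives at t=2, needs 8 units of CPU, finishes at t=16.
print(turnaround_time(2, 16))   # 14
print(waiting_time(2, 8, 16))   # 6
```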
(iii) Scheduling Algorithms
CPU scheduling deals with the problem of deciding which of the processes in the ready
queue is to be allocated the CPU. Several CPU-scheduling algorithms are described below:
•	First-Come, First-Served (FCFS) Scheduling
o	Jobs are executed in the order in which they arrive.
o	It is a non-preemptive scheduling algorithm.
o	It is easy to understand and implement.
o	Its implementation is based on a FIFO queue.
o	Its performance is often poor, as the average waiting time can be high.
Process    Arrival Time    Execute Time    Service Time
P0         0               5               0
P1         1               3               5
P2         2               8               8
P3         3               6               16

Gantt chart:
|  P0  |  P1  |  P2  |  P3  |
0      5      8      16     22
Wait time of each process is as follows:
Process    Wait Time: Service Time - Arrival Time
P0         0 - 0 = 0
P1         5 - 1 = 4
P2         8 - 2 = 6
P3         16 - 3 = 13
Average Wait Time: (0 + 4 + 6 + 13) / 4 = 5.75
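The FCFS figures in the worked example can be reproduced mechanically. The following Python sketch is illustrative (it is not part of the original text); it runs the jobs in arrival order and computes each wait as service time minus arrival time:

```python
def fcfs(processes):
    """processes: list of (name, arrival, burst), already in arrival order.
    Returns {name: waiting_time}, where wait = start - arrival."""
    time, waits = 0, {}
    for name, arrival, burst in processes:
        time = max(time, arrival)       # CPU may sit idle until the job arrives
        waits[name] = time - arrival    # service time minus arrival time
        time += burst                   # run the job to completion
    return waits

procs = [("P0", 0, 5), ("P1", 1, 3), ("P2", 2, 8), ("P3", 3, 6)]
waits = fcfs(procs)
print(waits)                              # {'P0': 0, 'P1': 4, 'P2': 6, 'P3': 13}
print(sum(waits.values()) / len(waits))   # 5.75
```

Running it on the four processes of the table reproduces the waits 0, 4, 6 and 13 and the average of 5.75.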
•	Shortest-Job-Next (SJN) Scheduling
o	Shortest-Job-Next (SJN) scheduling is also called Shortest-Job-First (SJF).
o	It is a non-preemptive scheduling algorithm.
o	It is the best approach to minimise the average waiting time.
o	It is suitable for batch systems, where the required CPU time is known in advance.
o	It is not feasible in interactive systems, where the required CPU time is not known.
o	The scheduler must know in advance how much time each process will take.
Process    Arrival Time    Execute Time    Service Time
P0         0               5               3
P1         1               3               0
P2         2               8               16
P3         3               6               8

Gantt chart:
|  P1  |  P0  |  P3  |  P2  |
0      3      8      16     22
Wait time of each process is as follows:
Process    Wait Time: Service Time - Arrival Time
P0         3 - 0 = 3
P1         0 - 0 = 0
P2         16 - 2 = 14
P3         8 - 3 = 5
Average Wait Time: (3 + 0 + 14 + 5) / 4 = 5.50
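The selection rule itself is simple to express in code. The Python sketch below is illustrative (not from the original text) and, for simplicity, assumes all four jobs are available when scheduling starts; the start times it produces are computed by the sketch itself:

```python
def sjf(processes):
    """Non-preemptive shortest-job-first, assuming all jobs are ready at t=0.
    processes: list of (name, burst). Returns the run order and start times."""
    order = sorted(processes, key=lambda p: p[1])  # shortest burst first
    starts, time = {}, 0
    for name, burst in order:
        starts[name] = time
        time += burst
    return [name for name, _ in order], starts

order, starts = sjf([("P0", 5), ("P1", 3), ("P2", 8), ("P3", 6)])
print(order)    # ['P1', 'P0', 'P3', 'P2']
print(starts)   # {'P1': 0, 'P0': 3, 'P3': 8, 'P2': 14}
```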
•	Priority Scheduling
o	Priority scheduling is a non-preemptive algorithm and one of the most common
scheduling algorithms in batch systems.
o	Each process is assigned a priority, and the process with the highest priority is
executed first.
o	Processes with the same priority are executed on a first-come, first-served basis.
o	Priorities can be assigned based on memory requirements, time requirements, or
some other resource need.
Process    Arrival Time    Execute Time    Priority    Service Time
P0         0               5               1           9
P1         1               3               2           6
P2         2               8               1           14
P3         3               6               3           0

Gantt chart:
|  P3  |  P1  |  P0  |  P2  |
0      6      9      14     22
Wait time of each process is as follows:
Process    Wait Time: Service Time - Arrival Time
P0         9 - 0 = 9
P1         6 - 1 = 5
P2         14 - 2 = 12
P3         0 - 0 = 0
Average Wait Time: (9+5+12+0) / 4 = 6.5
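The ordering used in the worked example (a larger number meaning a higher priority, with first-come, first-served between equal priorities) can be sketched as follows. This Python fragment is illustrative and is not part of the original text:

```python
def priority_schedule(processes):
    """Non-preemptive priority scheduling with all jobs ready at t=0.
    processes: list of (name, burst, priority); a larger priority number
    runs first. Ties are broken by position in the list (FCFS)."""
    order = sorted(enumerate(processes),
                   key=lambda item: (-item[1][2], item[0]))
    names = [p[0] for _, p in order]
    starts, time = {}, 0
    for _, (name, burst, _prio) in order:
        starts[name] = time
        time += burst
    return names, starts

names, starts = priority_schedule([("P0", 5, 1), ("P1", 3, 2),
                                   ("P2", 8, 1), ("P3", 6, 3)])
print(names)    # ['P3', 'P1', 'P0', 'P2']
print(starts)   # {'P3': 0, 'P1': 6, 'P0': 9, 'P2': 14}
```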
•	Shortest Remaining Time
o	Shortest Remaining Time (SRT) is the preemptive version of the SJN algorithm.
o	The processor is allocated to the job closest to completion, but it can be
preempted by a newly ready job with a shorter time to completion.
o	It is not feasible in interactive systems, where the required CPU time is
unknown.
o	It is often used in batch environments where short jobs need to be given
preference.
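A unit-time simulation makes the preemption rule concrete. The Python sketch below is illustrative (not from the original text); it reruns the same four processes under SRT, and the completion times it prints are computed by the sketch itself:

```python
def srt(processes):
    """Preemptive shortest-remaining-time scheduling, simulated one time
    unit at a time. processes: list of (name, arrival, burst).
    Returns {name: completion_time} in order of completion."""
    remaining = {name: burst for name, _, burst in processes}
    arrival = {name: arr for name, arr, _ in processes}
    done, time = {}, 0
    while remaining:
        ready = [n for n in remaining if arrival[n] <= time]
        if not ready:
            time += 1
            continue
        current = min(ready, key=lambda n: remaining[n])  # shortest remaining
        remaining[current] -= 1          # run it for one time unit
        time += 1
        if remaining[current] == 0:
            done[current] = time
            del remaining[current]
    return done

completion = srt([("P0", 0, 5), ("P1", 1, 3), ("P2", 2, 8), ("P3", 3, 6)])
print(completion)   # {'P1': 4, 'P0': 8, 'P3': 14, 'P2': 22}
```

Note how P1, arriving at t=1 with only 3 units of work against P0's 4 remaining, preempts P0 immediately.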
•	Round Robin (RR) Scheduling
o	Round Robin is a preemptive process-scheduling algorithm.
o	Each process is given a fixed time to execute, called a quantum.
o	Once a process has executed for the given time period, it is preempted, and
another process executes for its time period.
o	Context switching is used to save the states of preempted processes.
Quantum = 3

Gantt chart:
|  P0  |  P1  |  P2  |  P3  |  P0  |  P2  |  P3  |  P2  |
0      3      6      9      12     14     17     20     22
Wait time of each process is as follows:
Process    Wait Time: Service Time - Arrival Time
P0         (0 - 0) + (12 - 3) = 9
P1         3 - 1 = 2
P2         (6 - 2) + (14 - 9) + (20 - 17) = 12
P3         (9 - 3) + (17 - 12) = 11
Average Wait Time: (9 + 2 + 12 + 11) / 4 = 8.5
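The round-robin waiting times can be reproduced by simulating the ready queue. The sketch below is illustrative Python (not from the original text); one assumption it makes, matching the worked example, is that a job arriving exactly when a quantum expires joins the queue ahead of the preempted job:

```python
from collections import deque

def round_robin(processes, quantum):
    """Round-robin simulation. processes: list of (name, arrival, burst),
    sorted by arrival. Returns {name: waiting_time}, where
    waiting time = completion - arrival - burst."""
    arrival = {n: a for n, a, _ in processes}
    burst = {n: b for n, _, b in processes}
    remaining = dict(burst)
    pending = deque(processes)          # jobs not yet arrived
    ready, waits, time = deque(), {}, 0

    def admit(now):                     # move arrived jobs to the ready queue
        while pending and pending[0][1] <= now:
            ready.append(pending.popleft()[0])

    admit(0)
    while ready or pending:
        if not ready:                   # CPU idle until the next arrival
            time = pending[0][1]
            admit(time)
        name = ready.popleft()
        run = min(quantum, remaining[name])
        time += run
        remaining[name] -= run
        admit(time)                     # arrivals during/at end of the slice
        if remaining[name] > 0:
            ready.append(name)          # requeue behind newly arrived jobs
        else:
            waits[name] = time - arrival[name] - burst[name]
    return waits

waits = round_robin([("P0", 0, 5), ("P1", 1, 3), ("P2", 2, 8), ("P3", 3, 6)], 3)
print(waits)                              # {'P1': 2, 'P0': 9, 'P3': 11, 'P2': 12}
print(sum(waits.values()) / len(waits))   # 8.5
```

The simulation reproduces the waits 9, 2, 12 and 11 and the average of 8.5 from the table.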
•	Multiple-Level Queues Scheduling
Multiple-level queues scheduling is not an independent scheduling algorithm; rather, it
uses existing algorithms to group and schedule jobs with common characteristics.
o	Multiple queues are maintained for processes with common characteristics.
o	Each queue can have its own scheduling algorithm.
o	Priorities are assigned to the queues.
For example, CPU-bound jobs can be scheduled in one queue and all I/O-bound jobs in
another queue. The process scheduler then alternately selects jobs from each queue and
assigns them to the CPU based on the algorithm assigned to that queue.
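The "serve the highest-priority non-empty queue" idea at the heart of this scheme can be sketched as follows. The Python fragment is illustrative (not from the original text), and the queue names and jobs are invented for the example:

```python
def next_job(queues):
    """queues: list of (priority, fifo_list) pairs; a lower number means a
    higher priority. Returns the front job of the highest-priority non-empty
    queue, or None if every queue is empty."""
    for _priority, jobs in sorted(queues, key=lambda q: q[0]):
        if jobs:
            return jobs.pop(0)          # FIFO within each queue
    return None

queues = [(0, []),                      # system queue (empty just now)
          (1, ["editor", "shell"]),     # interactive / I/O-bound jobs
          (2, ["payroll"])]             # batch / CPU-bound jobs
print(next_job(queues))   # editor
print(next_job(queues))   # shell
print(next_job(queues))   # payroll
```

In a real scheduler each queue would run its own algorithm (RR for the interactive queue, FCFS for the batch queue, and so on); the sketch shows only the between-queue selection.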
(iv) Multiple Processor Scheduling and Benefits
Multiprocessing is the use of two or more central processing units (CPUs) within a single
computer system. The term also refers to the ability of a system to support more than one
processor and/or the ability to allocate tasks between them.
In the case of a uniprocessor, scheduling is one-dimensional. In the case of a multiprocessor,
scheduling is two-dimensional: the scheduler must decide which process to run and which
CPU to run it on. This extra dimension complicates scheduling on multiprocessors. Another
complicating factor is that in some systems all the processes are unrelated, whereas in others
they come in groups. An example of the former situation is a timesharing system in which
independent users start independent processes. The processes are unrelated, and each one
can be scheduled without regard to the others. An example of the latter situation occurs
regularly in program-development environments. Large systems often consist of a number of
header files containing macros, type definitions, and variable declarations that are used by
the actual code files. When a header file is changed, all the code files that include it must be
recompiled. The program make is commonly used to manage development. When make is
invoked, it starts the compilation of only those code files that must be recompiled because of
changes to the header or code files. Object files that are still valid are not regenerated.
The benefits include:
•	Increased Throughput: By increasing the number of processors, more work can be
completed in a unit of time.
•	Cost Saving: The processors share memory, buses, peripherals, etc. A multiprocessor
system therefore costs less than multiple single-processor systems. Also, if a number of
programs operate on the same data, it is cheaper to store that data on one disk shared
by all processors than to keep many copies of it.
•	Increased Reliability: Because the workload is distributed among several processors,
reliability increases. Even if one processor fails, its failure may slow the system down
slightly, but the system will continue to work.
(v) Real Time Scheduling
In real-time computing, scheduling analysis comprises the assessment, testing and
verification of the scheduling system and of the algorithms used in real-time applications.
Testing and verification of a real-time system must be performed to check its performance;
in computer science, such testing and verification are also called model checking.
A real-time scheduling system is composed of the scheduler, the clock, and the processing
hardware elements. In a real-time system, each process or task has a deadline; tasks are
accepted by the real-time system and completed as specified by the task deadline, depending
on the nature of the scheduling algorithm. Modelling and evaluation of a real-time
scheduling system focus on analysing the algorithm's ability to meet each process deadline,
where a deadline is the time by which a task must be completed.
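One widely used deadline-driven policy, earliest-deadline-first (EDF), gives a concrete example of such an algorithm. The sketch below is illustrative Python, not taken from the original text, and the task names and deadlines are invented:

```python
def edf_pick(tasks, now):
    """Earliest-deadline-first: among the tasks released by time `now`,
    run the one whose absolute deadline is nearest.
    tasks: list of (name, release_time, deadline)."""
    ready = [t for t in tasks if t[1] <= now]
    if not ready:
        return None
    return min(ready, key=lambda t: t[2])[0]

tasks = [("sensor_read", 0, 10), ("actuate", 2, 6), ("log", 1, 50)]
print(edf_pick(tasks, 0))   # sensor_read (the only released task at t=0)
print(edf_pick(tasks, 3))   # actuate (nearest deadline among all three)
```

Scheduling analysis for such a policy asks whether every task in a given workload can meet its deadline under the selection rule.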
Self-assessment Questions
12) The processes that are residing in main memory and ready to execute are in
a) Job queue
b) Ready queue
c) Execution queue
d) Process queue
13) The interval from the time of submission of a process to the time of its completion is
known as
a) Waiting time
b) Turnaround time
c) Response time
d) Throughput
14) In priority scheduling algorithm
a) CPU is allocated to the process with highest priority
b) CPU is allocated to the process with lowest priority
c) Equal priority processes cannot be scheduled
d) None of the above
Summary
o A process is the fundamental unit of work to be carried out in the system.
o A process is a program in execution. A process usually involves the process stack
that includes temporary data (such as return addresses, function parameters, and
local variables) and a data section which consists of global variables.
o A program becomes a process when an executable file is loaded into memory.
o The state of a process is defined in part by the current activity of that process. The
states of a process can be New, Ready, Running, Waiting, and Terminated.
o In an operating system, each process is represented by a process control block (PCB),
which is also known as a task control block.
o Process scheduling is defined as the activity of the process manager that handles the
removal of the running process from the CPU and the selection of another process
on the basis of a particular strategy.
o The important process scheduling queues that are managed by operating system are
Job Queue, Ready Queue, and Device Queue.
o Schedulers are special system software which manages process scheduling in
various ways. Their main task is to choose the right jobs to be put into the system
and to determine which process to execute.
o A context switch is a technique to save and restore the state or context of a CPU in
the process control block so that the execution of a process can be resumed from the
same point at a later time.
o Inter Process Communication is a technique that allows the transferring of data
between processes. The idea of IPC is based on Task Control Architecture (TCA).
o A socket is defined as an endpoint for communication. A pair of processes
communicating over a network employs a pair of sockets.
o Three strategies for communication in client-server systems are: RPC (remote
procedure calls), sockets, and Java's RMI (remote method invocation).
o A thread is a flow of execution through the process code, with its own program
counter that keeps track of which instruction to execute next, system registers that
hold its current working variables, and a stack that contains the execution history.
o In computer programming, single threading is the processing of one command at a
time. Multi-threading is the opposite of single threading.
o Multi-threading is the ability of a central processing unit (CPU), or a single core in
a multi-core processor, to execute multiple processes or threads concurrently,
supported by the operating system.
o Three types of multi-threading models are: Many-to-one, Many-to-many, and One-to-one.
o In the many-to-many model, many user-level threads are multiplexed onto a smaller
or equal number of kernel threads. The many-to-one model maps many user-level
threads to a single kernel thread. The one-to-one model maps each user thread to
a kernel thread.
o CPU scheduling is a process which permits one process to utilise the CPU while the
execution of another process is on hold (in a waiting state) due to the unavailability
of a resource such as I/O, thereby making full use of the CPU.
o Various CPU-scheduling algorithms are: First-Come, First-Served (FCFS)
scheduling, Shortest-Job-Next (SJN) scheduling, Priority scheduling, Shortest
Remaining Time, Round Robin (RR) scheduling, and Multiple-Level Queues
scheduling.
o The real-time scheduling analysis is the assessment, testing and verification of the
scheduling system and the algorithms that are used in real-time operations.
Terminal Questions
1. List the benefits of single and multi-thread processes.
2. Classify the various types of threads.
3. List the various CPU Scheduling algorithms.
Answer Keys
Self-assessment Questions
Question No.    Answer
1               a
2               c
3               b
4               c
5               a
6               b
7               a
8               a
9               a
10              c
11              b
12              b
13              b
14              a
Activity
Activity Type: Offline/Online
Duration: 30 minutes
Description:
Explain the differences in the degree to which the following scheduling algorithms
discriminate in favour of short processes:
1. FCFS
2. RR
3. Multilevel feedback queues
Case Study
CPU scheduling in Solaris:
•	Priority-based scheduling
o	Four classes: real time, system, time sharing, interactive (in order of priority)
o	Different priorities and algorithms in different classes
o	Default class: time sharing
o	Policy in the time-sharing class:
-	Multilevel feedback queue with variable time slices
-	See the dispatch table (Solaris dispatch table for interactive and time-sharing threads)
-	Good response time for interactive processes and good throughput for CPU-bound processes
Windows XP scheduling
•	A priority-based, preemptive scheduling
o	The highest-priority thread will always run
o	Also has multiple classes, and priorities within classes
•	Similar idea for user processes – multilevel feedback queue
o	Lower priority when the quantum runs out
o	Increase priority after a wait event
•	Some twists to improve "user perceived" performance:
o	Boost priority and quantum for the foreground process (the window that is currently selected)
o	Boost priority more for a wait on keyboard I/O (as compared to disk I/O)
Linux Scheduling
•	A priority-based, preemptive scheduling with global round-robin
o	Each process has a priority
o	Processes with a larger priority also have larger time slices
o	Before the time slices are used up, processes are scheduled based on priority
o	After the time slice of a process is used up, the process must wait until all ready
processes use up their time slices (or are blocked) – a round-robin approach
-	No starvation problem
o	For a user process, its priority may be adjusted by +5 or −5 depending on whether
the process is I/O-bound or CPU-bound
-	I/O-bound processes are given higher priority
Summary
•	The basic idea for scheduling user processes is the same in all three systems:
o	Lower the priority of CPU-bound processes
o	Increase the priority of I/O-bound processes
•	The scheduling in Solaris / Linux is more concerned with fairness
o	More popular as operating systems for servers
•	The scheduling in Windows XP is more concerned with user-perceived performance
o	More popular as the OS for personal computers
Discussion Questions:
1. Analyse the case for the scheduling in the three different operating systems.
Bibliography
e-References
•	Process Scheduling. Retrieved 24 Oct, 2016 from https://www.cs.rutgers.edu/~pxk/416/notes/07-scheduling.html
•	Threads. Retrieved 24 Oct, 2016 from https://www.cs.rutgers.edu/~pxk/416/notes/05-threads.html
•	CPU Scheduling. Retrieved 24 Oct, 2016 from http://www.go4expert.com/articles/types-of-scheduling-t22307/
Image Credits
•	Figure 2.1.1: https://www.tutorialspoint.com/operating_system/os_processes.htm
•	Figure 2.1.2: https://it325blog.files.wordpress.com/2012/09/operating-system-concepts-7-th-edition.pdf
•	Figure 2.1.3: https://it325blog.files.wordpress.com/2012/09/operating-system-concepts-7-th-edition.pdf
•	Figure 2.1.4: http://stackoverflow.com/questions/17228441/context-switch-questions-what-part-of-the-os-is-involved-in-managing-the-contex
•	Figure 2.1.5: https://it325blog.files.wordpress.com/2012/09/operating-system-concepts-7-th-edition.pdf
•	Figure 2.1.6: https://it325blog.files.wordpress.com/2012/09/operating-system-concepts-7-th-edition.pdf
•	Figure 2.1.7: https://it325blog.files.wordpress.com/2012/09/operating-system-concepts-7-th-edition.pdf
•	Figure 2.1.8: https://it325blog.files.wordpress.com/2012/09/operating-system-concepts-7-th-edition.pdf
•	Figure 2.1.9: https://it325blog.files.wordpress.com/2012/09/operating-system-concepts-7-th-edition.pdf
External Resources
•
Silberschatz, A., Galvin, P. B., & Gagne, G. (2005). Operating system concepts.
Hoboken, NJ: J. Wiley & Sons.
•
Ahmad, A. (2010). Operating system. New Delhi: Knowledge Book Distributors.
•
Stallings, W. (1992). Operating system. New York: Maxwell Macmillan Canada.
Video Links
Topic                              Link
Linux Boot Process                 https://www.youtube.com/watch?v=ZtVpz5VWjAs
Fork() system call tutorial        https://www.youtube.com/watch?v=xVSPv-9x3gk
How a Linux system call works      https://www.youtube.com/watch?v=FkIWDAtVIUM
Process Synchronisation and Deadlocks
Chapter Table of Contents
Chapter 2.2
Process Synchronisation and Deadlocks
Aim ..................................................................................................................................... 105
Instructional Objectives................................................................................................................... 105
Learning Outcomes .......................................................................................................................... 105
2.2.1 Process Synchronisation........................................................................................................ 106
(i) Background ........................................................................................................................ 106
(ii) Mutual exclusion .............................................................................................................. 107
(iii) The critical process ......................................................................................................... 108
(iv) Synchronisation Hardware ............................................................................................ 110
(v) Semaphores ....................................................................................................................... 113
(vi) Classic problem of synchronisation .............................................................................. 114
(vii) Critical Region ................................................................................................................ 118
(viii) Monitors......................................................................................................................... 118
(ix) Operating System Synchronisation............................................................................... 119
(x) Atomic Transaction ......................................................................................................... 121
Self-assessment Questions ..................................................................................................... 124
2.2.2 Deadlocks ................................................................................................................................ 125
(i) System Model..................................................................................................................... 125
(ii) Deadlock Characterisation.............................................................................................. 126
(iii) Methods of handling deadlocks .................................................................................... 131
(iv) Deadlock Prevention....................................................................................................... 132
(v) Deadlock Avoidance ........................................................................................................ 133
(vi) Deadlock Detection ......................................................................................................... 135
(vii) Recovery from Deadlocks ............................................................................................. 138
Self-assessment Questions ..................................................................................................... 140
Summary ........................................................................................................................................... 141
Terminal Questions.......................................................................................................................... 143
Answer Keys...................................................................................................................................... 143
Activities ............................................................................................................................................ 144
Case Study ......................................................................................................................................... 145
Bibliography ...................................................................................................................................... 146
e-References ...................................................................................................................................... 146
External Resources ........................................................................................................................... 147
Video Links ....................................................................................................................................... 147
Aim
To familiarise the students with the knowledge of Process Synchronisation and
Deadlocks

Instructional Objectives
After completing this chapter, you should be able to:
•	Explain the concept of process synchronisation
•	Outline the critical-section problem
•	List the classic problems of synchronisation
•	Explain the fundamentals of deadlock
•	Outline the methods for handling deadlocks
•	List the ways to detect, prevent, avoid and recover from deadlocks

Learning Outcomes
At the end of this chapter, you are expected to:
•	Elaborate on the concept of process synchronisation
•	Discuss the critical-section problem
•	Outline the classic problems of synchronisation
•	Elaborate on the concept of deadlock
•	Discuss the methods for handling deadlocks
•	Outline the ways to detect, prevent, avoid and recover from deadlocks
2.2.1 Process Synchronisation
(i) Background
A cooperating process is one that can affect or be affected by other processes executing in
the system. Cooperating processes can either be allowed to share data only through files or
can directly share a memory region or messages. In this chapter, we discuss several
mechanisms to ensure the orderly execution of cooperating processes that share a logical
address space, so that data consistency is maintained.
Let us return to our consideration of the bounded buffer. As we pointed out, our solution
allows at most BUFFER_SIZE − 1 items in the buffer at the same time. Suppose we want to
modify the algorithm to remedy this deficiency. One possibility is to add an integer variable
counter, initialised to 0. When an item is added to the buffer, counter is incremented.
Similarly, when we remove an item from the buffer, counter is decremented. The code for
the producer process can be modified as follows:
while (true)
{
    /* produce an item in nextProduced */
    while (counter == BUFFER_SIZE)
        ;  /* do nothing */
    buffer[a] = nextProduced;
    a = (a + 1) % BUFFER_SIZE;
    counter++;
}
The code for the consumer process can be modified as follows:
while (true)
{
    while (counter == 0)
        ;  /* do nothing */
    nextConsumed = buffer[b];
    b = (b + 1) % BUFFER_SIZE;
    counter--;
    /* consume the item in nextConsumed */
}
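The statements counter++ and counter-- are not atomic; each is a load, an arithmetic step, and a store. The deterministic Python sketch below (illustrative, not part of the original text) replays one bad interleaving of those steps by hand to show how an update can be lost when a producer and a consumer run concurrently:

```python
counter = 5

# Producer and consumer both read counter before either writes it back.
producer_reg = counter      # producer: load       (register = 5)
consumer_reg = counter      # consumer: load       (register = 5)
producer_reg += 1           # producer: increment  (register = 6)
consumer_reg -= 1           # consumer: decrement  (register = 4)
counter = producer_reg      # producer: store      (counter = 6)
counter = consumer_reg      # consumer: store      (counter = 4, overwriting 6)

print(counter)   # 4, although one increment and one decrement should leave 5
```

Which final value survives depends entirely on the order of the stores; this is precisely the race condition that the synchronisation mechanisms of this chapter are designed to prevent.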
(ii) Mutual exclusion
Mutual exclusion is a method of making sure that if one process is accessing shared
modifiable data, the other processes are excluded from doing the same thing.
Formally, while one process is executing on the shared variable, all other processes desiring
to do so at the same time must be kept waiting; when that process has finished executing on
the shared variable, one of the waiting processes should be allowed to proceed. In this
fashion, each process executing on the shared data (variables) excludes all others from doing
so simultaneously. This is called mutual exclusion.
Note that mutual exclusion needs to be enforced only when processes access shared
modifiable data; when processes are performing operations that do not conflict with one
another, they should be allowed to proceed concurrently.
A mutex comes into the picture when two threads work on the same data at the same time.
It acts as a lock and is the most basic synchronisation tool. When a thread attempts to
acquire a mutex, it gains the mutex only if it is available; otherwise the thread is put to sleep.
Mutual exclusion can be enforced at both the hardware and software levels. It can also be
enforced at the kernel level by disabling interrupts for the small number of instructions
involved. This helps to avoid the corruption of shared data structures.
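In a language with a thread library, the acquire-or-sleep behaviour described above is exactly what a mutex object provides. A minimal Python sketch (illustrative, not from the original text) using threading.Lock:

```python
import threading

counter = 0
lock = threading.Lock()     # the mutex protecting `counter`

def worker():
    global counter
    for _ in range(100_000):
        with lock:          # acquire; other threads sleep until release
            counter += 1    # the critical section: one thread at a time

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)   # 400000 -- no updates are lost
```

Without the lock, concurrent read-modify-write sequences on counter could interleave and lose updates; with it, the final count is always exactly 4 × 100,000.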
If several processors share the same memory, a flag is used to indicate whether the resource
is in use, and acquisition of the resource is granted based on its availability. In software,
mutual exclusion can be enforced with busy-waiting, using algorithms such as Dekker's
algorithm, Peterson's algorithm, the black-white bakery algorithm, Szymanski's algorithm
and Lamport's bakery algorithm.
The solutions described above can be used to build the following synchronisation primitives:
•	Locks
•	Readers–writer locks
•	Recursive locks
•	Semaphores
•	Monitors
•	Message passing
•	Tuple space
Several forms of mutual exclusion have side-effects. For instance, classic semaphores permit
deadlocks, in which one process gets a semaphore, another process gets a second semaphore,
and then both wait forever for the other semaphore to be released. Other common side-effects
include starvation, in which a process does not get sufficient resources to run to
completion; priority inversion, in which a higher-priority thread waits for a lower-priority
thread; and high latency.
(iii) The critical process
Consider a system consisting of n processes {P0, P1, ..., Pn−1}. Each process has a segment of
code, called a critical section, in which the process may be changing common variables,
updating a table, writing a file, and so on. The important feature of the system is that, when
one process is executing in its critical section, no other process is to be allowed to execute in
its critical section. That is, no two processes are executing in their critical sections at the
same time. The critical-section problem is to design a protocol that the processes can use to
cooperate. Each process must request permission to enter its critical section. The section of
code implementing this request is the entry section. The critical section may be followed by
an exit section. The remaining code is the remainder section. The general structure of a
typical process Pi is shown below. The entry section and exit section are enclosed in boxes to
highlight these important segments of code.
do {
      entry section
            critical section
      exit section
            remainder section
} while (TRUE);
A solution to the critical-section problem must satisfy the following three requirements:
•	Mutual exclusion: If process Pi is executing in its critical section, then no other
process can be executing in its critical section.
•	Progress: If no process is executing in its critical section and some processes wish to
enter their critical sections, then only those processes that are not executing in their
remainder sections can participate in the decision on which will enter its critical section
next, and this selection cannot be postponed indefinitely.
•	Bounded waiting: There exists a bound, or limit, on the number of times that other
processes are allowed to enter their critical sections after a process has made a
request to enter its critical section and before that request is granted.
We assume that each process executes at a nonzero speed. However, we can make no assumption concerning the relative speed of the n processes. At a given point in time, many kernel-mode processes may be active in the operating system. As a result, the code implementing an operating system (kernel code) is subject to several possible race conditions. Consider, for instance, a kernel data structure that maintains a list of all open files in the system. This list must be modified when a new file is opened or closed (adding the file to the list or removing it from the list). If two processes were to open files at the same time, the separate updates to this list could result in a race condition. Other kernel data structures that are prone to possible race conditions include structures for maintaining memory allocation, for maintaining process lists, and for interrupt handling. It is up to kernel developers to ensure that the operating system is free from such race conditions. Two general approaches are used to handle critical sections in operating systems:
• pre-emptive kernels and
• nonpreemptive kernels.
A pre-emptive kernel allows a process to be pre-empted while it is running in kernel mode. A nonpreemptive kernel does not allow a process running in kernel mode to be pre-empted; a kernel-mode process will run until it exits kernel mode, blocks, or voluntarily yields control of the CPU.
Obviously, a nonpreemptive kernel is essentially free from race conditions on kernel data structures, as only one process is active in the kernel at a time. We cannot say the same about pre-emptive kernels, so they must be carefully designed to ensure that shared kernel data is free from race conditions.
Peterson's Solution
Peterson's solution is a classic software-based solution to the critical-section problem. It is presented here because it provides a good algorithmic description of solving the critical-section problem. Peterson's solution is restricted to two processes that alternate execution between their critical sections and remainder sections. The processes are numbered P0 and P1. For convenience, when presenting Pi, we use Pj to denote the other process; that is, j = 1 - i.
Peterson's solution requires two data items to be shared between the two processes:
• int round
• boolean sign[2]
The variable round indicates whose turn it is to enter the critical section. That is, if round == i, then process Pi is allowed to execute in its critical section. The sign array is used to indicate whether a process is ready to enter its critical section. For instance, if sign[i] is true, this value indicates that Pi is ready to enter its critical section. With an explanation of these data structures complete, we are now ready to describe the algorithm, shown below:
do {
    sign[i] = TRUE;
    round = j;
    while (sign[j] && round == j)
        ; // do nothing
    critical section
    sign[i] = FALSE;
    remainder section
} while (TRUE);
To enter the critical section, process Pi first sets sign[i] to true and then sets round to the value j, thereby asserting that if the other process wishes to enter the critical section, it can do so. If both processes try to enter at the same time, round will be set to both i and j at roughly the same time. Only one of these assignments will last; the other will occur but will be overwritten immediately. The eventual value of round decides which of the two processes is allowed to enter its critical section first.
(iv) Synchronisation Hardware
In this section, we explore several more solutions to the critical-section problem, using techniques ranging from hardware instructions to software-based APIs available to application programmers. All these solutions are based on the premise of locking; however, as we shall see, the design of such locks can be quite sophisticated. Hardware features can make a programming task easier and improve system efficiency.
The critical-section problem could be solved simply in a uniprocessor environment if we could prevent interrupts from occurring while a shared variable was being modified. In this way, we could be sure that the current sequence of instructions would be allowed to execute in order without pre-emption. No other instructions would run, so no unexpected changes
could be made to the shared variable. This is the approach taken by nonpreemptive kernels. Unfortunately, this solution is not as feasible in a multiprocessor environment. Disabling interrupts on a multiprocessor can be time consuming, as the message must be passed to all the processors.
This message passing delays entry into each critical section, and system efficiency decreases. Also consider the effect on a system's clock if the clock is kept updated by interrupts. Many modern computer systems therefore provide special hardware instructions that allow us either to test and modify the content of a word or to swap the contents of two words atomically, that is, as one uninterruptible unit.
The TestAndSet() instruction can be defined as shown below:
boolean TestAndSet(boolean *target) {
    boolean pq = *target;
    *target = TRUE;
    return pq;
}
The important characteristic is that this instruction is executed atomically. Thus, if two TestAndSet() instructions are executed concurrently (each on a different CPU), they will be executed sequentially in some arbitrary order. If the machine supports the TestAndSet() instruction, then we can implement mutual exclusion by declaring a boolean variable lock, initialised to false. The structure of process Pi is shown below:
do {
    while (TestAndSet(&lock))
        ; // do nothing
    // critical section
    lock = FALSE;
    // remainder section
} while (TRUE);
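A software sketch of this spinlock is shown below. Since Python has no atomic test-and-set instruction, an internal threading.Lock stands in for the hardware's atomicity; the TASLock class name and the thread and iteration counts are illustrative choices, not part of any standard API.

```python
import sys
import threading

sys.setswitchinterval(1e-4)   # switch threads often so spin-waits stay short

class TASLock:
    """Spinlock built on an emulated atomic TestAndSet()."""
    def __init__(self):
        self._flag = False
        self._atomic = threading.Lock()   # stands in for hardware atomicity

    def test_and_set(self):
        # Read the old value and set the flag to True as one indivisible step.
        with self._atomic:
            old = self._flag
            self._flag = True
            return old

    def acquire(self):
        while self.test_and_set():        # spin while the lock is already held
            pass

    def release(self):
        self._flag = False

lock = TASLock()
counter = 0
ITERS = 5000

def worker():
    global counter
    for _ in range(ITERS):
        lock.acquire()      # entry section
        counter += 1        # critical section
        lock.release()      # exit section

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 20000: every increment survived
```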
The Swap() instruction, in contrast to the TestAndSet() instruction, operates on the contents of two words; it is defined as shown below:
void Swap(boolean *x, boolean *y) {
    boolean temp = *x;
    *x = *y;
    *y = temp;
}
Like the TestAndSet() instruction, it is executed atomically. If the machine supports the Swap() instruction, then mutual exclusion can be provided as follows. A global boolean variable lock is declared and is initialised to false. In addition, each process has a local boolean variable key. The structure of process Pi is shown below:
do {
    key = TRUE;
    while (key == TRUE)
        Swap(&lock, &key);
    // critical section
    lock = FALSE;
    // remainder section
} while (TRUE);
Although these algorithms satisfy the mutual-exclusion requirement, they do not satisfy the bounded-waiting requirement. Below is another algorithm using the TestAndSet() instruction that satisfies all the critical-section requirements. The common data structures are:
• boolean waiting[num];
• boolean lock;
These data structures are initialised to false. To prove that the mutual-exclusion requirement is met, we note that process Pi can enter its critical section only if either waiting[i] == false or key == false. The value of key can become false only if TestAndSet() is executed. The first process to execute TestAndSet() will find key == false; all others must wait.
do {
    waiting[i] = TRUE;
    key = TRUE;
    while (waiting[i] && key)
        key = TestAndSet(&lock);
    waiting[i] = FALSE;
    // critical section
    k = (i + 1) % num;
    while ((k != i) && !waiting[k])
        k = (k + 1) % num;
    if (k == i)
        lock = FALSE;
    else
        waiting[k] = FALSE;
    // remainder section
} while (TRUE);
The variable waiting[i] can become false only if another process leaves its critical section; only one waiting[i] is set to false, maintaining the mutual-exclusion requirement. To prove that the progress requirement is met, we note that the arguments presented for mutual exclusion also apply here, since a process exiting the critical section either sets lock to false or sets waiting[k] to false. Both allow a process that is waiting to enter its critical section to proceed. To prove that the bounded-waiting requirement is met, we note that, when a process leaves its critical section, it scans the array waiting in the cyclic ordering (i + 1, i + 2, ..., num - 1, 0, ..., i - 1). It designates the first process in this ordering that is in the entry section (waiting[k] == true) as the next one to enter the critical section. Any process waiting to enter its critical section will thus do so within num - 1 turns. Unfortunately for hardware designers, implementing atomic TestAndSet() instructions on multiprocessors is not a trivial task.
(v) Semaphores
A semaphore, in its basic form, is a protected integer variable that can grant or restrict access to shared resources in a multiprocessing environment. The various hardware-based solutions to the critical-section problem are complicated for application programmers to use. To overcome this difficulty, we can use a synchronisation tool known as a semaphore.
A semaphore X is an integer variable that, apart from initialisation, is accessed only via two standard atomic operations: wait() and signal(). The wait() operation was originally termed P (from the Dutch proberen, "to test"); signal() was originally termed V (from verhogen, "to increment"). The definition of wait() is as follows:
wait(X) {
    while (X <= 0)
        ; // no-op
    X--;
}
The definition of signal() is as follows:
signal(X) {
    X++;
}
All modifications to the integer value of the semaphore in the wait() and signal() operations must be executed indivisibly. That is, when one process modifies the semaphore value, no other process can simultaneously modify that same semaphore value. In the case of wait(X), the testing of the integer value of X (X <= 0), and its possible modification (X--), must also be executed without interruption.
The two most basic kinds of semaphores are counting semaphores and binary semaphores. The value of a counting semaphore can range over an unrestricted domain, describing multiple instances of a resource. A binary semaphore has two possible states; its value can range only between 0 and 1. On some systems, binary semaphores are known as mutex locks, as they are locks that provide mutual exclusion. Counting semaphores can be used to control access to a given resource consisting of a finite number of instances, while binary semaphores can be used to handle the critical-section problem.
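The counting-semaphore behaviour described above can be illustrated with Python's threading.Semaphore, whose acquire() and release() methods play the roles of wait() and signal(). The pool size of 3, the 10 threads, and the short sleep that simulates using the resource are arbitrary illustration values.

```python
import threading
import time

pool = threading.Semaphore(3)   # a resource with 3 identical instances
active = 0                      # how many threads hold an instance right now
peak = 0                        # the most instances ever held at once
guard = threading.Lock()        # binary semaphore protecting the counters

def use_resource():
    global active, peak
    pool.acquire()              # wait(): blocks once all 3 instances are taken
    with guard:
        active += 1
        peak = max(peak, active)
    time.sleep(0.01)            # simulate using the resource
    with guard:
        active -= 1
    pool.release()              # signal(): return the instance to the pool

threads = [threading.Thread(target=use_resource) for _ in range(10)]
for t in threads: t.start()
for t in threads: t.join()
print(peak <= 3)  # True: never more than 3 concurrent users
```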
(vi) Classic Problems of Synchronisation
In this section, we present a number of synchronisation problems as examples of a large class of concurrency-control problems. These problems are used for testing nearly every newly proposed synchronisation scheme. In our solutions to the problems, we use semaphores for synchronisation.
do {
    // produce an item in nextp
    wait(empty);
    wait(mutex);
    // add nextp to buffer
    signal(mutex);
    signal(full);
} while (TRUE);
1. The Bounded-Buffer Problem: The bounded-buffer problem is commonly used to illustrate the power of synchronisation primitives. We present here a general structure of this scheme without committing ourselves to any particular implementation. We assume that the pool consists of n buffers, each capable of holding one item. The mutex semaphore provides mutual exclusion for accesses to the buffer pool and is initialised to the value 1. The empty and full semaphores count the number of empty and full buffers, respectively. The semaphore empty is initialised to the value n; the semaphore full is initialised to the value 0. The code for the producer process is shown above and the code for the consumer process is given below:
do {
    wait(full);
    wait(mutex);
    // remove an item from buffer to nextc
    signal(mutex);
    signal(empty);
    // consume the item in nextc
} while (TRUE);
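The producer and consumer loops can be combined into a runnable sketch using Python semaphores. The buffer size of 5 and the 100 items are arbitrary illustration values; a deque stands in for the buffer pool.

```python
import threading
from collections import deque

n = 5                            # number of buffer slots
buffer = deque()
mutex = threading.Semaphore(1)   # mutual exclusion on the buffer
empty = threading.Semaphore(n)   # counts empty slots, initialised to n
full = threading.Semaphore(0)    # counts full slots, initialised to 0
consumed = []

def producer():
    for item in range(100):      # produce an item in nextp
        empty.acquire()          # wait(empty)
        mutex.acquire()          # wait(mutex)
        buffer.append(item)      # add nextp to the buffer
        mutex.release()          # signal(mutex)
        full.release()           # signal(full)

def consumer():
    for _ in range(100):
        full.acquire()           # wait(full)
        mutex.acquire()          # wait(mutex)
        consumed.append(buffer.popleft())   # remove an item to nextc
        mutex.release()          # signal(mutex)
        empty.release()          # signal(empty)

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start(); p.join(); c.join()
print(consumed == list(range(100)))  # True: every item consumed, in order
```

Note how wait(empty) blocks the producer when the buffer is full and wait(full) blocks the consumer when it is empty, so neither busy-waits.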
2. The Readers-Writers Problem: A database is shared among several processes running concurrently, some of which only read the database and some of which update it. The processes performing reads and writes are called readers and writers, respectively. If two readers access the shared data simultaneously, no problems arise; problems arise when a writer and some other process (reader or writer) access the database at the same time. To avoid these problems, writers are given exclusive access to the shared database while writing. This synchronisation problem is referred to as the readers-writers problem.
This problem has several variations. The simplest one requires that no reader should wait for other readers to finish simply because a writer is waiting. The second one requires that, once a writer is ready, that writer performs its write as soon as possible.
In the solution to the first readers-writers problem, the reader processes share the following data structures:
a. semaphore a, b
b. int RC
The semaphores a and b are initialised to 1; RC is initialised to 0. The semaphore b is common to both reader and writer processes. The semaphore a is used to ensure mutual exclusion when the variable RC is updated. The semaphore b is also used by the first or last reader that enters or exits the critical section; it is not used by readers who enter or exit while other readers are in their critical sections. The code for a writer process is shown below:
do {
    wait(b);
    // writing is performed
    signal(b);
} while (TRUE);
The code for a reader process is given below:
do {
    wait(a);
    RC++;
    if (RC == 1)
        wait(b);
    signal(a);
    // reading is performed
    wait(a);
    RC--;
    if (RC == 0)
        signal(b);
    signal(a);
} while (TRUE);
Note that, if a writer is in the critical section and n readers are waiting, then one reader is queued on b, and n - 1 readers are queued on a. Also note that, when a writer executes signal(b), it resumes the execution of either the waiting readers or a single waiting writer; the selection is made by the scheduler. The readers-writers problem and its solutions have been generalised to provide reader-writer locks on some systems. Acquiring a reader-writer lock requires specifying the mode of the lock: either read or write access. When a process only wishes to read shared data, it acquires the reader-writer lock in read mode; a process wishing to modify the shared data must request the lock in write mode. Multiple processes are permitted to concurrently acquire a reader-writer lock in read mode; only one process may acquire the lock for writing, as exclusive access is required for writers.
Reader-writer locks are most useful in the following situations:
a. In applications where it is easy to identify which processes only write shared data and which only read shared data.
b. In applications that have more readers than writers. This is because reader-writer locks generally require more overhead to establish than semaphores or mutual-exclusion locks, and this overhead is compensated for by the increased concurrency of allowing multiple readers.
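The first readers-writers protocol above translates directly into Python, with threading.Semaphore objects standing in for the semaphores a and b. The shared value, the increments of 10, and the thread counts are illustrative only.

```python
import threading

a = threading.Semaphore(1)   # protects the read count RC
b = threading.Semaphore(1)   # grants writers (and the first reader) access
RC = 0                       # number of readers currently reading
value = 0                    # the shared "database"
reads = []

def reader():
    global RC
    a.acquire()              # wait(a)
    RC += 1
    if RC == 1:
        b.acquire()          # first reader locks writers out
    a.release()              # signal(a)
    reads.append(value)      # reading is performed
    a.acquire()              # wait(a)
    RC -= 1
    if RC == 0:
        b.release()          # last reader lets writers back in
    a.release()              # signal(a)

def writer():
    global value
    b.acquire()              # wait(b)
    value += 10              # writing is performed
    b.release()              # signal(b)

threads = [threading.Thread(target=writer) for _ in range(3)]
threads += [threading.Thread(target=reader) for _ in range(5)]
for t in threads: t.start()
for t in threads: t.join()
print(value)  # 30: the three writers ran one at a time
```

Each reader observes a consistent snapshot (0, 10, 20, or 30), since no writer can run while any reader holds b.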
3. The Dining-Philosophers Problem: Consider five philosophers who spend their lives thinking and eating, sharing a circular table with five chairs, each belonging to one philosopher. The table has a bowl of rice and is laid with five single chopsticks (Figure 2.2.1). When a philosopher thinks, she does not interact with her colleagues. From time to time, a philosopher becomes hungry and tries to pick up the two chopsticks that are closest to her (the chopsticks between her and her left and right neighbours). A philosopher may pick up only one chopstick at a time, and she cannot pick up a chopstick that is already in the hand of a neighbour. When a hungry philosopher has both her chopsticks at the same time, she eats without releasing them. When she has finished eating, she puts down both of her chopsticks and starts thinking again.
Figure 2.2.1 The situation of the dining philosophers
The dining-philosophers problem is considered a classic synchronisation problem. It is a simple representation of the need to allocate several resources among several processes in a deadlock-free and starvation-free manner.
One simple solution is to represent each chopstick with a semaphore. A philosopher tries to grab a chopstick by executing a wait() operation on that semaphore; she releases her chopsticks by executing the signal() operation on the appropriate semaphores. Thus, the shared data are the semaphore array chopstick[5], in which all the elements are initialised to 1. The structure of philosopher a is shown below:
do {
    wait(chopstick[a]);
    wait(chopstick[(a + 1) % 5]);
    // eat
    signal(chopstick[a]);
    signal(chopstick[(a + 1) % 5]);
    // think
} while (TRUE);
Although this solution guarantees that no two neighbours are eating simultaneously, it nevertheless must be rejected because it could create a deadlock. Suppose that all five philosophers become hungry at the same time and each grabs her left chopstick. Now all the elements of chopstick will be equal to 0. When each philosopher tries to grab her right chopstick, she will be delayed forever. Several possible remedies to the deadlock problem are listed next. In the section on monitors, we present a solution to the dining-philosophers problem that ensures freedom from deadlock.
• Allow at most four philosophers to be sitting simultaneously at the table.
• Allow a philosopher to pick up her chopsticks only if both chopsticks are available (to do this, she must pick them up in a critical section).
• Use an asymmetric solution; that is, an odd-numbered philosopher picks up first her left chopstick and then her right chopstick, whereas an even-numbered philosopher picks up her right chopstick and then her left chopstick.
Finally, any satisfactory solution to the dining-philosophers problem must guard against the possibility that one of the philosophers will starve to death. A deadlock-free solution does not necessarily eliminate the possibility of starvation.
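The asymmetric remedy can be sketched as follows: odd-numbered philosophers pick up their left chopstick first, even-numbered ones their right chopstick first, which breaks the circular wait. The 100 meals per philosopher are an arbitrary illustration value.

```python
import threading

N = 5
chopstick = [threading.Semaphore(1) for _ in range(N)]
meals = [0] * N

def philosopher(a):
    left, right = a, (a + 1) % N
    # The asymmetry: even-numbered philosophers reverse the pickup order,
    # which breaks the circular wait and hence prevents deadlock.
    first, second = (left, right) if a % 2 == 1 else (right, left)
    for _ in range(100):
        chopstick[first].acquire()   # wait() on the first chopstick
        chopstick[second].acquire()  # wait() on the second chopstick
        meals[a] += 1                # eat
        chopstick[second].release()  # signal()
        chopstick[first].release()   # signal()
        # think

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads: t.start()
for t in threads: t.join()
print(meals)  # [100, 100, 100, 100, 100]: everyone eats, no deadlock
```

With the naive all-left-first ordering, this program could hang forever; the reversed order for even philosophers makes a cycle of waiting philosophers impossible.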
(vii) Critical Region
To avoid the problems of shared storage, we must find some way to prohibit more than one process from reading and writing the shared data simultaneously. The part of the program where shared memory is accessed is called the critical section. To avoid flawed results and race conditions, one must place such code in a critical section in each thread. The properties of code that forms a critical section are:
• Code that references one or more variables in a "read-update-write" fashion while any of those variables is possibly being modified by another thread.
• Code that modifies one or more variables that are possibly being referenced in a "read-update-write" fashion by another thread.
• Code that uses a data structure while any part of it is possibly being changed by another thread.
• Code that changes any part of a data structure while it is possibly in use by another thread.
Here, the vital point is that when one process is executing in its critical section, manipulating shared modifiable data, no other process is allowed to execute in its critical section. Thus, the execution of critical sections by the processes is mutually exclusive in time.
(viii) Monitors
Although semaphores provide a convenient and effective mechanism for process synchronisation, their incorrect use can result in timing errors that are difficult to detect. We saw an example of such errors in the use of counters in our solution to the producer-consumer problem. In the example in the background section of this chapter, the timing problem happened only rarely, and even then the counter value appeared to be reasonable, off by only 1. Yet the solution is clearly not an acceptable one. It was to deal with such errors that semaphores were introduced in the first place. Unfortunately, such timing errors can still occur when semaphores are used.
To illustrate how, let us review the semaphore solution to the critical-section problem. All processes share a semaphore variable mutex, which is initialised to 1. Each process must execute wait(mutex) before entering the critical section and signal(mutex) afterward. If this sequence is not observed, two processes may be in their critical sections simultaneously. Note that these difficulties will arise even if only a single process is not well behaved.
Suppose a process interchanges the order in which the wait() and signal() operations on the semaphore mutex are executed, resulting in the following execution:
signal(mutex);
critical section
wait(mutex);
In this case, several processes may be executing in their critical sections simultaneously, violating the mutual-exclusion requirement. This error may be discovered only if several processes are simultaneously active in their critical sections. Note that this situation may not always be reproducible.
Suppose a process replaces signal(mutex) with wait(mutex). That is, it executes
wait(mutex);
critical section
wait(mutex);
In this case, a deadlock will occur.
Suppose a process omits the wait(mutex), or the signal(mutex), or both. In this case, either mutual exclusion is violated or a deadlock will occur. These examples show that several types of errors can be generated easily when programmers use semaphores incorrectly to solve the critical-section problem. Similar problems may arise in other synchronisation models. To deal with such errors, researchers have developed high-level language constructs.
(ix) Operating System Synchronisation
On the basis of synchronisation, processes are categorised as one of the following two types:
• Independent process: Execution of one process does not affect the execution of other processes.
• Cooperative process: Execution of one process affects the execution of other processes. The process synchronisation problem arises with cooperative processes because resources are shared among them.
• Critical section problem: A critical section is a code segment that can be accessed by only one process at a time. The critical section contains shared variables which need to be synchronised to maintain the consistency of data variables.
do {
    entry section
    critical section
    exit section
    remainder section
} while (TRUE);
In the entry section, the process requests entry into the critical section.
Any solution to the critical-section problem must satisfy three requirements:
• Mutual exclusion: If a process is executing in its critical section, then no other process is allowed to execute in the critical section.
• Progress: If no process is executing in its critical section, then a process wishing to enter cannot be blocked indefinitely by processes executing in their remainder sections.
• Bounded waiting: There is a bound on the number of times other processes may enter the critical section after a process has made a request to enter it.
• Peterson's solution: Peterson's solution is a classic software-based solution to the critical-section problem.
In Peterson's solution, we have two shared variables:
• boolean flag[i]: Initialised to FALSE; initially, no one is interested in entering the critical section.
• int turn: The process whose turn it is to enter the critical section.
do {
    flag[i] = TRUE;
    turn = j;
    while (flag[j] && turn == j)
        ; // do nothing
    critical section
    flag[i] = FALSE;
    remainder section
} while (TRUE);
Peterson's solution preserves all three conditions:
• Mutual exclusion is assured, as only one process can access the critical section at any time.
• Progress is also assured, as a process outside the critical section does not block other processes from entering the critical section.
• Bounded waiting is preserved, as every process gets a fair chance.
The disadvantages of Peterson's solution:
• It involves busy waiting.
• It is limited to two processes.
Test-And-Set: Test-And-Set is a hardware solution to the synchronisation problem. In Test-And-Set, we have a shared lock variable which can take either of two values, 0 or 1.
• Before entering the critical section, a process inquires about the lock. If it is locked, it keeps waiting until it becomes free; if it is not locked, it takes the lock and executes the critical section.
• In Test-And-Set, mutual exclusion and progress are preserved, but bounded waiting cannot be preserved.
(x) Atomic Transactions
The mutual exclusion of critical sections ensures that the critical sections are executed atomically. Consider, for instance, a funds transfer, in which one account is debited and another is credited. Clearly, it is essential for data consistency that either both the credit and the debit occur, or neither occurs. Consistency of data, along with storage and retrieval of data, is a concern often associated with database systems.
A major issue in processing transactions is preserving atomicity despite the possibility of failures within the computer system. A transaction is simply a sequence of read and write operations terminated by either a commit operation or an abort operation. A commit operation signifies that the transaction has completed its execution successfully, whereas an abort operation signifies that the transaction had to cease its normal execution due to some error or a failure of the system. An aborted transaction must have no effect on the state of the data that it has already updated. Thus, the state of the data accessed by an aborted transaction must be restored to what it was just before the transaction started executing; such a transaction is said to have been rolled back.
There are various types of storage media:
• Volatile storage: Information residing in volatile storage does not usually survive system crashes. Access to volatile storage is extremely fast, both because of the speed of the memory itself and because it is possible to access any data item in volatile storage directly.
• Non-volatile storage: Information residing in non-volatile storage usually survives system crashes, although information can still be lost through the failure of disks and tapes. Non-volatile storage is slower than volatile storage by several orders of magnitude.
• Stable storage: Information residing in stable storage is never lost. To implement an approximation of such storage, we need to replicate information in several non-volatile storage caches with independent failure modes and to update the information in a controlled manner.
• Log-based recovery: One way to ensure atomicity is to record, on stable storage, information describing all the modifications made by the transaction to the various data it accesses. Here, the system maintains, on stable storage, a data structure known as the log. Each log record describes a single write operation of a transaction and has the following fields:
- Transaction name: the unique name of the transaction that performed the write operation.
- Data item name: the unique name of the data item written.
- Old value: the value of the data item prior to the write operation.
- New value: the value that the data item will have after the write.
• Checkpoints: When a system failure occurs, we must consult the log to determine those transactions that need to be redone and those that need to be undone. In principle, we need to search the entire log to make these determinations. There are two major drawbacks to this approach:
- The searching process is time consuming.
- Most of the transactions that, according to our algorithm, need to be redone have already actually updated the data that the log says they need to modify.
To reduce these overheads, we introduce the concept of checkpoints. During execution, the system maintains the write-ahead log. In addition, the system periodically performs checkpoints that require the following sequence of actions:
- Output all log records currently residing in volatile storage (usually main memory) onto stable storage.
- Output all modified data residing in volatile storage to stable storage.
- Output a log record <checkpoint> onto stable storage.
The presence of a <checkpoint> record in the log allows the system to streamline its recovery procedure.
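The log-record fields and rollback behaviour described above can be sketched as a toy write-ahead log. The dictionary-based records and the function names write and rollback are illustrative, not drawn from any real database system.

```python
# A toy write-ahead log illustrating the record fields described in the text:
# transaction name, data item name, old value, and new value.

data = {"A": 100, "B": 50}   # the current state of the data items
log = []                     # the log: a list of records on "stable storage"

def write(txn, item, new_value):
    """Append a log record *before* applying the write (the write-ahead rule)."""
    log.append({"txn": txn, "item": item,
                "old": data[item], "new": new_value})
    data[item] = new_value

def rollback(txn):
    """Undo an aborted transaction by restoring old values in reverse order."""
    for record in reversed([r for r in log if r["txn"] == txn]):
        data[record["item"]] = record["old"]

# A funds transfer that is aborted midway: debit A, credit B, then roll back.
write("T1", "A", 90)
write("T1", "B", 60)
rollback("T1")          # the aborted transaction leaves no trace
print(data)             # {'A': 100, 'B': 50}
```

Because every record carries the old value, an aborted transaction can always be undone; the new values would similarly allow a committed transaction to be redone after a crash.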
Self-assessment Questions
1) A cooperating process can either be _______ to share data only through files or messages, or directly share a logical address space (that is, both code and data).
a) Allowed
b) Blocked
c) Deleted
d) Ignored
2) Which one of the following is a type of storage media?
a) Involatile Storage
b) Volatile Storage
c) Instable Storage
d) Variable Storage
3) The ______ exclusion of critical sections guarantees that the critical sections are
executed atomically.
a) Nominal
b) Lateral
c) Mutual
d) Parallel
4) The different name of the data item inscribed, is known as ______.
a) Transaction name
b) Data item name
c) Old value
d) New Value
5) The critical-section problem might be resolved simply in a _______ environment if
we could avoid interrupts from happening while a shared variable was being altered.
a) Uniprocessor
b) Multiprocessor
c) Arbitrary
d) Open
6) A semaphore X is an _____ variable that, aside from initialization, is accessible only
through two standard atomic operations: wait () and signal ().
a) Integer
b) Function
c) Program
d) Value
7) The ______ is generally used to illustrate the power of synchronisation primitives.
a) Bounded-buffer problem
b) Symmetric problem
c) Asymmetric problem
d) The Readers-Writers Problem
8) Information available in _______ storage normally manages system crashes.
a) Volatile
b) Non-Volatile
c) Stable
d) Unstable
2.2.2 Deadlocks
A deadlock is a condition in which two programs that need to share the same resources prevent each other from functioning by making the required resources inaccessible, thereby halting both programs.
(i) System Model
A system consists of a finite number of resources to be distributed among a number of competing processes. The resources are partitioned into several types, each consisting of some number of identical instances. Memory space, CPU cycles, files, and I/O devices are examples of resource types. If a system has two CPUs, then the resource type CPU has two instances. Likewise, the resource type printer may have five instances.
If a process requests an instance of a resource type, the allocation of any instance of that type will satisfy the request. If it does not, then the instances are not identical, and the resource-type classes have not been defined properly. For example, a system may have two printers. These two printers may be defined to be in the same resource class if no one cares which printer prints which output. However, if one printer is on the eighth floor and the other is in the basement, then people on the eighth floor may not see both printers as equivalent, and separate resource classes may need to be defined for each printer.
In the normal mode of operation, a process may use a resource in only the following sequence:
• Request: If the request cannot be granted immediately, then the requesting process must wait until it can obtain the resource.
• Use: The process can operate on the resource.
• Release: The process releases the resource.
The request and release of resources are system calls. Examples are the request() and release() device, open() and close() file, and allocate() and free() memory system calls. Request and release of resources that are not managed by the operating system can be accomplished through the wait() and signal() operations on semaphores or through acquisition and release of a mutex lock. For each use of a kernel-managed resource by a process or thread, the operating system checks to make sure that the process has requested and has been allocated the resource. A system table records whether each resource is free or allocated; for each resource that is allocated, the table records the process to which it is allocated. If a process requests a resource that is currently allocated to another process, it can be added to a queue of processes waiting for this resource.
A set of processes is in a deadlock state when every process in the set is waiting for an event
that can be caused only by another process in the set. The events with which we are mainly
concerned here are resource acquisition and release. The resources may be either physical
resources (for example, printers, tape drives) or logical resources (for example, files,
semaphores).
To illustrate a deadlock state, consider a system with three CD-RW drives. Suppose each of
three processes holds one of these CD-RW drives. If each process now requests another
drive, the three processes will be in a deadlock state. Each is waiting for the event "CD-RW
is released," which can be caused only by one of the other waiting processes.
Deadlocks may also involve different resource types. For example, consider a system with
one printer and one DVD drive. Suppose process P1 is holding the DVD and process P2 is
holding the printer. If P1 requests the printer and P2 requests the DVD drive, a deadlock
occurs.
A programmer who is developing multi-threaded applications must pay particular attention
to this problem. Multi-threaded programs are good candidates for deadlock because
multiple threads can compete for shared resources.
(ii)
Deadlock Characterisation
In a deadlock, processes never finish executing, and system resources are tied up,
preventing other jobs from starting.
Essential Conditions: A deadlock situation can arise if the following four conditions hold
simultaneously in a system:
•
Mutual exclusion: At least one resource must be held in a non-sharable mode; that
is, only one process at a time can use the resource. If another process requests that
resource, the requesting process must be delayed until the resource has been released.
Deadlock With Mutex Locks
Let us see how deadlock can occur in a multi-threaded pthread program using mutex
locks. The pthread_mutex_init() function initialises an unlocked mutex. Mutex locks are
acquired and released using pthread_mutex_lock() and pthread_mutex_unlock(),
respectively. If a thread attempts to acquire a locked mutex, the call to
pthread_mutex_lock() blocks the thread until the owner of the mutex lock invokes
pthread_mutex_unlock().
Two mutex locks are created in the code example:
/* Create and initialise the mutex locks */
pthread_mutex_t first_mutex;
pthread_mutex_t second_mutex;
pthread_mutex_init(&first_mutex, NULL);
pthread_mutex_init(&second_mutex, NULL);
Next, two threads, thread_one and thread_two, are created; both threads have access to
both mutex locks. thread_one and thread_two run in the functions do_work_one() and
do_work_two(), respectively. The code below shows a potential deadlock:
/* thread_one runs in this function */
void *do_work_one(void *param)
{
    pthread_mutex_lock(&first_mutex);
    pthread_mutex_lock(&second_mutex);
    /* do some work */
    pthread_mutex_unlock(&second_mutex);
    pthread_mutex_unlock(&first_mutex);
    pthread_exit(0);
}
/* thread_two runs in this function */
void *do_work_two(void *param)
{
    pthread_mutex_lock(&second_mutex);
    pthread_mutex_lock(&first_mutex);
    /* do some work */
    pthread_mutex_unlock(&first_mutex);
    pthread_mutex_unlock(&second_mutex);
    pthread_exit(0);
}
In this example, thread_one tries to acquire the mutex locks in the order (1)
first_mutex, (2) second_mutex, while thread_two attempts to acquire the mutex locks in
the order (1) second_mutex, (2) first_mutex. Deadlock is possible if thread_one acquires
first_mutex while thread_two acquires second_mutex.
Note that, even though deadlock is possible, it will not occur if thread_one is able to acquire
and release the mutex locks first_mutex and second_mutex before thread_two attempts to
acquire them. This example illustrates a problem with handling deadlocks: it is difficult to
identify and test for deadlocks that may occur only under certain timing circumstances.
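The fragments above can be assembled into a complete program; the scaffolding below is our own illustrative completion, not the book's code. Here both threads acquire the locks in the same global order, so the circular wait cannot arise; restoring the reversed acquisition order in the second thread would reintroduce the possible deadlock.

```c
#include <pthread.h>

static pthread_mutex_t first_mutex;
static pthread_mutex_t second_mutex;
static int work_done;            /* updated only while both locks are held */

/* Both threads take first_mutex, then second_mutex. With one global
 * lock order there can be no circular wait, hence no deadlock. */
static void *do_work(void *param)
{
    (void)param;
    pthread_mutex_lock(&first_mutex);
    pthread_mutex_lock(&second_mutex);
    work_done++;                              /* the "do some work" step */
    pthread_mutex_unlock(&second_mutex);
    pthread_mutex_unlock(&first_mutex);
    pthread_exit(0);
}

/* Create both threads, wait for them, and report how many completed. */
int run_demo(void)
{
    pthread_t one, two;
    pthread_mutex_init(&first_mutex, NULL);
    pthread_mutex_init(&second_mutex, NULL);
    pthread_create(&one, NULL, do_work, NULL);
    pthread_create(&two, NULL, do_work, NULL);
    pthread_join(one, NULL);
    pthread_join(two, NULL);
    return work_done;
}
```

Because the deadlocking variant depends on a particular interleaving, a test run of it may well terminate normally; the consistent-order version above terminates on every run.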
•
Hold and wait: A process must be holding at least one resource and waiting to
acquire additional resources that are currently being held by other processes.
•
No pre-emption: Resources cannot be pre-empted; that is, a resource can be
released only voluntarily by the process holding it, after that process has completed
its task.
•
Circular wait: A set {P0, P1, ..., Pn} of waiting processes must exist such that P0 is
waiting for a resource held by P1, P1 is waiting for a resource held by P2, ..., Pn-1 is
waiting for a resource held by Pn, and Pn is waiting for a resource held by P0.
We emphasise that all four conditions must hold for a deadlock to occur. The circular-wait
condition implies the hold-and-wait condition, so the four conditions are not completely
independent.
Resource-Allocation Graph:
Deadlocks can be described more specifically in terms of a directed graph called a system
resource-allocation graph. This graph consists of a set of vertices V and a set of edges E. The
set of vertices V is partitioned into two different types of nodes: P = {P1, P2, ..., Pn}, the set
consisting of all the active processes in the system, and R = {R1, R2, ..., Rm}, the set
consisting of all resource types in the system.
Figure 2.2.2: Resource Allocation Graph
Figure 2.2.2 shows an example of a resource-allocation graph.
•
REQUEST: If the process Pi does not have any outstanding request, then it can
simultaneously request any number of resources R1...Rm.
•
ACQUISITION: If the process Pi has any outstanding requests, then all
those requests can be satisfied simultaneously.
•
RELEASE: If the process Pi does not have any outstanding request, then the held
resources can be released.
Resource Allocation with a deadlock
Here, in Figure 2.2.3, two minimal cycles exist in the system:
P1 → R1 → P2 → R3 → P3 → R2 → P1
P2 → R3 → P3 → R2 → P2
Processes P1, P2, and P3 are deadlocked. Process P2 is waiting for the resource R3, which is
held by process P3. Process P3 is waiting for either process P1 or process P2 to release
resource R2. In addition, process P1 is waiting for process P2 to release resource R1.
Figure 2.2.3: Resource Allocation Graph with a deadlock
Resource Allocation with a cycle and no deadlock
Now consider the resource-allocation graph in Figure 2.2.4. In this example, we also have a
cycle: P1 → R1 → P3 → R2 → P1
However, there is no deadlock. Observe that process P4 may release its instance of resource
type R2. That resource can then be allocated to P3, breaking the cycle.
In short, if a resource-allocation graph does not have a cycle, then the system is not in a
deadlocked state. If there is a cycle, then the system may or may not be in a deadlocked state.
Figure 2.2.4: Resource Allocation Graph with a cycle and no deadlock
(iii) Methods of handling deadlocks
Generally speaking, we can deal with the deadlock problem in one of the following three
ways:
•
Deadlock Prevention and Avoidance: Ensure that the system never enters a
deadlocked state.
•
Deadlock Detection and Recovery: If a deadlock has occurred, detect it and
recover from it.
•
Deadlock Ignorance: Ignore the problem altogether and pretend that deadlocks never occur.
The third solution is the one used by most operating systems, including Windows and UNIX;
it is then up to the application developer to write programs that handle deadlocks.
To ensure that deadlocks never occur, the system can use either a deadlock-prevention or a
deadlock-avoidance scheme. Deadlock prevention provides a set of methods for ensuring
that at least one of the necessary conditions cannot hold. These methods prevent deadlocks
by constraining how requests for resources can be made.
If a system does not employ either a deadlock-prevention or a deadlock-avoidance
algorithm, then a deadlock may arise. In this case, the system can provide an algorithm that
examines the state of the system to determine whether a deadlock has occurred and an
algorithm to recover from the deadlock.
If a system neither ensures that a deadlock will never occur nor provides a mechanism for
deadlock detection and recovery, then we may arrive at a situation in which the system is in
a deadlocked state yet has no way of recognising what has happened. In this case, the
undetected deadlock will result in a decline in the system's performance, because resources
are being held by processes that cannot run and because more and more processes, as they
make requests for resources, will enter a deadlocked state. Eventually, the system will stop
functioning and will need to be restarted manually.
Although this method may not seem to be a feasible approach to the deadlock problem, it is
nonetheless used in most operating systems.
(iv) Deadlock Prevention
For a deadlock to occur, the four necessary conditions (mutual exclusion, no preemption,
hold and wait, and circular wait) must hold. By ensuring that at least one of these
conditions cannot hold, we can prevent the occurrence of a deadlock.
•
Mutual Exclusion: The mutual-exclusion condition must hold for non-sharable
resources. For instance, a printer cannot be simultaneously shared by several
processes. Sharable resources, in contrast, do not require mutually exclusive access
and thus cannot be involved in a deadlock. Read-only files are a good example of a
sharable resource. If several processes attempt to open a read-only file at the same
time, they can be granted concurrent access to the file. A process never needs to wait
for a sharable resource. In general, however, we cannot prevent deadlocks by denying
the mutual-exclusion condition, because some resources are intrinsically non-sharable.
•
Hold and Wait: To guarantee that the hold-and-wait condition never occurs in the
system, we must ensure that, whenever a process requests a resource, it does not hold
any other resources. One protocol that can be used requires each process to request
and be allocated all its resources before it begins execution. We can implement this
provision by requiring that system calls requesting resources for a process precede
all other system calls. An alternative protocol allows a process to request resources
only when it has none. A process may request some resources and use them. Before
it can request any additional resources, however, it must release all the resources
that it is currently allocated.
•
No Preemption: The third necessary condition for deadlocks is that there be no
preemption of resources that have already been allocated. To ensure that this
condition does not hold, we can use the following protocol. If a process is holding
some resources and requests another resource that cannot be immediately allocated
to it (that is, the process must wait), then all resources the process is currently
holding are preempted. In other words, these resources are implicitly released. The
preempted resources are added to the set of resources for which the process is
waiting. The process will be restarted only when it can regain its old resources, as
well as the new ones that it is requesting. Alternatively, if a process requests some
resources, we first check whether they are available. If they are, we allocate them. If
they are not, we check whether they are allocated to some other process that is
waiting for additional resources. If so, we preempt the desired resources from the
waiting process and allocate them to the
requesting process. If the resources are neither available nor held by a waiting
process, the requesting process must wait. While it is waiting, some of its resources
may be preempted, but only if another process requests them. A process can be
restarted only when it is allocated the new resources it is requesting and recovers any
resources that were preempted while it was waiting. This protocol is often applied to
resources whose state can easily be saved and restored later, such as CPU registers
and memory space. It cannot generally be applied to such resources as tape drives
and printers.
•
Circular Wait: To prevent circular wait, impose an ordering on the resources and
ensure that each process requests resources only in that order. Enforcing such an
ordering adds complexity and may lead to poor utilisation of the resources.
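At the level of the earlier pthread example, the hold-and-wait condition can also be broken directly with pthread_mutex_trylock(), so that a thread never blocks while already holding a lock. The helper names lock_both() and unlock_both() below are our own illustrative choices, not a standard API.

```c
#include <pthread.h>
#include <sched.h>

static pthread_mutex_t first_mutex  = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t second_mutex = PTHREAD_MUTEX_INITIALIZER;

/* Acquire both locks without ever waiting while holding one.
 * If second_mutex is busy, release first_mutex and start over,
 * so the hold-and-wait condition can never hold. */
void lock_both(void)
{
    for (;;) {
        pthread_mutex_lock(&first_mutex);
        if (pthread_mutex_trylock(&second_mutex) == 0)
            return;                            /* got both locks */
        pthread_mutex_unlock(&first_mutex);    /* back off ...   */
        sched_yield();                         /* ... and retry  */
    }
}

void unlock_both(void)
{
    pthread_mutex_unlock(&second_mutex);
    pthread_mutex_unlock(&first_mutex);
}
```

The price of this scheme is possible livelock rather than deadlock, which is why the loop yields the CPU before retrying.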
(v)
Deadlock Avoidance
Because most prevention algorithms lead to poor utilisation of resources, they reduce
throughput. A better approach is to avoid deadlocks by knowing in advance how processes
will use resources: which resources are allocated and available, and the future requests and
releases of all the processes. Most avoidance algorithms require prior information about the
maximum resource requirement of each process. Based on this information, the system can
decide whether a process should wait, thus avoiding a circular wait.
A system that is in a safe state (as shown in Figure 2.2.5) stays out of unsafe states and
thereby avoids deadlock. In an unsafe state, the system can no longer guarantee that a
deadlock will not happen. To ensure a safe state, there must exist a safe sequence in which
the processes can be allocated their resources. Deadlock-avoidance algorithms also face the
problem of low resource utilisation, since processes may be made to wait even when the
resources they request are free.
The Resource-Allocation Graph (RAG) is normally used for this purpose. If the RAG has no
cycle, then there is no deadlock. If the RAG has a cycle and each resource type has only a
single instance, then there is a deadlock. When resources have multiple instances, a cycle
does not necessarily imply a deadlock, and the RAG is not sufficient. For such cases, the
Banker's algorithm is used.
Figure 2.2.5: Safe, Unsafe, and Deadlock State Spaces
Banker's Algorithm:
The resource-allocation-graph algorithm is not applicable to a resource-allocation system
with multiple instances of each resource type. The deadlock-avoidance algorithm that we
describe next is applicable to such a system but is less efficient than the resource-allocation-
graph scheme. This algorithm is commonly known as the banker's algorithm.
When a new process enters the system, it must declare the maximum number of instances of
each resource type that it may need. This number may not exceed the total number of
resources in the system. When a user requests a set of resources, the system must determine
whether the allocation of these resources will leave the system in a safe state. If it will, the
resources are allocated; otherwise, the process must wait until some other process releases
sufficient resources.
Several data structures must be maintained to implement the banker's algorithm. These data
structures encode the state of the resource-allocation system. Let n be the number of
processes in the system and m be the number of resource types. We need the following data
structures:
•
Available: A vector of length m indicates the number of available resources of
each type. If Available[j] equals k, there are k instances of resource type Rj
available.
•
Allocation: An n x m matrix defines the number of resources of each type
currently allocated to each process. If Allocation[i][j] equals k, then process Pi is
currently allocated k instances of resource type Rj.
•
Max: An n x m matrix defines the maximum demand of each process. If Max[i][j]
equals k, then process Pi may request at most k instances of resource type Rj.
•
Need: An n x m matrix indicates the remaining resource need of each process. If
Need[i][j] equals k, then process Pi may need k more instances of resource type Rj
to complete its task.
Note that Need[i][j] = Max[i][j] - Allocation[i][j].
These data structures vary over time in both size and value. To simplify the presentation of
the banker's algorithm, we establish some notation. Let X and Y be vectors of length n. We
say that X ≤ Y if and only if X[i] ≤ Y[i] for all i = 1, 2, ..., n. For example, if X = (1,7,3,2)
and Y = (0,3,2,1), then Y ≤ X. In addition, Y < X if Y ≤ X and Y ≠ X. We can treat each row
in the matrices Allocation and Need as a vector and refer to them as Allocation_i and
Need_i. The vector Allocation_i specifies the resources currently allocated to process Pi;
the vector Need_i specifies the additional resources that process Pi may still request to
complete its task.
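As a concrete sketch of how these data structures are used (our own illustration, not from the text; the function name is_safe() and the fixed sizes N and M are assumptions), the safety test at the heart of the banker's algorithm can be written as:

```c
#include <stdbool.h>
#include <string.h>

#define N 5   /* number of processes      */
#define M 3   /* number of resource types */

/* Returns true if some ordering of the processes lets every process
 * obtain its remaining Need, run to completion, and release what it
 * holds; that is, if the state is safe. */
bool is_safe(int available[M], int allocation[N][M], int need[N][M])
{
    int work[M];                      /* Work := Available */
    bool finish[N] = { false };
    memcpy(work, available, sizeof work);

    for (;;) {
        bool progressed = false;
        for (int i = 0; i < N; i++) {
            if (finish[i])
                continue;
            bool can_finish = true;              /* Need_i <= Work ? */
            for (int j = 0; j < M; j++)
                if (need[i][j] > work[j]) { can_finish = false; break; }
            if (can_finish) {
                for (int j = 0; j < M; j++)      /* reclaim Allocation_i */
                    work[j] += allocation[i][j];
                finish[i] = true;
                progressed = true;
            }
        }
        if (!progressed)
            break;
    }
    for (int i = 0; i < N; i++)
        if (!finish[i])
            return false;              /* some process can never finish */
    return true;
}
```

A request is granted only if the state that would result still passes this test; otherwise the requesting process waits.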
(vi) Deadlock Detection
If a system does not employ either a deadlock-prevention or a deadlock-avoidance
algorithm, then a deadlock situation may arise. In this situation, the system must provide:
•
An algorithm that inspects the state of the system to determine if a deadlock has
occurred
•
An algorithm to recover from the deadlock
The detection-and-recovery scheme requires overhead that includes not only the run-time
costs of maintaining the necessary information and executing the detection algorithm but
also the potential losses inherent in recovering from a deadlock.
1. Single Instance of Each Resource Type: If all resources have only a single instance, then
we can define a deadlock-detection algorithm that uses a variant of the resource-allocation
graph, called a wait-for graph. We obtain this graph from the resource-
allocation graph by removing the resource nodes and collapsing the appropriate edges, as
shown in Figure 2.2.6.
More precisely, an edge from Pi to Pj in a wait-for graph implies that process Pi is
waiting for process Pj to release a resource that Pi needs. An edge Pi → Pj exists in a
wait-for graph if and only if the corresponding resource-allocation graph contains two
edges Pi → Rq and Rq → Pj for some resource Rq. Figure 2.2.6 shows a resource-
allocation graph and the corresponding wait-for graph.
Figure 2.2.6: Resource-allocation graph and corresponding wait-for graph
A deadlock exists in the system if and only if the wait-for graph contains a cycle. To
detect deadlocks, the system needs to maintain the wait-for graph and periodically
invoke an algorithm that searches for a cycle in the graph. An algorithm to detect a cycle
in a graph requires on the order of n² operations, where n is the number of vertices in the
graph.
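Such a cycle search can be implemented as a standard depth-first search over the wait-for graph. The sketch below is our own illustration; the adjacency-matrix representation, the MAXP bound, and the function names are assumptions, not part of the text.

```c
#include <stdbool.h>

#define MAXP 16   /* illustrative upper bound on processes */

/* wait_for[i][j] is true when process Pi waits for process Pj.
 * An edge back to a vertex still on the DFS stack is a cycle. */
static bool dfs(int v, int n, bool wait_for[MAXP][MAXP],
                bool visited[MAXP], bool on_stack[MAXP])
{
    visited[v] = true;
    on_stack[v] = true;
    for (int w = 0; w < n; w++) {
        if (!wait_for[v][w])
            continue;
        if (on_stack[w])
            return true;                     /* cycle: deadlock */
        if (!visited[w] && dfs(w, n, wait_for, visited, on_stack))
            return true;
    }
    on_stack[v] = false;
    return false;
}

/* True iff the wait-for graph of n processes contains a cycle. */
bool has_deadlock(int n, bool wait_for[MAXP][MAXP])
{
    bool visited[MAXP] = { false };
    bool on_stack[MAXP] = { false };
    for (int v = 0; v < n; v++)
        if (!visited[v] && dfs(v, n, wait_for, visited, on_stack))
            return true;
    return false;
}
```

Each invocation of the search runs in time linear in the vertices and edges, which is within the n² bound quoted above.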
2. Several Instances of a Resource Type: The wait-for graph scheme is not applicable to a
resource-allocation system with multiple instances of each resource type. We now turn to
a deadlock-detection algorithm that is applicable to such a system. The algorithm
employs several time-varying data structures that are similar to those used in the
banker's algorithm:
a. Available: A vector of length m shows the number of available resources of each
type.
b. Allocation: An n x m matrix describes the number of resources of each type
presently allocated to each process.
c. Request: An n x m matrix shows the current request of each process. If
Request[i][j] equals k, then process Pi is requesting k more instances of
resource type Rj.
The ≤ relation between two vectors was defined earlier in this chapter. To simplify the
notation, we again treat the rows in the matrices Allocation and Request as vectors; we
refer to them as Allocation_i and Request_i. The detection algorithm described here
simply investigates every possible allocation sequence for the processes that remain to
be completed.
1. Let Work and Finish be vectors of length m and n, respectively. Initialise Work =
Available. For i = 0, 1, ..., n-1, if Allocation_i ≠ 0, then Finish[i] = false; otherwise,
Finish[i] = true.
2. Find an index i such that both
a. Finish[i] == false
b. Request_i ≤ Work
If no such i exists, go to step 4.
3. Work = Work + Allocation_i
Finish[i] = true
Go to step 2.
4. If Finish[i] == false for some i, 0 ≤ i < n, then the system is in a deadlocked state.
Moreover, if Finish[i] == false, then process Pi is deadlocked.
This algorithm requires an order of m × n² operations to detect whether the system is in a
deadlocked state.
You may wonder why we reclaim the resources of process Pi (in step 3) as soon as we
determine that Request_i ≤ Work (in step 2b). We know that Pi is currently not involved
in a deadlock (since Request_i ≤ Work). Thus, we take an optimistic attitude and assume
that Pi will require no more resources to complete its task; it will soon return all its
currently allocated resources to the system. If the assumption is incorrect, a deadlock may
occur later. That deadlock will be detected the next time the deadlock-detection algorithm
is invoked.
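The four steps can be sketched directly in C. Everything below (the function detect_deadlock(), the fixed sizes N and M, and the returned count) is our own illustration of the algorithm, not code from the text.

```c
#include <stdbool.h>
#include <string.h>

#define N 5   /* processes      */
#define M 3   /* resource types */

/* Runs the multi-instance detection algorithm. Marks deadlocked
 * processes in deadlocked[] and returns how many there are. */
int detect_deadlock(int available[M], int allocation[N][M],
                    int request[N][M], bool deadlocked[N])
{
    int work[M];
    bool finish[N];
    memcpy(work, available, sizeof work);   /* Step 1: Work := Available */

    /* A process holding no resources cannot be part of a deadlock. */
    for (int i = 0; i < N; i++) {
        finish[i] = true;
        for (int j = 0; j < M; j++)
            if (allocation[i][j] != 0)
                finish[i] = false;
    }

    /* Steps 2-3: repeatedly find Pi with Request_i <= Work,
     * optimistically assume it finishes, and reclaim its allocation. */
    bool progressed = true;
    while (progressed) {
        progressed = false;
        for (int i = 0; i < N; i++) {
            if (finish[i])
                continue;
            bool fits = true;
            for (int j = 0; j < M; j++)
                if (request[i][j] > work[j]) { fits = false; break; }
            if (fits) {
                for (int j = 0; j < M; j++)
                    work[j] += allocation[i][j];
                finish[i] = true;
                progressed = true;
            }
        }
    }

    /* Step 4: any process still unfinished is deadlocked. */
    int count = 0;
    for (int i = 0; i < N; i++) {
        deadlocked[i] = !finish[i];
        if (deadlocked[i])
            count++;
    }
    return count;
}
```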
(vii) Recovery from Deadlocks
When a detection algorithm determines that a deadlock exists, several alternatives are
available. One option is to inform the operator that a deadlock has occurred and let the
operator deal with the deadlock manually. Another is to let the system recover from the
deadlock automatically. There are two options for breaking a deadlock. One is simply to
terminate one or more processes to break the circular wait. The other is to pre-empt some
resources from one or more of the deadlocked processes.
Process Termination:
To eliminate deadlocks by aborting a process, we use one of two methods. In both
methods, the system reclaims all resources allocated to the terminated processes.
•
Abort all deadlocked processes: This method clearly breaks the deadlock cycle, but at
great expense; the deadlocked processes may have computed for a long time, and the
results of these partial computations must be discarded and will probably have to be
recomputed later.
•
Abort one process at a time until the deadlock cycle is eliminated: This method incurs
considerable overhead, since after each process is aborted, a deadlock-detection
algorithm must be invoked to determine whether any processes are still deadlocked.
Aborting a process may not be easy. If the process was in the middle of updating a file,
terminating it will leave that file in an incorrect state. Similarly, if the process was in the
middle of printing data on a printer, the system must reset the printer to a correct state
before printing the next job.
If the partial-termination method is used, then we must determine which deadlocked
process (or processes) should be terminated. This determination is a policy decision,
similar to CPU-scheduling decisions. We should abort those processes whose termination
will incur the minimum cost. Unfortunately, the term minimum cost is not a precise one.
Many factors may affect which process is chosen, including:
•
What is the priority of the process?
•
How long has the process computed, and how much longer will it compute before
completing its assigned task?
•
How many and what types of resources has the process used? (For example, are
the resources simple to pre-empt?)
•
How many more resources does the process need in order to complete?
•
How many processes will need to be terminated?
•
Is the process interactive or batch?
Resource Pre-emption:
To eliminate deadlocks using resource pre-emption, we successively pre-empt some
resources from processes and give these resources to other processes until the deadlock
cycle is broken.
If pre-emption is required to deal with deadlocks, then three issues need to be addressed:
•
Selecting a victim: Which processes and which resources are to be pre-empted? As in
process termination, we must determine the order of pre-emption so as to minimise
cost. Cost factors may include such parameters as the number of resources a
deadlocked process is holding and the amount of time the process has consumed
during its execution.
•
Rollback: If we pre-empt a resource from a process, what should be done with that
process? It cannot continue with its normal execution; it is missing some needed
resource. We must roll back the process to some safe state and restart it from that
state.
In general, it is difficult to determine what a safe state is, so the simplest solution is a
total rollback: terminate the process and then restart it. Although it is more effective
to roll back the process only as far as necessary to break the deadlock, this method
requires the system to keep more information about the state of all running processes.
•
Starvation: How do we ensure that starvation will not occur? That is, how can we
guarantee that resources will not always be pre-empted from the same process?
In a system where victim selection is based mainly on cost factors, it may happen that the
same process is always chosen as a victim. As a result, this process never completes its
assigned task, a starvation situation that must be dealt with in any practical system. We
must ensure that a process can be picked as a victim only a finite number of times. The most
common solution is to include the number of rollbacks in the cost factor.
Self-assessment Questions
9) In which step of the normal mode of operation does the process actually work on the
resource?
a) Request
b) Use
c) Release
d) Rest
10) In mutual exclusion, at least one resource must be held in a ______ mode.
a) Shareable
b) Silent
c) Executable
d) Non-shareable
11) In which case should a process hold at least one resource and wait to get additional
resources that are presently being held by other processes?
a) Hold and wait
b) Wait and hold
c) Hold and wait function
d) Wait function
12) What ensures that the system never enters into the deadlock state?
a) Deadlock detection and recovery
b) Deadlock prevention and avoidance
c) Deadlock ignorance
d) Deadlock avoidance
13) If a system does not work with deadlock prevention or a deadlock avoidance
algorithm, then a ______ situation may arise.
a) Critical
b) Deadlock detection
c) Deadlock
d) Deadlock recovery
14) If all resources have only a single instance, then we can describe a deadlock-detection
algorithm that employs a variant of the resource-allocation graph, called
________.
a) Wait-for-graph
b) Wait of graph
c) Deadlock graph
d) Single instance graph
15) Which n x m matrix describes the number of resources of each type presently
allocated to each process?
a) Available
b) Allocation
c) Request
d) Accept
Summary
o A collaborating process is one that can affect or be affected by the other processes
in the system. Such processes can either directly share a memory region or be
allowed to share data only through files or messages.
o Mutual exclusion is a way of ensuring that if one process is accessing shared
modifiable data, the other processes are excluded from doing the same thing.
o A mutex enters the picture when two threads work on the same data at the same
time. It acts as a lock and is the most basic synchronisation tool.
o The critical-section problem is to plan a protocol that the processes can use to
cooperate. Each process must request authorization to enter its critical section.
o The critical section is followed by an exit section. The remaining code is the
remainder section.
o There exists a limit, or bound, on the number of times that other processes are
permitted to enter their critical sections after a process has made a request to enter
its critical section and before that request is granted.
o A pre-emptive kernel permits a process to be pre-empted while it is running in
kernel mode.
o A non-pre-emptive kernel does not allow a process running in kernel mode to be pre-empted.
o The critical-section problem might be resolved simply in a uniprocessor
environment if we could avoid interrupts from happening while a shared variable
was being altered.
o A semaphore, in its basic form, is a protected integer variable that can control
access to shared resources in a multiprocessing environment.
o A database may be shared among several processes running concurrently, some of
which read it and some of which update it. The processes performing reads and
writes are called readers and writers, respectively.
o The mutual exclusion of critical sections guarantees that the critical sections are
executed atomically.
o In a deadlock, processes never finish executing, and system resources are tied up,
preventing other jobs from starting.
o For a deadlock to occur, the four necessary conditions (mutual exclusion, no
preemption, hold and wait, and circular wait) must hold. By ensuring that at least
one of these conditions cannot hold, we can prevent the occurrence of a deadlock.
o The wait-for graph scheme is not applicable to a resource-allocation system with
multiple instances of each resource type.
o When a detection algorithm determines that a deadlock exists, several alternatives
are available. One option is to inform the operator that a deadlock has occurred and
let the operator deal with the deadlock manually.
o To remove deadlocks by aborting a process, we use one of two methods. In both of
the methods, the system regains all resources allocated to the processes which are
terminated.
o In a system where victim selection is based mainly on cost factors, it may happen
that the same process is always chosen as a victim. As a result, this process never
completes its assigned task, a starvation situation that must be dealt with in any
practical system.
o Several data structures must be maintained to implement the banker's algorithm.
These data structures encode the state of the resource-allocation system.
o Because most prevention algorithms lead to poor utilisation of resources, they
reduce throughput. A better way is to avoid deadlocks by knowing how processes
will use resources: which resources are allocated and available, and the future
requests and releases of all the processes.
Terminal Questions
1. Explain the concept of Process Synchronization
2. Elaborate on the fundamentals of deadlock.
3. List the classic problem of synchronisation.
Answer Keys
Self-assessment Questions
Question No. and Answer
1) a
2) b
3) c
4) b
5) a
6) a
7) a
8) b
9) b
10) d
11) a
12) b
13) c
14) a
15) b
Activity
Activity Type: Offline
Duration: 30 Minutes
Description:
Consider the situation in which three of the four processes have been allocated three units
each and the fourth has been allocated no units. Is this a deadlock situation? Why? Explain
how the preceding scenario can occur.
Case Study
Consider the real time scenario for a traffic deadlock depicted in Figure below
Discussion Questions:
1. Show that the four necessary conditions for deadlock indeed hold in this example.
2. State a simple rule for avoiding deadlocks in this system.
Bibliography
e-References
•
Deadlock Prevention, Avoidance and Recovery. Retrieved 24 Oct, 2016 from
http://www.javajee.com/deadlock-prevention-avoidance-detection-and-recovery-in-operating-systems
•
Process Synchronisation. Retrieved 24 Oct, 2016 from
http://quiz.geeksforgeeks.org/process-synchronization-set-1/
•
Synchronisation problems. Retrieved 24 Oct, 2016 from
http://cs.stackexchange.com/questions/17945/solutions-to-synchronization-problem-need-to-be-executed-in-critical-section
Image Credits
•
Figure 2.2.1: https://it325blog.files.wordpress.com/2012/09/operating-system-concepts-7-th-edition.pdf
•
Figure 2.2.2: https://it325blog.files.wordpress.com/2012/09/operating-system-concepts-7-th-edition.pdf
•
Figure 2.2.3: https://it325blog.files.wordpress.com/2012/09/operating-system-concepts-7-th-edition.pdf
•
Figure 2.2.4: https://it325blog.files.wordpress.com/2012/09/operating-system-concepts-7-th-edition.pdf
•
Figure 2.2.5: https://it325blog.files.wordpress.com/2012/09/operating-system-concepts-7-th-edition.pdf
•
Figure 2.2.6: https://it325blog.files.wordpress.com/2012/09/operating-system-concepts-7-th-edition.pdf
External Resources
•
Tanenbaum, A.S., & Woodhull, A.S. (2006). Operating Systems: Design and
Implementation. Upper Saddle River, NJ: Pearson/Prentice Hall.
•
Kamal, R. (2008). Embedded Systems: Architecture, programming and design.
Boston: McGraw-Hill Higher Education.
•
Flynn, I. M., & McHoes, A. M. (2001). Understanding operating systems. Pacific
Grove, CA: Brooks/Cole Thomson Learning.
Video Links
Process Scheduling: https://www.youtube.com/watch?v=THqcAa1bbFU
First Come First Serve (FCFS): https://www.youtube.com/watch?v=HIB3hZ-5fHw
Difference between Process and Thread: https://www.youtube.com/watch?v=O3EyzlZxx3g
MODULE 3
Storage Management
Module Description
The main goal of studying Storage Management is to understand the concepts of memory
management and virtual memory management. The module also covers the file-system
interface, file-system implementation, and disk management.
By the end of this module, students will learn how the operating system manages memory,
including demand paging, process creation, thrashing, and demand segmentation. They will
also learn about access methods, directory structures, file-system mounting, file sharing,
protection, and consistency semantics, as well as file-system structure and implementation,
disk structure, disk attachment, disk scheduling, disk management, swap-space management,
and stable-storage implementation.
On completing the module, students will be able to elaborate on the concept of memory
management, discuss virtual memory management, explain the basics of the file-system
interface and its implementation, and discuss the concept of disk management.
Chapter 3.1
Memory Management and Virtual Memory Management
Chapter 3.2
File System Interface, Implementation and Disk Management
Memory management and Virtual management
Chapter Table of Contents
Chapter 3.1
Memory Management and Virtual Memory Management
Aim ..................................................................................................................................................... 149
Instructional Objectives................................................................................................................... 149
Learning Outcomes .......................................................................................................................... 149
3.1.1 Memory Management ........................................................................................................... 150
(i) Background ........................................................................................................................ 150
(ii) The logical and physical address space.......................................................................... 151
(iii) Swapping .......................................................................................................................... 151
(iv) Contiguous Memory Allocation.................................................................................... 153
(v) Paging and Structure of Paging Table............................................................................ 154
(vi) Segmentation ................................................................................................................... 158
(vii) Example – The Intel Premium ..................................................................................... 159
Self-assessment Questions ..................................................................................................... 160
3.1.2 Virtual Memory Management.............................................................................................. 161
(i) Background ........................................................................................................................ 161
(ii) Demand Paging ................................................................................................................ 161
(iii) Process Creation.............................................................................................................. 162
(iv) Page Replacement Algorithm ........................................................................................ 162
(v) Allocation of Frames ........................................................................................................ 167
(vi) Thrashing ......................................................................................................................... 167
(vii) Operating System Example ........................................................................................... 168
(viii) Other Consideration..................................................................................................... 169
(ix) Demand Segmentation ................................................................................................... 171
Summary ........................................................................................................................................... 173
Terminal Questions.......................................................................................................................... 174
Answer Keys...................................................................................................................................... 175
Activity............................................................................................................................................... 176
Case Study ......................................................................................................................................... 177
Bibliography ...................................................................................................................................... 178
e-References ...................................................................................................................................... 178
External Resources ........................................................................................................................... 179
Video Links ....................................................................................................................................... 179
Operating Systems Building Blocks | Storage Management
Aim
To provide the students with the fundamentals of memory management and virtual
memory management
Instructional Objectives
After completing this chapter, you should be able to:
•
Discuss the concept of memory management
•
Outline the concept of Swapping and Contiguous Memory Allocation
•
Discuss the concept of Segmentation
•
Explain virtual memory management.
•
Outline the Page Replacement Algorithm
•
Describe the concept of Demand Segmentation
Learning Outcomes
At the end of this chapter, you are expected to:
•
Elaborate on the concept of memory management
•
Discuss the concept of Swapping and Contiguous Memory Allocation
•
Recall the concept of Segmentation
•
Discuss virtual memory management
•
Discuss the Page Replacement Algorithm
•
Outline the concept of Demand Segmentation
3.1.1 Memory Management
Memory in computer systems is organised as an array of bytes or words, each identified by
a unique address. Memory management pertains to assigning memory to programs and
processes, coordinating and controlling that memory, and optimising system performance.
(i) Background
Registers that reside in the CPU are the easiest storage to access and can be read or written
within one CPU cycle. The CPU can execute simple instructions and perform basic
operations on register contents at this speed. Access to main memory, however, takes place
over the memory bus and may consume many CPU cycles; while the required data is being
retrieved, the processor is stalled, or made to pause. This can be mitigated by adding a
cache between the main memory and the CPU.
The legal address space of a process is defined with the help of base and limit registers,
which are loaded by the operating system. The addresses from the base up to the base plus
the limit bound the address space. To protect memory, every address generated in user
mode is checked against these registers each time an attempt to access memory is made.
Attempts to access the OS memory or another user's memory result in a trap or fatal
error. The operating system, however, has unrestricted access to memory in kernel mode.
After a process is chosen from the input queue, the data and instructions pertaining to the
process are loaded into memory, and the space becomes available again once the process
terminates. Address binding is the mapping of the symbolic addresses in the source
program to re-locatable addresses; the linkage editor or loader in turn binds the
re-locatable addresses to absolute addresses.
Symbolic address -> Re-locatable address -> Absolute address
•
Compile time: If the memory location of the process is known at compile time, absolute
code can be generated, starting at that location. Recompilation is necessary if the
starting location changes.
•
Load time: If the memory location of the process is not known at compile time, the
compiler generates re-locatable code, which is bound to absolute addresses only at load
time. If the starting address changes, the user code need only be reloaded.
•
Execution time: If the process may be moved from one memory segment to another
during its run, binding is delayed until execution time and is done with the help of
special hardware.
(ii) The logical and physical address space
The logical address, or virtual address, is the address generated by the CPU; the physical
address is the address seen by the memory unit and loaded into its memory-address
register. The set of logical addresses generated by a program forms the logical address
space, and the set of corresponding physical addresses forms the physical address space.
Memory Management Unit (MMU): a hardware device that maps virtual addresses to
physical addresses at run time.
The relocation register R holds the base address, and every address generated by the user
process is added to its contents.
For example, if the relocation register holds 7500 and the logical address generated is 10,
the physical address is 7500 + 10 = 7510.
•
Range of logical address: 0 to max
•
Range of physical address: R+0 to R+max
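The relocation-register mapping above can be sketched as follows; R matches the worked example, while the limit value is an assumed illustration:

```python
# Minimal sketch of MMU relocation with a base (relocation) register and a
# limit check. R is from the worked example above; LIMIT is an assumed value.
R = 7500        # relocation register (base address)
LIMIT = 1000    # size of the logical address space (assumed for illustration)

def translate(logical_address):
    """Check the logical address against the limit, then add the base."""
    if not 0 <= logical_address < LIMIT:
        raise MemoryError("trap: addressing error")
    return R + logical_address

physical = translate(10)   # 7500 + 10 = 7510, as in the example
```

Every user-mode address thus passes through the same check-then-add path before it reaches the memory bus.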
Dynamic loading is used for better utilisation of memory space: a routine is not loaded
until it is called, and all routines are kept in re-locatable load format.
Advantages of dynamic loading are:
•
Routines that are never used are never loaded
•
No special OS support is required
•
Useful when large amounts of code are needed to handle infrequently occurring cases, such as error routines
(iii) Swapping
The execution of a process is possible only if it is in memory. Swapping temporarily
moves a process to a backing store on disk, from which it is later brought back into
memory to resume execution, as shown in Figure 3.1.1. For example, in a round-robin algorithm, once
the time slice allotted to a process lapses, the process is swapped out; provided the
quantum is large enough, this still lets each swapped-in process perform a reasonable
amount of computing.
Figure 3.1.1: Swapping Process
Swapping gives preference to higher-priority processes that require memory, swapping
out lower-priority processes to make room. This variant of swapping, which supports
priority-based scheduling, is named roll out, roll in.
A process that was previously swapped out is swapped back into the same memory space
it occupied before rolling out, if address binding was done at assembly or load time.
Otherwise, the swapped-in process can be allotted a different address space.
Swapping makes use of a backing store. This store should be:
•
A fast device
•
Large enough to hold copies of the memory images, with direct access to them
Constraints of swapping:
•
The process to be swapped must be completely idle
•
Processes with pending I/O into their buffers cannot be swapped
Disadvantages of standard swapping are:
•
High swap (transfer) time
•
Relatively little of the elapsed time is spent actually executing the process
(iv) Contiguous Memory Allocation
Contiguous memory allocation is a method of partitioning the available memory space
between the operating system and the user processes operating in the system. To
accommodate the several user processes, a single contiguous region of memory is
assigned to each process.
Memory mapping is done by adding the value in the relocation register R to the logical
address, which must fall within the range set by the limit register.
The relocation-register scheme allows the operating system's size to change dynamically,
for example by swapping infrequently used (transient) OS code in and out. The input
queue holds the processes waiting for memory together with their size requirements, and
a scheduling algorithm decides the order in which memory is allocated to them.
One approach divides memory into partitions of fixed size, such that each partition holds
exactly one process. A free partition is selected and loaded with a process from the input
queue, and the partition is freed after the process ends. In the variable-partition method,
the available space is treated as a set of holes, which are searched when a process is to be
loaded. If a hole is big enough, the process is loaded into it; otherwise the process is kept
on hold until space is freed. Holes that are adjacent to each other are merged to form a
larger hole.
The strategies used for hole allocation may be:
1. First fit: The first hole that is big enough to fit the process is allocated
2. Best fit: The smallest hole that is big enough to fit the process is allocated
3. Worst fit: The largest hole in the list of available holes is allocated
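The three strategies can be compared with a short sketch; the hole sizes and the request size below are illustrative values, not from the text:

```python
# Hole-selection sketch: each function returns the index of the chosen hole,
# or None if no hole is large enough. Hole sizes are assumed examples.
def first_fit(holes, size):
    return next((i for i, h in enumerate(holes) if h >= size), None)

def best_fit(holes, size):
    candidates = [(h, i) for i, h in enumerate(holes) if h >= size]
    return min(candidates)[1] if candidates else None   # smallest adequate hole

def worst_fit(holes, size):
    candidates = [(h, i) for i, h in enumerate(holes) if h >= size]
    return max(candidates)[1] if candidates else None   # largest hole

holes = [100, 500, 200, 300, 600]   # free holes, in memory order (assumed)
choice = (first_fit(holes, 212), best_fit(holes, 212), worst_fit(holes, 212))
```

For a request of 212 units, first fit stops at the 500-unit hole, best fit picks the 300-unit hole, and worst fit picks the 600-unit hole.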
External fragmentation occurs whenever the best-fit or first-fit strategy is employed, since
repeatedly loading and removing processes leaves scattered fragments of free memory
between allocated blocks. Which of the two algorithms is employed depends on the
system, keeping in mind the fragmentation each strategy can induce.
Ways to handle fragmentation:
1. Compaction, or shuffling of memory contents, combines the fragments of empty
memory space into one large available block. Compaction is possible only if relocation
is dynamic and done at execution time, so it cannot be adopted everywhere.
2. Making the address space of the processes non-contiguous.
(v) Paging and Structure of Paging Table
Paging is a method of letting the memory allotted to processes be non-contiguous, as a
measure to avoid external fragmentation. The backing store needs enough space and is
managed in the same fixed-size units. Paging breaks memory into:
•
Logical memory: blocks of a fixed size called pages
•
Physical memory: blocks of the same size called frames
Figure 3.1.2: Paging Model of Logical and Physical Memory
During address translation, CPU divides addresses into:
•
Page number
•
Page offset
As shown in Figure 3.1.2, the page table serves as an index to the pages: each page number
selects the entry holding the base address of the corresponding frame.
Using paging, any free frame can be allotted to a process that needs it, thereby preventing
external fragmentation.
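The address translation described above, splitting a logical address into page number and offset and indexing the page table, can be sketched as follows; the page size and table contents are assumed values:

```python
# Single-level paging sketch. A 4 KB page size (12-bit offset) and the
# page-table contents are illustrative assumptions.
PAGE_SIZE = 4096
page_table = {0: 5, 1: 9, 2: 1}   # page number -> frame number (illustrative)

def translate(logical_address):
    page_number, offset = divmod(logical_address, PAGE_SIZE)
    if page_number not in page_table:
        raise MemoryError("page fault")   # page is not mapped
    return page_table[page_number] * PAGE_SIZE + offset

physical = translate(4100)   # page 1, offset 4 -> frame 9 -> 36868
```

Note that the offset is copied unchanged; only the page number is exchanged for a frame number.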
As shown in Figure 3.1.3, hardware support is necessary for paging and can be provided
by a set of dedicated high-speed registers holding the page table. However, every memory
access then goes through the page mapping, which can reduce efficiency.
The translation look-aside buffer (TLB) reduces the mapping time. It is an associative
memory whose entries consist of a key and a value: a presented key is compared
simultaneously with all keys in the TLB, and on a match the associated value is returned.
The TLB is high-speed associative memory, holds only a few of the page-table entries,
and is expensive to implement. TLB entries may also store address-space identifiers (ASIDs).
Page protection is enabled by checking if the page is read-only or has read-write access using
a bit assigned to the page for this purpose. The bit can be expanded to include execute only
access as well.
A valid-invalid bit is attached to each page-table entry and is:
•
Valid: if the associated page is in the process's logical address space
•
Invalid: if the page is not in the process's logical address space
Figure 3.1.3: Paging Hardware
Structure of Page Table
PTLR (page-table length register) can be used to indicate the size of the page table.
The techniques which can be adopted in fabricating the page table are:
•
Hierarchical Paging
•
Hashed Paged Tables
•
Inverted Page Tables
Hierarchical Paging
With a logical address space of 2^32 to 2^64 addresses, the page table itself becomes
large. The page table can be broken into smaller pieces to avoid keeping one single large
table.
As shown in Figure 3.1.4, a 32-bit logical address consists of a 20-bit page number and a
12-bit page offset; the page number is split into a 10-bit index into the outer page table
and a 10-bit index into an inner page table, so each inner page table becomes an entry of
the outer one. This works for a 32-bit logical address.
When 64-bit logical addresses come into play, three-level or four-level page tables are
used.
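The 10-10-12 split of a 32-bit logical address described above can be expressed with bit operations, as in this sketch:

```python
# Split a 32-bit logical address into an outer (page directory) index,
# an inner page-table index, and a page offset, per the 10/10/12 scheme.
def split(addr):
    offset = addr & 0xFFF            # low 12 bits: page offset
    inner = (addr >> 12) & 0x3FF     # next 10 bits: inner page-table index
    outer = (addr >> 22) & 0x3FF     # high 10 bits: outer page-table index
    return outer, inner, offset

parts = split(0x00403004)   # -> (1, 3, 4)
```

The outer index selects an inner page table, the inner index selects a frame, and the offset is carried through unchanged.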
Figure 3.1.4: Hierarchical Paging
Hashed Page Tables
Figure 3.1.5: Hashed Page Tables
When the address space is denoted with more than 32 bits, a hashed page table can be
used, with the virtual page number serving as the hash value, as shown in Figure 3.1.5.
Each entry in the hash table holds a linked list of elements that hash to the same
location. Each element is made up of the following:
•
Virtual page number
•
Value of the mapped page frame
•
Pointer to the next element in the list
For 64-bit address spaces, clustered page tables are adopted, in which each entry refers
to several pages, as opposed to hashed page tables in which each entry refers to a single page.
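A hashed page table with chained elements can be sketched as follows; the bucket count and the mappings are illustrative values:

```python
# Hashed page-table sketch: each bucket chains the
# (virtual page number, frame number) elements that hash to the same slot.
NUM_BUCKETS = 8
buckets = [[] for _ in range(NUM_BUCKETS)]

def insert(vpn, frame):
    buckets[vpn % NUM_BUCKETS].append((vpn, frame))

def lookup(vpn):
    for entry_vpn, frame in buckets[vpn % NUM_BUCKETS]:
        if entry_vpn == vpn:
            return frame
    return None   # unmapped: a real MMU would raise a page fault here

insert(3, 42)
insert(11, 7)   # 11 % 8 == 3, so this chains in the same bucket as vpn 3
```

The second insertion collides with the first and is resolved by walking the chain, exactly the linked-list behaviour the text describes.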
Inverted Page Tables
Ordinarily, each process has its own page table, which contains one entry for each virtual
page the process is using, mapped to a physical address. Such page tables may contain
millions of entries and consume a large amount of memory.
To reduce this overhead, an inverted page table is used, which has one entry for each real
page (frame) of memory, recording the process and virtual page that occupy it. In certain
cases an address-space identifier is required in every entry of the inverted table, to map
each entry to the owning process. In short, the inverted page table records, for each
physical frame, the virtual page stored in it.
Entry format: <process id, page number>
Operating Systems Building Blocks | Storage Management
157
Memory management and Virtual management
As shown in Figure 3.1.6, on every memory reference the ordered pair <process id, page
number> of the virtual address is searched against the entries in the inverted page table;
when a match is found, the physical address is generated from the index of the matching
entry.
The inverted page table decreases memory usage, but the time required to search the
virtual address against all the entries in the table is high.
Figure 3.1.6: Inverted Page Tables
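The linear search through the inverted table can be sketched as follows; the table contents are illustrative:

```python
# Inverted page-table sketch: one entry per physical frame, each holding the
# <process id, page number> pair that occupies that frame (values assumed).
PAGE_SIZE = 4096
inverted_table = [("P1", 0), ("P2", 3), ("P1", 2), ("P3", 1)]  # index = frame

def translate(pid, page_number, offset=0):
    for frame, entry in enumerate(inverted_table):   # the slow linear search
        if entry == (pid, page_number):
            return frame * PAGE_SIZE + offset
    raise MemoryError("page fault")

physical = translate("P1", 2)   # matched at frame 2 -> 8192
```

The table has only as many entries as there are frames, which is the memory saving; the cost is the scan on every reference, which real systems soften with a hash or a TLB.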
(vi) Segmentation
Separating the user's view of memory from the actual layout of physical memory is
necessary to protect the memory space. Segmentation is a memory-management
technique that supports the user's view of memory: the logical address space is seen as a
collection of segments.
Segments are identified by a segment number and an offset within the segment:
<segment number, offset>
Segments can be of different sizes.
Separate segments are created by the compiler for:
•
C library
•
Stacks that are used by each thread
•
Heap from which the memory allocation is done
•
Code
•
Global variables
Each segment-table entry has:
•
The base address, which is the physical address of the location where the segment is
stored
•
The limit, which gives the segment's length.
For example, if the 33rd byte of segment 1, whose base address is 4400, needs to be
accessed, then physical address 4400 + 33 = 4433 is accessed, provided the offset 33 is
less than the segment's limit.
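The lookup can be sketched with a small segment table; the base 4400 comes from the example above, while the limit is an assumed value:

```python
# Segment-table sketch: each segment has a base and a limit; the offset is
# checked against the limit before the base is added. The limit of 100 is
# an assumed value for illustration.
segment_table = {1: (4400, 100)}   # segment number -> (base, limit)

def translate(segment, offset):
    base, limit = segment_table[segment]
    if offset >= limit:
        raise MemoryError("trap: offset beyond segment limit")
    return base + offset

physical = translate(1, 33)   # 4400 + 33 = 4433
```

The limit check is what turns segmentation into a protection mechanism as well as a mapping one.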
(vii) Example – The Intel Pentium
The Intel Pentium supports both pure segmentation and segmentation with paging.
Segments can be as large as 4 GB, and up to 16 K segments are allowed per process. The
logical address space of a process is divided into two partitions:
•
The first 8 K segments, private to the process, kept in the local descriptor table (LDT), and
•
The next 8 K segments, shared among all the processes, kept in the global descriptor table (GDT).
The logical address consists of a 16-bit segment selector and a 32-bit offset. The resulting
32-bit linear address is translated by a two-level paging scheme: the first ten high-order
bits index the outermost page table, which the Pentium terms the page directory.
Self-assessment Questions
1) What does memory management manage?
a) Cache Memory
b) Main Memory
c) Secondary memory
d) Flash memory
2) Which type of address do programs deal with?
a) Logical Address
b) Absolute Address
c) Physical Address
d) Relative Address
3) Which one of the following is the address generated by the CPU?
a) Physical Address
b) Absolute Address
c) Paging
d) Logical Address
4) What is the mechanism of moving the process temporarily out of main memory to
secondary storage?
a) Segmentation
b) Memory Allocation
c) Swapping
d) Paging
5) Which of the following is the memory management technique in which system stores
and retrieves data from secondary storage for use in main memory?
a) Fragmentation
b) Paging
c) Mapping
d) Swapping
6) Which phase creates relocatable code?
a) Compile Time
b) Load Time
c) Execution Time
d) Paging Time
3.1.2 Virtual Memory Management
(i) Background
Virtual memory gives users a view of memory that is entirely separate from physical
memory. Though only a small physical memory may be available, the illusion of a quite
large virtual memory can be created. Virtual memory allows processes to be loaded
partially, omitting the features that are rarely used, and also allows memory or files to be
shared between processes.
The benefits of virtual memory are:
•
A program is no longer constrained by the amount of physical memory available.
•
Less I/O is needed to load or swap programs into memory, so each program can run faster.
•
More programs can run concurrently, owing to the large virtual space available.
•
Virtual memory allows pages to be shared between processes, which speeds up process
creation with fork().
(ii) Demand Paging
Normally the whole program is loaded into memory for execution, even though all of it
may not be needed. Placing into memory only those pages that are actually required
makes the process more efficient; this is known as demand paging. Demand paging is
implemented using a lazy swapper (pager), which swaps a page into memory only when
that page is demanded.
During the execution of the process, the pager brings in only the pages it determines will
be needed, rather than swapping in the whole process. This mechanism saves memory
and swap time. Hardware support is required to distinguish pages that are in memory
from pages that are on disk; the valid-invalid bit serves this purpose. If the bit is set to
valid, the page is legal and in memory; otherwise the page is either invalid or valid but
currently on disk. Memory-resident pages execute normally, while an access to an invalid
page causes a page-fault trap. Page faults are handled as follows:
•
The internal table is checked for the validity of the reference

If the reference is valid but the page is not yet in memory, the page is brought in

If the reference is invalid, the process is terminated
•
A free frame is found and loaded with the desired page.
•
The internal table is modified to mark the page as valid.
•
The instruction interrupted by the trap is restarted.
Secondary memory
This memory holds the pages that are not in main memory; it is also identified as the swap space.
The following values are used to calculate the service time for the Page-fault:
•
Time required for reading the page
•
Time required to service the page fault interrupt
•
Time required to restart the process
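These components sum to the page-fault service time, and together with the page-fault rate p they give the effective access time, EAT = (1 - p) x memory access time + p x fault service time. The timing figures below are assumed for illustration only:

```python
# Effective access time under demand paging (illustrative figures):
# EAT = (1 - p) * memory_access + p * fault_service
memory_access = 200e-9   # 200 ns per ordinary memory access (assumed)
fault_service = 8e-3     # 8 ms to service one page fault (assumed)

def effective_access_time(p):
    """p is the probability that a memory access causes a page fault."""
    return (1 - p) * memory_access + p * fault_service

slowdown = effective_access_time(0.001) / memory_access   # roughly 41x
```

Even one fault per thousand accesses slows memory by a factor of about forty with these figures, which is why keeping the fault rate low matters so much.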
(iii) Process Creation
Processes are created using a dedicated loader, which parses and loads the executable
file, aligns and loads its sections, and dynamically loads the linked libraries.
A process differs from a thread in that each thread has its own dedicated stack, while the
process as a whole owns a virtual address space in which its threads run. A process must
have at least one thread, starting at the entry point of the process, and must be loaded as
an executable image with the help of the loader.
Process creation consists of the following steps:
1. The executable file is loaded into physical memory.
2. The process is created for the specified address space.
3. The PCB (process control block) or TCB (task control block) data structure
representing the process is created.
4. The main thread for the process is created.
5. The executable image is mapped into the virtual address space.
(iv) Page Replacement Algorithm
When there is no free frame available in main memory to load a page requested by a
process and residing on the disk, a page fault or exception occurs and a frame must be
freed. One solution to this problem is to swap a process and all its associated pages out of
memory, but this method
Memory management and Virtual management
cannot be used in all circumstances, since it reduces the level of multiprogramming.
Another common solution is page replacement, which frees a frame by switching its
contents out to the swap space.
The below-given steps are to be followed for page replacement:
•
The location of the desired page on the disk is found
•
A free frame is searched for

If one is available, it is used

If no free frame is available, the page-replacement algorithm selects a victim frame,
frees it, and updates the frame table and page table
•
The desired page is read into the freed frame, and the page table and frame table are
updated.
•
The user process is restarted.
This algorithm performs two page transfers when no free frame exists (the victim page is
written out to the swap space, and the desired page is read into the freed frame). The
overhead can be reduced by using a dirty or modify bit, which signals that the page is:
•
Modified since it was read into memory, and so must be written back, if the bit is set to one
•
Not modified since it was read into memory, and so need not be written back, if the bit
is set to zero
Consider the reference string 9, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 9, 0, 1 with a
memory that holds three frames; the examples below are traced on this string.
Page replacement is the foundation of demand paging. It can be implemented by any one
of the following algorithms:
•
FIFO page replacement:
The simplest page-replacement algorithm is FIFO, which replaces the page that has been
in memory the longest. Each page is associated with the time it was brought into memory;
FIFO maintains a queue and replaces the page at the head of the queue.
For example, consider the reference string above: pages (9, 0, 1) are loaded into memory
first.
Page 2 (the fourth reference in the string) causes a page fault and replaces page 9.
Now the pages in memory are (2, 0, 1).
Since page 0 is already in memory, no page fault occurs.
Page 3 then replaces page 0, so (2, 0, 1) becomes (2, 3, 1), and the sequence proceeds:
(2,3,0)->(4,3,0)->(4,2,0)->(4,2,3)->(0,2,3)->(0,1,3)->(0,1,2)->(9,1,2)->(9,0,2)->(9,0,1)
for a total of 15 page faults. The constraint of FIFO is that a heavily used page may be
replaced again and again, forcing it to be reloaded each time; FIFO also suffers from
Belady's anomaly, in which the page-fault rate can increase when more frames are allocated.
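The FIFO behaviour on this reference string can be verified with a short simulation (a sketch, not from the text):

```python
# FIFO page-replacement simulation on the chapter's reference string.
from collections import deque

def fifo_faults(refs, nframes):
    frames, order, faults = set(), deque(), 0
    for page in refs:
        if page in frames:
            continue                         # hit: nothing to do
        faults += 1
        if len(frames) == nframes:
            frames.remove(order.popleft())   # evict the oldest resident page
        frames.add(page)
        order.append(page)
    return faults

refs = [9, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 9, 0, 1]
faults = fifo_faults(refs, 3)   # 15 page faults
```

The deque records arrival order, so the page evicted is always the one that has been resident longest, regardless of how recently it was used.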
•
Optimal page replacement:
Belady’s anomaly is used in this algorithm, which guarantees the lowest page fault rate for
fixed number of frames. Here the frame which is not used for a relatively large amount of
time is replaced.
For example, In the reference string, (9,0,1) is loaded into memory.
Frame 9 is replaced by frame 2 since frame 9 is not used for the 17 frames in between.
Now the frames in memory are (2,0,1).
Since Frame 0 is already in the memory, no page fault occurs.
Frame 3 replaces frame 1, (2,0,1) becomes (2,0,3) and so on since one occurs in the
reference string after 10 frames and is proceeded by
(2,4,3)->(2,0,3)->(2,0,1)->(9,0,1)
The constraint of this method is that the string should be priorly known.
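Optimal replacement can likewise be simulated; this sketch scans forward in the reference string to find each resident page's next use:

```python
# Optimal (OPT) page replacement: evict the page whose next use lies
# farthest in the future (or that is never used again).
def opt_faults(refs, nframes):
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) < nframes:
            frames.append(page)
            continue
        def next_use(p):
            try:
                return refs.index(p, i + 1)   # position of the next reference
            except ValueError:
                return len(refs)              # never referenced again
        victim = max(frames, key=next_use)
        frames[frames.index(victim)] = page
    return faults

refs = [9, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 9, 0, 1]
faults = opt_faults(refs, 3)   # 9 page faults
```

The forward scan is exactly the future knowledge the text says real systems lack, which is why OPT serves only as a benchmark.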
•
LRU page replacement (Least Recently Used)
This algorithm replaces the page that has not been used for the longest period of time.
LRU looks backward in the reference string: when a page fault occurs, it replaces the
page that has gone unused for the longest time.
For example, in the reference string, (9, 0, 1) are loaded into memory.
Page 9 is replaced by page 2, since of the three pages in memory, page 9 is the least
recently used.
Now the pages in memory are (2, 0, 1).
Since page 0 is already in memory, no page fault occurs.
Page 3 replaces page 1, page 1 being the least recently used of the three, so (2, 0, 1)
becomes (2, 0, 3), and the sequence proceeds:
(4,0,3)->(4,0,2)->(4,3,2)->(0,3,2)->(1,3,2)->(1,0,2)->(1,0,9)
for a total of 12 page faults. To implement this algorithm, a stack or counters are used:

A stack of page numbers is kept; a referenced page is moved to the top, leaving the least
recently used page at the bottom.

A counter, or logical clock, is kept for each page-table entry; its time-of-use field is
updated after each reference, and the page with the oldest time is replaced.
•
LRU-Approximation page replacement
When there is insufficient hardware support for the LRU algorithm, one of the following approximations is used:

Additional-reference-bits algorithm
An 8-bit reference byte is kept for each page. At regular intervals, the byte is shifted one bit toward the lowest bit and the page’s reference bit is copied into the high-order bit. The page whose byte has the smallest unsigned value is swapped out: a page with the byte 00000000 has not been used in the last eight periods, one with 01110111 is used now and then, and one with 11001001 has been used more recently. This is another approximation of the LRU algorithm.

Second-chance algorithm
This algorithm is a variant of FIFO replacement implemented with a circular queue and a pointer to the page that is to be examined next. A reference bit is kept for each page, where:
o the page is replaced if the bit is 0
o the page is given a second chance if the bit is 1: the bit is cleared and the pointer moves on

Enhanced second chance algorithm
This variation of the second-chance algorithm uses an ordered pair (reference bit, modify bit) instead of a single reference bit.
Table 3.1.1: Enhanced second chance algorithm
(Reference bit, modify bit): Meaning
(0,0): neither recently used nor recently modified
(0,1): not recently used but recently modified
(1,0): recently used but not recently modified
(1,1): recently used and recently modified
•
Counting-based page replacement
In this algorithm, a counter is kept to track the number of references made to each page, and:

The LFU (Least Frequently Used) algorithm replaces the page that has the smallest count.

The MFU (Most Frequently Used) algorithm replaces the page with the highest count, on the reasoning that the page with the smallest count was probably just brought into memory and has yet to be used.
(v)
Allocation of Frames
The strategies for allocating frames are determined by the number of available frames.
Each process must be allotted enough frames to execute; otherwise the page-fault rate rises and process execution slows down. Frame allocation decides how frames are allotted to processes, between:
•
The minimum frames per process as defined by the system’s architecture
•
The maximum frames based on the available physical memory
Types of frame allocation are:
•
Proportional allocation: Allocating frames based on the process size
•
Equal allocation: Dividing the available frames equally among the requesting
processes
•
Global vs. Local allocation: Global replacement permits a process to take a replacement frame from the set of all frames, even if that frame is currently allotted to another process. Local replacement allows a process to replace only frames from the set that has been allotted to it.
(vi) Thrashing
Low-priority processes may not get enough frames, i.e., fewer than the minimum number required by the system architecture.
Since such a process does not have enough frames, it pages continuously: the page that has just been replaced is needed again almost immediately. A process that spends more time paging than executing is said to be thrashing.
Thrashing is a result of the improper allocation of frames. Thrashing is caused as follows:
•
When CPU utilisation is low, the degree of multiprogramming is increased, and more
processes need frames and take the frames from other processes via global
replacement. This increases the paging requests and causes the processes to wait,
which in turn decreases the CPU utilisation. Again, the CPU scheduler tries to
increase CPU utilisation by starting execution of more processes.
This can be prevented by introducing the concept of locality. The locality model states that, as a process executes, it moves between localities, sets of pages that are actively used together; page faults cluster when the process does not have enough frames to hold its current locality. Once the process is given enough frames for its locality, it executes without further faulting until it moves to a new locality.
The working-set model implements the concept of locality by defining a working-set window Δ, which covers the Δ most recent page references. If a page has been referenced within the window, it is part of the working set. Pages that have not been referenced within the window drop out of the working set and can be removed.
The page-fault frequency (PFF) is monitored as a direct measure to prevent thrashing and keep CPU utilisation high. The PFF is held within a permissible range between an upper and a lower bound:
•
If the PFF is low, the process has more frames than it needs, so frames can be taken away from it.
•
If the PFF is high, the process needs more frames; if no free frames are available, the process is suspended and its frames are freed.
(vii) Operating System Examples
Windows
Windows implements virtual memory using demand paging with clustering, which handles a page fault by reading in the faulting page as well as several pages following it. Each process is assigned a working-set minimum, the number of pages it is guaranteed to have in memory. When the amount of free memory falls below a threshold, a tactic called automatic working-set trimming removes pages from processes that hold more pages than their working-set minimum, until enough free memory becomes available.
Solaris
On the occurrence of a page fault, the kernel assigns a page to the faulting process from a list of free pages, so the kernel must always keep free memory available.
Four times per second, the kernel checks whether the number of free pages has fallen below the lotsfree threshold, typically set to 1/64 of the total physical memory, and if so begins reclaiming pages. Solaris maintains a list of pages that have been freed from processes but not yet overwritten; such a page can be reclaimed by its process if it is needed again before the page is reused. Another enhancement provided by Solaris is
recognising pages from shared libraries and skipping the page-scanning process of these
pages when encountered.
(viii) Other Considerations
Other considerations to be kept in mind while choosing the paging mechanism, replacement algorithm and frame allocation are as follows:
•
Prepaging
Prepaging
Pure demand paging generates a large number of page faults, especially when a process is restarted and all the pages that were swapped out must be brought back into memory one fault at a time. Prepaging instead brings a set of pages, such as the remembered working set of a suspended process, into memory at once, before they are referenced.
Prepaging is employed if the cost of prepaging is less than the cost of the page faults avoided.
•
Page size
Page size is chosen by weighing the following trade-offs:

Small page size: increases the number of pages and of page-table entries, but decreases internal fragmentation and total allocated memory, and gives a better resolution of locality, reducing total I/O.

Large page size: reduces the size of the page table and the I/O overhead per transfer, but all the contents of a page are allocated and transferred even if they are not needed.
•
TLB(Translation Lookaside Buffer) Reach
The hit ratio of the TLB is a key performance factor. TLB reach is the amount of memory accessible through the entries of the TLB, i.e., the number of entries multiplied by the page size; for example, 64 entries of 4 KB pages give a reach of 256 KB. The reach can be increased by adding TLB entries, which is expensive, or by increasing the page size, which incurs more internal fragmentation. TLBs that support varying page sizes are typically managed in software.
•
Inverted Page Table:
An inverted page table decreases the memory space required to track virtual-to-physical address translations by keeping one entry per physical frame rather than one per virtual page.
Entry in the inverted page table:
<process id, page number>.
Since each entry records which virtual page occupies a given physical frame, the memory overhead of the translation structures is reduced. However, the table describes only pages currently loaded in memory, so pages on disk must be tracked separately.
•
Program structure:
A user with a proper understanding of demand paging can design programs that reduce paging.
int i, j;
int data[128][128];
for (j = 0; j < 128; j++)      /* column index */
    for (i = 0; i < 128; i++)  /* row index */
        data[i][j] = 0;        /* accessed column-wise */
In the above code, data is accessed column-wise. Each row consists of 128 integers and, assuming a page holds one row, occupies an entire page; zeroing the array column by column touches a different page on every access, leading to up to 128 × 128 = 16,384 page faults when fewer than 128 frames are available.
Had the array been accessed row-wise, the page faults would be just 128:
for (i = 0; i < 128; i++)      /* row index */
    for (j = 0; j < 128; j++)  /* column index */
        data[i][j] = 0;        /* accessed row-wise */
•
I/O interlock:
Some pages must be locked in memory during demand paging, especially pages involved in I/O operations on virtual memory.
Locking is necessary because, while an I/O request waits in the device queue, the CPU executes another process; a page fault there, combined with global page replacement, could evict the page containing the memory buffer of the waiting I/O operation. This problem can be solved by:

never executing I/O operations directly into user memory, copying instead between system memory and user memory

locking pages with a lock bit, so that locked pages cannot be replaced; the risk is that a lock bit may never be turned off, leaving the frame permanently blocked
(ix) Demand Segmentation
Demand segmentation allocates memory in segments based on the type of data the process holds. The sections or regions are distributed among the threads or processes in the system.
In certain approaches, the logical address space is described by multiple tables, and dividing it into sections permits the memory to be shared. Demand segmentation can thus provide virtual memory to processes when hardware support for demand paging is lacking.
Self-assessment Questions
7) What is Virtual memory?
a) An extremely large main memory
b) An extremely large secondary memory
c) An illusion of extremely large memory
d) A type of memory used in super computers
8) Which of the following is the mechanism that brings a page into memory only when
it is needed?
a) Segmentation
b) Fragmentation
c) Demand Paging
d) Page Replacement
9) What does PCB stand for?
a) Program Control Block
b) Process Control Block
c) Process Communication Block
d) Printed Control Block
10) Which of the following is the entity where the memory is shared?
a) Processes
b) Threads
c) Instructions
d) Pages
11) In FIFO page replacement algorithm, when a page must be replaced, ________.
a) Oldest page is chosen
b) Newest page is chosen
c) Random page is chosen
d) Middle page is chosen
12) Which page replacement algorithm guarantees the lowest possible page-fault rate?
a) Optimal Page Replacement
b) FIFO Page Replacement
c) LRU Replacement
d) LRU Approximation Page Replacement
13) Which of the following mechanisms brings a page into memory only when the need arises?
a) Page Replacement
b) Fragmentation
c) Demand Paging
d) Segmentation
14) Which of the following mechanisms makes use of the backward reference of the string?
a) Optimal Page Replacement
b) FIFO Page Replacement
c) LRU Replacement
d) LRU Approximation Page Replacement
15) What does TLB stand for?
a) Translation Lookaside Buffer
b) Transition Lookaside Buffer
c) Translation Lookalike Buffer
d) Transition Lookalike Buffer
Summary
o Memory in computer systems is organised as an array of bytes or words, each
identified by its address.
o Logical address or the virtual address is the address generated by the CPU and
physical address space is the address seen by the memory unit and loaded into the
memory address register.
o Logical addresses generated by the program form the logical address space
whereas the physical address space is formed with the set of physical addresses.
o Memory Management Unit (MMU) maps the virtual addresses to the physical
addresses with the help of hardware device during run time.
o Swapping moves a process to a backing store and later brings it back into
memory to continue its execution.
o Contiguous memory allocation is a method deployed to partition the available
memory space for both the user and the Operating system processes operating in
the system.
o The strategies used for hole allocation are first fit, best fit, and worst fit.
o External fragmentation occurs wherever best fit or first fit strategies are employed,
since the loading and removing the processes from the hole creates segments of
broken memory.
o CPU divides addresses into Page number and Page offset.
o Translation lookaside buffer (TLB) reduces the mapping time by using associative
memory; each TLB entry consists of a key and a value.
o The techniques which can be adopted in fabricating the page table are:
Hierarchical Paging, Hashed Paged Tables, and Inverted Page Tables.
o Segmentation is a memory management technique which supports the user’s view
of memory, with the logical address space seen as a collection of segments.
o Virtual memory provides an entirely new perspective for the users over the
physical memory. Though only a small physical memory is available, the illusion
of a quite large virtual memory can be created.
o The FIFO page replacement algorithm replaces the oldest page, i.e., the first page
loaded into the memory.
o LRU Page Replacement algorithm works with the backward reference of the string
and checks which frame has not been used for a longer duration while the page
fault has occurred.
Terminal Questions
1. Elaborate on the concept of Memory Management.
2. Explain the concept of Demand paging.
3. List various page replacement algorithms with examples.
Answer Keys
Self-assessment Questions
Question No. | Answer
1 | b
2 | a
3 | d
4 | c
5 | b
6 | b
7 | c
8 | c
9 | b
10 | a
11 | a
12 | a
13 | c
14 | c
15 | a
Activity
Activity Type: Offline
Duration: 45 minutes
Description:
The following figure indicates a part of memory, available for allocation. The memory is
divided into segments of fixed sizes.
Three processes A, B, and C with the respective sizes of 12 KB, 10 KB and 9 KB are to be
allocated successively.
Describe the results of the allocation when the following allocation methods are used:
a. first fit
b. best fit
c. worst fit
d. next fit
Which algorithm makes use of the memory space the best?
Case Study
Company XYZ is planning to develop an operating system. The operating system is still in the development phase and requires proper memory management techniques to succeed. The memory space in the system is 4 gigabytes, with 32 bits used to address each memory location. The operating system is developed for commercial systems where page faults are to be avoided as much as possible, even at the overhead of repeated requests for paging.
Discussion Questions:
1. What sort of paging table would you suggest in this scenario?
2. Which page replacement algorithm will suit the requirements drafted by XYZ?
3. State the pros and cons of the suggested methods.
Bibliography
e-References
•
Contiguous Memory Allocation, Retrieved 2 Nov, 2016 from http://www.indiastudychannel.com/resources/136567-Contiguous-Memory-Allocation.aspx
•
Virtual Memory Management, Retrieved on 2 Nov, 2016 from http://www.utilizewindows.com/what-is-virtual-memory-and-why-do-we-need-it/
•
Demand paging and segmentation, Retrieved on 2 Nov, 2016 from http://superuser.com/questions/454747/swaping-paging-segmentation-and-virtual-memory-on-x86-pm-architecture
Image Credits
•
Figure 3.1.1: https://it325blog.files.wordpress.com/2012/09/operating-system-concepts-7-th-edition.pdf
•
Figure 3.1.2: https://it325blog.files.wordpress.com/2012/09/operating-system-concepts-7-th-edition.pdf
•
Figure 3.1.3: https://it325blog.files.wordpress.com/2012/09/operating-system-concepts-7-th-edition.pdf
•
Figure 3.1.4: https://www.scribd.com/document/68853752/Operating-Systems-Memory-management
•
Figure 3.1.5: https://www.scribd.com/document/68853752/Operating-Systems-Memory-management
•
Figure 3.1.6: https://www.scribd.com/document/68853752/Operating-Systems-Memory-management
External Resources
•
Doeppner, T.W. (2011). Operating System in Depth. Hoboken, NJ: Wiley
•
Sakamoto, K., & Furumoto, T. (2012). Pro multithreading and memory
management for iOS and OS X. New York. Apress
•
Stallings, W. (2005). Operating systems: Internals and design principles. Upper
Saddle River, NJ: Pearson/Prentice Hall.
Video Links
Topic
Link
Memory Management
https://www.youtube.com/watch?v=qdkxXygc3rE
Virtual Memory in OS
https://www.youtube.com/watch?v=g07hhEcsKLY
Paging
https://www.youtube.com/watch?v=4nRqJZx-Vt8
File System Interface, Implementation and Disk Management
Chapter Table of Contents
Chapter 3.2
File System Interface, Implementation and Disk Management
Aim......................................................................................................................................................181
Instructional Objectives................................................................................................................... 181
Learning Outcomes .......................................................................................................................... 181
3.2.1 Filesystem Interface ............................................................................................................... 182
(i) File Concept ....................................................................................................................... 182
(ii) Access methods ................................................................................................................ 184
(iii) Directory Structure ......................................................................................................... 184
(iv) Filesystem Mounting ...................................................................................................... 189
(v) File Sharing ........................................................................................................................ 189
(vi) Protection and Consistency Semantics......................................................................... 190
Self-assessment Questions ..................................................................................................... 192
3.2.2 Filesystem Implementation .................................................................................................. 193
(i) Filesystem Structure .......................................................................................................... 193
(ii) Filesystem Implementation............................................................................................. 193
(iii) Directory Implementation ............................................................................................. 195
(iv) Allocation Methods ......................................................................................................... 196
(v) Free space Management .................................................................................................. 199
(vi) Efficiency and Performance ........................................................................................... 200
(vii) Recovery .......................................................................................................................... 201
Self-assessment Questions ..................................................................................................... 202
3.2.3 Disk Management .................................................................................................................. 203
(i) Overview of Mass Storage Structure ............................................................................... 203
(ii) Filesystem Implementation............................................................................................. 204
(iii) Disk Attachment ............................................................................................................. 204
(iv) Disk Scheduling ............................................................................................................... 205
(v) Disk Management ............................................................................................................ 207
(vi) Swap-space Management ............................................................................................... 208
(vii) Stable storage implementation ..................................................................................... 208
Self-assessment Questions ..................................................................................................... 210
Summary ........................................................................................................................................... 211
Terminal Questions.......................................................................................................................... 213
Answer Keys...................................................................................................................................... 213
Activities ............................................................................................................................................ 214
Case Study ......................................................................................................................................... 215
Bibliography ...................................................................................................................................... 216
e-References ...................................................................................................................................... 216
External Resources ........................................................................................................................... 217
Video Links ....................................................................................................................................... 217
Aim
To provide the students with the fundamentals of Filesystem Interface, Filesystem
Implementation and Disk Management
Instructional Objectives
After completing this chapter, you should be able to:
•
Explain the concept of Filesystem Interface.
•
Explain the concept of Filesystem implementation
•
Explain Disk Management
Learning Outcomes
At the end of this chapter, you are expected to:
•
Elaborate on the basics of Filesystem interface
•
Elaborate on the concept of Filesystem Implementation
•
Discuss on the concept of Disk Management
3.2.1 Filesystem Interface
•
File Concept
The file system is a collection of files and a directory structure that organises the file
information.
Users consider files to be the smallest logical units of storage in the operating system. A file is a collection of related information stored on secondary storage devices. The contents of a file may range from source code or text to sequences of lines or bytes or numeric data; hence the concept of a file is broad.
File is said to be:
•
Text file, if it is a sequence of lines or pages
•
Source file, if it contains subroutines and functions along with code
•
Object file, if it contains sequences understood by system linker
•
Executable file, if it can be used by the loader to execute
•
Multimedia file, if it includes audio or video
•
Print preview file, which is in the format for printing as ASCII code
•
Library files, containing routines which can be loaded
•
Batch file, if it contains commands to the command interpreter
As shown in Table 3.2.1, attributes of the files are as follows:
Table 3.2.1: File Attributes
Attribute | Its correspondence
Name | Human-readable name of the file
Identifier | Unique number by which the system identifies the file
Type | Reflects the type of the file
Location | Pointer which locates the file
Size | Size in bytes/words/blocks
Protection | Access-control information (read/write/execute access)
Time, date and user identification | Time stamps and data about creation and modification
A file is an abstract data type. The operations that can be performed on a file are:
•
Creating a file: Space is allotted and file entry is made in the directory
•
Writing a file: Write pointer is used to write contents in the file
•
Reading a file: Name of the file and read pointer are used
•
Repositioning within a file: Current file position pointer is shifted to another
location within the file
•
Deleting a file: Releasing of file space, deleting the file entry in the directory
•
Truncating a file: Deleting the contents of the file while retaining its attributes
Open file table contains all the information about the currently open files. Information
associated with open file is:
•
File pointer: The file pointer tracks the current position in the file, overriding the need to specify the offset in every read() and write() call
•
File open count: File open counter monitors the number of opens and closes, and
when the count reaches zero, the entry is removed from the open file table
•
Disk location of the file: The disk location is needed to perform operations on the file
•
Access rights: The access mode specifies the type of operations allowed on file
File locks are used for files shared among processes; a process locks a file to make it unavailable to other processes. File locking is either mandatory or advisory, depending on the OS.
• Access methods
Files need to be accessed in order to perform operations. Various access methods are as
follows:
•
Sequential access:
This mode of access is the most frequent and the simplest, accessing one record after the other sequentially, i.e., when a read operation is performed, the file pointer automatically advances to the next record.
•
Direct access:
This method is also called relative access; records can be accessed randomly, in no specific order. An index relative to the beginning of the file, called the relative block number, is used to point to a particular record.
To read the next record, the relative block number is incremented by 1.
•
Other access methods:
An index is created for the file, and records are reached using the index. For large files, a primary index file stores entries of secondary index files, which in turn lead to the records.
• Directory Structure
The directory structure is used to organise the file system. The storage device can be divided into minidisks, slices or partitions. Partitions combine into volumes, and the files in a volume are located using the volume table of contents, or device directory. The operations that can be performed on the directory are:
•
Searching for a file: Searching the directory for a particular file.
•
Creating a file: Creating new files and adding them to the directory
•
Deleting a file: Deleting files from the directory
•
List of Directory: Listing the files in the directory
•
Renaming a file: Renaming the file in the directory even if the file is in use and also
repositioning the file in the directory
•
Traversing the file system: The contents and structure of the file system is saved at
regular intervals making it easier for traversing or browsing through the contents of
the directory
Keeping in mind the operations that must be applied to the directory, the different logical
structure of the directories are as follows:
•
Single level directory:
A simple directory structure where all the files are kept in a single directory in the system. This directory is easy to support and access. However, when there are multiple users, it becomes difficult to manage, since all users’ files sit in the same directory and every file must have a unique name.
•
Two level directory: (Figure. 3.2.1)
To overcome the problems faced in the single level directory, the two level directory
assigns a separate directory to every user. The outer directory is named the MFD or
master file directory, indexed by user name or account number, which points to the
nested directory called User file directory (UFD) which is unique to each user.
When a file is searched in the system, the MFD is searched for the user file directory.
Once the user file directory is found, then the target file is searched.
The collision problem is solved, since each user has his own directory; but a problem arises when a user wants to access another user’s files, since certain systems do not allow that. Every file in the system has a path name so that it can be reached.
When the target file is not found in the local UFD, then the system file directory which
contains the system files are searched. The search path is the sequence or list of directories
that are searched before finding the UFD, which contains the target file.
Figure 3.2.1: Two-Level Directory
•
Tree-structured directory: (Figure 3.2.2)
Directories are organised using a tree structure, which gives an access to each file using
the unique path name. Each process contains a current directory which contains all the
files that are accessed by the process. If the current directory does not contain the file,
then a system call is made with the target directory name as a parameter.
The types of path names are:

Absolute path name: This path name is complete in nature, starting from the root
directory and ends at the targeted file name.
For example, root/spell/exec/srt/first.txt

Relative path name: This path name is fabricated from the current directory.
For example, if the current directory is root/spell then exec/srt/first is the relative
path name.
The disadvantage of using the tree-structured directory is that the path names are longer and
the entire path name needs to be remembered to access the file.
Figure 3.2.2: Tree-Structured Directories
•
Acyclic graph directory: (Figure. 3.2.3)
When two processes share a file space which is isolated from other processes, the
common subdirectory is shared and an acyclic graph directory is used. The same
subdirectory can be a part of two directories. Acyclic graph directory allows sharing of
subdirectories and files among the processes. Now, the same shared file has more than one path name, since it can be reached via different paths.
Deletion of files also becomes difficult, since deleting a shared file may leave dangling pointers in the directories that still reference it. This can be handled by deleting all the references to the file before deleting it.
Another approach of sharing files is duplicating the file and storing it in both the
subdirectories of processes which require it. However, it is difficult to have consistency in the
shared file.
Figure 3.2.3: Acyclic-Graph Directories
•
General graph directory: (Figure 3.2.4)
If cycles are allowed in the graph structure, traversal of the graph can loop forever,
which leads to performance problems. Hence, the general graph structure should be kept
free of cycles. There are algorithms that detect cycles in the graph structure, though they
are expensive in terms of execution.
When links are added to the tree structure, the structure becomes a general graph
directory.
A file should be deleted only when its count of references has dropped to zero.
Figure 3.2.4: General Graph Directory
•
Filesystem Mounting
To make a file system available to processes, it must be mounted on the system. The
multiple volumes that constitute the directory tree are made available by mounting each
volume, placing it within the file-system name space.
Procedure to mount:
To mount a file system, parameters such as the device name and the mount point where the file
system is to be attached are passed to the operating system. The mount point is typically an
empty directory. The device is checked to verify that it holds a valid file system in the expected
directory format. The mount command lists the file systems that are currently
mounted.
In Macintosh, the file systems or devices are mounted to the root level automatically.
•
File Sharing
When two or more users work towards the same goal, file sharing reduces the effort
required.
When multiple users share files, protection mechanisms for the files must be devised. The
system can protect files by allowing other users only read access, or by granting
specific access rights to the shared files.
In a UNIX system, file sharing is enabled by granting all permissions on a file to its
owner, one set of operations on the file to certain users in the file's group,
and another set of operations to all other users.
When an operation is requested, the requesting user ID is checked against the owner ID
of the file, and the decision is based on the user's access level.
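The owner/group/universe check can be sketched in a few lines; the helper below is illustrative, not a real system call, and the 9-bit mode encoding mirrors UNIX permission bits:

```python
def allowed(mode, op, uid, gid, owner_uid, owner_gid):
    """Check a UNIX-style rwxrwxrwx permission set.

    mode is a 9-bit integer such as 0o754; op is 'r', 'w' or 'x'.
    """
    bit = {"r": 4, "w": 2, "x": 1}[op]
    if uid == owner_uid:          # owner class: highest 3 bits
        cls = (mode >> 6) & 7
    elif gid == owner_gid:        # group class: middle 3 bits
        cls = (mode >> 3) & 7
    else:                         # universe: lowest 3 bits
        cls = mode & 7
    return bool(cls & bit)

# mode 0o754: owner rwx, group r-x, universe r--
print(allowed(0o754, "w", uid=100, gid=10, owner_uid=100, owner_gid=10))  # True
print(allowed(0o754, "w", uid=200, gid=10, owner_uid=100, owner_gid=10))  # False
```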
A remote file system is deployed for sharing resources as files via FTP or another
protocol.
◦
In the client-server model, files are stored on a server, and the machines accessing
the files from the server are treated as clients, identified by an IP address or another
unique ID. The files available on the server are specified at directory level, and
the client is authenticated, via key exchange or another secure mechanism,
before the available files are shared.
◦
In a distributed information system, unified access to files via remote computing is
made possible. The Domain Name System (DNS) provides host-name-to-network-address
translation, enabling file sharing across the Internet.
◦
File systems and sharing can fail due to corruption of the directory, cable failure,
host-adapter failure and other hardware issues.
•
Protection and Consistency Semantics
Consistency semantics are the criteria for evaluating a file system that supports file sharing. A file
session is the series of operations and accesses made by a user, starting with open() and
ending with close().
•
UNIX semantics:
In UNIX, write operations on an open file are made visible immediately to all the users who
have the file open. One mode of sharing makes all the users share the current file pointer,
which has many drawbacks.
•
Session semantics:
In AFS (Andrew File System), write operations on an open file are not visible to the other
users who have the file open. The changes made to the file are reflected only after
the file is closed.
•
Immutable shared file semantics:
Here, shared files are locked or immutable; once shared, they cannot be modified.
Protection
File protection is safeguarding information from improper access. The protection
mechanism is designed around the types of access, which can range from allowing free
access to the files to prohibiting access entirely.
Commonly, access depends on the identity of the user, with different users having different
types of access. An access control list (ACL) matches user IDs against the type of
access each enjoys. If a user's request violates the ACL, access is denied. However,
constructing an ACL is tedious and complicates space management, as the directory entry then needs
to be of variable size.
Users are classified as follows:
•
Owner – Creator of file
•
Group – Users sharing the file with similar access privileges
•
Universe – All the users who are not in the above categories
A password can also be used to control access to a file, but remembering a separate
password for every file is impractical. One password for all files can be deployed instead, but it is less secure.
Self-assessment Questions
1) Which of the following access method uses Index?
a) Sequential Access
b) Direct Access
c) Contiguous Access
d) Swap Access
2) Which type of directory allocates user file directory unique to each user?
a) Tree-Structured
b) Two-Level
c) Single-level
d) Acyclic Graph
3) What does UNIX semantics state?
a) Write on an open file is not allowed
b) The write operations on an open file are made visible to all the users who have
the file open
c) Various copies of file are kept
d) Locking of open files is done
4) What is the function of file lock?
a) File available to processed
b) File unavailable to all processes
c) File unavailable to processes other than the one which is working on the file
d) File hidden
5) When is the file removed from the system?
a) When the file is not accessed by processes at all
b) When the file open count becomes zero
c) When the file open count becomes one
d) When the file is corrupted
3.2.2 Filesystem Implementation
(i) Filesystem Structure
The file system is organised in a layered approach.
•
The lowest level is made up of device drivers along with interrupt handlers that transfer
information between the device and main memory. Human-readable,
high-level commands are translated into hardware-specific instructions.
•
The I/O control level interprets the hardware-specific instructions. It also
controls the device location on which each command acts.
•
Next, the basic file system uses generic commands to read and write the
physical blocks on the device.
•
Next, the file organisation module translates logical addresses into physical
addresses using its knowledge of the physical blocks. This level also includes the free-space
manager, which keeps track of free blocks; this information is used while
allocating physical blocks.
•
The logical file system maintains the file-system structure as metadata and uses file
control blocks to hold that structure. This level is responsible for secure access
and the protection of files.
•
Application programs form the topmost layer of the file system, accessing the files.
•
A layered file system minimises the duplication of code.
(ii) Filesystem Implementation
A collection of structures is used for the implementation of the filesystem based on the OS
and the filesystem.
Some of the structures used are as follows:
•
Boot control block:
This structure contains the information needed to boot an OS from the volume; it is empty if the
volume holds no OS.
•
Volume control block:
The number of blocks in the partition, the free-block count, the block size and other volume-related
information are stored in the volume control block.
•
Master file table:
This is a directory structure, one per file system, which holds the file names in the system
along with their inode numbers.
•
Per file FCB: It contains the details regarding the file such as the size of data blocks,
the location of data blocks, ownership, file permissions, etc.
•
In-memory mount table: It holds the information about each mounted volume.
•
In-memory directory-structure cache: It contains recently accessed directories and
their related information.
•
The system-wide open file table holds every open file’s FCB copy.
•
Per process open file table has pointers to the entries in the system-wide open file
table.
When a new file is created, the application program calls into the logical file system, which knows the
format of the directory structure; a new (free) FCB is allocated and
the directory is updated.
When a file is opened with open(), the open-file table is first checked to see
whether the file is already open by another process; a pointer is then returned to the appropriate
entry in the per-process open-file table.
When a file is closed by a process, its per-process table entry is deleted and the system-wide open
count is decreased by one.
A disk is divided into partitions. A raw partition is one that holds no file system; it
can be used for purposes such as swap space in UNIX, databases and boot
information. The root partition contains the OS kernel and the other system files, and is
mounted at boot time. When mounting takes place, the validity of the file system is
checked, and the mount table is updated with the newly mounted file system and its type.
The two main functions of the Virtual File System (VFS) are:
•
separation of generic file-system operations from their respective implementations by
means of an interface, and
•
designation of a network-wide unique identifier for each file, using a
file-representation structure called a vnode.
To implement the VFS, Linux uses four object types:
•
inode object (an individual file)
•
file object (an open file)
•
superblock object (an entire file system)
•
dentry object (an individual directory entry)
(iii) Directory Implementation
Different forms of directory implementation are:
•
Linear list:
Though this method is time-consuming, it is the simplest form of
directory implementation. When a new file is to be created, all the entries in the
list are searched to ensure that no entry of the same name exists, and the new file is appended at the
end of the list. Entries in the directory can be reused by:
◦
allotting the freed entry a special name
◦
maintaining a list of free entries
◦
copying the last entry to the freed location and freeing the last entry
A disadvantage of this method is the linear search needed to find a file. A
software cache of the most recently used entries is a way to reduce search time. A sorted
list or a B-tree allows a better implementation.
•
Hash table
In this method, a linear list stores the directory entries, and a hash data structure is
deployed alongside it. The hash table computes a value from the file name and returns a pointer to the
file's entry in the list. File insertion and deletion are easy in this structure. The disadvantage of a
hash table is that it is generally of fixed size. A chained-overflow hash table, in
which each entry is a linked list, is a variation that can hold more entries.
(iv) Allocation Methods
Allocation of disk space can be done in the following ways:
•
Contiguous allocation: (Figure 3.2.5)
Contiguous disk allocation assigns each file a set of contiguous blocks in a
linear order on the disk. Accessing the next location in the file therefore needs no
head movement, or in certain cases minimal head movement.
Contiguous allocation of disk space to a file of n blocks starting at block b occupies blocks b,
b+1, b+2, …, b+n-1. Sequential access starts
from the first block of the space allotted to the file and reads the next block in an
incremental fashion until the end block of the file is read.
Contiguous disk allocation suffers from external fragmentation, with free space
scattered here and there on the disk. Consolidating all free space into one large hole, by
copying the contents to another storage unit and back, is a way to handle external
fragmentation. This process can be done off-line (with the file system in the
unmounted state) or on-line.
Allocating too little disk space terminates a program when its file grows, whereas allocating too much
wastes free space, so the right amount of space must be
allocated for each file while keeping track of the free space.
Figure 3.2.5: Contiguous Allocation of Disk Space
•
Linked Allocation: (Figure 3.2.6)
Linked allocation keeps each file as a linked list of disk blocks, whose locations can
be scattered anywhere on the disk, with the directory storing pointers to the first and
last block of the file. External fragmentation does not occur here, as any free block
can be linked into a file.
Figure 3.2.6: Linked Allocation of Disk Space
For an empty file, the pointer in the directory is initialised to nil and the size field of the
directory entry is set to zero. When a write to the file is attempted, a free block is found,
the data is written into it, and the block is linked to the previous end of the file; the file
continues to grow as blocks are linked to the list.
However, the linked list supports only sequential access, and a lot of space is
taken up by the pointers; these are the disadvantages of linked allocation.
To reduce these problems, clusters of blocks can be linked rather than individual blocks,
which decreases the space used by pointers and makes free-space management easier. But
this approach increases internal fragmentation, and direct access is still not supported.
Another major drawback of linked allocation is that once a pointer is lost, the sequence is
difficult to recover. Doubly linked lists can be used to mitigate this problem.
•
Indexed Allocation: (Figure 3.2.7)
Indexed allocation solves the problem of linked allocation's missing
direct access. An index block is used, which stores the pointers to the file's data blocks.
When a file is created, all the pointers in the index block are set to nil, and the index
block is updated whenever contents are written to any block.
One drawback of indexed allocation is that an entire index block is dedicated to each file,
even if only a few of its pointers are in use. The index block
could be made small, but that risks not having enough pointers for large files.
The variations of indexed allocation are:
◦
Linked scheme:
The index block is itself a disk block, which can be read and written directly.
Several index blocks are linked together for large files.
◦
Multilevel index:
Here, first-level index blocks point to a collection of second-level index blocks.
As the file size increases, the number of index levels increases.
◦
Combined scheme:
15 pointers are used: the first 12 point directly to data blocks, and
the last three are indirect, with the 13th pointing to a single indirect block,
the 14th to a double indirect block and the 15th to a triple indirect
block.
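The reach of the combined scheme can be worked out numerically. A sketch, assuming 4 KB blocks and 4-byte block pointers (both sizes are illustrative assumptions, not fixed by the scheme):

```python
BLOCK = 4096                      # bytes per block (assumed)
PTR = 4                           # bytes per block pointer (assumed)
ptrs_per_block = BLOCK // PTR     # 1024 pointers fit in one index block

direct = 12                       # pointers 1-12: direct data blocks
single = ptrs_per_block           # pointer 13: single indirect block
double = ptrs_per_block ** 2      # pointer 14: double indirect block
triple = ptrs_per_block ** 3      # pointer 15: triple indirect block

max_file_blocks = direct + single + double + triple
max_file_bytes = max_file_blocks * BLOCK
print(max_file_blocks)            # 1074791436 blocks, roughly 4 TB of data
```

Each extra level of indirection multiplies the reach by another factor of 1024, which is why three levels suffice for very large files.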
The performance of an allocation method varies with the implementation used.
For small files, contiguous allocation is used, and as files grow the other two
methods are resorted to.
Figure 3.2.7: Indexed Allocation of Disk Space
(v) Free Space Management
When files are deleted, the freed space must be kept track of with a free-space list, which
can be implemented in the following ways:
•
Bit Vector:
A bit vector or bitmap keeps one bit per block. This approach is simple
and efficient for finding the first free block, or n consecutive
free blocks.
A bit is 1 if the block is free; otherwise it is 0.
If, among 15 consecutive blocks, blocks 5, 7, 9 and 11 are free, then the bit vector is
000001010101000
The block number of the first free block is calculated as:
(number of bits per word) × (number of 0-valued words) + offset of first 1 bit
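The formula can be exercised on the 15-block example. A sketch using 5-bit words (an arbitrary word size chosen so the example splits evenly into three words; a real system would use the machine word):

```python
def first_free(words, bits_per_word):
    """Apply the formula: bits per word * number of 0-valued words
    + offset of the first 1 bit (a 1 bit marks a free block)."""
    for i, word in enumerate(words):
        if word != 0:
            offset = next(b for b in range(bits_per_word)
                          if word & (1 << (bits_per_word - 1 - b)))
            return bits_per_word * i + offset
    return None  # no free block anywhere

# Blocks 5, 7, 9 and 11 free out of 15 -> bit vector 000001010101000
bits = "000001010101000"
words = [int(bits[i:i + 5], 2) for i in range(0, 15, 5)]
print(first_free(words, 5))   # 5, the number of the first free block
```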
•
Linked List
In this approach, all the free disk blocks are linked together: each free block
holds a pointer to the next free block, and a pointer to the first free block is kept in a
special location. Traversing the list is a costly operation,
though a full traversal is not a frequent action.
•
Grouping
The addresses of n free blocks are stored in the first free block. The first n-1 of these blocks are
actually free, and the nth block contains the addresses of the next group of n free blocks. This
implementation gives quick access to the addresses of a large number of free blocks.
•
Counting
When the free space is contiguous, rather than storing the address of every free block,
only the first free block and the number n of free blocks that follow it are kept track of. The free-space
list then stores an address and a count per run of free blocks,
making the list smaller and easier to manage.
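The counting scheme is essentially a run-length encoding of the free list. A sketch (the block numbers are invented for illustration):

```python
def count_runs(free_blocks):
    """Collapse a list of free block numbers into (first block, count)
    pairs, one per contiguous run, as in the counting scheme."""
    runs = []
    for block in sorted(free_blocks):
        if runs and block == runs[-1][0] + runs[-1][1]:
            first, count = runs[-1]
            runs[-1] = (first, count + 1)   # extend the current run
        else:
            runs.append((block, 1))         # start a new run
    return runs

print(count_runs([2, 3, 4, 5, 8, 9, 13]))  # [(2, 4), (8, 2), (13, 1)]
```

Seven free blocks collapse into three (address, count) entries; the longer the contiguous runs, the greater the saving.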
(vi) Efficiency and Performance
The efficient use of disk space relies on apt disk-allocation and directory algorithms.
When inodes are used, file-system performance can be increased by spreading them
across the volume.
The type of data kept in a directory entry should also be taken into consideration. The last-write
date is recorded to decide whether the file needs backing up; a last-access date, in contrast,
must be updated each time the file is read, which is a constant cost for frequently accessed files.
Efficiency in terms of pointers depends on the space required by each pointer (16, 32 or 64 bits);
the size of the pointer limits the maximum size of a file.
Performance
Disk controllers include local memory that forms an on-board cache able to store entire tracks at
a time. When a seek is requested, the heads are moved into place and an entire track is read,
starting from whatever sector is currently under the heads. The requested sector is returned,
and the unrequested portion of the track is cached in the disk's electronics.
A buffer cache stores blocks that are likely to be used again, and a page cache uses virtual-memory
techniques to cache file data as pages. In a unified virtual memory, a single page cache holds both
process pages and file data. When the two caches are kept separate, the same file-system
data may be cached twice, once in each.
Solaris employs priority paging, in which the page scanner gives priority to process pages
over page-cache pages, maximising memory use and minimising thrashing.
A synchronous write makes the calling routine wait until the data has reached the disk.
Asynchronous writes increase performance: the data is stored in
the cache and written to the disk later.
(vii) Recovery
Files and directories are kept both in main memory and on disk, and care must be taken that a system failure does not leave them inconsistent.
Consistency Checking:
Directory information exists in two copies, and the version in main memory is the more up to
date. When the computer crashes, the buffer and cache contents are lost, possibly leaving the
on-disk file system in an inconsistent state; a consistency-checker program checks for and fixes
inconsistencies. If linked allocation is used, the entire structure can be recovered by following
the links to the next blocks. The loss of a directory entry, however, can be disastrous, since the data
blocks know nothing about one another. UNIX caches directory entries for this reason.
Backup
Backing up data to a secondary storage device ensures that the data is not lost even if the entire
system crashes. Information about the previous backup is stored to reduce the amount of
data copied during regular backups: an incremental backup copies only the entities that
are not already backed up. This method is adopted because a full backup every time is expensive.
Whenever the need occurs, the crashed elements are restored from the backups.
The incremental backup schedule is as follows:
Day 1: Full backup of the entire system
Day 2: Copy the files changed since day 1
Day 3: Copy the files changed since day 2
...
Day N: Copy the files changed since day N-1
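The schedule reduces to a simple selection rule: each day, copy only the files whose last-write time is later than the previous backup. A sketch with invented file names and timestamps:

```python
def incremental(files, last_backup):
    """Return the names of the files modified after the previous backup."""
    return sorted(name for name, mtime in files.items() if mtime > last_backup)

files = {"a.txt": 10, "b.txt": 25, "c.txt": 40}   # name -> last-write time

print(incremental(files, last_backup=0))    # day 1, full: all three files
print(incremental(files, last_backup=20))   # later day: only b.txt, c.txt
```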
Self-assessment Questions
6) Which of the following uses generic commands for read or writing into physical
blocks of device drivers?
a) Logical File System
b) Basic File System
c) Application Program
d) Interrupt Handler
7) What is the correct sequence of layered file system?
a) Devices, I/O control, Basic filesystem, File organisation module, Logical file
system, application programs
b) Application programs, Logical filesystem, File organisation module, Basic file
system, I/O control, Devices
c) Basic file system, Logical filesystem, Application programs, File organisation
module, I/O control, Devices
d) Devices, Application programs, I/O control, Logical filesystem, Basic
filesystem, File organisation module
8) Why is directory information kept in main memory or cache?
a) Fill up the cache
b) Increase free space in secondary storage
c) Decrease free space in secondary storage
d) Speed up access
9) What are the features of layered file system?
a) Minimising duplication of node, Complex access
b) Maximising duplication of node, Easier access
c) Maximising duplication of node, Complex access
d) Minimising duplication of node, Easier access
10) What type of storage device ensures that the backup data is not lost even if an entire
system crashes?
a) Main
b) Primary
c) External
d) Secondary
3.2.3 Disk Management
(i) Overview of Mass Storage Structure
Apart from main memory, the following secondary and tertiary storage devices are used:
•
Magnetic Disk:
Magnetic disks provide the bulk of secondary storage for the system, commonly with
platter diameters ranging between 1.8 and 5.25 inches. Platters in the magnetic disk hold the
information, with a read/write head above each platter surface and a disk arm
moving all the heads together. The surface of each platter is divided into tracks,
which are further divided into sectors. A cylinder is the set of tracks at one
arm position. With respect to the magnetic disk:
◦
Transfer rate: speed of data flow between the computer and the drive
◦
Random access time/seek time: time taken to move the disk arm to the desired
cylinder
◦
Rotational latency: time taken for the target sector to rotate under the disk head.
The disk head may make contact with the fragile disk platter, leading to a head crash and
loss of data. Magnetic disks can be removable, and multiple disks can be used in one
system.
•
Magnetic Tapes:
Magnetic tape is another secondary-storage medium, which can hold quite a large
amount of data and is a permanent type of storage. However, the speed of access on
magnetic tape is about 1/100th of the access speed of a magnetic disk, since the tape must be
wound or rewound to reach the target data. Once the head is positioned, the speed of
access is comparable to that of a magnetic disk.
(ii) Disk Structure
Disks are divided into logical blocks, and the disk is viewed and addressed as a large
one-dimensional array of blocks. A block is mostly 512 bytes, or in rare cases 1024 bytes. Sector 0 is
the first sector of the first track on the outermost cylinder, and the numbering proceeds inward from there.
A logical block can be converted to a disk address (cylinder number, track number and sector
number) when needed. However, the difficulties in the mapping are:
•
Disks have defective sectors, which the mapping hides by substituting spare sectors
•
The number of sectors in each track is not fixed on all drives
The use of constant linear velocity (CLV) keeps the density of bits per track
uniform: tracks on the outer layers have a greater circumference and hold more
sectors, so the drive rotates faster over inner tracks. Alternatively, the rotation speed can be
held constant and the number of bits per track decreased from inner to outer tracks, keeping
the data rate uniform for the outer and inner tracks; this is called constant angular
velocity (CAV).
(iii) Disk Attachment
Disk storage can be accessed in two ways, either by I/O ports or through a remote host in a
distributed file system.
•
Host-attached storage:
Disk storage accessed via local I/O ports. The I/O bus architecture used by a
typical desktop is named IDE or ATA; Fibre Channel or SCSI is used in
sophisticated systems.
◦
SCSI, a bus architecture, can support 16 devices on the bus, including the SCSI
initiator (controller card), with a SCSI disk being the common target. SCSI also allows
the addressing of 8 logical units in each target.
◦
FC, or Fibre Channel, is an optical-fibre connection with a high-speed serial
architecture. It can be either an arbitrated loop addressing up to 126 devices or a
large switched fabric with a 24-bit address space.
•
Network attached storage
NAS is a special-purpose storage system accessed by remote hosts over a data
network. Network-attached storage is accessed by means of remote procedure calls
carried over TCP or UDP on an IP network. A RAID array is commonly used to implement
the attached storage unit.
This type of storage is a convenient way to share a storage pool with easy
access.
•
Storage area network
Network-attached storage consumes bandwidth on the data network, which in
turn increases the latency of the network. A storage area network is instead a network private to
the storage units and servers. Since the storage units and servers are connected locally,
if a host has low disk space, more disk space can be allotted to it from the storage units. A
SAN switch can prohibit or allow access between storage and hosts. A SAN is flexible
and can attach any number of hosts and storage units.
(iv) Disk Scheduling
Disk scheduling uses an algorithm that works on reducing the seek time, the time required by
the disk arm to move the head to the cylinder holding the target sector. The
additional time required for the disk to rotate until the target sector is under the head is the
rotational latency. Bandwidth is the rate of transfer of bytes.
The scheduling algorithms are illustrated on a disk queue with requests for the following cylinders:
98, 183, 37, 122, 14, 124, 65, 67, with initial head position 53.
FCFS scheduling:
The simplest form of scheduling, which follows the first-come, first-served rule.
The head is moved from 53 to 98, then to all the other cylinders in queue order, without any
optimisation technique applied.
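The total head movement under FCFS can be tallied with a short sketch:

```python
def fcfs_movement(head, queue):
    """Total cylinders traversed when requests are served in arrival order."""
    total = 0
    for cylinder in queue:
        total += abs(cylinder - head)
        head = cylinder
    return total

queue = [98, 183, 37, 122, 14, 124, 65, 67]
print(fcfs_movement(53, queue))   # 640 cylinders of head movement in total
```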
SSTF scheduling:
Shortest seek time first selects or processes, at each step, the pending request with the least
seek time from the current head position.
Starting from head position 53 in the example, the requests are serviced in the following order,
each shown with its seek distance from the previous position:
65 (seek distance: 12)
67 (seek distance: 2)
37 (seek distance: 30)
14 (seek distance: 23)
98 (seek distance: 84)
122 (seek distance: 24)
124 (seek distance: 2)
183 (seek distance: 59)
This gives a total head movement of 236 cylinders.
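SSTF repeatedly picks the pending request nearest the current head position; a minimal sketch for the example queue:

```python
def sstf(head, queue):
    """Serve, at each step, the pending request closest to the current head."""
    pending, order = list(queue), []
    while pending:
        nearest = min(pending, key=lambda c: abs(c - head))
        pending.remove(nearest)
        order.append(nearest)
        head = nearest
    return order

print(sstf(53, [98, 183, 37, 122, 14, 124, 65, 67]))
# [65, 67, 37, 14, 98, 122, 124, 183]
```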
SCAN scheduling:
In SCAN scheduling, the disk arm sweeps the cylinders from one end of the disk to the other,
servicing the requests in that direction; the remaining requests are serviced during the reverse
sweep in the opposite direction.
The current head position (53) and the direction of the sweep must be known. Here the
sweep is towards 0, and on reaching the end of the disk the direction is reversed.
Requests serviced during the initial sweep: 37, 14
Requests serviced during the reverse sweep: 65, 67, 98, 122, 124 and 183.
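With the head at 53 and the sweep moving toward 0 first, the SCAN service order falls out of splitting the queue around the head position; a small sketch:

```python
def scan_toward_zero(head, queue):
    """SCAN with the initial sweep toward cylinder 0: requests at or below
    the head in descending order, then the rest ascending on the way back.
    (The arm still travels to cylinder 0 before reversing, unlike LOOK.)"""
    down = sorted((c for c in queue if c <= head), reverse=True)
    up = sorted(c for c in queue if c > head)
    return down + up

print(scan_toward_zero(53, [98, 183, 37, 122, 14, 124, 65, 67]))
# [37, 14, 65, 67, 98, 122, 124, 183]
```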
C-SCAN scheduling:
C-SCAN is similar to SCAN scheduling, but it does not service requests on the return trip:
the cylinders are treated as a circular list, and on reaching the end of the disk the head is
redirected to the start of the disk rather than sweeping back.
LOOK scheduling:
In LOOK scheduling, the arm does not travel to the end of the disk; the
direction is reversed as soon as the last request in the current direction has been serviced. If the
disk in the example has cylinders 0 to 199, the direction of the initial sweep is changed at 14
rather than scanning down to 0, and changed again when the request for 183 is processed rather
than going up to 199, which improves the efficiency of this scheduling.
(v) Disk Management
A new magnetic disk is blank and must be formatted into sectors readable and writable by
the disk controller via a process called low-level formatting. Low-level formatting fills the disk
with a data structure for each sector, containing a header, a data area and a trailer. Low-level
formatting is mostly done during manufacture itself.
An ECC (error-correcting code) is included in the header and trailer; it is computed and stored
whenever the sector is written. Whenever a read request comes in, the ECC is recalculated and
compared with the stored value. If they do not match, corruption has occurred.
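The write-time/read-time check behaves like any checksum comparison. A toy sketch, with CRC-32 standing in for a real controller's error-correcting code (a true ECC can also repair a few bad bits, which a CRC cannot):

```python
import zlib

def write_sector(data):
    """Store the data together with a checksum computed at write time."""
    return {"data": data, "ecc": zlib.crc32(data)}

def read_sector(sector):
    """Recompute the checksum on read; a mismatch means corruption."""
    if zlib.crc32(sector["data"]) != sector["ecc"]:
        raise IOError("bad sector: stored and computed ECC differ")
    return sector["data"]

sector = write_sector(b"payload")
print(read_sector(sector))        # checksums match, data returned

sector["data"] = b"corrupt"       # simulate media damage
# read_sector(sector) would now raise IOError
```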
For a disk to hold files:
•
First step: partition the disk into one or more groups of cylinders
•
Second step: logical formatting, in which the OS stores the initial file-system data structures
onto the disk
Grouping of blocks into clusters is done to increase efficiency.
A boot program is mandatory for the computer system to start, initialising all the aspects of
the system. The bootstrap program contains simple code which finds the OS kernel and loads
it into memory; it is stored in disk blocks called boot blocks. The disk which has the boot
partition is termed the boot disk or system disk, and the partition which contains the OS and
device drivers is termed the boot partition.
Disks are subject to mechanical wear, leading to bad blocks, which can be handled
as follows:
•
Manual handling, where the bad blocks are detected and noted in the FAT entry so
that the allocation routines do not allot them.
•
In sophisticated disks, the controller keeps a list of the bad blocks.
Bad block routine:
1. The OS tries to read logical block 90.
2. The ECC is calculated and the sector is confirmed to be a bad block.
3. A special command is invoked to replace the bad block with a spare block.
4. Whenever a request for block 90 occurs, it is automatically redirected to the block which was
assigned in step 3.
(vi) Swap-space Management
Swap-space management is a low-level OS task supporting virtual memory. Swap space can
hold any data or code: an entire process image, or pages which have been temporarily
pushed out of memory. The amount of swap space required varies from system to system,
based on the amount of virtual and physical memory. Linux can have more than one swap
space, spread over several I/O devices. Swap space must be large enough to hold what is
swapped out and small enough not to waste disk space.
Swap space may be located in the:
• File system: here, the swap space is treated as one large file, and the normal file-system routines are used to create it, name it, and allocate its space. Though this approach is easy to implement, it is inefficient: external fragmentation may increase, lengthening seek operations and increasing the number of swaps. Allocating the space contiguously reduces fragmentation, but the cost of traversing the file-system data structures remains.
• A separate disk partition dedicated to swap space, managed by a swap-space storage manager that optimises the speed with which the swap space is accessed.
(vii) Stable storage implementation
Stable storage, as the name suggests, never loses its data. It is implemented with multiple storage devices that have independent failure modes, so that the failure of one device does not affect the others. During recovery, all copies in the multiple storage devices are forced to be consistent with the correct value.
A disk write can result in successful completion, with all data written to the disk; partial failure, with some sectors written with the new data and others corrupted; or total failure, where the write never happens and the old data remain intact.
To allow failure detection and recovery, two physical blocks are dedicated to each logical block, and a write is declared successful only when both physical blocks have been successfully written with the same value. During recovery, the contents of the two blocks are compared with one another. If one block is detected to have an error, it is replaced with the contents of the other block. If neither block shows a detectable error but their contents differ, the contents of the second block are replaced with those of the first.
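The two-copy scheme can be sketched as follows; this is a minimal model assuming no ECC errors, with the crash simulated by hand.

```python
# Sketch of stable-storage writes: each logical block is backed by two
# physical copies, and a write is complete only when both copies agree.

class StableBlock:
    def __init__(self, value=b""):
        self.copy1 = value
        self.copy2 = value

    def write(self, value):
        # Write the first copy, then the second; a crash between the two
        # writes leaves the copies inconsistent, which recovery detects.
        self.copy1 = value
        self.copy2 = value

    def recover(self):
        # If the copies differ (and neither shows a detectable error),
        # the second copy is overwritten with the first.
        if self.copy1 != self.copy2:
            self.copy2 = self.copy1
        return self.copy1

blk = StableBlock(b"old")
blk.copy1 = b"new"     # simulate a crash after the first physical write
print(blk.recover())   # recovery makes both copies consistent again
```

After recovery both copies hold the same value, so a later read can never return a mix of old and new data.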
Non-volatile memory, such as battery-backed NVRAM, may also be used as part of a stable-storage implementation.
Self-assessment Questions
11) In the given queue sequence 105, 58, 45, 99, 2, 20, with current head position 54 and
direction of scan towards 0; which cylinder does the SSTF access next?
a) 58
b) 45
c) 2
d) 99
12) In the given queue sequence 105, 58, 45, 99, 2, 20, with current head position 54 and
direction of scan towards 0; which cylinder does the Scan Scheduling access next?
a) 58
b) 45
c) 2
d) 105
13) In the given queue sequence 105, 58, 45, 99, 2, 20, with current head position 54 and
direction of scan towards 0; which cylinder does the FCFS access next?
a) 58
b) 45
c) 2
d) 105
14) Which of the following is a special-purpose storage system that is accessed remotely by hosts over a data network?
a) Host Attached Storage
b) Storage Area Network
c) Disk Attached Storage
d) Network Attached Storage
15) How many devices can an FC arbitrated loop hold?
a) 126
b) 24
c) 64
d) Infinite
Summary
o A file is a collection of related information which is recorded on secondary
storage such as magnetic tapes, magnetic disks, and optical disks.
o A filesystem is used to control how data are stored and retrieved.
o The basic idea behind mounting filesystems is to merge multiple filesystems into
one large tree structure.
o Filesystems can be mounted only by root, unless root has previously configured certain filesystems to be mountable at certain pre-determined mount points.
o The traditional Windows OS runs an extensive two-tier directory structure, in
which the first tier of the structure separates volumes by drive letters, and a tree
structure is applied below that level.
o Various forms of distributed filesystems allow remote filesystems to be mounted onto a local directory structure and accessed using normal file-access commands.
o When a computer system remotely mounts a filesystem that is physically located on another system, the system which physically owns the files acts as the server, and the system which mounts them is called the client.
o The Domain Name System (DNS) provides a unique naming system all over the Internet.
o User names within a domain can be managed by the Network Information Service (NIS).
o A newer approach is the Lightweight Directory Access Protocol (LDAP), which provides secure single sign-on for all users to access all resources on a network.
o Consistency semantics concern the consistency among multiple views of shared files on a networked system.
o Block sizes in a hard disk may vary from 512 bytes to 4 KB, or larger.
o The file-organisation module works with files and their logical blocks, and with how those map to physical blocks on the disk.
o A boot-control block (per volume), known as the boot block in UNIX or the partition boot sector in Windows, contains the information needed to boot the system from this disk.
o The File Control Block, FCB, (per file) comprises details about ownership, size, permissions, dates, etc. UNIX stores this data in inodes; NTFS stores it in the master file table, which is structured like a relational database.
o Contiguous allocation requires that all blocks of a file be kept together contiguously.
o A Storage-Area Network, SAN, joins computers and storage devices in a network,
using storage protocols instead of network protocols.
o Bandwidth is measured as the total amount of data transferred divided by the total time from the first request being made to the last transfer being completed (for a series of disk requests).
Terminal Questions
1. Elaborate on the concept of Filesystem Interface.
2. List the various methods of File Allocation.
3. Explain Disk Scheduling.
Answer Keys
Self-assessment Questions
Question No.    Answer
1               b
2               b
3               b
4               c
5               b
6               b
7               b
8               d
9               d
10              d
11              b
12              b
13              d
14              d
15              a
Activities
Activity Type: Offline/Online
Duration: 45 Minutes
Description:
Research various disk-scheduling algorithms and discuss when each is used.
a. FIFO
b. SSTF
c. LOOK
Case Study
The success of the retail giant Staples is built on three core values: a broad selection of quality merchandise, everyday low prices, and a focus on providing superior customer service. Disk management, however, took a lot of time.
Staples was using the OS utilities to maintain the DASD (Direct Access Storage Device) on all 14 of their iSeries systems. They needed to IPL (Initial Program Load) once a week, and they had no idea how fast the DASD on the iSeries was growing. Managing the resource was extremely labour-intensive.
Staples had a problem with disk management: it took a lot of time, and they had continuous capacity problems. So they started looking for a solution that automated the complete maintenance process and provided all the variance and trend information without any operator involvement. S4i's DASD-Plus was selected for this purpose, as it seemed to be a robust system that was easy to implement and operate.
This system saved Staples many man-hours by automating the critical tasks that control disk resources. It was considered 'best-of-breed' for disk management across the complete iSeries enterprise, providing feature-rich operations capabilities and a simple-to-use front end that let operations staff dedicate their time to more productive tasks.
Discussion Questions:
1. Analyse the case study for the problems faced by Staples and present your
observations.
Bibliography
e-References
• Filesystem. Retrieved 2 Nov, 2016 from https://it325blog.files.wordpress.com/2012/09/operating-system-concepts-7-th-edition.pdf
• Filesystem Implementation. Retrieved 2 Nov, 2016 from https://it325blog.files.wordpress.com/2012/09/operating-system-concepts-7-th-edition.pdf
• Disk Management. Retrieved 2 Nov, 2016 from https://it325blog.files.wordpress.com/2012/09/operating-system-concepts-7-th-edition.pdf
Image Credits
• Figure 3.2.1: https://www.cs.uic.edu/~jbell/CourseNotes/OperatingSystems/10_FileSystemInterface.html
• Figure 3.2.2: https://www.cs.uic.edu/~jbell/CourseNotes/OperatingSystems/10_FileSystemInterface.html
• Figure 3.2.3: https://www.cs.uic.edu/~jbell/CourseNotes/OperatingSystems/10_FileSystemInterface.html
• Figure 3.2.4: https://www.cs.uic.edu/~jbell/CourseNotes/OperatingSystems/10_FileSystemInterface.html
• Figure 3.2.5: https://www.cs.uic.edu/~jbell/CourseNotes/OperatingSystems/12_FileSystemImplementation.html
• Figure 3.2.6: https://www.cs.uic.edu/~jbell/CourseNotes/OperatingSystems/12_FileSystemImplementation.html
• Figure 3.2.7: https://www.cs.uic.edu/~jbell/CourseNotes/OperatingSystems/12_FileSystemImplementation.html
External Resources
• Davis, W. S. (1987). Operating systems: A systematic view. Reading, MA: Addison-Wesley Pub.
•
Biswal, R. (2013). Operating system. New Delhi: Anmol Publ.
•
Ahmad, A. (2010). Operating system. New Delhi: Knowledge Book Distributors.
Video Links
Linux Filesystem: https://www.youtube.com/watch?v=9dS6zadm33U
Contiguous Memory: https://www.youtube.com/watch?v=F4YTeWBwNIY
Disk Scheduling Algorithms: https://www.youtube.com/watch?v=X63lwwQtpic
Notes:
Operating Systems Building Blocks
MODULE - IV
Protection and Security
MODULE 4
Protection and Security
Module Description
The main goal of studying Protection and Security is to understand the concepts of protection and security in the operating system.
By the end of this module, students will learn about the goals of protection, principles of
protection, domain of protection, access matrix and its implementation, revocation of access
rights, capability based systems, and language based protection. The students will also learn
about the security problem, user authentication, one time password, system threats,
cryptography, and computer security classification.
By the end of this module, students will be able to elaborate on the goals and principles of protection in a modern computer system, illustrate how protection domains along with the access matrix specify the resources a process may access, and explain capability-based and language-based protection systems. They will also be able to discuss security, threats and attacks, describe the concept of cryptography, and elaborate on countermeasures to security attacks.
Chapter 4.1
Protection
Chapter 4.2
Security
Operating Systems Building Blocks
Protection
Chapter Table of Contents
Chapter 4.1
Protection
Aim ..................................................................................................................................................... 219
Instructional Objectives................................................................................................................... 219
Learning Outcomes .......................................................................................................................... 219
4.1.1 Goals and Principles of Protection ...................................................................................... 220
Self-assessment Questions .................................................................................................... 221
4.1.2 Domain of Protection ............................................................................................................ 222
Self-assessment Questions .................................................................................................... 225
4.1.3 Access matrix and its implementation ................................................................................ 226
Self-assessment Question ..................................................................................................... 231
4.1.4 Revocation of Access rights .................................................................................................. 232
4.1.5 Capability-based systems ...................................................................................................... 233
Self-assessment Questions .................................................................................................... 235
4.1.6 Language based protection ................................................................................................... 236
Self-assessment Question ..................................................................................................... 239
Summary ........................................................................................................................................... 240
Terminal Questions.......................................................................................................................... 241
Answer Keys...................................................................................................................................... 242
Activity............................................................................................................................................... 243
Case Study ......................................................................................................................................... 243
Bibliography ...................................................................................................................................... 246
e-References ...................................................................................................................................... 246
External Resources ........................................................................................................................... 246
Video Links ....................................................................................................................................... 247
Operating Systems Building Blocks | Protection and Security
Protection
Aim
To provide the students with the fundamentals of Protection in Operating System
Instructional Objectives
After completing this chapter, you should be able to:
•
Outline various goals and principles of protection
•
Discuss how protection domains combined with an access matrix are used to
specify the resources a process may access
•
Elaborate the capability and language based protection system
Learning Outcomes
At the end of this chapter, you are expected to:
•
Discuss the goals and principles of protection
•
Illustrate how protection domains along with access matrix specifies the
resources a process may access
•
Recall the capability and language based protection system
4.1.1 Goals and Principles of Protection
Protection in an operating system aims at making the system more reliable. Protection mechanisms allocate system resources in accordance with the stated policies of the system and support the enforcement of security policies. Protection in the OS focuses on threats internal to the system.
Protection remains mandatory in the OS, since the number of malicious attempts to violate access restrictions keeps increasing day by day for financial, political and other reasons. A sound mechanism is a must to keep the system intact and free of malpractice.
Protection aims at detecting potential errors related to access of resources in the system,
thereby making sure the resources of the system are protected against threats.
Protection aims at providing mechanisms which govern usage of resources as per norms that can be implemented at any layer, such as:
• Design of the OS
• Individual user
• Management
Principles of protection
A key guiding principle plays a vital role in the design of a system, making the system consistent and simple to understand.
One common guiding principle of protection is the principle of least privilege. It states that each component is given just enough privilege, or access, to carry out its task. The less privilege a component holds, the less risk is involved. For example, consider the guard of an apartment block who holds a key. If the key opens only the compound gate, while each flat in the apartment has its own key that the guard's key cannot open, then the principle of least privilege is followed and the risk associated with losing the guard's key is comparatively small.
An operating system that follows the principle of least privilege defines specific, fine-grained access controls via its system calls and services. This leads to a more reliable operating system.
Implementations of the principle of least privilege in a real-world computing environment include:
• A service employee who tracks employee attendance has access only to the employee database and the commands needed for that task.
• Separate accounts are created for users of a shared system, with privileges allocated to each user based on need.
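The second arrangement can be sketched as a tiny permission check; the role and permission names below are illustrative, not taken from any particular system.

```python
# Least privilege as a role -> permission-set mapping: each account is
# granted only the permissions its task needs, and nothing more.

ROLE_PERMISSIONS = {
    "attendance_clerk": {"read_employee_db", "run_attendance_report"},
    "administrator":    {"read_employee_db", "write_employee_db",
                         "create_account", "run_attendance_report"},
}

def can(role, permission):
    """An action is allowed only if the role was explicitly granted it."""
    return permission in ROLE_PERMISSIONS.get(role, set())

print(can("attendance_clerk", "read_employee_db"))   # allowed: needed for the task
print(can("attendance_clerk", "write_employee_db"))  # denied: never granted
```

Because the clerk role was never granted write access, a compromised clerk account cannot damage the database: the principle limits the harm malicious access can do.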
Self-assessment Questions
1) Which principle state that the programs, users and even the systems be given just
enough privileges to perform their task?
a) Principle of Process Scheduling
b) Principle of Operating System
c) Principle of Least Privilege
d) Principle of Design
2) Which of the following limits potential damage malicious access can do to the system?
a) Principle of Least Privilege
b) Need to know Principle
c) Principle of Design
d) Principle of Process Scheduling
4.1.2 Domain of Protection
The domain of protection defines the scope of the objects in a particular domain by assigning privileges to access software and hardware. A domain can be a process, a user or a procedure. Software privileges include read, write and execute access; hardware privileges include the right to print on a printer, and so on. The same object can have different privileges in different domains.
Domain Structure
Each process is associated with a protection domain. The protection domain defines the set of objects the process may access and the types of operations that may be invoked on each object. Each access right has the format:
<object-name, rights-set>
For example, domain D1 has the access right <file F1, {read}>: a process in D1 can only read file F1, and no other operation can be performed on F1 from within protection domain D1.
Characteristics of domains:
• Domains can share access rights and need not be discrete or disjoint.
• The association between a process and a domain can be static or dynamic.
o Static, if the set of resources available to the process is fixed and does not vary during the process's lifetime. However, the need-to-know principle is violated by static association, since the domain cannot be modified to reflect the minimum access rights as needs change; for example, if a domain D2 with read and write access should be restricted to read-only for a particular process P2, static association still allows P2 write access.
o Dynamic, if the resources available to the process change over time, allowing domain switching, where a process moves from one domain to another. When the content of a domain must remain static, the dynamic effect can still be achieved by creating a new domain with the required content and switching the process to the new domain.
Figure 4.1.1: Domains sharing the access rights
In Figure 4.1.1 above, D1 is a disjoint domain, whereas D2 and D3 are overlapping domains sharing the right to print object O4. Processes in domain D1 can read and write objects O1 and O3 and can execute object O2.
A domain can be any one of the following:
• User: the identity of the user determines the access to objects. Domain switching occurs when the current user logs out and another user logs in.
• Process: the identity of the process determines the objects it can access. Domain switching occurs when one process communicates via messages with another process and waits for a response.
• Procedure: when a procedure is the domain, the set of accessible objects is the set of variables defined locally within the procedure.
Domain in UNIX:
In UNIX, each user is associated with a domain, and switching domains means temporarily changing the user identification. Each file carries an owner identification and a domain bit known as the setuid bit. When the setuid bit is turned on, a user who executes the file has his user-id changed to the user-id of the file's owner; this does not happen when the setuid bit is turned off.
When user A accesses F, whose setuid bit is on, the user-id of user A is temporarily set to B1 so that access can be granted to user A. This temporary user-id change ends once access to file F is no longer needed.
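The setuid behaviour described above can be captured in one toy function; this is a model of the semantics only, not of any real kernel code, and the uids "A1" and "B1" come from the example.

```python
# Toy model of the UNIX setuid bit: while a setuid file is in use, the
# accessing user's effective uid becomes the file owner's uid; when the
# bit is off, the effective uid is unchanged.

def effective_uid(current_uid, file_owner_uid, setuid_bit):
    return file_owner_uid if setuid_bit else current_uid

# User A (uid "A1") accesses file F owned by user B (uid "B1"):
print(effective_uid("A1", "B1", setuid_bit=True))   # temporarily "B1"
print(effective_uid("A1", "B1", setuid_bit=False))  # stays "A1"
```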
Other methods used by operating systems to change domains are to:
• Place privileged programs in a special directory, with the OS setting the user-id to root or to the user-id of the owner of that special directory when such a program runs.
• Change domains directly, where the domain definition is keyed to the user ID.
Domain in MULTICS
Figure 4.1.2: MULTICS Rings
As shown in Figure 4.1.2, the MULTICS system arranges its hierarchy of protection domains in the form of a ring structure, with the innermost ring being the most privileged. If j is less than i, then Di is a subset of Dj. The rings are numbered from zero to seven. In a two-ring scenario, monitor mode corresponds to the inner ring and user mode corresponds to the outer ring, D1. A segmented address space is used, with each segment corresponding to a file and associated with one of the rings; access bits for read/write/execute are attached to each segment. A current-ring-number is assigned to each process to validate access.
Domain switching:
Domain switching, i.e., switching from one ring to another, must happen in a controlled manner in MULTICS. For controlled domain switching, each segment carries an access bracket (a pair of integers b1, b2 with b1 less than or equal to b2), a limit (an integer b3, with b3 greater than b2), and a list of gates (the entry points through which the segment may be called).
A process executing in ring i may call a segment with access bracket (b1, b2) directly only if i lies between b1 and b2, or is equal to either. Otherwise:
• If the value of i is less than b1, the call is allowed, but execution proceeds with fewer privileges, and parameter values must be copied so that they remain accessible to the called procedure.
• If the value of i is greater than b2, the call is allowed only if b3 is equal to or greater than i, and it must enter via an entry point declared in the list of gates.
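These rules can be collected into one decision function; the parameter names b1, b2, b3 follow the text, and the gate-membership flag is a simplification (a real system would check the specific entry point against the list of gates).

```python
# Decision rule for calling a segment with access bracket (b1, b2) and
# limit b3 from ring i (smaller ring number = more privilege).

def ring_call_allowed(i, b1, b2, b3, entry_is_gate=True):
    if b1 <= i <= b2:
        return True          # caller is inside the access bracket
    if i < b1:
        return True          # more privileged caller: allowed, with
                             # parameters copied for the called procedure
    # i > b2: less privileged caller, allowed only up to the limit b3
    # and only through an entry point declared in the list of gates.
    return i <= b3 and entry_is_gate

print(ring_call_allowed(i=3, b1=2, b2=4, b3=6))  # inside the bracket
print(ring_call_allowed(i=5, b1=2, b2=4, b3=6))  # beyond b2, but within b3 via a gate
print(ring_call_allowed(i=7, b1=2, b2=4, b3=6))  # beyond the limit b3: refused
```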
Disadvantages:
• The need-to-know principle is violated
• The scheme is complex in nature
• It is less efficient
Self-assessment Questions
3) Which of the following is assigned to each process to validate the access?
a) Current Ring Number
b) Segmented Address Space
c) Access Bits
d) Domain Switching
4) What is the domain bit to which files are associated in UNIX is known as?
a) SUID
b) SETUD
c) SETUID
d) SEUID
5) If the set of resources available to the process is fixed throughout the process’s
lifetime, then its domain is said to be dynamic
a) True
b) False
6) Which of the following is violated in case of static association?
a) Principle of Least Privilege
b) Principle of Process Scheduling
c) Principle of Design
d) Need to know Principle
4.1.3 Access matrix and its implementation
The protection model can be represented as a matrix, called the access matrix:
• Rows of the access matrix correspond to the domains
• Columns of the access matrix correspond to the objects
• Entry (i, j) in the matrix holds the access rights: the set of operations that a process in domain Di can apply to object Oj
Policy decisions about which rights appear in each entry (i, j) are implemented via the access matrix. A column is added to the matrix for every newly created object, and a new row for every domain defined. The access matrix is defined by the users and plays a vital role in both the dynamic and static association between processes and domains.
An example of access matrix is shown in Fig. 4.1.3
Figure 4.1.3: Access Matrix
The resources in the above matrix are files F1, F2, F3 and one printer, where processes in:
• D1 can read files F1 and F3
• D2 can print using the printer
• D3 can read file F2 and execute file F3
• D4 can read and write both files F1 and F3
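Because most entries are empty, the matrix of Figure 4.1.3 is naturally stored sparsely; the sketch below uses a mapping from (domain, object) pairs to rights sets, with names taken from the figure.

```python
# The access matrix of Figure 4.1.3 as a sparse mapping from
# (domain, object) to a set of rights; empty entries are simply absent.

access_matrix = {
    ("D1", "F1"): {"read"},
    ("D1", "F3"): {"read"},
    ("D2", "printer"): {"print"},
    ("D3", "F2"): {"read"},
    ("D3", "F3"): {"execute"},
    ("D4", "F1"): {"read", "write"},
    ("D4", "F3"): {"read", "write"},
}

def allowed(domain, obj, operation):
    """Entry (i, j) lists the operations domain i may apply to object j."""
    return operation in access_matrix.get((domain, obj), set())

print(allowed("D1", "F1", "read"))   # right present in entry (D1, F1)
print(allowed("D1", "F1", "write"))  # right absent from entry (D1, F1)
print(allowed("D2", "F1", "read"))   # empty entry: access denied
```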
Switching domains
Switching domains by a process is itself considered an operation and can be controlled by adding the domains as columns (i.e., as objects). By including the access matrix itself as an object, permission
to modify the access matrix is granted. However, the entries in the access matrix are then themselves objects to be protected.
To switch from domain Di to Dj, Dj should be included as an object, and the entry (Di, Dj) should contain the value "switch", as shown in Table 4.1.1.
Table 4.1.1: Access Matrix with domains as objects

Domain/Object | F1          | F2   | F3          | Printer | D1 | D2     | D3     | D4
D1            | read        |      | read        |         |    | switch |        |
D2            |             |      |             | print   |    |        | switch | switch
D3            |             | read | execute     |         |    |        |        |
D4            | read, write |      | read, write |         |    | switch |        |
From the above access matrix, the following domain switches are permissible:
• D1 to D2
• D2 to D3 and D4
• D4 to D2
Processes in domain D3 cannot switch domains.
Copy right within a column (*):
Any right in the access matrix followed by an asterisk can be copied within its column, i.e., granted to another domain for the same object. The variants of the copy right are:
• Transfer: once the right is copied from entry (i, j) to (k, j), it is removed from (i, j) and exists only in (k, j); the right is said to be transferred.
• Limited copy: when a right R* is copied from (i, j) to (k, j), only R is created in (k, j), so further copying of R within the column is prohibited.
A system may provide any one of these variants, or all of them when specified so:
• Copy
• Transfer
• Limited copy
The owner right in entry (i, j) allows processes in domain Di to add and remove rights anywhere in column j.
In the following tables, domain D1 owns file F1, so processes operating in domain D1 can add or remove rights in column F1. Similarly, domain D2 owns files F2 and F3, so processes operating in domain D2 can modify the rights in columns F2 and F3.
Table 4.1.2: Before copying rights

Domain/Object | F1             | F2           | F3
D1            | owner, execute |              | write
D2            |                | read*, owner | read*, owner, write
D3            | execute        |              |
Table 4.1.3: After copying rights

Domain/Object | F1             | F2                   | F3
D1            | owner, execute |                      | write
D2            |                | owner, read*, write* | read*, owner, write
D3            |                | write                | write
The access matrix in Table 4.1.2 is modified into Table 4.1.3 by processes in D2, which grant write rights on files F2 and F3 to processes in domain D3 (the owner of F1 has also removed D3's execute right on F1).
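The owner check behind that change can be sketched as a guarded grant operation; the function name `grant` and the error type are illustrative, while the domains, objects and rights follow the tables.

```python
# Sketch of the 'owner' right: a process in a domain may add a right to a
# column (object) only if its own entry for that object contains "owner".

def grant(matrix, granting_domain, target_domain, obj, right):
    """Add `right` to entry (target_domain, obj), provided the granting
    domain holds the 'owner' right on that object's column."""
    if "owner" not in matrix.get((granting_domain, obj), set()):
        raise PermissionError(f"{granting_domain} does not own {obj}")
    matrix.setdefault((target_domain, obj), set()).add(right)

matrix = {
    ("D2", "F2"): {"owner", "read*", "write*"},
    ("D2", "F3"): {"read*", "owner", "write"},
}
grant(matrix, "D2", "D3", "F2", "write")  # allowed: D2 owns F2
grant(matrix, "D2", "D3", "F3", "write")  # allowed: D2 owns F3
print(matrix[("D3", "F2")])               # D3 has gained write on F2
# grant(matrix, "D3", "D1", "F2", "read") would raise PermissionError,
# since D3 does not hold "owner" on F2.
```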
A control right can be included as an entry when a domain is itself an object: if the entry at (Di, Dj) is "control", then a process in domain Di can modify the rights in row Dj.
Implementation of the Access Matrix:
The difficulty that arises while implementing the access matrix is that most of its entries will be empty, so a naive matrix representation wastes space. One of the following implementation techniques is therefore chosen, based on appropriateness:
• Global table
• Access list for objects
• Capability lists for domains
Global table:
Ordered triples with the format <domain, object, rights-set> constitute the global table. When an operation M is executed on an object Oj within domain Di, the following condition is checked:

If (the triple <Di, Oj, Rk> is an entry in the global table and M belongs to Rk)
    {the operation is allowed to continue}
Else
    {an exception is raised}
Though this is not the most efficient method, it is the simplest implementation of the access matrix. A few drawbacks of the global table are:
• It occupies a large space, usually cannot be kept in main memory, and so requires additional I/O
• Special groupings of domains or objects are not possible
• A separate entry is needed for every domain
Access list for objects:
An access list is created for each object in the access matrix (i.e., per column). Ordered pairs with the format <domain, rights-set> record every domain whose set of access rights for the object is not null. The access list can also be extended with a default set of access rights; checking the default set first increases the efficiency of this implementation model.
If (the entry <Di, Rk> is in the access list for object Oj and M belongs to Rk)
    {the operation is allowed to continue}
Else
    {an exception is raised}

The above condition is checked when an operation M is executed on an object Oj within the domain Di.
Capability lists for domains:
Here the matrix is organised by domains: the capability list for a domain is the list of objects, together with the operations allowed on them, within that domain. A capability refers to an object, represented by its address or physical name. When an operation M is executed on an object Oj within domain Di, the process specifies the pointer, or capability, for Oj as a parameter. Access is allowed upon possession of the capability from the capability list.
A capability list is itself a protected object and cannot be accessed directly by user processes. Protection of capability lists is taken care of by:
• Tagging each word with an extension bit that indicates whether it holds accessible data or a capability
• Splitting the address space associated with the program into two, with one part holding the program's data and instructions and the other holding the capability list
Lock-Key mechanism:
This mechanism combines ideas from access lists and capability lists by assigning:
• locks, unique bit patterns, to objects
• keys, unique bit patterns, to domains
A process in a domain can access an object only if that domain has a key that matches one of the locks assigned to the object. Users have no direct access to the locks or keys.
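The matching rule can be sketched with sets of bit patterns; the objects, domains and patterns below are illustrative.

```python
# Sketch of the lock-key mechanism: each object carries a set of locks
# (unique bit patterns) and each domain a set of keys; access succeeds
# only when some key held by the domain matches a lock on the object.

object_locks = {"F1": {0b1010}, "printer": {0b0110}}
domain_keys  = {"D1": {0b1010}, "D2": {0b0110, 0b1010}}

def can_access(domain, obj):
    """True if the domain holds a key matching one of the object's locks."""
    return bool(domain_keys.get(domain, set()) & object_locks.get(obj, set()))

print(can_access("D1", "F1"))       # key 0b1010 matches F1's lock
print(can_access("D1", "printer"))  # D1 holds no matching key
```

Revocation is then simply a matter of changing an object's lock, which invalidates every key that used to match it.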
The best implementation for the access matrix is chosen by weighing the pros and cons of each method:
• The global table is simple in approach, yet space- and time-consuming
• Access lists make determining the full set of access rights for a domain difficult, and every access to an object must be checked against the list, which is time-consuming
• Capability lists correspond well to user needs; however, revocation of capabilities is difficult
• The lock-key mechanism is considered efficient and flexible, as keys can be passed from one domain to another and access rights can be revoked by changing an object's locks; the only consideration is the length of the keys
In practice, a combination of access lists and capabilities is deployed: the access list is searched on the first access and, if the right is found, a capability is attached to the process for subsequent accesses; an exception is declared in case it is not found.
Self-assessment Question
7) Access matrix model for user authentication does not contain _________.
a) Objects
b) Domains
c) Page Set
d) A function which returns an object type
4.1.4 Revocation of Access rights
Revocation of access rights becomes necessary in a dynamic protection system or in the
case of access threats. The following questions arise during revocation:
• Immediate versus delayed: Does revocation occur immediately, or after a particular
delay? In the case of delayed revocation, the time at which revocation takes effect must
also be decided.
• Selective versus general: When a right to an object is revoked, does it affect all the
users who hold that right, or only a specific set of users?
• Partial versus total: Is a subset of the access rights associated with the object
revoked, or all of them?
• Temporary versus permanent: Can the right be revoked now and obtained again later when
necessary, or is the revocation permanent?
Revocation with respect to storage structure:
Revocation is easier with an access list: the list can be searched directly for the
access rights to be revoked, and they can be deleted from the list. With capability
lists, revocation is more difficult because the capabilities are distributed throughout
the system rather than kept in one place. The following schemes make revocation of
capabilities easier:
• Reacquisition: Capabilities are periodically deleted from each domain. A process
trying to reacquire a capability may find that the access has been revoked.
• Back-pointers: A list of pointers is maintained with each object, pointing to all the
capabilities for that object. Revocation is done by following the pointers and making
the necessary changes. This scheme, followed in MULTICS, is general but costly to
implement.
• Indirection: Each capability points to a unique entry in a global table, which in turn
points to the object. Revocation is performed by searching the global table and deleting
the desired entry; the table space can then be reused. The disadvantage of this scheme
is that it does not allow selective revocation.
• Keys: A unique bit pattern, or key, is associated with each capability, and a master
key is associated with each object. When an operation is requested, the capability's key
is compared with the master key; if they match, the operation proceeds, otherwise an
exception is raised. Revocation replaces the master key with a new value, invalidating
the outstanding capabilities; this method also allows selective revocation. The owner of
the object sets the key values.
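The key-comparison and revocation steps above can be sketched as follows; the object name and key values are illustrative, not from any real system.

```python
# Sketch of key-based revocation: each capability carries a key and each
# object a master key; changing the master key revokes outstanding
# capabilities whose key no longer matches. Illustrative only.

master_keys = {"file_a": 42}

def invoke(obj, capability_key):
    """Allow the operation only if the capability's key matches the master key."""
    if capability_key != master_keys[obj]:
        raise PermissionError("capability revoked or invalid")
    return "operation performed"

def revoke_all(obj, new_master_key):
    """The owner changes the master key; all old capabilities stop matching."""
    master_keys[obj] = new_master_key
```

Note that changing the master key invalidates every outstanding capability at once; selective revocation requires keeping a list of valid keys per object rather than a single master key.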
4.1.5 Capability-based systems
An operating system that deploys capability-based security is called a capability-based
system. Two capability-based systems, Hydra and the Cambridge CAP system, are outlined
here with respect to their complexity and the types of policies they support.
Hydra
Hydra is a popular example of a flexible capability-based system. The system identifies
and interprets a fixed set of access rights, including read, write and execute; in
addition, Hydra gives the user freedom to declare extra rights as and where necessary.
In Hydra, user-defined rights are interpreted solely by the user's program, but the
system also provides protection for the use of these rights. This constituted a
significant development in protection technology.
Objects are accessed indirectly through capabilities, which invoke the procedures
implementing the objects' operations. To perform an operation on an object, the
capability held for that object must contain the name of the operation being invoked
among its auxiliary rights. This restriction enables discrimination of access rights,
with the check performed each time an operation is invoked, per instance per process.
The Hydra system allows procedures deemed trustworthy to modify formal parameters: the
rights of a reliable procedure are prioritised over those of the calling process. This
happens only after the procedure has been certified as completely reliable, and the
reliability is not extended to the other program segments or procedures the process
executes.
Where user-defined rights are concerned, the system must be acquainted with the names of
the user-defined procedures. When an object is defined, the names of the operations that
can be performed on it become auxiliary rights, stored in a capability. Whenever an
operation is to be performed on an object in the Hydra system, the object's capability
must hold the name of that operation, with this check being performed per instance per
process.
Amplification is thus a characteristic of this capability-based system: a reliable
procedure is permitted to operate on an object and modify its parameters.
Usage:
This protection system can be of use to application programmers, with the help of
appropriate libraries.
Example: Cambridge CAP System
The Cambridge CAP system includes two types of capabilities:
• Data capability: interpreted by microcode in the CAP machine and used to provide
read, write and execute access to objects
• Software capability: not interpreted, but protected, by the CAP microcode
Software capabilities are interpreted by protected (privileged) procedures written by
application programmers. When a protected procedure is executed, it is temporarily
allowed to read or write the contents of a software capability. This leaves the
interpretation of software capabilities up to the individual subsystems and limits the
potential damage that could be caused by a privileged procedure that has malfunctioned.
A protected procedure is granted access only to the software capabilities of the
subsystem of which it is part, and checks are made whenever software capabilities are
passed to protected procedures.
Individual programmers find the CAP system harder to use than Hydra, since libraries
for access control are not provided.
Self-assessment Questions
8) Which of the following is a list of pointers maintained within the object redirecting to
all the capabilities in the object?
a) Keys
b) Back Pointers
c) Reacquisition
d) Indirection
9) Which of the following makes revocation easier by searching and deleting the desired
global table entry and the space can be reused?
a) Indirection
b) Reacquisition
c) Keys
d) Back Pointers
10) Which of the following aspects deals with the nature of revocation of rights?
a) Immediate vs. Delayed Aspect
b) Selective vs. General Aspect
c) Partial vs. Total Aspect
d) Temporary vs. Permanent Aspect
11) Which of the following aspects deals with the timeframe of revocation?
a) Immediate vs. Delayed Aspect
b) Selective vs. General Aspect
c) Partial vs. Total Aspect
d) Temporary vs. Permanent Aspect
12) Which of the following aspects deals with the users who have access rights to the object?
a) Immediate vs. Delayed Aspect
b) Selective vs. General Aspect
c) Partial vs. Total Aspect
d) Temporary vs. Permanent Aspect
13) Which of the following refers to domain wise deletion of capabilities?
a) Back-Pointers
b) Keys
c) Indirection
d) Reacquisition
14) Amplification is a characteristic of ________.
a) Hydra
b) Cambridge CAP System
c) UNIX
d) MULTICS
4.1.6 Language based protection
In computer systems, protection has traditionally been provided by the operating system
kernel, which validates each access attempt and acts as a security agent for protected
resources. This validation process is complex and requires hardware support to reduce
its cost; in certain cases, the system designer compromises on validation when
necessary. As operating systems have grown more complex, the goals of protection have
become more refined. During validation, both the identity and the functional nature of
the resource are checked.
The protection required varies with each programming language due to differences in
their approach to objects and abstract data types. Since resource use varies with the
application, the protection system should be available as a tool to the application
designer rather than only to the designer of the operating system.
Compiler based environment:
In a compiler-based environment, the programming language has a say in the protection
system. The protection system is defined through declarative statements about resources,
integrated into the language as an extension; this definition is then composed as a
program in the language for which it is defined.
Advantages of this approach:
• Protection needs are declared simply, rather than programmed as a sequence of calls
to procedures.
• Protection requirements can be stated independently of the facilities provided by the
operating system.
• The designer of a subsystem is not required to provide the enforcement mechanisms.
• Access privileges are closely associated with the data types of the particular
programming language, as a declarative notion.
The technique used by the programming language depends upon the degree of support
available from the operating system and hardware. In the Cambridge CAP system, for
example, storage references are made through capabilities, which prevents access to any
resource outside a program's protection environment; on such a system, a programming
language can use protected procedures to realise its protection policies.
If a system does not have a protection kernel, the task falls to the programming
language, though the security provided is not as reliable as that of a kernel mechanism.
Language-based protection rests on the assumption that the code generated by the
compiler is not changed prior to or during execution.
Language-based protection is compared with kernel-based protection on three grounds:
• Security: Enforcement by a kernel offers stronger assurance than enforcement by a
compiler alone, since compiler-based security relies on the correctness of the
translator, on the storage mechanism protecting the memory segments that hold compiled
code, and on the security of the files from which programs are loaded. A kernel may be
supported in software or hardware; a hardware-supported kernel provides greater security
against protection violations than a software-based one.
• Flexibility: A protection kernel is limited in the flexibility with which it can
implement user-defined policies, even if it supplies facilities adequate for enforcing
its own policies, and it cannot easily be replaced or extended without disruption. A
programming language allows protection policies to be declared as needed.
• Efficiency: Protection enforced with hardware support is the most efficient; where
hardware support is lacking, an intelligent compiler can enforce protection statically
and so avoid the overhead of kernel calls.
A programming language can provide high-level descriptive information about the
allocation and use of resources. Where hardware support fails or is non-existent,
software protection is enforced by the programming language; a software capability can
then be used as an object of computation by an application program.
Program components may be able to create or examine software capabilities, which are
then sealed, making them inaccessible to programs that might try to tamper with the
data structure. Less privileged programs can, however, copy such capabilities or pass
references to them to other program components.
A dynamic access-control mechanism expressed through language constructs, being safe,
efficient and useful, can thus manage resources and their access while contributing to
the overall reliability of the system.
Mechanisms defined by such constructs fall into three categories:
1. Secure and efficient distribution of capabilities among customer processes, ensuring
that a process can use a resource only if it has been granted a capability to that
resource.
2. Specification of the types of operations a process may invoke on a resource; for
example, a grant of read access to a file allows only the read operation, whereas a
grant of write access allows both read and write operations.
3. Specification of the order in which operations on a resource must be invoked; for
example, a file must be opened before it can be read, and this order must be followed
for the set of operations to take place.
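The third mechanism, ordered operations, can be illustrated with a small state machine; the `OrderedFile` class is a hypothetical example, not an API from any system discussed here.

```python
# Sketch of enforcing an operation order on a resource: a file must be
# opened before it is read. The state machine below is illustrative only.

class OrderedFile:
    def __init__(self):
        self.state = "closed"

    def open(self):
        # Transition that makes 'read' legal.
        self.state = "open"

    def read(self):
        # Reject operations invoked out of the required order.
        if self.state != "open":
            raise RuntimeError("read before open violates the required order")
        return "data"
```

Any call sequence that skips `open()` is rejected at the point of violation, which is exactly the guarantee an order-enforcing construct provides.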
Protection in Java
Java programs are composed of classes, each containing data fields and methods that
operate on those fields, and the language has many built-in protection mechanisms for
running code in a protected environment. Java supports dynamically loading untrusted
classes over a network and executing them within the same Java Virtual Machine as
trusted classes, which makes the security of those classes a concern.
The JVM must determine which class is responsible for a request to access a protected
resource, for example, whether a network connection should be opened on behalf of an
untrusted request. The protection domain of a class depends on its source, with wider
access granted to classes loaded from trusted servers. For a network connection to be
permitted via a library class, some method in the calling sequence must assert the
privilege.
Stack inspection is the implementation approach used in Java: the JVM maintains a stack
of the current method invocations, and programs must be unable to modify or manipulate
the stack frames, which Java ensures by not permitting direct access to memory.
Inspection proceeds from the most recently added frame towards the older frames, a
last-in, first-out order.
When access to a protected resource is requested, checkPermissions() initiates stack
inspection; access is granted immediately if a frame with the doPrivileged() annotation
is found, and an AccessControlException is raised otherwise.
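The LIFO walk performed by stack inspection can be modelled outside Java as follows; this is an illustrative sketch in which each stack frame is reduced to a method name and a flag standing in for the doPrivileged() annotation.

```python
# Model of stack inspection: the stack of method invocations is searched
# from the most recent frame downward (LIFO); access is granted as soon as
# a privileged frame is found, otherwise an exception is raised.
# This simplifies the real Java mechanism for illustration only.

class AccessControlException(Exception):
    pass

def check_permission(stack):
    """stack: list of (method_name, privileged) tuples, oldest frame first."""
    for method, privileged in reversed(stack):   # most recent frame first
        if privileged:                           # stand-in for doPrivileged()
            return True
    raise AccessControlException("no privileged frame on the call stack")
```

Because the frames cannot be forged (Java forbids direct memory access), an untrusted caller cannot insert a privileged frame and so cannot escalate its rights.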
Type safety in Java is enforced by load-time and run-time checks, which ensure that:
1. Integers are not treated as pointers
2. Arrays are not extended beyond their bounds
3. Memory is not accessed in arbitrary ways
These checks preserve Java's encapsulation, ensuring that the protection measures are
carried out.
Self-assessment Question
15) Which of the following requires the programs to be unable to modify or manipulate
the stack frame?
a) Access Bits
b) Stack Inspection
c) Kernel
d) Access Matrix
Summary
o Protection proactively detects latent errors at an early phase, mainly focusing on
guarding resources from threats.
o One common principle of protection is the principle of least privilege, which means
each component is given just enough privilege or access to carry out its tasks.
o An operating system that follows the least-privilege principle provides specific,
fine-grained access controls via its system calls and services.
o The need-to-know principle is another protection principle, which limits the
potential damage malicious access can do to the system.
o A protection domain defines a set of objects and the types of operations that can be
invoked on each object.
o In UNIX, each user is associated with a domain, and switching domains means
temporarily changing the identification of the user.
o The protection model can be represented as a matrix, called the access matrix.
o The difficulty in implementing the access matrix is that most of its entries are
empty, leading to space-consuming data structures.
o Ordered triples of the form <domain, object, rights-set> constitute the global table.
o An access list for an object can be extended with a default set of access rights.
o A capability represents an object by its address or physical name and acts as a
secure pointer.
o An operating system that deploys capability-based security is called a
capability-based system.
o In a compiler-based environment, the programming language has a say in the
protection system.
o Stack inspection is the implementation approach used in Java, with the JVM
maintaining a stack of the current method invocations.
o Stack inspection requires that programs be unable to modify or manipulate the stack
frames.
Terminal Questions
1. Explain the goals and principles of protection in a modern computer system.
2. Describe Access Matrix and explain its implementation.
3. Explain language-based protection.
Answer Keys
Self-assessment Questions
Question No.    Answer
1               c
2               b
3               a
4               c
5               b
6               d
7               c
8               b
9               a
10              d
11              a
12              b
13              d
14              a
15              b
Activity
Activity Type: Offline/Online
Duration: 30 minutes
Description:
Research the following two issues of language-based protection:
• Memory Protection
• Secure System Services
Propose a solution for each.
Case Study
Mosaic: Bromium’s Complete Endpoint Security Promotes Risk Posture
Mosaic is the leading producer and marketer of potash and concentrated phosphate. It has a
workforce of 9000 people spread across six countries. The company owns 2 lakh acres in
Central Florida. It quarries phosphate and potash from four different mines in North
America, mostly in Saskatchewan. Mosaic’s products are converted into crop nutrients. These
are then distributed to the customers from agricultural centres across the world.
The challenge: traditional security is futile against modern zero-day and targeted attacks
Paul Hershberger, the director of security and risk, wished to improve the company’s
overall risk posture by changing the computing environment and bringing in endpoint
security. His department deals with the risks faced by companies in the same sector:
threats brought about by state-sponsored bad actors, activists, external cybercriminals
and bad internal actors. Paul undertook a comprehensive risk analysis of Mosaic’s
present security infrastructure, with major attention to endpoint security. At that
time, Mosaic had a multi-vendor environment comprising traditional threat-mitigation
technology layers: intrusion detection systems (IDS), intrusion prevention systems
(IPS), web and email filters, desktop firewalls, antivirus, antimalware and anti-spam.
After the assessment, Paul started to look for effective endpoint security and was
introduced to Bromium’s different strategy through a channel partner. The team started
their evaluation by identifying “residual risk” (the threat that remains after security
controls designed to recognise and eliminate any possibility have been implemented) on
secured laptops, desktops and mobile devices, considering all the solutions they had in
place to mitigate those risks. “We came to the
conclusion that we were putting a lot of time, effort and money into endpoint solutions that
were making only small, marginal effects on the risk. After we had evaluated all our solutions
and controls, we discovered that our residual risk was only one point away from our inherent
risk,” explains Paul. “Everything we were doing was fundamentally not working. We started
looking for ways to truly shift the risk dynamic, and that’s why we looked at Bromium.”
Bromium’s CPU-enforced security eliminates breaches
Before opting for Bromium, Paul and his team analysed a range of endpoint solutions and
were impressed with the simplicity and completeness of the Bromium solution; they did
not find the mode of operation of the other endpoint security solutions feasible or
fitting.
“Most of these solutions offer many configuration options, and the way they are configured
can mean the difference between effectiveness and a false sense of security,” says Paul. “When
we looked at where gaps could arise in the different tool sets, we found that Bromium has a
simple, yet complete, approach and provides us with a broader level of protection with fewer
opportunities to create gaps or a false sense of security. Whereas other solutions all too often
‘fail silently,’ we found that this was not the case with Bromium.”
Hershberger and his team attribute the effectiveness of Bromium to the way it isolates
user tasks in a secure micro-virtual machine enforced by the CPU on the device. Some
security products incorporate virtualisation to some extent, such as spinning up virtual
machines (VMs) to do analytics on the fly, detonating malware in a sandbox and then
sending it for analysis, but Bromium’s technology goes further. “Bromium’s
technology is sound. I like the fact that this is the only solution that leverages virtualisation
through the technology stack at the CPU and hardware layer,” affirms Paul.
A successful initial deployment
Beginning in early 2015, the first Bromium deployment wave included 2,000 endpoints.
Paul describes the technical deployment as “simple and straightforward, and the
collaboration with Bromium consultants was nothing short of stupendous.” The users at
Mosaic with Bromium installed on their systems appreciate the extra security, which does
not interfere with their productivity. They like the “freedom to click on anything
without restriction and
worry,” says Paul. Apart from using Bromium Advanced Endpoint Security as a tool for
protection, Mosaic also started using it as an investigation and analysis tool, along with the
Bromium Threat Analysis module. “We have a lot of potentially suspicious code and files
coming into our environment, and we do not always know what they are. In the past, we’ve
had to move them to an isolated lab environment very carefully. With Bromium, we can start
interacting with them right away from the desktop,” says Hershberger. “It’s a liberating
feeling when one of my people can safely and confidently look at a piece of suspicious code
right on the desktop and make a decision as to how to handle it from there.”
Bromium defeats ransomware
Paul found that the major threat at Mosaic was ransomware, which locks down the computer
and its files until the victim pays a ransom to regain access. His team found several
cases of ransomware, which is extremely destructive and elusive; it is difficult for
antivirus solutions and other defences to detect.
After Bromium had been running in the Mosaic environment for some time, Hershberger and
his team could see that it protected systems against ransomware better than the
company’s traditional defences. The difference in experience for users and IT support
staff between unprotected and protected hosts was evident in terms of lost productivity,
lost files and host re-imaging. Paul concluded that Bromium is effective against
ransomware.
Expanding the Bromium footprint
Paul has decided to deploy Bromium on all PCs worldwide this year. Mosaic is confident
of Bromium’s efficiency and effectiveness in reducing and mitigating risks and system
threats, and is motivated to replace legacy systems and build an updated, securable,
hybrid endpoint environment with both virtual desktops and physical systems where
required. “Bromium has the potential to truly change the security dynamic,” says Paul.
Discussion Questions:
1. Analyse the case study for its success factors.
Bibliography
e-References
• Operating System Protection. Retrieved 21 Dec, 2016 from
www.ida.liu.se/~TDDB63/slides/2004/ch18_19_6.pdf
• Operating System and Access Matrix. Retrieved 21 Dec, 2016 from
https://www.cs.auckland.ac.nz/courses/compsci340s2c/lectures/lecture26.pdf
• Language based Protection. Retrieved 21 Dec, 2016 from
https://surendar.chandrabrown.org/teach/spr06/cse30341/Lectures/Lecture31.pdf
Image Credits
• Figure 4.1.1: https://www.cs.auckland.ac.nz/courses/compsci340s2c/lectures/lecture26.pdf
• Figure 4.1.2: https://www.cs.auckland.ac.nz/courses/compsci340s2c/lectures/lecture26.pdf
• Figure 4.1.3: https://www.cs.auckland.ac.nz/courses/compsci340s2c/lectures/lecture26.pdf
External Resources
• Salomon, D. (2010). Elements of Computer Security. New York: Springer-Verlag.
• Bic, L., & Shaw, A. C. (2003). Operating Systems Principles. Upper Saddle River, NJ:
Prentice Hall.
• Bragg, R. (2004). Hardening Windows Systems. New York, NY: McGraw-Hill/Osborne.
Video Links
Protection and Security: https://www.youtube.com/watch?v=bMxosO_Jt6M
Access Matrix: https://www.youtube.com/watch?v=vVYvuQwy28M
OS Protection and Security: https://www.youtube.com/watch?v=-3RnEkkC9VY
Notes:
Chapter Table of Contents
Chapter 4.2
Security
Aim
................................................................................................................................................ 249
Instructional Objectives................................................................................................................... 249
Learning Outcomes .......................................................................................................................... 249
4.2.1 The Security Problem ............................................................................................................ 250
Self-assessment Questions .................................................................................................... 252
4.2.2 User Authentication .............................................................................................................. 252
Self-assessment Questions .................................................................................................... 254
4.2.3 One Time Password ............................................................................................................... 255
4.2.4 Program Threats..................................................................................................................... 255
Self-assessment Questions .................................................................................................... 259
4.2.5 System Threats........................................................................................................................ 260
Self-assessment Questions .................................................................................................... 263
4.2.6 Cryptography .......................................................................................................................... 263
Self-assessment Questions .................................................................................................... 266
4.2.7 Computer Security Classification ........................................................................................ 266
Self-assessment Questions .................................................................................................... 268
Summary ........................................................................................................................................... 269
Terminal Questions.......................................................................................................................... 271
Answer Keys...................................................................................................................................... 272
Activities ............................................................................................................................................ 273
Case Study ......................................................................................................................................... 273
Bibliography ...................................................................................................................................... 277
e-References ...................................................................................................................................... 277
External Resources ........................................................................................................................... 277
Video Links ....................................................................................................................................... 277
Aim
To provide students with the fundamentals of security in operating systems
Instructional Objectives
After completing this chapter, you should be able to:
• Explain the security problems of an OS
• List the fundamentals of encryption, authentication and hashing
• Examine the use of cryptography in protection
• Evaluate measures to prevent security attacks
Learning Outcomes
At the end of this chapter, you are expected to:
• Outline the security problems of an OS
• Discuss the fundamentals of encryption, authentication and hashing
• Interpret the advantages of cryptography in protection
• Validate the countermeasures to security attacks
4.2.1 The Security Problem
Security deals with protection mechanisms against the external environment in which the
operating system functions. All resources of the system, software and hardware (memory,
CPU and so on), are protected against threats, both intentional and accidental.
Commercial systems containing secret financial or otherwise sensitive data need an
extensive security system to safeguard them; the loss, corruption or theft of this data
can cause terrible harm to the organisation. A system is termed secure if it continues
functioning the way it is intended to under all circumstances.
Types of security threats:
• Accidental: Unintentional; such threats are easier to prevent and to protect the
system against
• Intentional: Malicious in nature
Security can be violated in the following ways:
• Denial of service (DOS): Preventing legitimate use of the system
• Theft of service: Unauthorised use of resources
• Breach of availability: Unauthorised destruction of data
• Breach of integrity: Unauthorised modification of data
• Breach of confidentiality: Unauthorised reading of data, especially sensitive
information of value
Methods commonly used to breach security:
• Masquerading: The intruder pretends to be someone else in order to gain access to
resources the intruder would not normally have.
• Replay attack: A captured operation is repeated for the intruder's benefit.
• Message modification: Often combined with a replay attack; the message is modified by
the intruder, for example, a request to transfer funds is replayed with a changed
account number.
• Man-in-the-middle attack: The intruder sits between the sender and receiver,
masquerading as the receiver to the sender and as the sender to the receiver, so that
the two parties never communicate directly.
• Session hijacking: An active session is intercepted and taken over.
• Phishing: A fake but legitimate-looking email or request is used to gain access to
information.
• Dumpster diving: A random attempt to find something useful, such as searching through
discarded material or the trash directory.
To protect a system from security problems, security measures must be imposed at four
levels:
• Physical: The sites housing the computer devices and other hardware components must
be physically secured; the workplace and terminals are protected against intruders'
entry.
• Human: Only authorised users should be given access to the system. There is also the
risk that an authorised user allows another person to access the system, so proper
security checks are required.
• Operating system: The OS needs protection against security breaches, such as a DOS
attack or a stack overflow resulting in an unauthorised process.
• Network: Data shared via networks must be protected, as data sharing over a
vulnerable network is very harmful.
Security at the last two levels (operating system and network) can be provided by the
security mechanisms of the OS.
Self-assessment Questions
1) In which of the following attacks does the intruder pretend to be someone else to
access information?
a) Denial of Service
b) Masquerading
c) Message Modification
d) Man-in-the-middle attack
2) Security deals with protection mechanisms in the external environment. State True or
False?
a) True
b) False
3) What is the takeover of an active session called?
a) Dumpster Diving
b) Phishing
c) Session Hijacking
d) Replay Attack
4) Which of the following is a random attempt at finding something useful, such as
searching the trash directory?
a) Phishing
b) Masquerading
c) Session Hijacking
d) Dumpster Diving
4.2.2 User Authentication
Authentication of the user is mandatory. The primary step is to authenticate the user rather than merely validate the messages exchanged in a session.
The question that arises with respect to user authentication is how to validate the user, which can be done by means of:
• A physical entity possessed by the user
• A unique entity known only to the user
• An attribute of the user
A. Password
A widely used technique for user authentication is verifying the user's identity with the help of a password, with different passwords associated with different access rights.
if (password provided by the user == password in the database)
    /* access permitted */
else
    /* access denied */
Password protection is often employed when no complete protection mechanism is available in the system.
Vulnerabilities
The password may be:
• Guessed by the intruder, who may have information about the user
• Found by trying all possible combinations (especially for short passwords)
• Learned by shoulder surfing, i.e. peeping over the user's shoulder
• Found by sniffing or snooping while residing unnoticed on the network
• Exposed voluntarily by the user
Passwords can be set by the user or randomly generated by the system. When the user sets a password, checks are made to see whether it can be guessed or cracked easily. Certain systems periodically require users to change their passwords.
The challenge is to store the password in the system yet use it only for the authentication process. Encryption of passwords relies on the fact that computing the value f(x) is easy when x is known, while the reverse, computing x when only f(x) is known, is difficult. Passwords are therefore encoded before storing, so that even if the encoded passwords are seen, recovering the original password is hard.
The problem with encrypted passwords is that a hacker who knows the encryption routine can run guessed passwords through it and compare the results. Such attacks are becoming easier with the evolution of software tools.
Passwords in UNIX:
Password entries are accessible only to the superuser. A random value (a salt) is also added to each password so that two passwords of the same value result in different encrypted values.
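The salted one-way scheme described above can be sketched in Python using the standard library's hashlib. The use of PBKDF2-SHA256 here is an illustrative choice, not the historical UNIX crypt function:

```python
import hashlib
import hmac
import os

def store_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest); only these are stored, never the plain password."""
    salt = os.urandom(16)  # random salt: identical passwords yield different digests
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def check_password(candidate: str, salt: bytes, digest: bytes) -> bool:
    """Recompute f(candidate) with the stored salt and compare with the stored digest."""
    attempt = hashlib.pbkdf2_hmac("sha256", candidate.encode(), salt, 100_000)
    return hmac.compare_digest(attempt, digest)  # constant-time comparison
```

Note that verification never decrypts anything: the candidate password is pushed through the same one-way function and the outputs are compared, exactly as in the if/else check earlier.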
B. Biometrics
Biometric measures such as a fingerprint scan or a retina scan are verified before access to the system is granted. Though this type of user authentication is credible, the implementation is expensive and the devices are larger than those used for ordinary computer authentication.
C. Multifactor Authentication
More than one method of authentication is deployed as an extra measure of security. For example, a system may grant access only if a fingerprint scan matches, a PIN is entered and a USB device is plugged in; access is denied if any one of these checks fails, as in ATMs.
Self-assessment Questions
5) UNIX adds a random number (salt) to passwords of the same value. State True or False?
a) True
b) False
6) Which of the following is a measure to prevent password hacking?
a) Change password periodically
b) Keep password logically related to identity
c) Keep Password Short
d) Login and save passwords on public systems
4.2.3 One Time Password
One-time passwords (OTPs) are generated afresh for each run of the authentication process, and the user must be able to key in the right answer. The concept of paired passwords is implemented: the system generates one part as a challenge, and the user must supply the other part, demonstrating knowledge of it.
Consider the OTP as an integer value. The system generates the value and presents it to the user. On receiving the OTP, the user applies the predefined function and presents the result to the system, which must be the same as the answer computed by the system.
An OTP is different every time it is generated; it varies from instance to instance, user to user and session to session.
Procedure:
Consider a 'secret' which the system and the user already share, and a 'seed', a random value generated by the system per session and per instance and sent to the user. The user and the system have an agreed-upon function which takes 'seed' and 'secret' as its parameters.
The system checks whether the value f(secret, seed) computed by the user is the same as the value it computes itself. If the values match, access is provided; if they do not match, access is denied.
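The challenge-response procedure above can be sketched as follows. The agreed-upon function f is illustrated here with an HMAC, which is an assumption; any one-way function shared by user and system would do:

```python
import hashlib
import hmac
import os

SECRET = b"shared-secret"  # hypothetical 'secret' already known to user and system

def f(secret: bytes, seed: bytes) -> str:
    """The agreed-upon function f(secret, seed); an HMAC is used as an example."""
    return hmac.new(secret, seed, hashlib.sha256).hexdigest()

# System side: generate a fresh random 'seed' (the challenge) for this session
seed = os.urandom(8)
expected = f(SECRET, seed)

# User side: apply the same function to the received seed and the shared secret
response = f(SECRET, seed)

# Access is granted only when the two computations match
access_granted = hmac.compare_digest(response, expected)
```

Because the seed changes per session, an eavesdropper who captures one response cannot replay it later.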
Implementation:
OTPs are deployed commercially, for example in SecurID, using hardware calculators or the current time as the seed value. In net banking, a PIN (Personal Identification Number) serves as the first level of access check, after which an OTP is generated. This kind of two-factor authentication reduces the chance of a breach compared with processes that use a single factor. Some OTP schemes use a code book in which already-used OTPs are crossed out so they cannot be reused; this code book must remain protected.
4.2.4 Program Threats
A program that tries to change the behaviour of a normal process is a common threat and is tagged a breach of security. Program threats aim at exploiting programs.
The following are common threats caused by programs:
• Trojan horse: a program that misuses the environment it runs in. Systems let one user write code and other users execute it; a program written by one user can thus be used to access another user's protected data.
For example, consider a text editor that includes a provision for searching for keywords within a file and, on an occurrence of a keyword, copies the entire file to a location accessible to the user who created the editor. Such a text editor is a Trojan horse.
Spyware is a variation of the Trojan horse that tags along with a program the user has chosen to install, mostly delivered through pop-up windows in internet browsers. Spyware accompanies an innocuous-looking program and sends captured information from the user's system to a central site.
A login emulator is another variation: it fakes a login page where the user types an id and password, shows an error message, and asks the user to retype the credentials, this time redirecting to the proper login page. The user id and password are stolen on the first attempt. This type of threat can be handled using a key sequence that cannot be trapped; the non-trappable key sequence used in Windows is Ctrl+Alt+Delete.
• Trap door: a loophole left in the software by the developer is called a trap door. A trap door may even be part of the compiler, generated along with the compiled code, and is very difficult to detect since it appears harmless. For example, a trap door might round off amounts during money transfers and divert the rounded-off value to the attacker's personal account.
• Logic bomb: when a normally harmless program becomes a threat only in a particular circumstance, it is called a logic bomb. Detection of a logic bomb is difficult precisely because it becomes a threat only under certain conditions.
• Stack or buffer overflow: one of the most common program threats, in which an attacker foreign to the system gains access to the target system to exploit its protected resources. It exploits minor bugs in a program, such as writing data into a data structure whose bounds were not checked by the developer.
Once the attacker detects such a bug, one of the following can be done:
 Write a simple exploit code and place it on the stack
 Overwrite the current return address on the stack with the address desired by the attacker
 Bombard the input field with values until the overflow writes them onto the stack
Structure of the stack (Figure 4.2.1): the frame pointer contains the address of the beginning of the stack frame. The variables declared local to the function are known as automatic variables.
Figure 4.2.1: Stack Structure
A stack overflow attack overflows the stack with values so as to redirect execution to a newly created shell.
• Viruses: a snippet of code embedded in an otherwise legitimate program is known as a virus. Viruses are self-replicating in nature and can cause heavy damage by infecting all the programs they have access to. Common sources of viruses are email attachments and MS Word documents whose macros execute automatically, deleting files on the system.
Operation of a virus:
Once the virus reaches the destined system, a host program known as a virus dropper inserts the virus into the system, where it causes damage.
A virus may be one of the following types:
• File: a classic file virus attaches itself to a file and makes execution of the file start with the virus code. This type is called a parasitic virus, since it lets the host program keep functioning.
• Boot: a boot virus infects the boot sector, so it executes every time the OS is loaded; it targets bootable media such as floppy disks and infects them.
• Macro: unlike most virus programs, which are written in assembly language, macro viruses are written in a high-level programming language and are triggered when a program that can execute a macro is run. Even spreadsheets can host a macro virus.
• Source code: as the name specifies, a source code virus targets source code, gets appended to it and spreads.
• Polymorphic: a polymorphic virus, as the name suggests, changes itself every time it installs on a system so as to prevent detection by antivirus software; its virus signature keeps changing.
• Encrypted: an encrypted virus evades detection by travelling in encrypted form, carrying a decryption code that decrypts the virus, which is then executed on the target system.
• Stealth: stealth viruses are tricky programs that modify the very programs that could detect them, targeting antivirus programs.
• Tunnelling: a tunnelling virus installs itself in the interrupt-handler chain to bypass antivirus scanners. Device drivers are also targeted, since detection is difficult there.
• Multipartite: multipartite viruses attack various parts of the system rather than a single target, making them even more difficult to handle or control. Generally, files, boot sectors and memory are targeted.
• Armoured: an armoured virus is protected against antivirus modules and is complex to understand and unravel. These viruses are compressed to make detection and disinfection by antivirus programs difficult, and the files involved in the infestation may have hidden attributes or file names that cannot be viewed.
Self-assessment Questions
7) Which of the following is a harmless program that becomes a threat only in certain
circumstances?
a) Stack Overflow
b) Logic Bomb
c) Trap Door
d) Virus
8) In which virus does the virus signature keep changing?
a) Encrypted Virus
b) Stealth Virus
c) Multipartite Virus
d) Polymorphic Virus
9) Which of the following is called parasitic virus?
a) Boot Virus
b) File Virus
c) Macro Virus
d) Source Code Virus
10) Which of the following viruses is written in a high-level language?
a) Boot Virus
b) File Virus
c) Macro Virus
d) Tunnelling Virus
11) Which of the following targets various parts of the system rather than a single target?
a) Multipartite Virus
b) Stealth Virus
c) Armoured Virus
d) File Virus
4.2.5 System Threats
System threats, as the name suggests, aim at misusing the services available and the network connections. Loopholes and weak security mechanisms in the operating system fuel system threats. The attack surface, the set of entry points for attacking the OS, is reduced by making the OS secure by default. The common system threats are as follows:
• Worms:
Worms are programs that aim at degrading the system's performance. They are named worms because they reproduce copies of themselves and try to shut down the system or network they target.
Template of a worm:
A worm is made up of two programs, namely:
 Grappling hook: a small program that baits its way into the system by finding loopholes in its programs. The grappling hook copies the main worm to the targeted system, making it a hooked system.
 Main worm program: the main program then works to infect all the other systems reachable from the hooked system.
A common example of a worm is the Morris worm, designed by Robert Morris, whose self-replication aided the rapid distribution and reproduction of the program. The worm searched for remote-execution possibilities, even without the appropriate password, and uploaded itself to the target system using the grappling code snippet, which in this case was 99 lines of C code. The Morris worm made use of either of the techniques below:
 Exploiting finger, a daemon or background process that serves information about a system's users over the internet. A buffer-overflow attack was raised on finger using a crafted 536-byte string, giving the worm a remote shell on the targeted system.
 Using the sendmail option: the sendmail command sends, routes and receives email. Since the debugging option was commonly left turned on, the Morris worm made use of it and, with a set of commands, sent a copy of the grappling hook to the targeted system. Once the grappling hook reached the target, it tried to find passwords by trying all possible combinations.
Figure 4.2.2 illustrates the Morris internet worm's attack on computer systems.
Figure 4.2.2: Morris Internet Worm
Only every seventh duplicate copy of the worm was permitted to be retained.
• Port scanning:
Port scanning is a technique by which a system's vulnerabilities are detected so they can be exploited. The method attempts to create TCP/IP connections to an entire range of ports on a system, or to a specifically targeted port. Once a connection is established, communication is attempted via sendmail or another service; for every service that answers, the attacker probes for buffer overflows or tries to create a privileged command shell that would otherwise be impossible to obtain.
Because port scanning is detectable, these attacks are launched from independent compromised systems that exist for this purpose, called zombie systems.
For example, nmap, an open-source utility, performs security auditing and network exploration. Pointed at a target, nmap can identify valuable information such as the OS on the host, the firewalls deployed, and the active services and applications.
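The connection-probing idea behind port scanning can be sketched with Python's standard socket module (a minimal illustration, not a substitute for a tool like nmap):

```python
import socket

def scan_ports(host: str, ports) -> list[int]:
    """Attempt a TCP connection to each given port; report the ones that accept."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.5)
            # connect_ex returns 0 when the connection succeeds, i.e. the port is open
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

Each answering port reveals a running service, which the attacker can then probe further, exactly as described above.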
• Denial of Service
Denial of service (DOS) is a system attack targeted at tampering with the functioning of a system or facility rather than with the resources or data in the attacked system.
A denial-of-service attack can be of two types:
 An attack that keeps a system facility or resource so occupied that no other useful work can be done, for example by consuming CPU time.
 An attack that disrupts the network. For example, launching many partial TCP/IP connection requests ties up resources to no fruitful end; recovering from such attacks can take days.
Distributed denial of service (DDOS) occurs when DOS attacks are launched from multiple systems or sites against a single target system.
A typical example: if a system blocks password authentication after several failed login attempts, a DOS attack can aim at triggering that block for all logins.
Dealing with the attack: denial-of-service attacks are difficult to prevent or detect. DDOS attacks are even more difficult to handle, as they originate from multiple systems. In certain cases, a DOS attack is mistaken for a slowdown in the performance of the system.
Self-assessment Questions
12) Which of the following is the aim of Denial of Service?
a) Messing up the functionality of the system
b) Stealing Data
c) Establishing connections illegitimately
d) Replicating Data
13) Which part of the worm is responsible for finding loopholes in the program?
a) Bait
b) Main worm program
c) Grappling Hook
d) Replicating Module
14) Which of the following is the aim of Port Scanning?
a) Identify system’s vulnerabilities
b) Deny Service provided by system
c) Launch Attacks
d) Malicious usage of resources
4.2.6 Cryptography
Cryptography is deployed in computer systems as a defence against attacks by intruders and allows for trusted transfer of data between systems. Put simply, cryptography is the transfer of data in a coded form that cannot be interpreted by intruders.
Encryption
Encryption is used in modern systems to make sure that an encrypted message can be read only by the receiver, who holds the key to decode it. An encryption scheme consists of:
• K: a set of keys
• M: a set of messages
• C: a set of ciphertexts
• Function E: generates ciphertexts in C from messages in M with the help of the set of keys K
• Function D: used for deciphering the original message M from the ciphertext C, when the key k is known
Two types of encryption are as follows:
• Symmetric encryption: if the same key is used for encryption and decryption, the algorithm is termed symmetric encryption; given the key k, E(k) can be derived from D(k) and D(k) from E(k). Encryption schemes in which the algorithm itself is hidden are termed black-box transformations.
• Asymmetric encryption: encryption is said to be asymmetric when the encryption key is different from the decryption key. One of the commonly used asymmetric encryption algorithms is RSA. It is expensive to execute, since the mathematical functions performed for decryption are quite complex, and hence it is used only for critical processes such as authentication or key distribution.
Asymmetric encryption makes use of public and private keys, where the public key is published but the private key is kept secret.
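The symmetric relationship, where one key k drives both E(k) and D(k), can be illustrated with a toy XOR cipher. This is for illustration only and offers no real security; production systems use vetted algorithms such as AES:

```python
import hashlib
from itertools import cycle

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """Toy symmetric transform: XOR the data with a key-derived byte stream.
    The same call serves as both E(k) and D(k), since XOR is its own inverse.
    Illustrative only -- not a secure cipher."""
    stream = cycle(hashlib.sha256(key).digest())
    return bytes(b ^ s for b, s in zip(data, stream))

message = b"attack at dawn"
ciphertext = xor_cipher(b"k1", message)    # E(k): encrypt with key k
recovered = xor_cipher(b"k1", ciphertext)  # D(k): the same key decrypts
```

Applying the transform twice with the same key recovers the original message, which is exactly the symmetric property described above.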
Authentication:
Authentication is the process of limiting the set of potential message senders, deploying the principle that for a message to be valid the sender must possess a key. An authentication scheme consists of:
• K: a set of keys
• M: a set of messages
• A: a set of authenticators
• Function S: generates authenticators in A from a message in M with the help of a key k
• Function V: verifies, with the key k, whether an authenticator in A is valid for a given message in M
A message digest, also known as a hash value, is a fixed-size block of data created by a hash function H(m). A hash function works on blocks of the message to produce an n-bit hash value, and H(m) should be collision resistant, i.e. effectively unique for each message.
The types of authentication algorithm are:
• Authentication using symmetric encryption: a message authentication code (MAC) is created; since S(k) is the same as V(k), the key must be kept secret.
• Authentication using a digital-signature algorithm: digital signatures act as the authenticators here.
The advantages of an authentication algorithm are:
• It requires fewer computations, saving resources and time, especially when a large amount of text is dealt with.
• The authenticator is shorter than the ciphertext or message, increasing efficiency in terms of transmission time and decreasing space usage.
• It allows authentication without confidentiality of the message.
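The symmetric MAC scheme above can be sketched with Python's standard hmac module (SHA-256 is an illustrative choice). Note that it provides authentication without confidentiality: the message itself travels in the clear.

```python
import hashlib
import hmac

key = b"shared-key"                 # k: known only to sender and verifier
message = b"transfer 100 to alice"  # m: transmitted in the clear

# S(k): the sender generates the authenticator (a MAC) for the message
mac = hmac.new(key, message, hashlib.sha256).digest()

def verify(key: bytes, message: bytes, mac: bytes) -> bool:
    """V(k): recompute the MAC and compare it in constant time."""
    expected = hmac.new(key, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, mac)
```

The 32-byte authenticator is far shorter than a ciphertext of a long message, which is the efficiency advantage listed above.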
Key Distribution:
Keys play a vital role in cryptography, and maintaining the secrecy of a key is difficult. Key management is very challenging with increasing threats, especially in symmetric encryption; moreover, a system needs N separate keys when it must communicate with N other systems.
These reasons push users towards asymmetric encryption, where only one private key is necessary and the public keys can be stored in a structure called a key ring. The identity of a public key's owner is established by a digital certificate provided by a trusted party. Certificate authorities' public keys are included in internet browsers before distribution.
Self-assessment Questions
15) Which of the following does an authentication algorithm provide?
a) Authentication of the user
b) More Computations
c) Waste resources compared to other algorithms
d) Replicating Data
16) Which of the following is an algorithm in which the key for encryption is different
from the key for decryption?
a) Symmetric encryption
b) Authentication
c) Message digest authentication
d) Asymmetric encryption
4.2.7 Computer Security Classification
Computer systems are classified based on their security level for the following reasons:
• To provide a metric for evaluating the degree of trust a computer system provides while processing sensitive information.
• To provide an outline of security requirements during acquisition of computer systems.
• To set standards for the security features and trust requirements to be considered while manufacturing products.
Each security class was defined based on the following criteria:
• Security policy
• Accountability
• Assurance
• Documentation
According to the U.S. Department of Defense Trusted Computer System Evaluation Criteria, computer security is classified into four divisions and several classes, which are explained as follows:
• Minimal Protection Division – D
• Discretionary Protection Division – C
 Discretionary Security Protection Class – C1
 Controlled Access Protection Class – C2
• Mandatory Protection Division – B
 Labelled Security Protection Class – B1
 Structured Protection Class – B2
 Security Domain Class – B3
• Verified Protection Division – A
The above classification is explained as follows:
• D: Minimal protection is class D; security systems that fail to meet the requirements of any other security class fall under D.
• C: Class C has better security features than class D; using audit capabilities, it provides accountability of users and protection. Class C is further categorised into C1 and C2.
 C1: Most versions of UNIX fall under class C1, where cooperating users access data of the same sensitivity level. Access to files and objects by users is controlled by a Trusted Computing Base (TCB).
 C2: Class C2 provides an individual level of access for each user, and the actions of individual users can be monitored by the system administrator. Since each user has a different level of access, the TCB protects its data and data structures against modification. Secure versions of UNIX fall under this class.
• B: Class B makes the system more secure than C2 by adding sensitivity labels. Class B is further categorised as:
 B1: Class B1 assigns a security label to each object, which is mandatorily verified before access is granted. The TCB adds the sensitivity information to the header and footer of every readable file.
 B2: Class B2 differs from class B1 by adding sensitivity labels to system resources as well as objects. Class B2 also addresses covert channels and the auditing of events that may cause potential harm.
 B3: Class B3 makes use of access control lists, which record the objects that each group or specific user does not have access to. Here the TCB is also employed to monitor events and proactively detect violations of security policy; such events are immediately terminated.
• A: This class provides the top level of security. It is similar to class B3 but additionally uses formal specification and design techniques to make sure that the TCB has been implemented correctly, providing assurance in terms of security.
Self-assessment Questions
17) Computer security is classified into ___ types.
a) Three
b) Four
c) Seven
d) Nine
18) Which of the following classes adds a sensitivity label to system resources and objects?
a) B2
b) B3
c) A
d) B1
19) Which of the following is not a criterion for computer system classification?
a) Accountability
b) Assurance
c) Documentation
d) Availability
Summary
o Protection refers to protecting files and other resources from accidental
misuse by cooperating users sharing a system, who are generally using the
computer for ordinary purposes.
o Security refers to protecting systems from purposeful attacks, either internal or
external, from individuals with a motive to steal information, damage
information, or otherwise deliberately cause chaos in some manner.
o Phishing refers to sending an innocent-looking e-mail or web site intended to
fool people into revealing confidential information.
o User authentication is a procedure that allows a device to verify the identity of
someone who connects to a network resource.
o A one-time password (OTP) is a password that is valid for only one login
session or transaction, on a computer system or other digital device.
o One-time passwords avoid shoulder surfing and other attacks where a witness is
able to capture a password typed in by a user.
o A Trojan horse may track user login credentials and save them to send to a
malicious user, who can later log in to the computer and gain access to system
resources.
o A virus is mostly a small code embedded in a program.
o As the user runs the program, the virus starts getting embedded in other files
and programs and tends to make the system unusable for the user.
o File viruses are termed parasitic because they leave no new files on the
system and the original program remains entirely functional.
o A boot virus engages with the boot sector and runs before the OS is loaded.
o Boot viruses are also called memory viruses because, in operation, they reside
in memory and do not exist in the file system.
o Encrypted viruses travel in encoded form to escape detection. In practice they
are self-decrypting, which then allows them to attack other files.
o Stealth viruses try to escape detection by adjusting the parts of the system that
could be used to detect them.
o A system threat refers to the misuse of system services and network
connections to put the user in distress.
o System threats can be used to launch program threats across a complete
network; this is called a program attack.
o Cryptography includes creating written or generated codes that allow information
to be kept secret.
o Cryptography converts data into a format which is unreadable for an
unauthorised user, allowing it to be transmitted without anyone decoding it back
into a readable format, thus compromising the data. Information security uses
cryptography on several levels.
o Physical threats to a computer system can result in loss of the whole computer
system, damage to hardware or software, theft of the computer system,
vandalism, or natural disasters such as flood, fire, war and earthquakes.
Terminal Questions
1. Explain the security problems in Operating System.
2. Describe the concept of Program and System Threats.
3. Brief on the concept of Cryptography.
Answer Keys
Self-assessment Questions
Question No.   Answer
1              b
2              a
3              c
4              d
5              a
6              a
7              b
8              d
9              b
10             c
11             a
12             a
13             c
14             a
15             a
16             d
17             b
18             a
19             d
Activity
Activity Type: Offline/Online
Duration: 30 Minutes
Description:
Research the various program threats and system threats that have been identified and listed so far, and explain their impact on the OS. Also research how to counter such threats.
Case Study
DESIGN AND IMPLEMENTATION OF SECURITY OS
Abstract
The significance of an operating system with security-enhancing mechanisms at the kernel level, such as a reference monitor and a cryptographic file system, has been widely stressed, because the weaknesses and shortcomings of user-level mechanisms have been made public. When a system has only a reference monitor, it is exposed to low-level diversion or physical attack. Moreover, when a system has only a cryptographic file system, protecting itself is problematic for the file system. In order to deal with such problems, we designed and developed a security OS with a reference monitor, a cryptographic file system, authentication limitation, and session limitation.
Introduction
A number of security breaches have occurred since the development of computer systems. Moreover, a huge number of systems have been connected lately, which makes them more exposed to threats. In order to avoid these dangers, technologies like firewalls, intrusion detection systems (IDS), and audit trails have been developed. Each has limits: for example, a firewall cannot defend a system against an internal intruder, although an audit trail can help trace the intruder through the messages it leaves. Fundamentally, none of these user-level mechanisms can protect themselves from attackers who have attained administrator privilege; user-level mechanisms alone are incapable of protecting a system or its data.
Such problems led many researchers and vendors to focus on a security OS with key mechanisms such as a reference monitor and a cryptographic file system. The chief element of a security OS is the security kernel. It includes a reference monitor, often called access control, which identifies which subjects can execute which operations on which objects. The disadvantages of the reference monitor are its additional overheads and its exposure to both low-level and physical attacks.
As an example of overheads, one access-control implementation requires more than 5% overhead, which is intolerable in some cases. As an example of low-level and physical attacks, if an intruder finds a way to access information below the level of the reference monitor, then the intruder can access all information. In addition, if an intruder can detach a disk from a protected system, physically attach it to another system without a reference monitor, and mount it, then all information can be disclosed.
Protection through a cryptographic file system alone also has shortcomings. For example, if
an intruder obtains system administrator privileges on a system protected only by a
cryptographic file system, the intruder has many opportunities to read important data by
discovering details of the installed cryptographic file system, such as the mounted
directories, the encryption algorithms, and the key information, because the system
administrator can access every object in the system. In other words, protection through a
cryptographic file system is not enough without strong protection of system privileges.
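The core idea of a cryptographic file system is that a file block is encrypted before it reaches the disk, so a stolen disk yields only ciphertext. The sketch below uses a toy SHA-256 counter-mode keystream purely as a stand-in for a real cipher such as AES; the key, nonce, and data are invented for the example and this construction should never be used in production.

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy keystream from SHA-256 in counter mode -- a stand-in for a
    real cipher such as AES; illustrative only."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_file_block(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # XOR with the keystream; the same call also decrypts, since XOR
    # is its own inverse.
    ks = keystream(key, nonce, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

key, nonce = b"secret-key", b"block-0"
ct = encrypt_file_block(key, nonce, b"payroll data")   # written to disk
pt = encrypt_file_block(key, nonce, ct)                # read back and decrypted
print(pt)  # b'payroll data'
```

An attacker who mounts the disk on another machine sees only `ct`; without the key, the plaintext is unrecoverable. The case's point is that the key and mount information must themselves be protected, which is why a reference monitor is needed on top.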
Besides these problems, the aforementioned underlying mechanisms cannot block various
network attacks. If an intruder obtains a user privilege on a system protected only by a
username and password, the intruder can connect to the system at any time from any hidden
location and attempt to do harm. In addition, an intruder who opens many sessions causes a
denial of service: normal users cannot connect to the system because of the excessive load.
Many of the aforementioned problems can be solved by combining these mechanisms. The
low-level and physical weaknesses of the reference monitor can be addressed by pairing it
with the cryptographic file system: even if an intruder accesses information below the level
of the access control or steals the protected disk, the intruder cannot read the information
because it is encrypted. The privilege problem of the cryptographic file system can be solved
by separating the security administrator from the system administrator and granting
cryptographic file system privileges only to the security administrator through the reference
monitor. In this case, the reference monitor minimises the intruder's chances of finding
information about the cryptographic file system even if the intruder gains system
administrator privileges, and it effectively protects the important data in cooperation with
the cryptographic file system. The network-access problems of the underlying mechanisms can
be solved by authentication limitation and session limitation. Authentication limitation
allows a specific user to use the protected system only from a specific place, so an intruder
who is not at that place cannot connect to the system at all. Session limitation allows only
a limited number of connection sessions, so an intruder cannot open more sessions to the
protected system than the permitted number, which defeats the denial-of-service attack. The
authentication limitation and session limitation are themselves protected because the
reference monitor hides their information. In summary, when a reference monitor is used
together with a cryptographic file system, authentication limitation, and session limitation,
these mechanisms complement each other and make the system far less vulnerable to attack.
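The two network-side mechanisms can be sketched together. This is an illustrative model, not the case's implementation; the user name, permitted address, and session cap are assumptions chosen for the example.

```python
# Illustrative sketch: authentication limitation binds each user to permitted
# source addresses, and session limitation caps concurrent sessions per user.
# All names and limits below are assumed for the example.

ALLOWED_LOCATIONS = {"alice": {"10.0.0.5"}}  # user -> permitted source IPs
MAX_SESSIONS = 3                             # per-user session cap
open_sessions = {}                           # user -> current session count

def try_connect(user: str, source_ip: str) -> str:
    # Authentication limitation: reject connections from other locations.
    if source_ip not in ALLOWED_LOCATIONS.get(user, set()):
        return "rejected: location not permitted"
    # Session limitation: reject once the cap is reached, defeating
    # session-flooding denial-of-service attempts.
    if open_sessions.get(user, 0) >= MAX_SESSIONS:
        return "rejected: session limit reached"
    open_sessions[user] = open_sessions.get(user, 0) + 1
    return "connected"

print(try_connect("alice", "8.8.8.8"))    # rejected: location not permitted
for _ in range(3):
    try_connect("alice", "10.0.0.5")      # three sessions succeed
print(try_connect("alice", "10.0.0.5"))   # rejected: session limit reached
```

In the case study, the tables corresponding to `ALLOWED_LOCATIONS` and the session counters are themselves objects guarded by the reference monitor, so an intruder cannot simply edit them.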
Conclusion
We designed a security OS with a reference monitor, a cryptographic file system,
authentication limitation, and session limitation. The reference monitor was newly designed
and implemented to be fast, simple, mandatory, and fixed, because the overheads of supporting
all of the RBAC features and full flexibility were significant. It had RBAC elements and DAC
characteristics, and it required no changes to the OS. The cryptographic file system,
authentication limitation, and session limitation were added so that the mechanisms
complement each other, and their overheads were measured. The average overhead was 1.2% with
the reference monitor alone; 3.5% for file operations without encryption and decryption; and
42.8% for file operations with encryption and decryption, owing to the operation checks of
the reference monitor and the encryption and decryption performed by the cryptographic file
system. In most cases the overhead was only 1.2%, because encrypted file operations were
applied only to the most important directories. We conclude that the advantages compensate
for the slight overheads. Our security OS gives better system protection: i) by enforcing
least privilege through the reference monitor; ii) by making a stolen or directly accessed
protected disk unreadable through the cryptographic file system; iii) by making it difficult
for an intruder to access the system from non-permitted places; iv) by making it difficult
for an intruder to attack the system by opening many sessions; and v) by making it difficult
to unload or kill the developed modules or applications. As a research project, our model and
implementation of a security OS are valuable because, to the best of our knowledge, they
represent the first security OS that combines authentication limitation, session limitation,
a cryptographic file system, and a reference monitor.
Discussion Questions:
1. Describe the phases of designing and implementing the security OS mentioned in the
case.
2. Collect information on the success rate of implementing the steps mentioned in the
case in today's IT industry.
Bibliography
e-References
• Security Problem. Retrieved 21 Dec 2016 from https://www.cs.uic.edu/~jbell/CourseNotes/OperatingSystems/15_Security.html
• User Authentication. Retrieved 21 Dec 2016 from https://www.cs.uic.edu/~jbell/CourseNotes/OperatingSystems/15_Security.html
• One Time Password. Retrieved 21 Dec 2016 from https://www.cs.uic.edu/~jbell/CourseNotes/OperatingSystems/15_Security.html
Image Credits
• Figure 4.2.1: Silberschatz, Abraham, Peter B. Galvin, and Greg Gagne. Operating System Principles. Hoboken, NJ: J. Wiley & Sons, 2006. Print.
• Figure 4.2.2: http://www.uobabylon.edu.iq/download/M.S%202013-2014/Operating_System_Concepts,_8th_Edition%5BA4%5D.pdf
External Resources
• Whitman, M. E., & Mattord, H. J. (2016). Principles of Information Security. Australia: Delmar.
• Pfleeger, C. P., Pfleeger, S. L., & Margulies, J. (2015). Security in Computing. Upper Saddle River, NJ: Prentice Hall.
• Jaeger, T. (2008). Operating System Security. San Rafael, CA: Morgan & Claypool.
Video Links
• One Time Password: https://www.youtube.com/watch?v=-bnjqHe2rh0
• Security Issues in Operating System: https://www.youtube.com/watch?v=5WHdDpXXLb0
• Cyber Security: https://www.youtube.com/watch?v=0p3787JiFgQ