Chapter 4: Processes

Process
 Process – a program in execution.
 Related terms: Job, Step, Load Module, Task.
 Process execution must progress in sequential
fashion.
 A process is more than the program code; it includes three segments:
 Program: code/text.
 Data: program variables.
 Stack: for procedure calls and parameter
passing.
 Note:
 A program is a passive entity whereas a
process is an active entity with a program
counter specifying what to do next and a set of
associated resources.
 All multiprogramming OSs are built around the concept of processes.
Process States
 A process can be in one of many possible states:
 new: The process is being created but has not
been admitted to the pool of executable
processes by the operating system.
 running: Instructions are being executed.
 waiting: The process is waiting for some event to
occur.
 ready: The process is waiting to be assigned to a
processor.
 terminated: The process has finished execution.
Process Transitions
 As a process executes, it changes its state.
(Figure: process state transition diagram)
 The above figure indicates the types of events that
lead to each state for a process; the possible
transitions are as follows:
o Null  New: A new process is created to
execute a program. This event occurs for any of
the following reasons:
 An interactive logon to the system by a
user
 Created by the OS to provide a service on behalf of a user program
 Spawned by an existing process
 The OS is prepared to take on a new batch
job
o New  Ready: The OS moves a new process
to the ready state when it is prepared to take on an additional process (most systems set some limit on the number of existing processes)
o Ready  Running: The OS chooses one of the processes in the ready state and assigns the CPU to it.
o Running  Terminated: The process is
terminated by the OS if it has completed or
aborted.
o Running  Ready: The most common reasons for this transition are:
 The running process has used up its time slice.
 The running process gets interrupted
because a higher priority process is in the
ready state.
o Running  Waiting (Blocked): A process is put into this state if it requests something for which it must wait:
 A service that the OS is not ready to
perform.
 An access to a resource not yet available.
 Initiates I/O and must wait for the result.
 Waiting for a process to provide input.
o Waiting  Ready: A process from a waiting
state is moved to a ready state when the event
for which it has been waiting occurs.
o Ready  Terminated: Not shown on the
diagram. In some systems, a parent may
terminate a child process at any time. Also,
when a parent terminates, all child processes
are terminated.
o Blocked  Terminated: Not shown. This
transition occurs for the reasons given above.
 Another state, Suspend, can also be included in the model. The operating system may move a process from the blocked state to a suspend state by temporarily swapping it out of memory.
Linux Process States
1. TASK_RUNNING – the process is running, or is ready to run.
2. TASK_INTERRUPTIBLE – the process is sleeping and can be woken by a signal.
3. TASK_UNINTERRUPTIBLE – the process is sleeping and does not respond to signals.
4. TASK_ZOMBIE – the process has terminated, but its parent has not yet collected its exit status.
5. TASK_STOPPED – process execution has been stopped (e.g., by a job-control signal or a debugger).
6. TASK_EXCLUSIVE – not a separate state, but a flag combined with a sleeping state to request exclusive (wake-one) wakeup from a wait queue.
Process Control Block (PCB)
 Each process in the operating system is
represented by a process control block (PCB) – also
called a task control block.
 Information associated with each process includes:
 Process state – new, ready, running, waiting...
 Process identification information
 Unique process identifier (PID) - indexes
(directly or indirectly) into the process table.
 User identifier (UID) - the user who is
responsible for the job.
 Identifier of the process that created this
process (PPID).
 Program counter – indicates the next instruction to be executed for this process.
 CPU registers – index registers, general-purpose registers, etc., saved so that the process can be restarted correctly after an interrupt occurs.
 CPU scheduling information – such as process priority, pointers to scheduling queues, etc.
 Memory-management information – includes base and limit registers, page tables, etc.
 Accounting information – amount of CPU and real time used, time limits, account numbers, job or process numbers, and so on.
 I/O status information – list of I/O devices allocated to this process, a list of open files, etc.
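In C, the PCB can be pictured as a record with one field per item above. A minimal sketch (field names and types are illustrative only, not those of any real OS; the next pointer anticipates the scheduling-queue linkage discussed in the next section):

#include <sys/types.h>

typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

typedef struct pcb {
    proc_state_t state;           /* process state */
    pid_t        pid, ppid, uid;  /* identification information */
    void        *pc;              /* saved program counter */
    long         registers[16];   /* saved CPU registers */
    int          priority;        /* CPU scheduling information */
    void        *page_table;      /* memory-management information */
    long         cpu_time_used;   /* accounting information */
    int          open_files[20];  /* I/O status: open file descriptors */
    struct pcb  *next;            /* link to the next PCB in a scheduling queue */
} pcb_t;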
Process Scheduling Queues
 Job queue – when a process enters the system, it is put in the job queue.
 Ready queue – the set of all processes residing in main memory that are ready and waiting to execute.
 Device queues – There may be many processes in the system requesting I/O. Since only one I/O request can be serviced at a time by a particular device, a process needing I/O may have to wait. The list of processes waiting for a given I/O device is kept in the device queue for that device.
 An example of a ready queue and various device
queues is shown below.
Schedulers
 A process may migrate between the various
queues.
 The OS must select, for scheduling purposes,
processes from these queues.
 Long-term scheduler (or job scheduler) – selects
which processes should be brought into the ready
queue.
 It is invoked very infrequently (every few seconds or minutes), so it may be slow.
 It controls the degree of multiprogramming.
 Short-term scheduler (or CPU scheduler) – selects
which process should be executed next and
allocates CPU.
 The short-term scheduler is invoked very frequently (every few milliseconds), so it must be fast.
 Medium-term (midterm) scheduler – selects which partially executed process, swapped out earlier, should be brought back into the ready queue.
Process Context
Process Switch
 A process switch may occur whenever the OS has gained control of the CPU, i.e., on one of the following:
– Supervisor Call
• Explicit request by the program (e.g., file open). The process will probably be blocked.
– Trap
• An error resulted from the last instruction. It
may cause the process to be moved to the
Exit state.
– Interrupt
• The cause is external to the execution of
the current instruction. Control is
transferred to Interrupt Handler.
Context Switching
 When the CPU switches to another process, the system must save the state of the old process and load the saved state of the new process; this is called a context switch.
 The time it takes is dependent on hardware support.
 Context-switch time is overhead; the system does
no useful work while switching.
Steps in Context Switching
 Save the context of the processor, including the program counter and other registers.
 Update the PCB of the running process with its new state and other associated information.
 Move the PCB to the appropriate queue (ready, blocked, etc.).
 Select another process for execution.
 Update PCB of the selected process.
 Restore CPU context from that of the selected
process.
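These steps are normally performed by the kernel, but the underlying idea of saving one execution context and restoring another can be sketched at user level with the ucontext API (deprecated in recent POSIX but still available on Linux); here main and task play the role of two processes whose contexts are switched:

#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, task_ctx;
static char task_stack[16384];

static void task(void)
{
    printf("task: running after its context was restored\n");
    swapcontext(&task_ctx, &main_ctx);   /* save task's context, restore main's */
    printf("task: resumed again, finishing\n");
}

int main(void)
{
    getcontext(&task_ctx);                       /* initialise a context record */
    task_ctx.uc_stack.ss_sp   = task_stack;      /* give the task its own stack */
    task_ctx.uc_stack.ss_size = sizeof(task_stack);
    task_ctx.uc_link          = &main_ctx;       /* where to go when task returns */
    makecontext(&task_ctx, task, 0);

    printf("main: switching to task\n");
    swapcontext(&main_ctx, &task_ctx);           /* save main's context, restore task's */
    printf("main: back, switching to task once more\n");
    swapcontext(&main_ctx, &task_ctx);
    printf("main: done\n");
    return 0;
}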
Operations on Processes
 The OS should be able to create and delete processes dynamically.
Process Creation
 When the OS or a user process decides to create a
new process, it can proceed as follows:
 Assign a new process identifier and add its
entry to the primary process table.
 Allocate space for the process (program + data) and the user stack. The amount of space required can be set to default values depending on the process type. If a user process spawns a new process, the parent process can pass these values to the OS.
 Create process control block.
 Set appropriate linkage, e.g., add the process to the ready queue.
 Create other necessary data structures (e.g., to store accounting information).
 Parent process creates children processes, which, in turn, create other processes, forming a tree of processes.
 Resource sharing possibilities:
 Parent and children share all resources.
 Children share a subset of the parent's resources.
 Parent and child share no resources.
 Execution possibilities:
 Parent and children execute concurrently.
 Parent waits until children terminate.
 Address space possibilities:
 Child is a duplicate of the parent.
 Child has a program loaded into it.
 UNIX examples
 In UNIX, every process has a unique process identifier (an integer), the PID.
 The fork system call creates a new process. The child process is a copy of the address space of the parent process. Both parent and child continue execution at the instruction after the fork; fork returns 0 in the child and the child's PID in the parent.
 The exec system call is used after a fork to replace the process's memory space with a new program.
 The wait system call suspends the parent (removes it from the ready queue) until a child terminates.
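For example, fork and exec are typically combined as in the following minimal sketch (/bin/ls and its arguments are just placeholders for whatever program the child should run):

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();                 /* create a child process */
    if (pid == 0) {
        /* child: replace its memory image with a new program */
        execl("/bin/ls", "ls", "-l", (char *)NULL);
        perror("execl failed");         /* reached only if exec fails */
        return 1;
    }
    wait(NULL);                         /* parent: wait for the child to terminate */
    printf("Child has terminated.\n");
    return 0;
}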
Process Termination
 A process terminates when it executes its last statement and asks the operating system to delete it by using the exit system call. At that point:
 Output data is returned from the child to the parent (via wait).
 The process's resources are deallocated by the operating system.
 A parent may terminate the execution of its children via an appropriate system call (e.g., abort). A parent may terminate one of its children for the following reasons:
 The child has exceeded its allocated resources.
 The task assigned to the child is no longer required.
 The parent is exiting, and the operating system does not allow a child to continue if its parent terminates (cascading termination).
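A parent forcibly terminating a child can be sketched with the POSIX kill() call (illustrative only; the child's endless loop stands in for any long-running work):

#include <stdio.h>
#include <signal.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t child = fork();              /* create a child process */
    if (child == 0) {
        while (1)                      /* child: pretend to work forever */
            sleep(1);
    }
    sleep(3);                          /* parent: let the child run briefly */
    kill(child, SIGKILL);              /* terminate the child */
    wait(NULL);                        /* collect its exit status (avoid a zombie) */
    printf("Child %d terminated by parent.\n", (int)child);
    return 0;
}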
A Linux Example
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

void ChildProcess();

int main()
{
    int pid, cid, r;
    pid = getpid();
    r = fork();                    /* create a new process */
    if (r == 0)                    /* r == 0 -> this is the child */
    {
        cid = getpid();            /* get the child's process ID */
        printf("I am the child with cid = %d of pid = %d\n", cid, pid);
        ChildProcess();
        exit(0);
    }
    else
    {
        printf("Parent waiting for the child...\n");
        wait(NULL);
        printf("Child finished, parent quitting too!\n");
    }
    return 0;
}

void ChildProcess()
{
    int i;
    for (i = 0; i < 5; i++)
    {
        printf("%d ..\n", i);
        sleep(1);
    }
}
Cooperating Processes
 The concurrent processes executing in the OS may
be either independent or cooperating.
 An independent process cannot affect or be affected by the execution of another process. It does not share data with any other process.
 A cooperating process can affect or be affected by the execution of another process. It shares data with other process(es).
 Advantages of process cooperation are:
 Information sharing
 Computation speed-up
 Modularity
 Convenience
 Producer-Consumer problem: an example of cooperating processes.
 Paradigm for cooperating processes: a producer process produces information that is consumed by a consumer process.
 unbounded-buffer places no practical limit on
the size of the buffer.
 bounded-buffer assumes that there is a fixed
buffer size.
Shared data
#define BUFFER_SIZE 10
typedef struct {
    ...
} item;
item buffer[BUFFER_SIZE];
int in = 0;     /* next free slot */
int out = 0;    /* first full slot */
Producer Process
item nextProduced;
while (1) {
    /* produce an item in nextProduced */
    while (((in + 1) % BUFFER_SIZE) == out)
        ; /* do nothing: buffer full (at most BUFFER_SIZE-1 items can be stored) */
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
}
Consumer process
item nextConsumed;
while (1) {
    while (in == out)
        ; /* do nothing: buffer empty */
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    /* consume the item in nextConsumed */
}
Interprocess Communication (IPC)
 Mechanism for processes to communicate and to
synchronize their actions.
 Message system – processes communicate with
each other without resorting to shared variables.
 IPC facility provides two operations:
 send(message) – message size fixed or
variable
 receive(message)
 If P and Q wish to communicate, they need to:
 Establish a communication link between them
 Exchange messages via send/receive
 Implementation of communication link
 Physical (e.g., shared memory, hardware bus)
 Logical (e.g., logical properties)
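As an illustration, two related processes can exchange a message through a POSIX pipe acting as the communication link (a minimal sketch; the message text is arbitrary):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];                          /* fd[0]: read end, fd[1]: write end */
    char msg[64];

    pipe(fd);                           /* establish the communication link */
    if (fork() == 0) {
        /* child: "send" a message by writing to the pipe */
        const char *text = "hello from the child";
        write(fd[1], text, strlen(text) + 1);
        return 0;
    }
    /* parent: "receive" the message by reading from the pipe */
    read(fd[0], msg, sizeof(msg));
    printf("Parent received: %s\n", msg);
    wait(NULL);
    return 0;
}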
Threads
 A thread, also called a lightweight process (LWP), is
the basic unit of CPU utilization.
 It has its own program counter, a register set, and
stack space.
 It shares with its peer threads the code section, data section, and OS resources such as open files and signals; this collection is called a task.
 The idea of a thread is that a process has five
fundamental parts: code ("text"), data, stack, file I/O,
and signal tables. "Heavy-weight processes"
(HWPs) have a significant amount of overhead
when switching: all the tables have to be flushed
from the processor for each task switch. Also, the
only way to achieve shared information between
HWPs is through pipes and "shared memory". If a
HWP spawns a child HWP using fork(), the only part
that is shared is the text.
 Threads reduce overhead by sharing fundamental parts. By sharing these parts, switching between threads happens much more quickly and efficiently. Also, sharing information is not so "difficult" anymore: everything can be shared.
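A minimal sketch using POSIX threads (pthreads); the worker function and shared_counter are invented for illustration, and show that peer threads see the same global data:

#include <stdio.h>
#include <pthread.h>

int shared_counter = 0;                  /* global data shared by all threads */

void *worker(void *arg)                  /* illustrative thread function */
{
    shared_counter++;                    /* every thread sees the same variable */
    printf("Thread %ld running, counter = %d\n", (long)arg, shared_counter);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, (void *)1L);   /* spawn two peer threads */
    pthread_create(&t2, NULL, worker, (void *)2L);
    pthread_join(t1, NULL);              /* wait for both threads to finish */
    pthread_join(t2, NULL);
    printf("Final counter = %d\n", shared_counter);
    return 0;
}

Compile with -lpthread. Incrementing shared_counter without a lock is, in general, a race condition; it is tolerated here only to keep the sketch short (locking is discussed later in this chapter).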
User-Level and Kernel-Level Threads
 There are two types of threads: user-level and kernel-level.
 A user-level thread library avoids the kernel and manages the thread tables itself.
 These threads are implemented in user-level
libraries rather than via system calls.
 Often this is called "cooperative multitasking" where
the task defines a set of routines that get "switched
to" by manipulating the stack pointer.
 Typically each thread "gives up" the CPU by calling an explicit switch, sending a signal, or doing an operation that involves the switcher. Also, a timer signal can force switches.
 User threads typically can switch faster than kernel
threads.
Thread States
 Threads can be in one of several states: ready, blocked, running, or terminated.
 Like processes, threads share the CPU, and only one thread at a time is in the running state.
What kinds of things should be threaded?
 If you are a programmer and would like to take advantage of multithreading, the natural question is which parts of the program should or should not be threaded. Here are a few rules of thumb (if you say "yes" to these, have fun!):
 Are there groups of lengthy operations that
don't necessarily depend on other processing
(like painting a window, printing a document,
responding to a mouse-click, calculating a
spreadsheet column, signal handling, etc.)?
 Will there be few locks on data (the amount of
shared data is identifiable and "small")?
 Are you prepared to worry about locking (mutually excluding data regions from other threads), deadlocks (a condition where two threads each hold a lock that the other is trying to acquire), and race conditions (a nasty, intractable problem where data is not locked properly and gets corrupted through threaded reads & writes)?
 Could the task be broken into various
"responsibilities"? E.g. could one thread handle
the signals, another handle GUI stuff, etc.?