Programmer's Perspective
a program is an ordered set of instructions.
O.S. Perspective
a program is an executable file stored in secondary memory, typically on a disk.
From the O.S. point of view, a process relates to the execution of a program.
Process creation involves 4 major steps:
setting up the process description
allocating an address space
loading the program into the allocated address space, and
passing the process description to the scheduler
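As an illustration of these steps from the programmer's side, here is a minimal POSIX sketch using fork() and execv(); the kernel performs the four steps listed above behind these calls. The program path /bin/ls and its arguments are only examples.

```c
/* Minimal POSIX sketch of process creation from the programmer's side.
   PCB setup, address space allocation, program loading and hand-over to
   the scheduler happen inside fork() and execv(). */
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();            /* new PCB + new address space for the child */
    if (pid < 0) {
        perror("fork");
        exit(EXIT_FAILURE);
    }
    if (pid == 0) {                /* child: load a new program image */
        char *argv[] = { "ls", "-l", NULL };
        execv("/bin/ls", argv);    /* program loaded into the child's address space */
        perror("execv");           /* reached only if loading failed */
        _exit(EXIT_FAILURE);
    }
    waitpid(pid, NULL, 0);         /* parent waits; the scheduler runs the child */
    return 0;
}
```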
Process Description
The O.S. describes a process by means of a description table known as the process control block (PCB).
The PCB contains all the information related to the whole life cycle of a process, such as the process identification, owner, process status, description of the allocated address space, and so on.
Different operating systems use different names for the process description table, such as the process table in UNIX, the Process Information Block in OS/2, and the Task Control Block in IBM mainframe operating systems.
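The sketch below is a purely hypothetical C structure; the field names are chosen here only to illustrate the kind of information a PCB typically holds, since every real O.S. defines its own layout.

```c
/* Hypothetical sketch of a process control block (PCB).
   Field names are illustrative only; real systems differ. */
#include <stddef.h>
#include <stdint.h>

enum proc_state { PROC_NEW, PROC_READY, PROC_RUNNING, PROC_BLOCKED, PROC_TERMINATED };

struct pcb {
    int32_t          pid;          /* process identification              */
    int32_t          owner_uid;    /* owner of the process                */
    enum proc_state  state;        /* process status                      */
    uintptr_t        mem_base;     /* description of the allocated        */
    size_t           mem_limit;    /*   address space (base + limit)      */
    uintptr_t        saved_sp;     /* saved execution context             */
    int              priority;     /* scheduling information              */
    struct pcb      *next;         /* link used by the scheduler's queues */
};
```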
Allocating an Address Space
Allocation of an address space to a process can be done in one of two basic ways:
sharing the address space among the created processes (shared memory), or
allocating a distinct address space to each process (per-process address spaces).
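A minimal POSIX sketch of the difference, assuming mmap() with MAP_ANONYMOUS is available: the mapped region is shared between parent and child, while an ordinary variable stays in each process's own address space.

```c
/* Sketch: one region shared between parent and child (shared memory),
   while the rest of each address space stays private (per-process). */
#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int *shared = mmap(NULL, sizeof(int), PROT_READ | PROT_WRITE,
                       MAP_SHARED | MAP_ANONYMOUS, -1, 0);   /* shared region */
    int private_copy = 0;              /* lives in the per-process address space */

    if (shared == MAP_FAILED) { perror("mmap"); return 1; }

    if (fork() == 0) {                 /* child */
        *shared = 42;                  /* visible to the parent                */
        private_copy = 42;             /* changes only the child's own copy    */
        _exit(0);
    }
    wait(NULL);
    printf("shared = %d, private_copy = %d\n", *shared, private_copy);  /* 42, 0 */
    munmap(shared, sizeof(int));
    return 0;
}
```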
Loading the Program into the Allocated Address Space
The executable program file is loaded into the allocated address space; how this is done depends on the memory management scheme adopted by the O.S.
Passing the Process Description to the Scheduler
The created process is passed to the scheduler, which allocates the processor among the competing processes.
This is managed by setting up and manipulating queues of PCBs.
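A minimal sketch of such a queue, reusing the hypothetical struct pcb shown earlier; real schedulers keep several queues (ready, blocked, one per priority level, and so on).

```c
/* Sketch of a FIFO ready queue of PCBs, assuming the hypothetical
   struct pcb shown earlier (with its `next` link and `state` fields). */
struct ready_queue {
    struct pcb *head;
    struct pcb *tail;
};

/* Hand a newly created process over to the scheduler. */
static void enqueue_ready(struct ready_queue *q, struct pcb *p)
{
    p->next = NULL;
    if (q->tail)
        q->tail->next = p;
    else
        q->head = p;
    q->tail = p;
    p->state = PROC_READY;
}

/* Pick the next process to run (plain FIFO here; real schedulers
   also use priorities, time slices, and so on). */
static struct pcb *dequeue_ready(struct ready_queue *q)
{
    struct pcb *p = q->head;
    if (p) {
        q->head = p->next;
        if (!q->head)
            q->tail = NULL;
    }
    return p;
}
```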
Process Creation Sub-models
The simpler model, used for example by IBM/PCP and IBM/360/DOS, did not allow a process to be broken down further into sub-processes.
Process Spawning
a process may create a new process, called a child process.
A child process can be created by the programmer using standard process creation mechanisms.
Spawned processes form a process hierarchy, known as a process tree.
(Figure: an example process tree in which process A spawns B and C, which in turn spawn D and E.)
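A small POSIX sketch of spawning: the parent below plays the role of A and creates two child processes (B and C); each child could call fork() again to grow the tree further.

```c
/* Sketch: a parent process (A) spawns two children (B and C).
   Each child could call fork() again, growing the process tree. */
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    for (int i = 0; i < 2; i++) {          /* spawn two children */
        pid_t pid = fork();
        if (pid == 0) {                    /* child process */
            printf("child %d: pid=%d parent=%d\n", i, getpid(), getppid());
            _exit(0);                      /* child does its work and exits */
        }
    }
    while (wait(NULL) > 0)                 /* parent (A) waits for B and C */
        ;
    return 0;
}
```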
Process Scheduling Sub-models
Process Model
Scheduling is performed on a per-process basis, i.e. the smallest unit of work to be scheduled is a process.
Process-thread Model
in this model a smaller unit of work, known as a thread, is scheduled as an entity in its own right.
a thread, like a process, is a sequence of instructions.
threads are smaller chunks of code (lightweight).
threads are created within, and belong to, a process, and share the resources of that process.
for parallel thread processing, scheduling is performed on a per-thread basis.
this is finer-grained, with less overhead when switching from thread to thread.
Most of the operating systems released in the 1980s and 1990s are based on the process-thread model, such as OS/2, Windows NT, and SunOS 5.0.
Threads have a life cycle similar to that of processes and are managed in largely the same way.
(Figure: a process containing multiple threads.)
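A minimal POSIX threads sketch: the threads below are created inside one process and share its global data, which is what makes them lighter-weight than separate processes. Compile with -pthread.

```c
/* Sketch: threads created within one process share its resources
   (here, the global counter and its mutex). */
#include <pthread.h>
#include <stdio.h>

static int counter = 0;                       /* shared by all threads of the process */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);            /* shared memory requires synchronization */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t[4];
    for (int i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    printf("counter = %d\n", counter);        /* 400000 */
    return 0;
}
```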
The Concepts of Concurrent Execution (N-client 1-server)
Concurrent execution is the temporal behavior of the N-client 1-server model, in which only one client is served at any given moment.
(Figure: client/server timing diagrams contrasting the sequential nature and the simultaneous nature of serving N clients.)
Preemption rule
non pre-emptive: a client, once accepted for service, is served until completion.
pre-emptive: the server may suspend the client currently being served and turn to another client. Pre-emptive service may be time-shared (each client is served for one time slice at a time) or prioritized (the client with the highest priority is served first).
(Figure: client/server timing diagrams for non-pre-emptive, time-shared, and prioritized service.)
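A toy C simulation of the time-shared, pre-emptive case (not O.S. code; the amounts of work are made up): one server loop gives each of the N clients a fixed time slice in turn.

```c
/* Toy simulation of the N-client 1-server model with time-shared,
   pre-emptive service: the "server" loop runs each client for at most
   one time slice before moving on to the next one. */
#include <stdio.h>

#define NCLIENTS 3
#define SLICE    2          /* time units a client gets before being pre-empted */

int main(void)
{
    int remaining[NCLIENTS] = { 5, 3, 4 };   /* work left for each client (arbitrary) */
    int left = NCLIENTS, t = 0;

    while (left > 0) {
        for (int c = 0; c < NCLIENTS; c++) {
            if (remaining[c] == 0)
                continue;                     /* this client is already done */
            int run = remaining[c] < SLICE ? remaining[c] : SLICE;
            printf("t=%d: server runs client %d for %d unit(s)\n", t, c, run);
            t += run;
            remaining[c] -= run;
            if (remaining[c] == 0)
                left--;                       /* client leaves the system */
        }
    }
    return 0;
}
```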
N-client N-server model
Synchronous (lock step): each server starts service at the same moment.
Asynchronous: the servers do not work in concert.
(Figure: clients and servers in the N-client N-server model, operating synchronously and asynchronously.)
Sequential languages
languages that do not support the N-client model (C, Pascal, Fortran, PROLOG, LISP).
Concurrent languages
employ constructs to implement the N-client 1-server model by specifying concurrent threads and processes, but lack language constructs to describe the N-server model (Ada, Concurrent Pascal, Modula-2, Concurrent PROLOG).
Data parallel languages
introduce special data structures that are processed in parallel, element by element (High Performance Fortran, DAP Fortran, DAP PROLOG, Connection Machine LISP).
Parallel languages
extend the specification of the N-client model of concurrent languages with processor allocation language constructs that enable use of the N-server model (Occam-2, 3L Parallel C, Strand-88).
Available and utilized parallelism
available: present in the program or in the problem solution.
utilized: actually exploited during execution.
Types of available parallelism
functional
arises from the logic of a problem solution
data
arises from data structures that allow parallel operations on their elements, such as vectors or matrices, in the problem solution.
This gives rise to parallel execution for the data-parallel part of the computation, as sketched below.
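A small C sketch of data parallelism: c[i] = a[i] + b[i] is independent for every element, so all iterations could run in parallel. The OpenMP directive is only one possible way to express this, assuming an OpenMP-capable compiler (e.g. compile with -fopenmp; without it, the pragma is simply ignored).

```c
/* Sketch of data parallelism: every element of c can be computed
   independently, element by element. */
#include <stdio.h>

#define N 8

int main(void)
{
    double a[N], b[N], c[N];

    for (int i = 0; i < N; i++) {        /* set up some example data */
        a[i] = i;
        b[i] = 2.0 * i;
    }

    #pragma omp parallel for             /* element-by-element, in parallel */
    for (int i = 0; i < N; i++)
        c[i] = a[i] + b[i];

    for (int i = 0; i < N; i++)
        printf("c[%d] = %g\n", i, c[i]);
    return 0;
}
```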
Levels of Available Functional Parallelism
Parallelism at the instruction level (fine-grained)
particular instructions of a program are executed in parallel.
Parallelism at the loop level (middle-grained)
consecutive loop iterations are candidates for parallel execution.
May be restricted due to data dependencies between subsequent loop iterations, called recurrences (see the example after this list).
Parallelism at the procedure level (middle-grained)
parallel execution of procedures.
Parallelism at the program level (coarse-grained)
programs of multiple independent users are executed in parallel.
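To make the loop-level case concrete: in the first function below the iterations are independent and are candidates for parallel execution, while the second carries a recurrence (each iteration needs the previous result), which restricts parallel execution.

```c
/* Sketch contrasting loop-level parallelism with a recurrence. */
#define N 1000

void scale(double *x, double k)
{
    /* Independent iterations: candidates for parallel execution. */
    for (int i = 0; i < N; i++)
        x[i] = k * x[i];
}

double prefix_sum(const double *x, double *s)
{
    /* Recurrence: s[i] depends on s[i-1], so consecutive iterations
       cannot simply be executed in parallel. */
    s[0] = x[0];
    for (int i = 1; i < N; i++)
        s[i] = s[i - 1] + x[i];
    return s[N - 1];
}
```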
Available and Utilized Levels of Functional Parallelism

Available levels       | Utilized levels
User (program) level   | User level
Procedure level        | Process level
Loop level             | Thread level
Instruction level      | Instruction level

The utilized levels are exploited either (1) by the architecture or (2) by means of the operating system.
Utilization of Functional Parallelism
Available parallelism can be utilized by
architecture,
instruction-level functional parallel architectures, or instruction-level parallel architectures (ILP architectures).
compilers
parallel optimizing compiler
operating system
multitasking
Multithreading
concurrent execution at the thread level.
Multiple threads are generated for each process, and these threads are executed concurrently on a single processor under the control of one O.S.
Multitasking
process level concurrent execution.
Multiprogramming
user level concurrent execution.
Timesharing
user level concurrent execution.
User level: multiprogramming, time-sharing
Process level: multitasking
Thread level: multi-threading
Using a data-parallel architecture
permits parallel or pipelined operations on data elements.
Flynn’s classification
SISD (Single Instruction, Single Data)
SIMD (Single Instruction, Multiple Data)
MISD (Multiple Instruction, Single Data)
MIMD (Multiple Instruction, Multiple Data)
Pipelining (time)
A number of functional units are employed in sequence to perform a single computation.
Each functional unit represents a certain stage of the computation.
Pipelining allows overlapped execution of instructions.
It increases the overall throughput of the processor.
Car Wash Example (one car in the wash at a time)

IN        | Soap cycle      | Rinse cycle      | Wax cycle      | Dry cycle      | OUT
Chevy IN  | Chevy gets soap | idle             | idle           | idle           |
          | idle            | Chevy gets rinse | idle           | idle           |
          | idle            | idle             | Chevy gets wax | idle           |
Ford IN   | idle            | idle             | idle           | Chevy gets dry |
          | Ford gets soap  | idle             | idle           | idle           | Chevy OUT
Car Wash Example (overlapped execution: a new car enters every cycle)

IN         | Soap cycle       | Rinse cycle       | Wax cycle      | Dry cycle      | OUT
Chevy IN   | Chevy gets soap  | idle              | idle           | idle           |
Ford IN    | Ford gets soap   | Chevy gets rinse  | idle           | idle           |
Volvo IN   | Volvo gets soap  | Ford gets rinse   | Chevy gets wax | idle           |
Saturn IN  | Saturn gets soap | Volvo gets rinse  | Ford gets wax  | Chevy gets dry |
Toyota IN  | Toyota gets soap | Saturn gets rinse | Volvo gets wax | Ford gets dry  | Chevy OUT
Replication (space)
a number of functional units perform multiple computations simultaneously
more processors
more memory
more I/O
more computers
e.g. array processors.