Chapter 4: Threads, SMP, and Microkernels
1. Processes and threads
2. Symmetric Multiprocessing (SMP)
3. Microkernel: structuring the OS to support process management
Process Characteristics
 The essence of a process:
 Unit of resource ownership - a process is allocated:
– a virtual address space to hold the process image
– control of some resources (files, I/O devices, ...); the OS prevents interference between processes with respect to these resources
 Unit of execution - a process is an execution path through one or more programs
– execution may be interleaved with other processes
– the process has an execution state (Running, Ready, ...) and a priority
Process Characteristics
 These two characteristics are treated independently by some recent operating systems
 The unit of execution is usually referred to as a thread or a lightweight process
 The unit of resource ownership is usually referred to as a process or task
Multithreading
 The operating system supports multiple threads of execution within a single process
 MS-DOS supports a single thread
 Traditional UNIX supports multiple user processes but only one thread per process
 Windows 2000, Solaris, Linux, Mach, and OS/2 support multiple threads
Process
 In a multithreading environment, a process:
– has a virtual address space which holds the process image
– has protected access to processors, other processes (IPC), files, and I/O resources
Thread
• Within a process, there may be one or more threads, each of which:
– has an execution state (Running, Ready, etc.)
– has a saved thread context when not running (an independent program counter)
– has private storage for local variables and an execution stack
– has shared access to the address space and resources (files, ...) of its process
• when one thread alters (non-private) data, all other threads of the process can see this (illustrated below)
• threads communicate via shared variables
• a file opened by one thread is available to the other threads
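A minimal sketch of the sharing described above, assuming POSIX threads (the variable and function names are illustrative, not from the slides): data written by one thread through an ordinary global variable is directly visible to another thread of the same process, because both share one address space.

```c
#include <pthread.h>
#include <stdio.h>

static char shared_message[64];     /* shared variable, not private per-thread storage */

static void *reader(void *arg) {
    /* Sees the data written by main(): same address space, no copying. */
    printf("reader thread sees: %s\n", shared_message);
    return NULL;
}

int main(void) {
    pthread_t tid;
    snprintf(shared_message, sizeof shared_message, "hello from main thread");
    pthread_create(&tid, NULL, reader, NULL);   /* the new thread shares our memory */
    pthread_join(tid, NULL);
    return 0;
}
```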
Benefits of Threads
 Mainly because a thread carries far less state information than a process:
– it takes far less time to create a new thread than a process (see the timing sketch below)
– less time to terminate a thread than a process
– less time to switch between two threads within the same process
– less communication overhead between threads than between processes
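A rough, hedged sketch of the creation-cost difference: time N fork()/wait() pairs against N pthread_create()/join() pairs. The loop count and output format are arbitrary choices; absolute numbers vary widely by system, so this only illustrates the comparison, not the benchmark quoted later in these slides.

```c
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <time.h>
#include <unistd.h>

#define N 1000

static void *noop(void *arg) { return arg; }        /* "null" thread body */

static double elapsed_ms(struct timespec a, struct timespec b) {
    return (b.tv_sec - a.tv_sec) * 1e3 + (b.tv_nsec - a.tv_nsec) / 1e6;
}

int main(void) {
    struct timespec t0, t1;

    /* Null fork: child exits immediately, parent waits. */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < N; i++) {
        pid_t pid = fork();
        if (pid == 0) _exit(0);
        waitpid(pid, NULL, 0);
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("fork:           %.1f ms for %d iterations\n", elapsed_ms(t0, t1), N);

    /* Null thread: thread returns immediately, creator joins it. */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < N; i++) {
        pthread_t tid;
        pthread_create(&tid, NULL, noop, NULL);
        pthread_join(tid, NULL);
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("pthread_create: %.1f ms for %d iterations\n", elapsed_ms(t0, t1), N);
    return 0;
}
```

Compile with `-pthread`; thread creation is typically much cheaper because no new address space or resource set has to be built.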
Benefits of Threads
 Example: a file server on a LAN
 It needs to handle several file requests over a short period
 Hence it is more efficient to create (and destroy) a thread for each request (see the sketch below)
 Such threads share files and variables
 In symmetric multiprocessing, different threads can possibly execute simultaneously on different processors
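A hedged sketch of the thread-per-request pattern from the file-server example, assuming POSIX threads. The request structure and the simulated "file I/O" are stand-ins for real network and file handling, not a real server API.

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

struct request { int id; };                      /* stand-in for one file request */

static void *serve(void *arg) {
    struct request *req = arg;
    /* All service threads share the process's open files and variables. */
    printf("thread serving request %d\n", req->id);
    usleep(1000);                                /* pretend to do file I/O */
    return NULL;                                 /* thread is destroyed afterwards */
}

int main(void) {
    struct request reqs[5] = {{1}, {2}, {3}, {4}, {5}};
    pthread_t tid[5];
    for (int i = 0; i < 5; i++)                  /* cheap to create one thread per request */
        pthread_create(&tid[i], NULL, serve, &reqs[i]);
    for (int i = 0; i < 5; i++)                  /* cheap to destroy when done */
        pthread_join(tid[i], NULL);
    return 0;
}
```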
Uses of Threads in a Single-User Multiprocessing System
 Foreground and background work (one application can run many threads; e.g. in a spreadsheet program, one thread displays menus and reads user input while another executes the commands)
 Asynchronous processing (e.g. one thread in a text editor saves the RAM contents to disk once every 5 minutes; a sketch follows this list)
 Speedy execution (different threads on different CPUs)
 Modular program structure (if a program has a variety of activities, each can use a different thread)
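A hedged sketch of the asynchronous-processing use above, assuming POSIX threads: a background thread periodically writes the in-RAM document to disk while the foreground thread keeps serving the user. The document buffer, file name, and mutex are illustrative choices, not from the slides.

```c
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static char document[4096] = "unsaved text";        /* in-RAM contents */
static pthread_mutex_t doc_lock = PTHREAD_MUTEX_INITIALIZER;

static void *autosave(void *arg) {
    for (;;) {
        sleep(300);                                  /* once every 5 minutes */
        pthread_mutex_lock(&doc_lock);
        FILE *f = fopen("autosave.txt", "w");
        if (f) { fputs(document, f); fclose(f); }
        pthread_mutex_unlock(&doc_lock);
    }
    return NULL;
}

int main(void) {
    pthread_t tid;
    pthread_create(&tid, NULL, autosave, NULL);      /* background thread */
    /* Foreground thread: keep editing the document / reading user input. */
    pthread_mutex_lock(&doc_lock);
    strcat(document, " ... more edits ...");
    pthread_mutex_unlock(&doc_lock);
    pause();                                         /* placeholder for the editor loop */
    return 0;
}
```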
Threads
 Process-level operations affect all threads of a process:
– suspending a process suspends all of its threads, since all threads share the same address space
– terminating a process terminates all threads within the process
Thread Functionality
• Two aspects: thread states and synchronization
• Basic states: Running, Ready, Blocked
• State-change operations (sketched in code below):
– Spawn: create a new thread and place it on the ready queue
– Block: the thread must wait for an event; save its registers, PC, and stack pointer
– Unblock: move the thread to the ready queue
– Finish: deallocate the register context and stacks when the thread completes
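A small illustrative sketch (not from the slides) of the basic thread states and the four state-change operations above, expressed as simple transitions on a thread descriptor in C.

```c
#include <stdio.h>

enum thread_state { READY, RUNNING, BLOCKED, FINISHED };

struct thread { int id; enum thread_state state; };

/* Spawn: a new thread starts on the ready queue. */
static struct thread spawn(int id)    { return (struct thread){ id, READY }; }

/* Block: the running thread must wait for an event (its registers, PC and
 * stack pointer would be saved at this point). */
static void block(struct thread *t)   { t->state = BLOCKED; }

/* Unblock: the awaited event occurred; move the thread back to ready. */
static void unblock(struct thread *t) { t->state = READY; }

/* Finish: register context and stacks are deallocated when it completes. */
static void finish(struct thread *t)  { t->state = FINISHED; }

int main(void) {
    struct thread t = spawn(1);
    t.state = RUNNING;          /* dispatched by the scheduler */
    block(&t);                  /* e.g. waiting for I/O        */
    unblock(&t);                /* the I/O completed           */
    t.state = RUNNING;
    finish(&t);
    printf("thread %d final state: %d\n", t.id, t.state);
    return 0;
}
```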
Application Benefits of Threads
• Consider an application that consists of several independent parts that do not need to run in sequence
• Each part can be implemented as a thread
• Whenever one thread is blocked waiting for I/O, execution can switch to another thread of the same application (instead of switching to another process)
• It is therefore necessary to synchronize the activities of the various threads so that they do not obtain inconsistent views of the data (detailed in Chapter 5; a sketch follows)
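A hedged preview of the synchronization this slide refers to (the details belong to Chapter 5), using a POSIX mutex: the check-and-update of a shared balance is made atomic, so no thread observes or acts on a half-finished update. The bank-balance scenario is an illustrative example, not from the slides.

```c
#include <pthread.h>
#include <stdio.h>

static long balance = 1000;                       /* shared data */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *withdraw(void *arg) {
    long amount = *(long *)arg;
    pthread_mutex_lock(&lock);                    /* enter critical section     */
    if (balance >= amount)                        /* consistent view ...        */
        balance -= amount;                        /* ... and consistent update  */
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void) {
    pthread_t t[2];
    long amount = 700;
    for (int i = 0; i < 2; i++)
        pthread_create(&t[i], NULL, withdraw, &amount);
    for (int i = 0; i < 2; i++)
        pthread_join(t[i], NULL);
    printf("final balance = %ld\n", balance);     /* never goes negative */
    return 0;
}
```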
Remote Procedure Call Using Threads (figures)
Multithreading on a uniprocessor (figure: two processes, 1 and 2, with three threads A, B, and C)
ULT, KLT
 Two broad categories of threads:
– User-level threads (ULT)
– Kernel-level threads (KLT, also known as lightweight processes)
User-Level Threads
 ULT: all thread management is done by the application
 The kernel is not aware of the existence of threads
Threads Library
 Any application can be programmed to be multithreaded by using a threads library
 The library contains code for:
– creating and destroying threads
– passing messages and data between threads
– scheduling thread execution
– saving and restoring thread contexts (see the ucontext sketch below)
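A minimal sketch, assuming the POSIX ucontext API (obsolescent but still widely available), of how a user-level threads library can save and restore thread contexts entirely in user space, with no kernel involvement in the switch. The names ult_worker, main_ctx, and worker_ctx are illustrative, not part of any real library.

```c
#include <stdio.h>
#include <stdlib.h>
#include <ucontext.h>

#define STACK_SIZE (64 * 1024)

static ucontext_t main_ctx, worker_ctx;

static void ult_worker(void) {
    puts("worker: running (scheduled by the library, not the kernel)");
    /* "Yield" back to main by saving this context and restoring main's. */
    swapcontext(&worker_ctx, &main_ctx);
    puts("worker: resumed, now finishing");
    /* Returning falls through to uc_link (main_ctx). */
}

int main(void) {
    getcontext(&worker_ctx);                          /* initialise a context   */
    worker_ctx.uc_stack.ss_sp = malloc(STACK_SIZE);   /* private stack          */
    worker_ctx.uc_stack.ss_size = STACK_SIZE;
    worker_ctx.uc_link = &main_ctx;                   /* where to go on return  */
    makecontext(&worker_ctx, ult_worker, 0);

    puts("main: dispatching worker");
    swapcontext(&main_ctx, &worker_ctx);              /* save main, run worker  */
    puts("main: worker yielded, dispatching it again");
    swapcontext(&main_ctx, &worker_ctx);              /* resume the worker      */
    puts("main: worker finished");
    free(worker_ctx.uc_stack.ss_sp);
    return 0;
}
```

All of the saving and restoring above happens with ordinary user-mode function calls, which is exactly why ULT switching avoids the mode-switch cost mentioned on the next slide.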
ULTs – Advantages
 Thread switching does not require kernel-mode privileges (all thread-management data structures are in user space), which saves the mode-switch overhead and is faster
 The scheduling algorithm can be chosen by the application, which is more flexible
 Platform independence: ULTs can run under any OS
ULTs – Disadvantages
 Since many system calls are blocking, when one thread executes such a system call, not only is that thread blocked, but the whole process is blocked
 The threads cannot run on multiple processors (the kernel assigns only one processor to a process at a time)
Kernel-Level Threads
 Windows 2000, Linux, and OS/2 are examples of this approach
 All thread management is done by the kernel
 No thread library, but an API to the kernel thread facility
 The kernel maintains context information for the process and the threads
 Scheduling is done on a thread basis
 Switching between threads requires the kernel
KLTs – Advantages and Disadvantages
 Threads of the same process can run on multiple processors
 When one thread is blocked, the kernel can schedule another thread of the same process
 The kernel routines themselves can be multithreaded
PROBLEM
 Thread switching within the same process requires a mode switch to the kernel (i.e. the processor must switch from user mode to kernel mode)
Combined Approaches
• An example is Solaris
• Thread creation is done in user space
• The bulk of scheduling and synchronization of threads is done in user space
• The programmer may adjust the number of KLTs
• Combines the best of both approaches
Relationship Between Threads and Processes

Threads:Processes | Description | Example Systems
1:1 | Each thread of execution is a unique process with its own address space and resources. | Traditional UNIX implementations
M:1 | A process defines an address space and dynamic resource ownership; multiple threads may be created and executed within that process. | Windows NT, Solaris, OS/2, OS/390, MACH
Relationship Between Threads and Processes (continued)

Threads:Processes | Description | Example Systems
1:M | A thread may migrate from one process environment to another. This allows a thread to be easily moved among distinct systems. | Ra (Clouds), Emerald
M:N | Combines attributes of the M:1 and 1:M cases. | TRIX
Process and Thread
 VAX benchmarks (times in μs):

Operation   | ULT | KLT | Processes
Null Fork   | 34  | 948 | 11,300
Signal-Wait | 37  | 441 | 1,840
Categories of Computer Systems
 Single Instruction, Single Data (SISD)
– a single processor executes a single instruction stream to operate on data stored in a single memory
 Single Instruction, Multiple Data (SIMD)
– each instruction is executed on a different set of data by different processors
Categories of Computer Systems
 Multiple Instruction, Single Data (MISD)
– a sequence of data is transmitted to a set of processors, each of which executes a different instruction sequence; never implemented
 Multiple Instruction, Multiple Data (MIMD)
– a set of processors simultaneously execute different instruction sequences on different data sets
(Figure: in a master/slave arrangement, one master is responsible for scheduling processes or threads; in SMP, each processor does self-scheduling from the pool of processes.)
Symmetric Multiprocessing
 The kernel can execute on any processor
 Typically each processor does self-scheduling from the pool of available processes or threads
 The kernel can be constructed as multiple processes or threads, allowing portions of the kernel to execute in parallel
(Figure: SMP organization with a shared bus; processors may exchange signals directly or communicate via shared memory.)
Multiprocessor Operating System Design Considerations
 Managing kernel routines
– they can be in execution simultaneously on several CPUs
 Scheduling
– can be performed by any CPU
 Interprocess synchronization
– becomes even more critical when several CPUs may attempt to access the same data at the same time (a sketch follows this list)
 Memory management
– becomes more difficult when several CPUs may be sharing the same memory
 Reliability and fault tolerance
– if one CPU fails, the others should be able to keep the system functioning
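A minimal sketch of why synchronization becomes critical on an SMP, using C11 atomics and POSIX threads to stand in for two CPUs that touch the same data at the same time. The spinlock and counter are illustrative only; real kernels use more elaborate locking.

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;       /* simple spinlock            */
static long shared_count = 0;                     /* data both CPUs may access  */

static void *cpu_work(void *arg) {
    for (int i = 0; i < 100000; i++) {
        while (atomic_flag_test_and_set(&lock))   /* spin until the lock is free */
            ;                                     /* the other CPU holds it      */
        shared_count++;                           /* protected update            */
        atomic_flag_clear(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t[2];
    for (int i = 0; i < 2; i++)
        pthread_create(&t[i], NULL, cpu_work, NULL);
    for (int i = 0; i < 2; i++)
        pthread_join(t[i], NULL);
    printf("shared_count = %ld\n", shared_count); /* 200000 with the lock held   */
    return 0;
}
```

Without the lock, the two threads' increments would interleave and updates would be lost, which is exactly the hazard the slide warns about.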
Microkernels
 A small operating system core
 Contains only essential operating system functions
 Many services traditionally included in the operating system are now external subsystems:
– device drivers
– file systems
– virtual memory manager
– windowing system
– security services
Layered Kernel vs Microkernel (figure)
Client-server architecture in microkernels
 the user process is the client
 the various subsystems are servers
Benefits of a Microkernel Organization
 Uniform interfaces on requests made by a process
– all services are provided by means of message passing
 Extensibility
– allows the addition of new services
 Flexibility
– new features can be added
– existing features can be subtracted
 Portability
– changes needed to port the system to a new processor are confined to the microkernel, not to the other services
 Reliability
– modular design
– a small microkernel can be rigorously tested
Benefits of a Microkernel Organization
 Distributed system support
– messages are sent without knowing what the target machine is
 Object-oriented operating system
– components are objects with clearly defined interfaces that can be interconnected to form software
Microkernel Performance
 A microkernel architecture may perform worse than a traditional layered architecture
– communication between the various subsystems causes overhead
– message-passing mechanisms are slower than a simple system call
Microkernel Design
 Low-level memory management
 Inter-process communication
 I/O and interrupt management
Low-level memory management
 The microkernel maps each virtual page to a physical page frame
 Paging and virtual memory management are done outside the microkernel
Low-level memory management
(Figure 4.1: Page fault processing. Labels: Application, Pager, Microkernel, page fault, resume, address space, function call.)
Inter-process communication
 Managing messages and access among processes
 A message consists of a header and a body (see the sketch below)
 IPC is based on ports associated with processes
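A hedged sketch of the message layout just described: a header identifying the sending and receiving ports plus a body carrying the data. All field names, the port numbers, and send_to_port()/request_file_read() are illustrative assumptions, not a real microkernel API.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

struct msg_header {
    uint32_t src_port;        /* port of the sending process   */
    uint32_t dst_port;        /* port of the receiving process */
    uint32_t type;            /* kind of request or reply      */
    uint32_t body_len;        /* number of valid bytes in body */
};

struct message {
    struct msg_header header;
    uint8_t body[256];        /* data, or a reference to data  */
};

/* Hypothetical send: in a real microkernel this would be a kernel trap. */
static int send_to_port(const struct message *m) { (void)m; return 0; }

/* Hypothetical client request to a file-system server listening on fs_port. */
int request_file_read(uint32_t my_port, uint32_t fs_port, const char *path) {
    struct message m = { .header = { my_port, fs_port, /*type=*/1, 0 } };
    size_t len = strlen(path);
    if (len > sizeof m.body) len = sizeof m.body;
    m.header.body_len = (uint32_t)len;
    memcpy(m.body, path, len);
    return send_to_port(&m);  /* the server receives this as a message */
}

int main(void) { return request_file_read(7, 3, "/etc/motd"); }  /* arbitrary example ports */
```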
I/O and interrupt management
 The microkernel recognizes an interrupt as a message and requests a user-level process to handle it
Windows 2000 Process Object (figure)
Windows 2000 Thread Object (figure)
Windows 2000 Thread States
 Ready
 Standby
 Running
 Waiting
 Transition
 Terminated
Solaris
• A process includes the user's address space, stack, and process control block
• User-level threads (implemented by a threads library)
– invisible to the OS
– the interface for application parallelism
• Kernel threads
– the unit that can be dispatched on a processor
• Lightweight processes (LWPs)
– each LWP supports one or more ULTs and maps to exactly one KLT
– LWPs are visible to the application
– therefore LWPs are the way the user sees the KLTs
– they constitute a sort of "virtual CPU"
Notes on the Solaris example figure:
 Process 2 is equivalent to a pure ULT approach (= traditional UNIX)
 Process 4 is equivalent to a pure KLT approach (= Windows NT, OS/2)
 A different degree of parallelism can be specified (processes 3 and 5)
Solaris: Versatility
 ULTs can be used when logical parallelism does not need to be supported by hardware parallelism (this saves mode switching)
– example: multiple windows, but only one is active at any one time
 If ULTs can block, two or more LWPs can be added to avoid blocking the whole application
 Note the versatility of Solaris: it can operate like Windows NT or like conventional UNIX
Solaris: User-Level Thread Execution (threads library)
 Transitions among states are under the control of the application:
– they are caused by calls to the threads library
 Only when a ULT is in the Active state is it attached to an LWP (so that it runs when the kernel-level thread runs)
 A thread may move to the Sleeping state by invoking a synchronization primitive (Chapter 5) and later move to the Runnable state when the event it is waiting for occurs
 A thread may force another thread to go to the Stop state
Solaris: User-Level Thread Execution (threads library)
 Synchronization
– invoke a synchronization primitive and go into the Sleeping state
 Suspension
– go into the Stopped state
 Preemption
– go into the Runnable state
 Yielding
– yield the processor to another runnable thread
Solaris: User-Level Thread States (attached to an LWP) (figure)
Decomposition of the User-Level Active State
 When a ULT is Active, it is associated with an LWP and thus with a KLT
 (Synchronization, Suspension, Preemption, Yielding)
 Transitions among the LWP states are under the exclusive control of the kernel
 An LWP can be in the following states:
– Running: assigned to a CPU and executing
– Blocked: the KLT issued a blocking system call (but the ULT remains bound to that LWP and remains Active)
– Runnable: waiting to be dispatched to a CPU
Solaris: Lightweight Process States (figure)
 LWP states are independent of ULT states (except for bound ULTs)
Solaris: Interrupts as Threads
 Solaris employs a set of kernel threads to handle interrupts
 The kernel controls access to data structures and synchronizes among interrupt threads using mutual exclusion primitives
 Interrupt handlers are assigned higher priorities
Linux Process
 State
 Scheduling information
 Identifiers
 Interprocess communication
 Links
 Times and timers
 File system
 Virtual memory
 Processor-specific context
Linux States of a Process
 Running
 Interruptible
 Uninterruptible
 Stopped
 Zombie (terminated, but its task structure is still in the process table)
Tutorial 2
Textbook, page 168: Questions 2, 3, 4, 5, 6, 11, 13, 15