CET 386
Summer 2002
Lecture Notes
Chapter 5 - Threads
What is a thread? It is the basic unit of CPU utilization. It comprises a program counter (PC), a register set, and a stack. It shares everything else with the other threads associated with its process (see pages 87-91, Section 4.1). Thread context switching takes much less effort than process context switching.
Benefits: responsiveness, resource sharing, economy of resources, ability to effectively
utilize multiprocessor architectures.
User threads vs kernel threads
With user threads, one blocking system call blocks all the threads in the process; not so with kernel threads
Kernel threads have greater management overhead
Thread models:
Many-2-one - many user-level threads running in/on one kernel thread (efficient at the user level, but all the user threads associated with the one kernel thread still block when one of them blocks in the kernel)
1-to-1 – blocking issues go away, extra overhead of kernel threads
Many-2-many – blocking issues gone, user-level management efficiency present; greatest effectiveness depends on the algorithm that dispatches user threads onto kernel threads
Solaris threads – permits all of the above, dispatching is partitionable, very flexible, and based on a mapping through an intermediary called a lightweight process (LWP)
Linux and Windows 2000 threads.
Java threads – run method (the code that executes); start method (needed to get the thread running); management methods: sleep, plus the deprecated suspend, resume, and stop; more later on synchronization (a small sketch follows)
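A minimal sketch of the run/start pattern, using only java.lang.Thread (the class and method bodies here are illustrative, not from the text):

class Worker extends Thread {
    public void run() {                  // the code that executes in the new thread
        for (int i = 0; i < 3; i++) {
            System.out.println("working " + i);
            try {
                Thread.sleep(100);       // management method: pause this thread
            } catch (InterruptedException e) {
                return;
            }
        }
    }
}

public class ThreadDemo {
    public static void main(String[] args) {
        Worker w = new Worker();
        w.start();   // start(), not run(), is what gets the thread running
    }
}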
Chapter 6 – CPU Scheduling
The management system that controls the switching of the CPU between/among processes in such a way as to make the computer system effective/productive. Also called process scheduling.
Objective of multiprogramming is to have some process running at all times (on a uniprocessor there can be only one at a time). If a process runs until it is done, the CPU sits idle whenever the process blocks for I/O. If we instead switch to another process when one blocks for I/O, CPU utilization is much improved.
Programs use the CPU for a while, then do I/O for a while (CPU bursts alternating with I/O bursts). On average CPU bursts are short (see graph page 6.2 => 8 msec).
Short-term scheduler (CPU scheduler) selects next job to run.
When are scheduling decisions made?
1 - When a process temporarily releases the CPU (I/O, yield, wait, sleep).
2 - When an interrupt occurs.
3 - When a process becomes ready (& wasn’t).
4 - When a process terminates.
If the decisions are made at 2 or 3 then the system uses preemption.
Preemption can interfere with access to shared data if the preemption occurs at an inopportune time => coordination; see Chapter 7. The shared data may be in the kernel, which leads to the need to accommodate interrupts and nonpreemption within the kernel.
The dispatcher is the module that actually transfers control of the CPU to the next
process. Time to do this (context switch, etc) is called the dispatch latency.
CPU Scheduling Criteria
How good is the algorithm? Possible metrics are:
CPU utilization – how busy is the CPU
Throughput – (measure of work) number of processes completed per time unit; its usefulness depends heavily on the job mix
Turnaround time – how long it takes to execute a process; submission time to
completion
Waiting time – time a process spends in the ready queue, neither doing I/O nor computing
Response time – time from job submission until first response
Would like to maximize CPU utilization and throughput and minimize turnaround,
waiting, and response time.
Scheduling Algorithms
Understand Gantt charts
FCFS – first come first served
Non preemptive
Average waiting time is often quite long (see the worked example below).
Convoy effect - other processes waiting for 1 (or more) BIG process to complete;
results in reduced CPU and device utilization
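Worked example (hypothetical burst times): suppose P1, P2, and P3 arrive at time 0 with CPU bursts of 24, 3, and 3 msec and are served in that order. The Gantt chart is:

| P1                     | P2  | P3  |
0                        24    27    30

Waiting times are P1 = 0, P2 = 24, P3 = 27 msec, for an average of (0 + 24 + 27) / 3 = 17 msec. Serving the short jobs first (order P2, P3, P1) drops the average to (6 + 0 + 3) / 3 = 3 msec: the convoy effect in miniature.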
SJF – shortest job first
Uses advance knowledge, or predictions based on past behavior, to select short jobs
Provably optimal with respect to average waiting time. Frequently used in long-term scheduling.
Preemptive SJF even better than non preemptive.
Priority scheduling
Each process has a priority. The process with the best priority goes first. When
processes have equal priorities then FCFS is used.
May be preemptive.
Problem: starvation (indefinite waiting). Solution: aging (improving priorities).
Round-Robin – sort of FCFS with preemption done on a short time quantum.
Performance depends on the size of the time quantum: with too large a quantum, performance is like FCFS; with a very small one, each process seems to have its own small, slow processor
Time-slice should be large with respect to context switch time and large enough so
that most CPU bursts can fit within it
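Worked example, reusing the hypothetical bursts above (24, 3, 3 msec) with a quantum of 4 msec:

| P1  | P2  | P3  | P1                  |
0     4     7     10                    30

Waiting times are P1 = 6, P2 = 4, P3 = 7 msec, for an average of 17 / 3 ≈ 5.7 msec: much better than FCFS's 17 msec, at the cost of extra context switches.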
Multilevel queues
Processes are classified into several groups; each group's queue may use a different scheduling algorithm, and a top-level algorithm selects among the queues. Processes are permanently assigned to their queues.
Multilevel-feedback queues
Much the same as multilevel queues. However, processes may move between queues.
A policy must exist for such movement as well as scheduling within and between
queues.
Multiprocessor Scheduling
Symmetric and asymmetric multiprocessing
Real-time Scheduling
Hard realtime systems vs soft realtime systems
Java Thread Scheduling
JVM schedules threads when a thread leaves the runnable state (terminates, stops,
suspends or does blocking I/O) and preemptively when a higher priority thread
enters the runnable state.
Threads may also voluntarily turn over the CPU by yielding, as in the sketch below
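A minimal sketch (priority and yield are only hints to the JVM/OS scheduler, not guarantees; the class name is illustrative):

public class YieldDemo {
    public static void main(String[] args) {
        Thread t = new Thread(() -> {
            for (int i = 0; i < 5; i++) {
                System.out.println("step " + i);
                Thread.yield();              // voluntarily turn over the CPU
            }
        });
        t.setPriority(Thread.MAX_PRIORITY);  // higher-priority threads may preempt lower ones
        t.start();
    }
}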
Algorithm Evaluation Methodologies/Technologies
Chapter 7 – Process Synchronization
Necessary for cooperating processes that share access to objects/data. While processes or threads that are interruptible only at specific points may be coded to avoid conflicting access to data (race conditions), it ain't easy and this situation doesn't arise often. The areas of code where race conditions may occur are called “critical sections”. To prevent race conditions, critical sections of code must be executed mutually exclusively in time.
Critical section solutions must meet the following criteria:
1. Mutual exclusion – only 1 thread/process can be in a related critical section at the
same time
2. Progress – no process can hold the critical section indefinitely, and once it is released some waiting process must be permitted to enter
3. Bounded waiting – no process may be excluded from the critical section indefinitely (no starvation)
Section 7.2.1 – Two Task Solutions
Algorithm 1 – ensures mutual exclusion but not progress
Algorithm 2 – adds process state information; but still fails progress requirement
Algorithm 3 – works as long as the basic h/w load, store, and test instructions are atomic
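A sketch of an Algorithm-3-style two-task solution (this is Peterson's algorithm); the volatile keyword stands in for the assumption that loads and stores are atomic:

class Peterson {
    private volatile boolean flag0 = false, flag1 = false; // "I want in"
    private volatile int turn = 0;                         // whose turn to defer to

    void enter(int id) {            // id is 0 or 1
        if (id == 0) {
            flag0 = true; turn = 1;
            while (flag1 && turn == 1) { /* busy wait */ }
        } else {
            flag1 = true; turn = 0;
            while (flag0 && turn == 0) { /* busy wait */ }
        }
    }

    void exit(int id) {
        if (id == 0) flag0 = false; else flag1 = false;
    }
}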
Section 7.3 – Synchronization H/w Requirements
In uniprocessor systems the critical section access problem is simplified if better h/w
support is available. In particular if a test-and-set instruction or swap instruction that
cannot be interrupted (is atomic) is available then these instructions can be used to
implement locks.
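A sketch of such a lock in Java, where AtomicBoolean.getAndSet supplies the atomic test-and-set (the SpinLock name is illustrative):

import java.util.concurrent.atomic.AtomicBoolean;

class SpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    void lock() {
        // getAndSet atomically stores true and returns the previous value:
        // exactly the uninterruptible test-and-set described above
        while (locked.getAndSet(true)) { /* busy wait */ }
    }

    void unlock() {
        locked.set(false);
    }
}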
Section 7.4 – Semaphores
A generalization of locks and Algorithm 3 solutions for use in a wide selection of
scenarios.
P => test (wait) and decrement; V => increment; atomic/indivisible
Regular usage => P(sem); critical section; V(sem)
Binary (values 1 and 0); Counting (values n to 0)
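Modern Java provides java.util.concurrent.Semaphore, whose acquire() and release() play the roles of P and V; a small sketch of the regular usage pattern above:

import java.util.concurrent.Semaphore;

public class SemaphoreDemo {
    static final Semaphore sem = new Semaphore(1);   // binary semaphore, initial value 1

    public static void main(String[] args) {
        Runnable task = () -> {
            try {
                sem.acquire();                       // P(sem): test/wait and decrement
                try {
                    System.out.println(Thread.currentThread().getName() + " in critical section");
                } finally {
                    sem.release();                   // V(sem): increment
                }
            } catch (InterruptedException e) { /* give up */ }
        };
        new Thread(task).start();
        new Thread(task).start();
    }
}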
The first version of the semaphore implementation on page 184 uses busy-waiting. On page 187 the authors introduce “queuing semaphores”, which are more efficient on the CPU-utilization side. However, the queuing involves a longer period of atomicity (the P’s and V’s themselves take longer).
Cooperating processes that only use one critical section at a time will progress as long as
the semaphores are used as intended. However, if more than 1 critical section is
being used by a set of cooperating processes deadlock can occur. Deadlock is a
situation in which a process is waiting to gain access to a resource which is held by
another process which is trying to gain access to a resource held by the first.
Sometimes called circular waiting or deadly embrace because it may involve many
processes.
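A sketch of that circular wait using two semaphores (the names are illustrative; the sleep just widens the window so the deadlock shows up reliably):

import java.util.concurrent.Semaphore;

public class DeadlockDemo {
    static final Semaphore a = new Semaphore(1);
    static final Semaphore b = new Semaphore(1);

    public static void main(String[] args) {
        new Thread(() -> grabBoth(a, b), "T1").start(); // T1: acquire a, then b
        new Thread(() -> grabBoth(b, a), "T2").start(); // T2: acquire b, then a
    }

    static void grabBoth(Semaphore first, Semaphore second) {
        try {
            first.acquire();
            Thread.sleep(50);       // let the other thread grab its first semaphore
            second.acquire();       // each thread now waits on the other: deadlock
            System.out.println(Thread.currentThread().getName() + " got both");
            second.release();
            first.release();
        } catch (InterruptedException e) { /* ignore for the sketch */ }
    }
}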
FROM HERE ON WILL BE COVERED BY THE NEXT TEST
7.5 Classical Synchronization Problems
Bounded Buffer
Readers and Writers
Dining Philosophers
7.7 Monitors
It is easy to accidentally miscode semaphores (forget a P or a V, or both) and cause havoc. So “they” came up with a high-level-language construct called the monitor (and its ancillary condition variables) to simplify access to critical sections and reduce the possibility of error in semaphore use. Monitors provide controlled access to critical sections by declaring the critical sections appropriately. No coding of P or V is then necessary (although the equivalent locking and blocking is still required in the underlying technology support).
The condition variables are useful to threads/processes within a monitor if temporary
suspension is required or if the programmer wants to code custom synchronization
blocks. A thread can “wait” on a condition variable until another thread does a signal
on the variable.
Java Synchronization from Chapter 7
Thread.yield – voluntarily give up the CPU
synchronized keyword – mutual exclusion and locking (monitor-like); used with wait, notify, and notifyAll (see the sketch below)
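A sketch of the pattern: synchronized provides the mutual exclusion, and wait/notifyAll play the role of Section 7.7's condition variables (the one-slot buffer is an illustrative example, not from the text):

class OneSlotBuffer {
    private Object item = null;

    public synchronized void put(Object x) throws InterruptedException {
        while (item != null)
            wait();                // temporarily suspend until the slot is free
        item = x;
        notifyAll();               // signal: wake threads waiting on this monitor
    }

    public synchronized Object take() throws InterruptedException {
        while (item == null)
            wait();                // suspend until something is available
        Object x = item;
        item = null;
        notifyAll();
        return x;
    }
}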