Multiprocessor Scheduling - Systems and Computer Engineering

Multiprocessor and Real-Time Scheduling
Chapter 10
Real-time scheduling will be covered in SYSC3303.
Classifications of Multiprocessors

- Loosely coupled or distributed multiprocessor, or cluster
  - each processor has its own memory and I/O channels
- Functionally specialized processors
  - such as an I/O processor
  - controlled by a master processor
- Tightly coupled multiprocessing
  - processors share main memory
  - controlled by the operating system

What is the main concern in general for multiprocessing?
Processor utilization vs. throughput
Synchronization Granularity and Processes - Summary
Independent Parallelism

- Separate applications or processes running
- No synchronization among processes
- Example: time sharing
  - average response time to users is reduced
Coarse and Very Coarse-Grained Parallelism

- Synchronization among processes at a very gross level
- Good for concurrent processes running on a multiprogrammed uniprocessor or on multiple processors
- Distributed processing across network nodes can form a single computing environment
- Good when there is infrequent interaction among processes
  - the overhead of the network would otherwise slow down communications
Medium-Grained Parallelism

- Parallel processing or multitasking within a single application
- A single application is a collection of threads (or processes)
- Threads usually interact frequently
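A minimal sketch of medium-grained parallelism (an illustrative example, not from the slides): one application made up of several threads that interact frequently through shared state in the same address space.

```python
# Several threads of one application cooperating on shared state.
# The shared counter and lock are hypothetical names for illustration.
import threading

counter = 0
lock = threading.Lock()

def worker(iterations):
    global counter
    for _ in range(iterations):
        with lock:           # frequent interaction: every increment
            counter += 1     # synchronizes on the shared lock

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000: four threads, same address space, heavy synchronization
```

On a multiprocessor these threads could run on separate processors, which is exactly why their frequent synchronization matters to the scheduler.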
Fine-Grained Parallelism

- Highly parallel applications
- Usually a much more complex use of parallelism than is found in the use of threads
- A specialized area
Scheduling Design Issues

Scheduling on a multiprocessor involves:

- Use of multiprogramming on individual processors
  - similar to uniprocessor scheduling
- Assignment of processes to processors
- Actual dispatching of a process
Assignment of Processes to Processors

Two approaches:

- Treat processors as a pooled resource and assign processes to processors on demand
  - a common queue: schedule to any available processor
  - local queues: dynamic load balancing, where processes or threads are moved from the queue for one processor to the queue for another
- Permanently assign a process to a processor
  - allows group or gang scheduling
  - dedicate a short-term queue to each processor
  - advantage and disadvantage?
    - less overhead in scheduling
    - a processor could be idle while another processor has a backlog
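The two approaches can be contrasted with a small sketch (a hypothetical workload; the queue shapes are the point, not real scheduling):

```python
# Pooled vs. permanently assigned queues for the same set of jobs.
from collections import deque

jobs = ["j%d" % i for i in range(6)]

# 1) Pooled resource: one common queue; any available processor
#    simply takes the next job, so work never strands behind a busy CPU.
common = deque(jobs)

# 2) Permanent assignment: a dedicated short-term queue per processor,
#    filled here by a static round-robin placement.
per_cpu = {0: deque(), 1: deque()}
for i, j in enumerate(jobs):
    per_cpu[i % 2].append(j)

# Suppose CPU 1 finishes all its work early: it now sits idle even
# though CPU 0 still has a backlog -- the disadvantage noted above.
per_cpu[1].clear()
print(len(common), len(per_cpu[0]), len(per_cpu[1]))
```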
Process Scheduling

- Usually processes are not dedicated to processors
- Queuing:
  - a single queue for all processors, or
  - multiple queues based on priorities
  - all queues feed the common pool of processors
- The specific scheduling discipline is less important with more than one processor
  - different scheduling methods can be used for different processors
Comparison of One and Two Processors – An Example
Thread Scheduling

- An application can consist of a set of threads that cooperate and execute concurrently in the same address space
- Threads running on separate processors could yield a dramatic gain in performance for some applications
Approaches to Thread Scheduling

Four approaches for multiprocessor thread scheduling and processor assignment are:

- Load Sharing: processes are not assigned to a particular processor
- Gang Scheduling: a set of related threads is scheduled to run on a set of processors at the same time, on a one-to-one basis
- Dedicated Processor Assignment: provides implicit scheduling defined by the assignment of threads to processors
- Dynamic Scheduling: the number of threads in a process can be altered during the course of execution
Load Sharing

- Load is distributed evenly across the processors
- Simplest approach and carries over most directly from a uniprocessor system
- Assures no processor is idle
- No centralized scheduler required
- Uses global queues
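Load sharing in miniature (an assumed demo, not OS code): a single global queue feeds every "processor" (here, a thread), so no processor is idle while work remains.

```python
# One global work queue shared by all processors.
import queue
import threading

global_q = queue.Queue()
for n in range(20):
    global_q.put(n)

results = []
results_lock = threading.Lock()

def processor():
    while True:
        try:
            n = global_q.get_nowait()  # mutual exclusion lives inside Queue
        except queue.Empty:
            return                     # no work left: this processor stops
        with results_lock:
            results.append(n * n)

workers = [threading.Thread(target=processor) for _ in range(4)]
for w in workers:
    w.start()
for w in workers:
    w.join()
print(len(results))  # 20: every task was claimed by some processor
```

Note that any idle processor immediately pulls the next task, which is exactly the "no processor is idle" property; the locking inside the shared queue is also the seed of the disadvantages on the next slide.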
Disadvantages of Load Sharing

- The central queue needs mutual exclusion
  - may be a bottleneck when more than one processor looks for work at the same time
- Preempted threads are unlikely to resume execution on the same processor
  - in a loosely coupled system, cache usage is less efficient
- If all threads are in the global queue, all threads of a program will not gain access to the processors at the same time
Gang Scheduling

- Simultaneous scheduling of the related threads that make up a single process
- Useful for applications where performance severely degrades when any part of the application is not running
  - threads often need to synchronize with each other
- Better for dedicated applications
  - lower scheduling overhead for those processes
- The number of processors may be smaller than the number of threads on some machines
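A toy time-slice model of gang scheduling (a sketch under assumed parameters): in each slice the scheduler picks one application and runs all of its threads at once, one thread per processor, so related threads never wait on each other.

```python
# Round-robin gang scheduler over time slices (illustrative model only).

def gang_schedule(gangs, num_procs, slices):
    """gangs: dict app -> thread count (each assumed <= num_procs)."""
    timeline = []
    order = list(gangs)
    for t in range(slices):
        app = order[t % len(order)]       # round-robin among the gangs
        busy = gangs[app]                 # all of the gang's threads run
        timeline.append((app, busy, num_procs - busy))  # (app, busy, idle)
    return timeline

timeline = gang_schedule({"A": 4, "B": 1}, num_procs=4, slices=4)
# Slices alternate A, B, A, B; when B (1 thread) runs, 3 processors
# sit idle -- wasted capacity when a gang is smaller than the machine.
```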
Dedicated Processor Assignment

- When an application is scheduled, each of its threads is assigned to a processor
- Some processors may be idle
- Avoids process switching
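A minimal admission sketch for dedicated processor assignment (an assumption-laden model, not an OS implementation): each thread of an admitted application gets its own processor until the application finishes, and an application that cannot get enough processors waits.

```python
# Dedicated assignment of processors to application threads.

def dedicated_assign(num_procs, apps):
    """apps: list of (name, thread_count). Returns placement, idle, waiting."""
    free = list(range(num_procs))
    placement, waiting = {}, []
    for name, threads in apps:
        if threads <= len(free):
            # one processor per thread, held for the app's lifetime
            placement[name] = [free.pop(0) for _ in range(threads)]
        else:
            waiting.append(name)   # not enough processors: whole app waits
    return placement, free, waiting

placement, idle, waiting = dedicated_assign(8, [("A", 3), ("B", 4), ("C", 2)])
# A gets 3 processors and B gets 4; C must wait; one processor stays
# idle even though C has runnable threads -- the trade-off on the slide.
```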
Dynamic Scheduling

- The number of threads in a process is altered dynamically by the application
- The operating system adjusts the load to improve utilization:
  - assign idle processors to new requests
  - a new arrival may be assigned a processor taken from a job currently using more than one processor
  - otherwise, hold the request until a processor becomes available
  - new arrivals will be given a processor before existing running applications
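A toy allocator for dynamic scheduling (a sketch of the rules listed above under assumed tie-breaking, not any real OS policy): give a new arrival an idle processor if one exists; otherwise take one processor from the job currently holding the most, provided it holds more than one; otherwise hold the request.

```python
# Dynamic processor allocation on job arrival.

def admit(job, allocation, idle):
    """allocation: dict job -> processor count; idle: free processor count."""
    if idle > 0:
        allocation[job] = 1          # an idle processor is assigned first
        return idle - 1, True
    donor = max(allocation, key=allocation.get, default=None)
    if donor is not None and allocation[donor] > 1:
        allocation[donor] -= 1       # take one processor from a multi-
        allocation[job] = 1          # processor job for the new arrival
        return idle, True
    return idle, False               # hold request until a processor frees up

alloc = {"A": 3, "B": 1}
idle, ok = admit("C", alloc, 0)
# C is admitted by taking one of A's three processors: A -> 2, C -> 1.
```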