Compare Parallel Processing with Sequential Processing

CSE8380 Parallel and Distributed
Processing
Presentation
Hong Yue
Department of Computer Science & Engineering
Southern Methodist University
April 26, 2005
Compare Parallel Processing with Sequential Processing
Why did I select this topic?
Outline
• Definition
• Characteristics of Parallel Processing and Sequential Processing
• Implementation of Parallel Processing and Sequential Processing
• Performance of Parallel Processing and Sequential Processing
• Parallel Processing Evaluation
• Major Applications of Parallel Processing
Definition
• Parallel Processing Definition
Parallel processing refers to the simultaneous use of multiple processors to execute the same task in order to obtain faster results. These processors either communicate with each other to solve a problem, or work completely independently under the control of another processor, which divides the problem into parts, distributes the parts to the other processors, and collects the results from them.
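As a concrete illustration (not part of the original slides), the sketch below uses Python's multiprocessing module: a controlling process divides a summation problem into parts, hands each part to a worker process, and collects the partial results. The data size and worker count are arbitrary choices.

```python
from multiprocessing import Pool

def partial_sum(chunk):
    """Component task: each worker sums the squares of its own slice."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    n_workers = 4

    # The controlling process divides the problem into parts ...
    chunk_size = len(data) // n_workers
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

    # ... hands each part to a separate worker, and collects the results.
    with Pool(processes=n_workers) as pool:
        partials = pool.map(partial_sum, chunks)

    print(sum(partials))  # same answer as sum(x * x for x in data)
```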
Definition .2
• Sequential Processing Definition
Sequential processing refers to a computer architecture in which a single processor carries out a single task as a series of operations performed in sequence. It is also called serial processing.
Characteristics of Parallel Processing and Sequential Processing
• Characteristics of Parallel Processing
● Each processor can perform tasks concurrently.
● Tasks may need to be synchronized.
● Processors usually share resources, such as data, disks, and other devices.
Characteristics of Parallel Processing and Sequential Processing .2
• Characteristics of Sequential Processing
● Only a single processor performs the work.
● The processor performs one task at a time.
● The task is executed as a sequence of operations.
Implementation of parallel processing and sequential processing
• Executing a single task
In sequential processing, the task is executed as a single large task.
In parallel processing, the task is divided into multiple smaller tasks, and each component task is executed on a separate processor.
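A minimal sketch of this difference, assuming Python's concurrent.futures on a multi-core machine (the component_task function and its workload are illustrative):

```python
import time
from concurrent.futures import ProcessPoolExecutor

def component_task(n):
    """One component of the large task (a CPU-bound loop)."""
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    parts = [2_000_000] * 8   # the large task, divided into 8 component tasks

    # Sequential: one processor executes the components one after another.
    t0 = time.perf_counter()
    seq = [component_task(n) for n in parts]
    t_seq = time.perf_counter() - t0

    # Parallel: each component task runs on a separate worker process.
    t0 = time.perf_counter()
    with ProcessPoolExecutor() as ex:
        par = list(ex.map(component_task, parts))
    t_par = time.perf_counter() - t0

    assert seq == par
    print(f"sequential: {t_seq:.2f}s  parallel: {t_par:.2f}s")
```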
Implementation of parallel processing and sequential processing .2
Figure 1: Sequential processing of a large task (one processor executes the whole task over the total elapsed time).
Implementation of parallel processing and sequential processing .3
Figure 2: Parallel processing: executing component tasks in parallel (processors 1-7 each run a component task within the total elapsed time).
Implementation of parallel processing and sequential processing .4
• Executing multiple independent tasks
● In sequential processing, independent tasks compete for a single resource. Only task 1 runs without having to wait; task 2 must wait until task 1 has completed, task 3 must wait until tasks 1 and 2 have completed, and so on.
Implementation of parallel processing and sequential processing .5
• Executing multiple independent tasks
● By contrast, in parallel processing (for example, a parallel server on a symmetric multiprocessor), more CPU power is assigned to the tasks, and each independent task executes immediately on its own processor: no wait time is involved.
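The contrast can be sketched with a worker pool whose size is varied, assuming Python's concurrent.futures: with one worker the tasks queue up as in Figure 3 below, and with one worker per task none of them waits, as in Figure 4. The sleep-based tasks are illustrative stand-ins:

```python
import time
from concurrent.futures import ProcessPoolExecutor

def independent_task(seconds):
    """A stand-in for an independent task with the given runtime."""
    time.sleep(seconds)
    return seconds

def run(max_workers, runtimes):
    t0 = time.perf_counter()
    with ProcessPoolExecutor(max_workers=max_workers) as ex:
        list(ex.map(independent_task, runtimes))
    return time.perf_counter() - t0

if __name__ == "__main__":
    runtimes = [1, 1, 1, 1]
    # One processor: tasks queue, so later tasks wait (about 4s total).
    print(f"sequential: {run(1, runtimes):.1f}s")
    # One processor per task: no wait time (about 1s total).
    print(f"parallel:   {run(len(runtimes), runtimes):.1f}s")
```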
Implementation of parallel processing and sequential processing .6
Figure 3: Sequential processing of multiple independent tasks (tasks share the hardware; each task after the first spends part of the total elapsed time waiting before its runtime).
Implementation of parallel processing and sequential processing .7
Figure 4: Parallel processing: executing independent tasks in parallel (processors 1-7 each run a task immediately, with no wait time).
Performance of parallel processing and sequential processing
• Sequential Processing Performance
● Takes a long time to execute large tasks.
● Cannot handle very large tasks.
● Cannot handle heavy loads well.
● Returns diminish as a single processor is pushed faster.
● Making a single processor faster becomes increasingly expensive.
Performance of parallel processing and sequential processing .2
• Solution: use parallel processing, i.e., lots of relatively fast, cheap processors working in parallel.
Performance of parallel processing and sequential processing .3
• Parallel Processing Performance
● Cheaper, in terms of price and performance.
● Faster than equivalently expensive uniprocessor machines.
● Scalable: the performance of a particular program can be improved by running it on a larger machine.
Performance of parallel processing and sequential processing .4
• Parallel Processing Performance
● Reliable: in theory, if a processor fails, we can simply use another.
● Can handle bigger problems.
● Processors can communicate with each other readily, which is important in many calculations.
Parallel Processing Evaluation
• Several ways to evaluate parallel processing performance:
● Scale-up
● Speedup
● Efficiency
● Overall solution time
● Price/performance
Parallel Processing Evaluation .2
• Scale-up
Scale-up (enhanced throughput) refers to the ability of a system n times larger to perform an n times larger job in the same time period as the original system. With added hardware, a scale-up formula holds the time constant and measures the increase in the size of the job that can be done.
Parallel Processing Evaluation .3
Figure 5: Scale-up (a sequential system processes a 100% task in a given time; a parallel system with twice the hardware processes a 200% task in the same time).
Parallel Processing Evaluation .4
• Scale-up measurement formula:

Scale-up = (transaction volume of the multiprocessor system) / (transaction volume of the uniprocessor system)
Parallel Processing Evaluation .5
• For example, if the uniprocessor system can process 100 transactions in a given amount of time, and the parallel system can process 200 transactions in that same amount of time, then scale-up = 200/100 = 2.
• A value of 2 indicates ideal linear scale-up: twice as much hardware processes twice the data volume in the same amount of time.
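A trivial helper expressing this calculation (illustrative, not from the slides):

```python
def scale_up(multi_volume, uni_volume):
    """Scale-up = transaction volume of the multiprocessor system
    divided by that of the uniprocessor, measured over equal time."""
    return multi_volume / uni_volume

print(scale_up(200, 100))  # 2.0 -> ideal linear scale-up on twice the hardware
```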
Parallel Processing Evaluation .6
• Speedup
Speedup (improved response time) is defined as the time it takes a program to execute sequentially (with one processor) divided by the time it takes to execute in parallel (with many processors). It can be achieved in two ways: breaking up a large task into many small fragments, and reducing wait time.
Parallel Processing Evaluation .7
Figure 6: Speedup (a sequential system processes a 100% task in a given time; a parallel system splits it into two 50% tasks on separate hardware and finishes in half the time).
Parallel Processing Evaluation .8
• Speedup measurement formula:

Speedup = (elapsed time on a uniprocessor) / (elapsed time on the multiprocessor)
Parallel Processing Evaluation .9
• For example, if the uniprocessor took 40 seconds to perform a task, and the parallel system with two processors took 20 seconds, then speedup = 40/20 = 2.
• A value of 2 indicates ideal linear speedup: twice as much hardware performs the same task in half the time.
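The corresponding helper for speedup (again illustrative):

```python
def speedup(t_uniprocessor, t_multiprocessor):
    """Speedup = elapsed time on one processor / elapsed time on many."""
    return t_uniprocessor / t_multiprocessor

print(speedup(40, 20))  # 2.0 -> ideal linear speedup on two processors
```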
Parallel Processing Evaluation .10
Workload         Scale-up   Speedup
OLTP             Yes        No
DSS              Yes        Yes
Batch (Mixed)    Yes        Possible
Parallel Query   Yes        Yes

Table 1: Scale-up and speedup for different types of workload
Parallel Processing Evaluation .11
Figure 7: Linear and actual speedup
Parallel Processing Evaluation .12
• Amdahl's Law
Amdahl's Law governs the speedup obtained by using parallel processors on a problem, versus using only one sequential processor. It gives a maximum bound on speedup based on the nature of the algorithm:
Parallel Processing Evaluation .13
• Amdahl's Law:

Maximum speedup = (S + P) / (S + P/n) = 1 / (S + P/n)

S: purely sequential part
P: parallel part
n: number of processors
S + P = 1 (for simplicity)
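A small sketch of the bound, assuming a sequential fraction S of 5% (an arbitrary example value):

```python
def amdahl_speedup(s, n):
    """Maximum speedup under Amdahl's Law for sequential fraction s
    (so p = 1 - s) on n processors: 1 / (s + (1 - s) / n)."""
    return 1.0 / (s + (1.0 - s) / n)

for n in (2, 8, 64, 1024):
    print(n, round(amdahl_speedup(0.05, n), 2))
# As n grows, speedup approaches 1/s = 20: the 5% sequential part caps the gain.
```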
Parallel Processing Evaluation .14
Figure 8: Example speedup: Amdahl and Gustafson
Parallel Processing Evaluation .15
• Gustafson's Law
If the size of a problem is scaled up as the number of processors increases, speedup very close to the ideal speedup is possible. That is, problem size is virtually never independent of the number of processors.
Parallel Processing Evaluation .16
• Gustafson's Law:

Maximum speedup = (S + P * n) / (S + P) = n + (1 - n) * S

(with S + P = 1, as before)
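The same kind of sketch for Gustafson's scaled speedup, with the same illustrative 5% sequential fraction:

```python
def gustafson_speedup(s, n):
    """Scaled speedup under Gustafson's Law: n + (1 - n) * s,
    where s is the sequential fraction of the scaled workload."""
    return n + (1 - n) * s

for n in (2, 8, 64, 1024):
    print(n, round(gustafson_speedup(0.05, n), 2))
# Speedup grows nearly linearly in n because the problem grows with n.
```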
Parallel Processing Evaluation .17
• Efficiency
Relative efficiency can be a useful measure of what percentage of a processor's time is being spent in useful computation.

Efficiency = (Speedup * 100) / (number of processors)
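A sketch combining this formula with the Amdahl speedups computed earlier (the specific numbers come from that illustrative s = 0.05 example):

```python
def efficiency(speedup, n_processors):
    """Percentage of each processor's time spent in useful computation."""
    return speedup * 100.0 / n_processors

print(efficiency(5.93, 8))    # ~74% on 8 processors
print(efficiency(15.42, 64))  # ~24% on 64 processors
```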
Parallel Processing Evaluation .18
Figure 9: Optimum efficiency and actual efficiency
Parallel Processing Evaluation .19
Figure 10: Optimum number of processors for actual speedup
Parallel Processing Evaluation .20
• Problems in Parallel Processing
"Parallel processing is like a dog's walking on its hind legs. It is not done well, but you are surprised to find it done at all."
-- Steve Fiddes (University of Bristol)
Parallel Processing Evaluation .21
• Problems in Parallel Processing
● Parallel software is heavily platform-dependent and has to be written for a specific machine.
● It also requires a different, more difficult style of programming, since the software must divide the work appropriately, through algorithms, across the processors.
Parallel Processing Evaluation .22
• Problems in Parallel Processing
● There isn't a wide array of shrink-wrapped software ready for use with parallel machines.
● Parallelization is problem-dependent and cannot be automated.
● Speedup is not guaranteed.
Parallel Processing Evaluation .23
• Solution 1:
● Decide which architecture is most appropriate for a given application.
The characteristics of the application should drive the decision about how it should be parallelized; the form of the parallelization should then determine what kind of underlying system, both hardware and software, is best suited to running the parallelized application.
Parallel Processing Evaluation .24
• Solution 2:
● Clustering
Major Applications of parallel processing
• Clustering
● Clustering is a form of parallel processing that takes a group of workstations connected together in a local-area network and applies middleware to make them act like a parallel machine.
Major Applications of parallel processing .2
• Clustering
● Parallel processing using Linux clusters can yield supercomputer performance for some programs that perform complex computations or operate on large data sets, and it can accomplish this with cheap hardware.
● Because clustering can use workstations at night when networks are idle, it is an inexpensive alternative to parallel-processing machines.
Major Applications of parallel processing .3
• Clustering can work with two separate but similar implementations:
● A Parallel Virtual Machine (PVM) is an environment that allows messages to pass between computers as they would in an actual parallel machine.
● A Message-Passing Interface (MPI) allows programmers to create message-passing parallel applications, using parallel input/output functions and dynamic process management.
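For example, a minimal message-passing sketch using the mpi4py Python bindings for MPI (this assumes an installed MPI implementation plus the mpi4py package; the slides themselves name no specific library, and the filename is illustrative):

```python
# Run with, e.g.:  mpiexec -n 2 python ping.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()  # this process's id within the cluster job

if rank == 0:
    # Process 0 sends a message to process 1, as on a real parallel machine.
    comm.send("hello from rank 0", dest=1, tag=0)
elif rank == 1:
    msg = comm.recv(source=0, tag=0)
    print(f"rank 1 received: {msg}")
```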
The end
Thank you!