CS 470 Operating Systems
Lecture 6 (January 26, 2015)

• Source: CSinParallel: Parallel Computing in the Computer Science Curriculum (http://csinparallel.org/)
• Parallel Computing Concepts Module
• Wednesday, we will look at threads and the Pthreads library (Chapter 4 of the textbook)
• Friday, we will look at the Patternlets in Parallel Computing Module (uses OpenMP)

Parallel Computing Concepts

• Motivation
• Terminology
• Parallel speedup
• Options for communication
• Issues in concurrency

Motivation

• Moore's "Law": an empirical observation by Intel co-founder Gordon Moore in 1965
• The number of components in computer circuits had doubled each year since 1958
• Four decades later, that number has continued to double every two years or less

Motivation

• But the speedups in software performance were mostly due to increasing clock speeds
• Era of the "free lunch": just wait 18-24 months and software goes faster!
• Increased clock speed => increased heat

[Figure: chip temperature over time, actual vs. projected, with reference levels for a hot plate and the sun; the projection extends to 2020. Source: J. Adams, 2014 CCSC:MW Conference Keynote]

Motivation

• Around 2005, manufacturers stopped increasing clock speed
• Instead, created multi-core CPUs; the number of cores per CPU chip is growing exponentially
• Most software has been designed for a single core; end of the "free lunch" era
• Future software development will require understanding of how to take advantage of multi-core systems

Terminology

• parallelism: multiple (computer) actions physically taking place at the same time
• concurrency: programming in order to take advantage of parallelism (or virtual parallelism)
• Parallelism takes place in hardware; concurrency takes place in software
• sequential programming: programming for a single core
• concurrent programming: programming for multiple cores or multiple computers

Terminology

• process: the execution of a program
• thread: a sequence of execution within a program; each process has at least one (see the sketch below)

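To make these two terms concrete, here is a minimal sketch using the Pthreads library (Wednesday's topic): running the program creates a process, and its main thread starts a second thread of execution. Compile with gcc -pthread.

    #include <pthread.h>
    #include <stdio.h>

    /* Thread body: the second thread of the process runs this function. */
    static void *say_hello(void *arg) {
        (void)arg;                       /* unused */
        printf("hello from a second thread\n");
        return NULL;
    }

    int main(void) {
        pthread_t tid;

        /* main() is already running in the process's first (main) thread;
         * pthread_create starts a second sequence of execution. */
        pthread_create(&tid, NULL, say_hello, NULL);
        pthread_join(tid, NULL);         /* wait for the second thread */
        printf("main thread done\n");
        return 0;
    }
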
Terminology

• multi-core computing: computing with systems that provide multiple computational circuits per CPU package
• distributed computing: computing with systems consisting of multiple computers connected by computer network(s)
• Systems can have both

Terminology

• data parallelism: the same processing is applied to multiple subsets of a large data set in parallel (see the sketch below)
• task parallelism: different tasks or stages of a computation are performed in parallel

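A minimal data-parallelism sketch in Pthreads (the array size and two-thread split are arbitrary choices for this example): both threads perform the same processing, summing, each on its own subset of the data.

    #include <pthread.h>
    #include <stdio.h>

    #define N 1000          /* arbitrary data-set size for the example */

    static int data[N];
    static long partial[2]; /* one partial sum per thread */

    /* Same processing (summing), applied by each thread to its own
     * half of the data set. */
    static void *sum_half(void *arg) {
        long id = (long)arg;
        long lo = id * (N / 2), hi = lo + (N / 2);
        for (long i = lo; i < hi; i++)
            partial[id] += data[i];
        return NULL;
    }

    int main(void) {
        pthread_t t[2];

        for (int i = 0; i < N; i++)
            data[i] = 1;    /* make the expected total easy to check */

        for (long id = 0; id < 2; id++)
            pthread_create(&t[id], NULL, sum_half, (void *)id);
        for (int id = 0; id < 2; id++)
            pthread_join(t[id], NULL);

        printf("total = %ld\n", partial[0] + partial[1]); /* prints total = 1000 */
        return 0;
    }
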
Terminology

• shared memory multiprocessing: e.g., a multi-core system, and/or multiple CPU packages in a single computer, all sharing the same main memory
• cluster: multiple networked computers managed as a single resource and designed for working as a unit on large computational problems

Terminology

• grid computing: distributed systems at multiple locations, typically with separate management, coordinated for working on large-scale problems
• cloud computing: computing services are accessed via networking on large, centrally managed clusters at data centers, typically at unknown remote locations

Parallel Speedup

• The speedup of a parallel algorithm over a corresponding sequential algorithm is the ratio of the compute time for the sequential algorithm to the compute time for the parallel algorithm.
• If the speedup factor is n, then we say we have n-fold speedup.

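A worked instance of the definition, with illustrative numbers: if the sequential version of a computation takes 12 seconds and the parallel version takes 3 seconds, the speedup is 12 / 3 = 4, i.e., 4-fold speedup.
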
Parallel Speedup

• The observed speedup depends on all implementation factors:
• Number of processors
• Other processes running at the same time
• Communication overhead
• Synchronization overhead
• Inherently sequential computation
• Rarely do n processors give n-fold speedup, but occasionally we get better than n-fold speedup

Amdahl's Law

• Amdahl's Law is a formula for estimating the maximum speedup of an algorithm that is part sequential and part parallel:

    overall speedup = 1 / ((1 - P) + P/S)

• where P is the time proportion of the algorithm that can be parallelized
• where S is the speedup factor for that portion of the algorithm due to parallelization
• Note that the sequential portion has a disproportionate effect.

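A worked instance of the formula, as a sketch with illustrative values of P and S (not numbers from the lecture):

    #include <stdio.h>

    /* Worked instance of Amdahl's Law; P and S are illustrative values,
     * not numbers from the lecture. */
    int main(void) {
        double P = 0.90; /* proportion of the algorithm that parallelizes */
        double S = 8.0;  /* speedup of that portion, e.g., on 8 cores */
        double overall = 1.0 / ((1.0 - P) + P / S);

        /* 1 / (0.10 + 0.90/8) = 1 / 0.2125, about 4.71: even with 90% of
         * the work sped up 8-fold, the 10% sequential portion caps the
         * overall gain well below 8, the disproportionate effect noted
         * above. */
        printf("P = %.2f, S = %.0f => overall speedup = %.2f\n", P, S, overall);
        return 0;
    }
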
Options for Communication

• message passing: communicating with basic operations send and receive to transmit information from one computation to another (see the sketch below)
• shared memory: communicating by reading and writing local memory locations that are accessible by multiple computations

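As one concrete realization of message passing, here is a minimal sketch that uses a POSIX pipe between a parent and a child process; the write() and read() calls play the roles of the abstract send and receive operations. (This is an illustration of the idea, not the only mechanism; message-passing libraries such as MPI are the usual tool on clusters.)

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        int fd[2];
        char buf[64];

        if (pipe(fd) == -1) { perror("pipe"); return 1; }

        if (fork() == 0) {              /* child process: the receiver */
            close(fd[1]);               /* close the unused write end */
            ssize_t n = read(fd[0], buf, sizeof buf - 1);  /* "receive" */
            if (n > 0) { buf[n] = '\0'; printf("received: %s\n", buf); }
            close(fd[0]);
            return 0;
        }

        /* parent process: the sender */
        const char *msg = "hello from the parent";
        close(fd[0]);                   /* close the unused read end */
        write(fd[1], msg, strlen(msg)); /* "send" */
        close(fd[1]);
        wait(NULL);                     /* wait for the child to finish */
        return 0;
    }
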
Options for Communication

• distributed memory: some parallel computing systems provide a service for sharing memory locations on a remote computer system, enabling non-local reads and writes to a memory location for communication

Issues in Concurrency

• Fault tolerance is the capacity of a computing system to continue to satisfy its specification in the presence of faults (causes of error)
• Scheduling means assigning computations (processes or threads) to processors (cores, distributed computers, etc.) over time

Issues in Concurrency

• Mutually exclusive access to shared resources means that at most one computation (process or thread) can access a resource (such as a shared memory location) at a time. This is one of the requirements for correct interprocess communication (IPC). A Pthreads sketch follows.

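A minimal mutual-exclusion sketch with the Pthreads library (thread and iteration counts are arbitrary): two threads increment a shared counter, and the mutex ensures that at most one of them is in the critical section at a time. Without the lock/unlock pair, the two threads' read-modify-write sequences could interleave and the final count would usually be less than expected.

    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;            /* the shared resource */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 1000000; i++) {
            pthread_mutex_lock(&lock);   /* enter the critical section */
            counter++;                   /* at most one thread is here */
            pthread_mutex_unlock(&lock); /* leave the critical section */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld (expected 2000000)\n", counter);
        return 0;
    }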