# Graphs (part 2) - TAMU Computer Science Faculty Pages

Graphs, Part 2: Shortest Paths
[Figure: weighted graph with vertices A–F and edge weights, used as the running example]
• Weighted graphs (§13.5.1)
– Shortest path problem
– Shortest path properties
• Dijkstra’s algorithm (§13.5.2)
– Algorithm
– Edge relaxation
Shortest Path Problem
• Given a weighted graph and two vertices u and v, we want to find a
path of minimum total weight between u and v.
– Length of a path is the sum of the weights of its edges.
• Example:
– Shortest path between Providence and Honolulu
• Applications
– Internet packet routing
– Flight reservations
– Driving directions
[Figure: flight-route graph with airports SFO, PVD, ORD, LGA, HNL, LAX, DFW, MIA]
Shortest Path Problem
• If there is no path from v to u, we denote
the distance between them by d(v, u) = +∞
• What if there is a negative-weight cycle in
the graph?
[Figure: flight-route graph with airports SFO, PVD, ORD, LGA, HNL, LAX, DFW, MIA]
Shortest Path Properties
Property 1:
A subpath of a shortest path is itself a shortest path
Property 2:
There is a tree of shortest paths from a start vertex to all the other
vertices
Example:
Tree of shortest paths from Providence
[Figure: tree of shortest paths from Providence in the flight-route graph]
Dijkstra’s Algorithm
• The distance of a vertex v
from a vertex s is the length
of a shortest path between
s and v
• Dijkstra’s algorithm
computes the distances of
all the vertices from a given
start vertex s
(single-source shortest
paths)
• Assumptions:
– the graph is connected
– the edges are undirected
– the edge weights are
nonnegative
• We grow a “cloud” of vertices,
beginning with s and eventually
covering all the vertices
• We store with each vertex v a
label D[v] representing the
distance of v from s in the
subgraph consisting of the cloud
• The label D[v] is initialized to 0 for
v = s and to positive infinity for
every other vertex
• At each step
– We add to the cloud the vertex u
outside the cloud with the smallest
distance label D[u]
– We update the labels of the
vertices adjacent to u (i.e. edge
relaxation)
Edge Relaxation
• Consider an edge e = (u,z) such that
– u is the vertex most recently added
to the cloud
– z is not in the cloud
• The relaxation of edge e updates
distance D[z] as follows:
D[z] ← min{D[z], D[u] + weight(e)}
[Figure: before relaxing edge e = (u,z), D[u] = 50 and D[z] = 75; after relaxation, D[z] = 60]
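The relaxation rule above can be sketched in Python; `relax`, `dist`, and the dictionary representation are illustrative names, not part of the slides. The edge weight 10 is inferred from the figure's labels (D[u] = 50, D[z] = 75 before, D[z] = 60 after).

```python
INF = float("inf")

def relax(dist, u, z, weight):
    """Relax edge (u, z): D[z] <- min(D[z], D[u] + weight(e))."""
    if dist[u] + weight < dist[z]:
        dist[z] = dist[u] + weight
        return True   # the label improved
    return False

# The figure's numbers: D[u] = 50, D[z] = 75, weight(e) = 10 (inferred)
dist = {"u": 50, "z": 75}
relax(dist, "u", "z", 10)
print(dist["z"])  # 60
```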
Example
[Figures: four snapshots of Dijkstra's algorithm on the A–F example graph, showing the cloud growing and the distance labels shrinking]
Example (cont.)
[Figures: final two snapshots of Dijkstra's algorithm on the A–F example graph]
Exercise: Dijkstra’s alg
• Show how Dijkstra’s algorithm works on the
following graph, i.e., s = SFO.
• Show how the labels are updated in each iteration
(a separate figure for each iteration).
[Figure: flight-route graph with airports SFO, PVD, ORD, LGA, HNL, LAX, DFW, MIA]
Dijkstra’s Algorithm
• A priority queue stores the vertices outside the cloud
– Key: distance
– Element: vertex
• Locator-based methods
– insert(k,e) returns a locator
– replaceKey(l,k) changes the key of an item
• We store two labels with each vertex:
– distance (D[v] label)
– locator in priority queue

Q ← new heap-based priority queue
for all v ∈ G.vertices()                  { O(n) iterations }
  if v = s
    setDistance(v, 0)
  else
    setDistance(v, ∞)
  l ← Q.insert(getDistance(v), v)         { O(log n) }
  setLocator(v, l)
while ¬Q.isEmpty()                        { O(n) iterations }
  { pull a new vertex u into the cloud }
  u ← Q.removeMin()                       { O(log n) }
  for all e ∈ G.incidentEdges(u)          { Σu deg(u) iterations }
    z ← G.opposite(u, e)
    r ← getDistance(u) + weight(e)
    if r < getDistance(z)
      setDistance(z, r)
      Q.replaceKey(getLocator(z), r)      { O(log n) }
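As a concrete sketch of the pseudocode above: Python's `heapq` has no locator-based `replaceKey`, so a common substitute (assumed here, not the slides' structure) is to push a fresh entry on every improvement and skip stale entries when they are popped.

```python
import heapq

def dijkstra(graph, s):
    """Single-source shortest paths for nonnegative edge weights.

    `graph` maps each vertex to a list of (neighbor, weight) pairs.
    Stale heap entries play the role of replaceKey: we push a new
    (distance, vertex) pair on every improvement and skip outdated ones.
    """
    dist = {v: float("inf") for v in graph}
    dist[s] = 0
    pq = [(0, s)]
    cloud = set()
    while pq:
        d, u = heapq.heappop(pq)       # pull a new vertex u into the cloud
        if u in cloud:
            continue                   # stale entry, already in the cloud
        cloud.add(u)
        for z, w in graph[u]:          # relax the edges incident to u
            r = d + w
            if r < dist[z]:
                dist[z] = r
                heapq.heappush(pq, (r, z))
    return dist

g = {"A": [("B", 8), ("C", 2)], "B": [("A", 8), ("C", 7)],
     "C": [("A", 2), ("B", 7)]}
print(dijkstra(g, "A"))  # {'A': 0, 'B': 8, 'C': 2}
```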
Analysis
• Graph operations
– Method incidentEdges is called once for each vertex
• Label operations
– We set/get the distance and locator labels of vertex z O(deg(z)) times
– Setting/getting a label takes O(1) time
• Priority queue operations
– Each vertex is inserted once into and removed once from the priority
queue, where each insertion or removal takes O(log n) time
– The key of a vertex w in the priority queue is modified at most deg(w) times,
where each key change takes O(log n) time
• Dijkstra’s algorithm runs in O((n + m) log n) time provided the graph is
represented by the adjacency list structure
– Recall that Σv deg(v) = 2m
• The running time can also be expressed as O(m log n) since the graph
is connected
• The running time can be expressed as a function of n alone as O(n² log n),
since m is O(n²)
Extension
• Using the template
method pattern, we can
extend Dijkstra’s
algorithm to return a
tree of shortest paths
from the start vertex to
all other vertices
• We store with each
vertex a third label:
– parent edge in the
shortest path tree
• In the edge relaxation
step, we update the
parent label
Algorithm DijkstraShortestPathsTree(G, s)
  …
  for all v ∈ G.vertices()
    …
    setParent(v, ∅)
    …
  for all e ∈ G.incidentEdges(u)
    { relax edge e }
    z ← G.opposite(u, e)
    r ← getDistance(u) + weight(e)
    if r < getDistance(z)
      setDistance(z, r)
      setParent(z, e)
      Q.replaceKey(getLocator(z), r)
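A sketch of the extended algorithm in Python, with one simplification: the parent label stores the predecessor vertex instead of the parent edge (the two carry the same tree). Names and the adjacency-list format are assumptions.

```python
import heapq

def dijkstra_tree(graph, s):
    """Dijkstra variant that also records each vertex's parent in the
    shortest-path tree (here the predecessor vertex; the slides store
    the parent edge, which is equivalent)."""
    dist = {v: float("inf") for v in graph}
    parent = {v: None for v in graph}
    dist[s] = 0
    pq, cloud = [(0, s)], set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in cloud:
            continue
        cloud.add(u)
        for z, w in graph[u]:
            if d + w < dist[z]:
                dist[z] = d + w
                parent[z] = u          # update the parent label on relaxation
                heapq.heappush(pq, (d + w, z))
    return dist, parent

def path_to(parent, v):
    """Walk parent labels back from v to the start vertex."""
    path = []
    while v is not None:
        path.append(v)
        v = parent[v]
    return path[::-1]

g = {"A": [("B", 8), ("C", 2)], "B": [("C", 7)], "C": [("B", 7)]}
dist, par = dijkstra_tree(g, "A")
print(path_to(par, "B"))  # ['A', 'B']
```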
Why It Doesn’t Work for Negative-Weight Edges
Dijkstra’s algorithm is based on the greedy method:
it adds vertices by increasing distance.
• If a node with a negative
incident edge were to be
added late to the cloud, it
could mess up distances for
vertices already in the cloud.
[Figures: Dijkstra's algorithm run on a graph with a negative-weight edge
(weight -3); C's true distance is 1, but it is added to the cloud with D[C] = 2!]
Minimum Spanning Trees
[Figure: flight-mileage graph with airports BOS, PVD, ORD, JFK, BWI, SFO, LAX, DFW, MIA]
• Minimum Spanning Trees (§13.6)
• Definitions
• A crucial fact
• The Prim-Jarnik Algorithm (§13.6.2)
• Kruskal's Algorithm (§13.6.1)
Reminder: Weighted Graphs
• In a weighted graph, each edge has an associated
numerical value, called the weight of the edge
• Edge weights may represent distances, costs, etc.
• Example:
– In a flight route graph, the weight of an edge represents the
distance in miles between the endpoint airports
[Figure: flight-route graph with airports SFO, PVD, ORD, LGA, HNL, LAX, DFW, MIA]
Minimum Spanning Tree
• Spanning subgraph
– Subgraph of a graph G containing
all the vertices of G
• Spanning tree
– Spanning subgraph that is itself a
(free) tree
• Minimum spanning tree (MST)
– Spanning tree of a weighted
graph with minimum total edge
weight
• Applications
– Communications networks
– Transportation networks
[Figure: weighted graph with vertices ORD, PIT, DEN, STL, DCA, DFW, ATL and its MST]
Exercise: MST
Show an MST of the following graph.
[Figure: flight-route graph with airports SFO, PVD, ORD, LGA, HNL, LAX, DFW, MIA]
Cycle Property
Cycle Property:
– Let T be a minimum
spanning tree of a weighted
graph G
– Let e be an edge of G that is
not in T and let C be the
cycle formed by e with T
– For every edge f of C,
weight(f) ≤ weight(e)
Proof:
– If weight(f) &gt; weight(e) we
can get a spanning tree of
smaller weight by replacing
f with e
[Figures: cycle C containing edges e and f; replacing f with e yields
a better spanning tree]
Partition Property
Partition Property:
– Consider a partition of the vertices of G
into subsets U and V
– Let e be an edge of minimum weight
across the partition
– There is a minimum spanning tree of G
containing edge e
Proof:
– Let T be an MST of G
– If T does not contain e, consider the
cycle C formed by e with T and let f be
an edge of C across the partition
– By the cycle property,
weight(f) ≤ weight(e)
– Thus, weight(f) = weight(e)
– We obtain another MST by replacing f
with e
[Figures: partition of the vertices into U and V with crossing edges e and f;
replacing f with e yields another MST]
Prim-Jarnik’s Algorithm
• We pick an arbitrary vertex s and we grow the MST as a cloud
of vertices, starting from s
• We store with each vertex v a label d(v) = the smallest weight
of an edge connecting v to a vertex in the cloud
• At each step:
– We add to the cloud the
vertex u outside the cloud
with the smallest distance
label
– We update the labels of the
vertices adjacent to u
Prim-Jarnik’s Algorithm (cont.)
• A priority queue stores the
vertices outside the cloud
• Key: distance
• Element: vertex
• Locator-based methods
• insert(k,e) returns a
locator
• replaceKey(l,k) changes
the key of an item
• We store three labels with
each vertex:
• Distance
• Parent edge in MST
• Locator in priority queue
Algorithm PrimJarnikMST(G)
  Q ← new heap-based priority queue
  s ← a vertex of G
  for all v ∈ G.vertices()
    if v = s
      setDistance(v, 0)
    else
      setDistance(v, ∞)
    setParent(v, ∅)
    l ← Q.insert(getDistance(v), v)
    setLocator(v, l)
  while ¬Q.isEmpty()
    u ← Q.removeMin()
    for all e ∈ G.incidentEdges(u)
      z ← G.opposite(u, e)
      r ← weight(e)
      if r < getDistance(z)
        setDistance(z, r)
        setParent(z, e)
        Q.replaceKey(getLocator(z), r)
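The Prim-Jarnik pseudocode can be sketched in Python in the same heap-based style; as before, stale heap entries stand in for the locator-based `replaceKey`, and names are illustrative.

```python
import heapq

def prim_jarnik(graph, s):
    """Grow an MST as a cloud from s. The key of a vertex outside the
    cloud is the smallest weight of a single edge connecting it to the
    cloud (not a path length -- the one difference from Dijkstra).

    `graph` maps each vertex to a list of (neighbor, weight) pairs.
    Returns the MST as a list of (parent, vertex, weight) edges.
    """
    dist = {v: float("inf") for v in graph}
    parent = {v: None for v in graph}
    dist[s] = 0
    pq, cloud, mst = [(0, s)], set(), []
    while pq:
        d, u = heapq.heappop(pq)
        if u in cloud:
            continue                       # stale entry
        cloud.add(u)
        if parent[u] is not None:
            mst.append((parent[u], u, d))  # record the parent edge
        for z, w in graph[u]:
            if z not in cloud and w < dist[z]:   # key is the edge weight alone
                dist[z] = w
                parent[z] = u
                heapq.heappush(pq, (w, z))
    return mst

g = {"A": [("B", 8), ("C", 2)], "B": [("A", 8), ("C", 7)],
     "C": [("A", 2), ("B", 7)]}
print(prim_jarnik(g, "A"))  # [('A', 'C', 2), ('C', 'B', 7)]
```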
Example
[Figures: four snapshots of Prim-Jarnik's algorithm on a graph with
vertices A–F, showing the cloud and the distance labels after each step]
Example (cont.)
[Figures: final two snapshots of Prim-Jarnik's algorithm on the A–F graph]
Exercise: Prim’s MST alg
• Show how Prim’s MST algorithm works on the following
graph.
• Show how the MST evolves in each iteration (a separate figure for
each iteration).
[Figure: flight-route graph with airports SFO, PVD, ORD, LGA, HNL, LAX, DFW, MIA]
Analysis
• Graph operations
• Method incidentEdges is called once for each vertex
• Label operations
• We set/get the distance, parent and locator labels of vertex z O(deg(z))
times
• Setting/getting a label takes O(1) time
• Priority queue operations
• Each vertex is inserted once into and removed once from the priority queue,
where each insertion or removal takes O(log n) time
• The key of a vertex w in the priority queue is modified at most deg(w) times,
where each key change takes O(log n) time
• Prim-Jarnik’s algorithm runs in O((n + m) log n) time provided the graph
is represented by the adjacency list structure
• Recall that Σv deg(v) = 2m
• The running time is O(m log n) since the graph is connected
Kruskal’s Algorithm
• A priority queue stores the
edges outside the cloud
– Key: weight
– Element: edge
• At the end of the algorithm
– We are left with one cloud
that encompasses the MST
– A tree T which is our MST

Algorithm KruskalMST(G)
  for each vertex v in G do
    define a Cloud(v) ← {v}
  let Q be a priority queue
  Insert all edges into Q using their
  weights as the key
  T ← ∅
  while T has fewer than n-1 edges
    edge e ← Q.removeMin()
    Let u, v be the endpoints of e
    if Cloud(v) ≠ Cloud(u) then
      Add edge e to T
      Merge Cloud(v) and Cloud(u)
  return T
Data Structure for Kruskal’s Algorithm
• The algorithm maintains a forest of trees
• An edge is accepted if it connects distinct trees
• We need a data structure that maintains a
partition, i.e., a collection of disjoint sets, with
the operations:
– find(u): return the set storing u
– union(u,v): replace the sets storing u and v with
their union
Representation of a Partition
• Each set is stored in a sequence
• Each element has a reference back to the set
• Operation find(u) takes O(1) time, and returns the
set of which u is a member
• In operation union(u,v), we move the elements of
the smaller set to the sequence of the larger set and
update their references
• The time for operation union(u,v) is O(min(nu, nv)),
where nu and nv are the sizes of the sets storing u
and v
• Whenever an element is processed, it goes into a
set of size at least double, hence each element is
processed at most log n times
Partition-Based Implementation
• A partition-based version of Kruskal’s
Algorithm performs cloud merges as unions
and tests as finds.
Algorithm Kruskal(G):
  Input: A weighted graph G.
  Output: An MST T for G.
  Let P be a partition of the vertices of G, where each vertex forms a separate set
  Let Q be a priority queue storing the edges of G, sorted by their weights
  Let T be an initially-empty tree
  while Q is not empty do
    (u,v) ← Q.removeMinElement()
    if P.find(u) ≠ P.find(v) then
      Add (u,v) to T
      P.union(u,v)
  return T

Running time: O((n+m) log n)
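A runnable sketch of the partition-based version in Python; it uses a tree-based union-find (union by size with path halving) rather than the sequence representation described above, and sorting the edge list stands in for the priority queue.

```python
def kruskal(n, edges):
    """Kruskal's algorithm with a size-balanced union-find partition.

    `n` is the number of vertices (labeled 0..n-1) and `edges` is a list
    of (weight, u, v) triples. Returns the MST as an edge list.
    """
    parent = list(range(n))
    size = [1] * n

    def find(u):                     # find the set (root) storing u
        while parent[u] != u:
            parent[u] = parent[parent[u]]   # path halving
            u = parent[u]
        return u

    def union(u, v):                 # merge the smaller set into the larger
        ru, rv = find(u), find(v)
        if size[ru] < size[rv]:
            ru, rv = rv, ru
        parent[rv] = ru
        size[ru] += size[rv]

    mst = []
    for w, u, v in sorted(edges):    # consider edges by increasing weight
        if find(u) != find(v):       # edge connects two distinct clouds
            union(u, v)
            mst.append((w, u, v))
        if len(mst) == n - 1:        # the one remaining cloud is the MST
            break
    return mst

edges = [(8, 0, 1), (2, 0, 2), (7, 1, 2)]
print(kruskal(3, edges))  # [(2, 0, 2), (7, 1, 2)]
```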
Example
[Figures: step-by-step execution of Kruskal's algorithm on the flight-mileage
graph with airports BOS, PVD, ORD, JFK, BWI, SFO, LAX, DFW, MIA; one snapshot
per iteration, with edges considered in increasing weight order]
Exercise: Kruskal’s MST alg
• Show how Kruskal’s MST algorithm works on the following
graph.
• Show how the MST evolves in each iteration (a separate figure for
each iteration).
[Figure: flight-route graph with airports SFO, PVD, ORD, LGA, HNL, LAX, DFW, MIA]
Bellman-Ford Algorithm
• Works even with negative-weight edges
• Must assume directed edges
(for otherwise we would have
negative-weight cycles)
• Iteration i finds all shortest
paths that use i edges
• Running time: O(nm)
• Can be extended to detect a
negative-weight cycle if it
exists
– How?

Algorithm BellmanFord(G, s)
  for all v ∈ G.vertices()
    if v = s
      setDistance(v, 0)
    else
      setDistance(v, ∞)
  for i ← 1 to n-1 do
    for each e ∈ G.edges()
      u ← G.origin(e)
      z ← G.opposite(u, e)
      r ← getDistance(u) + weight(e)
      if r < getDistance(z)
        setDistance(z, r)
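A sketch in Python; the extra relaxation pass at the end is one standard answer to the slide's "How?" question: if any label can still improve after n-1 rounds, a negative-weight cycle is reachable from s. The adjacency-list format is an assumption.

```python
def bellman_ford(graph, s):
    """Bellman-Ford on a directed graph; tolerates negative edge weights.

    `graph` maps each vertex to a list of (successor, weight) pairs.
    """
    dist = {v: float("inf") for v in graph}
    dist[s] = 0
    n = len(graph)
    for _ in range(n - 1):           # round i settles paths of at most i edges
        for u in graph:
            for z, w in graph[u]:
                if dist[u] + w < dist[z]:
                    dist[z] = dist[u] + w
    for u in graph:                  # one extra pass: negative-cycle detection
        for z, w in graph[u]:
            if dist[u] + w < dist[z]:
                raise ValueError("negative-weight cycle reachable from s")
    return dist

g = {"A": [("B", 8), ("C", -2)], "B": [], "C": [("B", 7)]}
print(bellman_ford(g, "A"))  # {'A': 0, 'B': 5, 'C': -2}
```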
Bellman-Ford Example
Nodes are labeled with their d(v) values
[Figures: three rounds of Bellman-Ford on an example graph with
negative-weight edges, showing the d(v) labels after the first,
second, and third rounds]
Exercise: Bellman-Ford’s alg
• Show how Bellman-Ford’s algorithm works on
the following graph, starting from the top node
• Show how the labels are updated in each iteration
(a separate figure for each iteration).
[Figure: directed graph with negative-weight edges; the top node has
label 0, all other nodes have label ∞]
DAG-based Algorithm
• Works even with negative-weight edges
• Uses topological order
• Is much faster than
Dijkstra’s algorithm
• Running time: O(n+m)

Algorithm DagDistances(G, s)
  for all v ∈ G.vertices()
    if v = s
      setDistance(v, 0)
    else
      setDistance(v, ∞)
  Perform a topological sort of the vertices
  for u ← 1 to n do  { in topological order }
    for each e ∈ G.outEdges(u)
      { relax edge e }
      z ← G.opposite(u, e)
      r ← getDistance(u) + weight(e)
      if r < getDistance(z)
        setDistance(z, r)
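A sketch using Python's standard `graphlib` for the topological sort; every vertex (including sinks) is assumed to appear as a key of the adjacency map.

```python
from graphlib import TopologicalSorter

def dag_distances(graph, s):
    """Shortest paths in a DAG by relaxing out-edges in topological order.

    `graph` maps each vertex to a list of (successor, weight) pairs;
    negative weights are fine because a DAG has no cycles at all.
    """
    ts = TopologicalSorter()
    for u in graph:
        ts.add(u)                    # make sure sinks/isolated vertices appear
        for z, _ in graph[u]:
            ts.add(z, u)             # z must come after u
    dist = {v: float("inf") for v in graph}
    dist[s] = 0
    for u in ts.static_order():      # u's label is final when we reach it
        if dist[u] == float("inf"):
            continue                 # unreachable from s
        for z, w in graph[u]:        # relax each out-edge (u, z)
            if dist[u] + w < dist[z]:
                dist[z] = dist[u] + w
    return dist

g = {"A": [("B", 8), ("C", -2)], "C": [("B", 7)], "B": []}
print(dag_distances(g, "A")["B"])  # 5
```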
DAG Example
Nodes are labeled with their d(v) values
[Figures: topological numbering of the DAG's vertices, followed by snapshots
of the d(v) labels as each vertex is processed in topological order]
Exercise: DAG-based Alg
• Show how the DAG-based algorithm works on the
following graph, starting from the second rightmost node
• Show how the labels are updated in each iteration
(a separate figure for each iteration).
[Figure: DAG with edge weights; the second rightmost node has label 0,
all other nodes have label ∞]
Summary of Shortest-Path Algs
• Dijkstra’s algorithm (§13.5.2)
– Algorithm
– Edge relaxation
• The Bellman-Ford algorithm
• Shortest paths in DAGs
All-Pairs Shortest Paths
• Find the distance between
every pair of vertices in a
weighted directed graph G
• We can make n calls to
Dijkstra’s algorithm (if no
negative edges), which
takes O(nm log n) time
• Likewise, n calls to Bellman-Ford
would take O(n²m) time
• We can achieve O(n³) time
using dynamic
programming (similar to the
Floyd-Warshall algorithm)
Algorithm AllPair(G)  { assumes vertices 1,…,n }
  for all vertex pairs (i,j)
    if i = j
      D0[i,i] ← 0
    else if (i,j) is an edge in G
      D0[i,j] ← weight of edge (i,j)
    else
      D0[i,j] ← +∞
  for k ← 1 to n do
    for i ← 1 to n do
      for j ← 1 to n do
        Dk[i,j] ← min{Dk-1[i,j], Dk-1[i,k] + Dk-1[k,j]}
  return Dn

[Figure: a path from i to j through k; the subpaths i→k and k→j use only
vertices numbered 1,…,k-1, so the combined path uses only vertices
numbered 1,…,k]
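A sketch of AllPair in Python, with the standard space optimization of updating a single matrix in place rather than keeping separate Dk layers (safe absent negative cycles, since Dk[i,k] = Dk-1[i,k]).

```python
def all_pairs(vertices, weight):
    """Dynamic-programming all-pairs shortest paths (Floyd-Warshall style).

    `weight` maps ordered pairs (i, j) to edge weights. After round k,
    D[i][j] uses only intermediate vertices considered so far, matching
    the Dk matrices above.
    """
    INF = float("inf")
    D = {i: {j: (0 if i == j else weight.get((i, j), INF))
             for j in vertices} for i in vertices}     # the D0 matrix
    for k in vertices:
        for i in vertices:
            for j in vertices:
                if D[i][k] + D[k][j] < D[i][j]:        # go through k?
                    D[i][j] = D[i][k] + D[k][j]
    return D

w = {(1, 2): 3, (2, 3): 4, (1, 3): 10}
D = all_pairs([1, 2, 3], w)
print(D[1][3])  # 7
```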
Why Dijkstra’s Algorithm Works
• Dijkstra’s algorithm is based on the greedy
method. It adds vertices by increasing distance.
– Suppose it didn’t find all shortest
distances. Let F be the first wrong
vertex the algorithm processed.
– When the previous node, D, on the
true shortest path was considered, its
distance was correct.
– But the edge (D,F) was relaxed at that
time!
– Thus, so long as D[F] &gt; D[D], F’s
distance cannot be wrong. That is,
there is no wrong vertex.
[Figure: Dijkstra's cloud grown from s, with vertices D and F on the true
shortest path to F]