Complexity

ALGORITHM DESIGN AND ANALYSIS
1. Write a program to implement
a) Insertion sort
b) Radix sort
c) Counting sort
and find its worst case, best case and average case time complexity.
2. Write a program to implement merge sort and find its worst case, best
case and average case time complexity.
3. Write a program to implement quick sort and randomized quick sort
and find its worst case, best case and average case time complexity.
4. Write a program to implement Strassen's Matrix Multiplication and find its
worst case, best case and average case time complexity.
5. Write a program to implement randomized quick sort and find its worst
case, best case and average case time complexity.
6. Write a program to implement the matrix chain multiplication problem and
find its worst case, best case and average case time complexity.
7. WAP to implement Breadth First Search algorithm.
8. WAP to implement Depth First Search algorithm.
9. Write a program to implement optimal binary search tree problem and
find its worst case, best case and average case time complexity.
10. Write a program to implement longest common subsequence problem
and find its worst case, best case and average case time complexity.
11. Write a program to implement activity selection problem and find its
worst case, best case and average case time complexity.
12. Write a program to implement Huffman coding and find its
worst case, best case and average case time complexity.
13. Write a program to implement task scheduling problem and find its
worst case, best case and average case time complexity.
14. Write a program to implement Prim’s algorithm and find its worst
case, best case and average case time complexity.
15. Write a program to implement Kruskal's algorithm and find its worst case,
best case and average case time complexity.
16. Write a program to implement Single Source Shortest Path algorithm
and find its worst case, best case and average case time complexity:
a) Dijkstra's Algorithm
b) Bellman-Ford Algorithm
17. Write a program to implement an all pairs shortest path algorithm
and find its worst case, best case and average case time complexity:
a) Floyd-Warshall Algorithm
18. Write a program to implement Rabin Karp Algorithm and find its
worst case, best case and average case time complexity.
19. Write a program to implement naive string matching and find its
worst case, best case and average case time complexity.
20. Write a program to implement the Knuth-Morris-Pratt algorithm and find its
worst case, best case and average case time complexity.
INSERTION SORT
Input: A sequence of n numbers <a1, a2, …, an>.
Output: A permutation (reordering) <a1', a2', …, an'> of the input
sequence such that a1' ≤ a2' ≤ … ≤ an'.
The numbers that we wish to sort are known as keys. Insertion sort is
an efficient algorithm for sorting a small number of elements.
Insertion sort takes as a parameter an array A[1..n] containing a
sequence of length n that is to be sorted. (In the code, the number n of
elements in A is denoted by length[A].) The input numbers are sorted
in place: the numbers are rearranged within the array A, with at most a
constant number of them stored outside the array at any time. The input
array A contains the sorted sequence when INSERTION-SORT is
finished.
ALGORITHM OF INSERTION SORT
insertionSort(array A)
begin
    for i := 1 to length[A] - 1 do
    begin
        value := A[i];
        j := i - 1;
        while j >= 0 and A[j] > value do
        begin
            A[j + 1] := A[j];
            j := j - 1;
        end;
        A[j + 1] := value;
    end;
end;
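For reference, a minimal C version of this procedure is sketched below; the sample array in main is an assumed example, not part of the original experiment.

#include <stdio.h>

/* Sort a[0..n-1] in place by repeatedly inserting a[i] into the
   already-sorted prefix a[0..i-1]. */
void insertion_sort(int a[], int n) {
    for (int i = 1; i < n; i++) {
        int value = a[i];
        int j = i - 1;
        while (j >= 0 && a[j] > value) {
            a[j + 1] = a[j];   /* shift larger keys one slot right */
            j--;
        }
        a[j + 1] = value;
    }
}

int main(void) {
    int a[] = {5, 2, 4, 6, 1, 3};   /* sample input (assumed) */
    int n = sizeof a / sizeof a[0];
    insertion_sort(a, n);
    for (int i = 0; i < n; i++)
        printf("%d ", a[i]);
    printf("\n");
    return 0;
}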
COMPLEXITY
Best, worst, and average cases
The best case input is an array that is already sorted. In this case insertion sort has a
linear running time (i.e., Θ(n)). During each iteration, the first remaining element of
the input is only compared with the right-most element of the sorted subsection of the
array.
The worst case input is an array sorted in reverse order. In this case every iteration of
the inner loop will scan and shift the entire sorted subsection of the array before
inserting the next element. For this case insertion sort has a quadratic running time
(i.e., O(n²)).
The average case is also quadratic, which makes insertion sort impractical for sorting
large arrays. However, insertion sort is one of the fastest algorithms for sorting very
small arrays.
MERGE SORT
The merge sort algorithm follows the divide-and-conquer paradigm. It
operates as follows:
Divide: Divide the n-element sequence to be sorted into two
subsequence of n/2 elements each.
Conquer: Sort the two subsequences recursively using merge sort.
Combine: Merge the two sorted subsequences to produce the sorted
answer.
The recursion "bottoms out" when the sequence to be sorted has
length 1, in which case there is no work to be done, since every
sequence of length 1 is already in sorted order.
The key operation of the merge sort algorithm is the merging of two
sorted sequences in the "combine" step. To perform the merging, we use
an auxiliary procedure MERGE(A, p, q, r), where A is an array and
p, q, and r are indices numbering elements of the array such that p ≤ q
< r. The procedure assumes that the subarrays A[p..q] and A[q+1..r] are
in sorted order. It merges them to form a single sorted subarray that
replaces the current subarray A[p..r].
Our MERGE procedure takes time Θ(n), where n = r - p + 1 is the number
of elements being merged.
ALGO FOR MERGE SORT
function merge_sort(m)
    if length(m) ≤ 1
        return m
    var list left, right, result
    var integer middle = length(m) / 2
    for each x in m up to middle
        add x to left
    for each x in m after middle
        add x to right
    left = merge_sort(left)
    right = merge_sort(right)
    if left.last_item > right.first_item
        result = merge(left, right)
    else
        result = append(left, right)
    return result

function merge(left, right)
    var list result
    while length(left) > 0 and length(right) > 0
        if first(left) ≤ first(right)
            append first(left) to result
            left = rest(left)
        else
            append first(right) to result
            right = rest(right)
    end while
    if length(left) > 0
        append left to result
    else
        append right to result
    return result
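A compact C sketch of merge sort using an auxiliary array, in the spirit of the pseudocode above; the input values in main are assumed sample data.

#include <stdio.h>
#include <string.h>

/* Merge the sorted halves a[l..m] and a[m+1..r] using the auxiliary array b. */
static void merge(int a[], int b[], int l, int m, int r) {
    int i = l, j = m + 1, k = l;
    while (i <= m && j <= r)
        b[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i <= m) b[k++] = a[i++];
    while (j <= r) b[k++] = a[j++];
    memcpy(a + l, b + l, (size_t)(r - l + 1) * sizeof(int));
}

/* Recursively sort a[l..r]. */
static void merge_sort(int a[], int b[], int l, int r) {
    if (l >= r) return;               /* sequences of length 1 are already sorted */
    int m = (l + r) / 2;
    merge_sort(a, b, l, m);
    merge_sort(a, b, m + 1, r);
    merge(a, b, l, m, r);
}

int main(void) {
    int a[] = {38, 27, 43, 3, 9, 82, 10};   /* sample input (assumed) */
    int n = sizeof a / sizeof a[0];
    int b[sizeof a / sizeof a[0]];          /* auxiliary array */
    merge_sort(a, b, 0, n - 1);
    for (int i = 0; i < n; i++) printf("%d ", a[i]);
    printf("\n");
    return 0;
}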
COMPLEXITY
Best, worst, and average cases
The straightforward version of function merge requires at most 2n steps
(n steps for copying the sequence to the intermediate array b, and at most
n steps for copying it back to array a). The time complexity of mergesort
is therefore
T(n) ≤ 2n + 2·T(n/2), with T(1) = 0.
The solution of this recurrence yields
T(n) ≤ 2n·log(n) ∈ O(n log(n)).
Thus, the mergesort algorithm is optimal, since the lower bound for the
sorting problem of Ω(n log(n)) is attained.
In the more efficient variant, function merge requires at most 1.5n steps
(n/2 steps for copying the first half of the sequence to the intermediate
array b, n/2 steps for copying it back to array a, and at most n/2 steps for
processing the second half). This yields a running time of mergesort of at
most 1.5 n log(n) steps.
QUICK SORT
Quick sort is based on the divide and conquer paradigm.
Divide: Partition (rearrange) the array A[p..r] into two (possibly empty)
subarrays A[p..q-1] and A[q+1..r] such that each element of A[p..q-1] is
less than or equal to A[q], which is in turn less than or equal to each
element of A[q+1..r]. Compute the index q as part of this partitioning
procedure.
Conquer: Sort the two subarrays A[p..q-1] and A[q+1..r] by recursive
calls to quicksort.
Combine: Since the subarrays are sorted in place, no work is needed to
combine them: the entire array A[p..r] is now sorted.
PARTITION always selects an element x = A[r] as a pivot element around
which to partition the subarray A[p..r]. As the procedure runs, the array is
partitioned into four regions.
ALGORITHM OF QUICK SORT
function quicksort(array)
    var list less, greater
    if length(array) ≤ 1
        return array
    select and remove a pivot value pivot from array
    for each x in array
        if x ≤ pivot then append x to less
        else append x to greater
    return concatenate(quicksort(less), pivot, quicksort(greater))

function partition(array, left, right, pivotIndex)
    pivotValue := array[pivotIndex]
    swap array[pivotIndex] and array[right]      // Move pivot to end
    storeIndex := left
    for i from left to right - 1                 // left ≤ i < right
        if array[i] ≤ pivotValue
            swap array[i] and array[storeIndex]
            storeIndex := storeIndex + 1
    swap array[storeIndex] and array[right]      // Move pivot to its final place
    return storeIndex

procedure quicksort(array, left, right)
    if right > left
        select a pivot index (e.g. pivotIndex := (left + right) / 2)
        pivotNewIndex := partition(array, left, right, pivotIndex)
        quicksort(array, left, pivotNewIndex - 1)
        quicksort(array, pivotNewIndex + 1, right)
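A short C sketch of in-place quicksort is given below. It follows the partition scheme above but simply uses the last element as the pivot (the pseudocode first swaps an arbitrarily chosen pivot into that position); the array in main is assumed sample data.

#include <stdio.h>

static void swap(int *x, int *y) { int t = *x; *x = *y; *y = t; }

/* Lomuto-style partition: uses a[right] as the pivot and returns its final index. */
static int partition(int a[], int left, int right) {
    int pivot = a[right];
    int store = left;
    for (int i = left; i < right; i++)
        if (a[i] <= pivot)
            swap(&a[i], &a[store++]);
    swap(&a[store], &a[right]);   /* move pivot to its final place */
    return store;
}

static void quicksort(int a[], int left, int right) {
    if (left < right) {
        int p = partition(a, left, right);
        quicksort(a, left, p - 1);
        quicksort(a, p + 1, right);
    }
}

int main(void) {
    int a[] = {9, 4, 7, 1, 8, 2, 6};   /* sample input (assumed) */
    int n = sizeof a / sizeof a[0];
    quicksort(a, 0, n - 1);
    for (int i = 0; i < n; i++) printf("%d ", a[i]);
    printf("\n");
    return 0;
}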
COMPLEXITY
AVERAGE COMPLEXITY
Even if pivots aren't chosen randomly, quicksort still requires only
Θ(nlogn) time over all possible permutations of its input. Because this
average is simply the sum of the times over all permutations of the input
divided by n factorial, it's equivalent to choosing a random permutation
of the input. When we do this, the pivot choices are essentially random,
leading to an algorithm with the same running time as randomized
quicksort.
More precisely, the average number of comparisons over all permutations
of the input sequence can be estimated accurately by solving the
recurrence relation
C(n) = n - 1 + (1/n) Σ_{i=0}^{n-1} (C(i) + C(n-1-i)), with C(0) = C(1) = 0,
whose solution is C(n) ≈ 2n ln n ≈ 1.39 n log₂ n.
Here, n − 1 is the number of comparisons the partition uses. Since the
pivot is equally likely to fall anywhere in the sorted list order, the sum is
averaging over all possible splits.
This means that, on average, quicksort performs only about 39% worse
than the ideal number of comparisons, which is its best case. In this sense
it is closer to the best case than the worst case. This fast average runtime
is another reason for quicksort's practical dominance over other sorting
algorithms.
STRASSEN ALGORITHM
This can be viewed as an application of a familiar design technique: divide
and conquer. Suppose we wish to compute the product C = AB, where each
of A, B, and C is an n x n matrix. Assuming that n is an exact power of 2, we
divide each of A, B, and C into four n/2 x n/2 matrices, rewriting the
equation C = AB as follows:
[ r  s ]   [ a  b ]   [ e  f ]
[ t  u ] = [ c  d ] x [ g  h ]
This equation corresponds to four equations
r = ae + bg
s = af + bh
t = ce + dg
u = cf + dh
Each of these four equations specifies two multiplications of n/2 x n/2
matrices and the addition of their n/2 x n/2 products. Using these
equations we derive the following recurrence for the time T(n) to
multiply two n x n matrices:
T(n) = 8T(n/2) + Θ(n²)
Strassen gave a different recurrence for computing the same product
that requires only 7 recursive multiplications of n/2 x n/2 matrices and
Θ(n²) scalar multiplications and additions:
T(n) = 7T(n/2) + Θ(n²)
     = Θ(n^lg 7)
     = Θ(n^2.81)
Strassen's method has four steps:
1. Divide the input matrices A and B into n/2 x n/2 submatrices, as in
the equation above.
2. Using Θ(n²) scalar additions and subtractions, compute 14 matrices
A1, B1, A2, B2, …, A7, B7, each of which is n/2 x n/2.
3. Recursively compute the seven matrix products Pi = Ai·Bi for
i = 1, 2, …, 7.
4. Compute the desired submatrices r, s, t, u of the result matrix C by adding
and/or subtracting various combinations of the Pi matrices, using Θ(n²)
scalar additions and subtractions.
The different additions and subtractions involved are:
P = (a + d)(e + h)
Q = (c + d) e
R = a (f − h)
S = d (g − e)
T = (a + b) h
U = (c − a)(e + f)
V = (b − d)(g + h)
and
r = P + S − T + V
s = R + T
t = Q + S
u = P + R − Q + U
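To make the recombination concrete, the following C sketch applies one level of the scheme above to ordinary 2 x 2 matrices; in the full algorithm a, b, …, h would be n/2 x n/2 submatrices and the seven products would be computed by recursive calls. The matrices in main are assumed sample data.

#include <stdio.h>

/* One level of Strassen's scheme on 2x2 matrices A = [a b; c d], B = [e f; g h]. */
void strassen_2x2(int A[2][2], int B[2][2], int C[2][2]) {
    int a = A[0][0], b = A[0][1], c = A[1][0], d = A[1][1];
    int e = B[0][0], f = B[0][1], g = B[1][0], h = B[1][1];

    int P = (a + d) * (e + h);
    int Q = (c + d) * e;
    int R = a * (f - h);
    int S = d * (g - e);
    int T = (a + b) * h;
    int U = (c - a) * (e + f);
    int V = (b - d) * (g + h);

    C[0][0] = P + S - T + V;   /* r */
    C[0][1] = R + T;           /* s */
    C[1][0] = Q + S;           /* t */
    C[1][1] = P + R - Q + U;   /* u */
}

int main(void) {
    int A[2][2] = {{1, 2}, {3, 4}};   /* sample matrices (assumed) */
    int B[2][2] = {{5, 6}, {7, 8}};
    int C[2][2];
    strassen_2x2(A, B, C);
    printf("%d %d\n%d %d\n", C[0][0], C[0][1], C[1][0], C[1][1]);   /* 19 22 / 43 50 */
    return 0;
}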
Complexity
T(n) = 7T(n/2) + cn², where c is a fixed constant. The term
cn² captures the time for the matrix additions and
subtractions needed to compute P1, ..., P7 and C11, ..., C22.
The solution works out to be:
T(n) = Θ(n^lg 7) = O(n^2.81).
MATRIX CHAIN MULTIPLICATION
The matrix chain multiplication problem can be stated as follows: given a
chain <A1, A2, …, An> of n matrices, where for i = 1, 2, …, n matrix Ai
has dimension p_(i-1) x p_i, fully parenthesize the product A1A2…An in a
way that minimizes the number of scalar multiplications.
This problem is solved with the help of dynamic programming, in which
an optimal solution to the problem is constructed using the optimal
solutions to the subproblems. Consider Ai..j, where i ≤ j, to be the
matrix that results from evaluating the product AiAi+1…Aj. Let m[i,j] be
the minimum number of scalar multiplications needed to compute the
matrix Ai..j. Consider s[i,j] to be a value at which we can split the product
AiAi+1…Aj to obtain an optimal parenthesization.
ALGORITHM OF MATRIX CHAIN MULTIPLICATION
MATRIX-CHAIN-ORDER(p)
1. n ← length[p] − 1
2. for i ← 1 to n
3.     do m[i,i] ← 0
4. for l ← 2 to n                      ► l is the chain length
5.     do for i ← 1 to n − l + 1
6.         do j ← i + l − 1
7.             m[i,j] ← ∞
8.             for k ← i to j − 1
9.                 do q ← m[i,k] + m[k+1,j] + p_(i−1) p_k p_j
10.                    if q < m[i,j]
11.                        then m[i,j] ← q
12.                            s[i,j] ← k
13. return m and s
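A minimal C sketch of this dynamic program is shown below; the dimension array in main is the usual textbook example and MAXN is an assumed size limit.

#include <stdio.h>
#include <limits.h>

#define MAXN 20   /* assumed limit on the number of matrices */

/* p[0..n] holds the dimensions: matrix Ai is p[i-1] x p[i].
   Fills m[i][j] with the minimum number of scalar multiplications
   needed to compute Ai..Aj and s[i][j] with the optimal split point. */
void matrix_chain_order(const int p[], int n,
                        long m[MAXN + 1][MAXN + 1], int s[MAXN + 1][MAXN + 1]) {
    for (int i = 1; i <= n; i++)
        m[i][i] = 0;
    for (int l = 2; l <= n; l++) {             /* l is the chain length */
        for (int i = 1; i <= n - l + 1; i++) {
            int j = i + l - 1;
            m[i][j] = LONG_MAX;
            for (int k = i; k < j; k++) {
                long q = m[i][k] + m[k + 1][j] + (long)p[i - 1] * p[k] * p[j];
                if (q < m[i][j]) {
                    m[i][j] = q;
                    s[i][j] = k;
                }
            }
        }
    }
}

int main(void) {
    int p[] = {30, 35, 15, 5, 10, 20, 25};   /* sample dimensions (assumed) */
    int n = sizeof p / sizeof p[0] - 1;
    long m[MAXN + 1][MAXN + 1];
    int s[MAXN + 1][MAXN + 1];
    matrix_chain_order(p, n, m, s);
    printf("minimum scalar multiplications: %ld\n", m[1][n]);
    return 0;
}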
COMPLEXITY
The complexity of square matrix multiplication, if carried out naively, is
O(n3), but more efficient algorithms do exist. Strassen's algorithm,
devised by Volker Strassen in 1969 and often referred to as "fast matrix
multiplication", is based on a clever way of multiplying two 2 × 2
matrices which requires only 7 multiplications (instead of the usual 8), at
the expense of several additional addition and subtraction operations.
Applying this trick recursively gives an algorithm with a multiplicative
cost of O(n^log₂ 7) ≈ O(n^2.807).
Strassen's algorithm is awkward to implement, compared to the naive
algorithm, and it lacks numerical stability. Nevertheless, it is beginning to
appear in libraries such as BLAS, where it is computationally interesting
for matrices with dimensions n > 100[2], and is very useful for large
matrices over exact domains such as finite fields, where numerical
stability is not an issue. The computational complexity for multiplying
rectangular matrices (one m×p-matrix with one p×n-matrix) is O(mnp).
The O(n^k) algorithm with the lowest currently known exponent k is the
Coppersmith–Winograd algorithm. It was presented by Don Coppersmith
and Shmuel Winograd in 1990 and has an asymptotic complexity of O(n^2.376).
It is similar to Strassen's algorithm: a clever way is devised for
multiplying two k × k matrices with fewer than k³ multiplications, and
this technique is applied recursively. It improves on the constant factor in
Strassen's algorithm, reducing it to 4.537. However, the constant term
implied in the O(n^2.376) result is so large that the Coppersmith–Winograd
algorithm is only worthwhile for matrices that are too large to handle on
present-day computers.
Breadth-first search
Breadth-first search is one of the simplest algorithms for searching a
graph and the archetype for many important graph algorithms.
Given a graph G = (V, E) and a distinguished source vertex s, breadth-first
search systematically explores the edges of G to "discover" every
vertex that is reachable from s. It computes the distance (fewest number
of edges) from s to all such reachable vertices. It also produces a "breadth-first
tree" with root s that contains all such reachable vertices. For any vertex v
reachable from s, the path in the breadth-first tree from s to v corresponds
to a "shortest path" from s to v in G, that is, a path containing the fewest
number of edges. The algorithm works on both directed and undirected
graphs.
Breadth-first search is so named because it expands the frontier between
discovered and undiscovered vertices uniformly across the breadth of the
frontier. That is, the algorithm discovers all vertices at distance k from s
before discovering any vertices at distance k + 1.
ALGORITHM
BFS(G, s)
1  for each vertex u ∈ V[G] − {s}
2      do color[u] ← WHITE
3          d[u] ← ∞
4          π[u] ← NIL
5  color[s] ← GRAY
6  d[s] ← 0
7  π[s] ← NIL
8  Q ← Ø
9  ENQUEUE(Q, s)
10 while Q ≠ Ø
11     do u ← DEQUEUE(Q)
12         for each v ∈ Adj[u]
13             do if color[v] = WHITE
14                 then color[v] ← GRAY
15                     d[v] ← d[u] + 1
16                     π[v] ← u
17                     ENQUEUE(Q, v)
18         color[u] ← BLACK

PRINT-PATH(G, s, v)
1 if v = s
2    then print s
3 else if π[v] = NIL
4    then print "no path from" s "to" v "exists"
5    else PRINT-PATH(G, s, π[v])
6        print v
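A minimal C sketch of BFS using an adjacency matrix and an array-based queue is given below; the graph built in main is an assumed example.

#include <stdio.h>

#define MAXV 100   /* assumed maximum number of vertices */

int adj[MAXV][MAXV];   /* adjacency matrix */
int n;                 /* number of vertices */

/* Breadth-first search from source s: fills dist[] with edge counts
   and parent[] with the breadth-first tree (-1 means root/unreachable). */
void bfs(int s, int dist[], int parent[]) {
    int color[MAXV];                 /* 0 = white, 1 = gray, 2 = black */
    int queue[MAXV], head = 0, tail = 0;
    for (int u = 0; u < n; u++) {
        color[u] = 0;
        dist[u] = -1;
        parent[u] = -1;
    }
    color[s] = 1;
    dist[s] = 0;
    queue[tail++] = s;
    while (head < tail) {
        int u = queue[head++];
        for (int v = 0; v < n; v++)
            if (adj[u][v] && color[v] == 0) {
                color[v] = 1;
                dist[v] = dist[u] + 1;
                parent[v] = u;
                queue[tail++] = v;
            }
        color[u] = 2;
    }
}

int main(void) {
    n = 5;                                   /* sample undirected graph (assumed) */
    int edges[][2] = {{0,1},{0,2},{1,3},{2,3},{3,4}};
    for (int i = 0; i < 5; i++) {
        adj[edges[i][0]][edges[i][1]] = 1;
        adj[edges[i][1]][edges[i][0]] = 1;
    }
    int dist[MAXV], parent[MAXV];
    bfs(0, dist, parent);
    for (int v = 0; v < n; v++)
        printf("vertex %d: distance %d\n", v, dist[v]);
    return 0;
}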
Complexity
Space complexity
Since all of the nodes of a level must be saved until their child nodes in
the next level have been generated, the space complexity is proportional
to the number of nodes at the deepest level. Given a branching factor b
and graph depth d the asymptotic space complexity is the number of
nodes at the deepest level, O(b^d). When the number of vertices and edges
in the graph are known ahead of time, the space complexity can also be
expressed as O( | E | + | V | ) where | E | is the cardinality of the set of
edges (the number of edges), and | V | is the cardinality of the set of
vertices. In the worst case the graph has a depth of 1 and all vertices must
be stored. Since it is exponential in the depth of the graph, breadth-first
search is often impractical for large problems on systems with bounded
space.
Time complexity
Since in the worst case breadth-first search has to consider all paths to all
possible nodes, the time complexity of breadth-first search is
1 + b + b² + ⋯ + b^d, which is O(b^d). The time complexity can also be
expressed as O(|E| + |V|), since every vertex and every edge will be
explored in the worst case.
Completeness
Breadth-first search is complete. This means that if there is a solution
breadth-first search will find it regardless of the kind of graph. However,
if the graph is infinite and there is no solution breadth-first search will
diverge.
Depth-first search
The strategy followed by depth-first search is, as its name implies, to
search "deeper" in the graph whenever possible. In depth-first search,
edges are explored out of the most recently discovered vertex v that still
has unexplored edges leaving it. When all of v's edges have been
explored, the search "backtracks" to explore edges leaving the vertex
from which v was discovered. This process continues until we have
discovered all the vertices that are reachable from the original source
vertex. If any undiscovered vertices remain, then one of them is selected
as a new source and the search is repeated from that source. This entire
process is repeated until all vertices are discovered.
The predecessor subgraph of a depth-first search forms a depth-first
forest composed of several depth-first trees. The edges in Eπ are called
tree edges.
This technique guarantees that each vertex ends up in exactly one depth-first tree, so that these trees are disjoint.
Besides creating a depth-first forest, depth-first search also timestamps
each vertex. Each vertex v has two timestamps: the first timestamp d[v]
records when v is first discovered (and grayed), and the second timestamp
f[v] records when the search finishes examining v's adjacency list (and
blackens v).
DFS(G)
1 for each vertex u ∈ V[G]
2     do color[u] ← WHITE
3         π[u] ← NIL
4 time ← 0
5 for each vertex u ∈ V[G]
6     do if color[u] = WHITE
7         then DFS-VISIT(u)

DFS-VISIT(u)
1 color[u] ← GRAY              ► White vertex u has just been discovered
2 time ← time + 1
3 d[u] ← time
4 for each v ∈ Adj[u]          ► Explore edge (u, v)
5     do if color[v] = WHITE
6         then π[v] ← u
7             DFS-VISIT(v)
8 color[u] ← BLACK             ► Blacken u; it is finished
9 f[u] ← time ← time + 1
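A minimal C sketch of DFS with discovery and finishing timestamps, using an adjacency matrix, is given below; the graph in main is an assumed example.

#include <stdio.h>

#define MAXV 100   /* assumed maximum number of vertices */

int adj[MAXV][MAXV];                 /* adjacency matrix */
int n;                               /* number of vertices */
int color[MAXV];                     /* 0 = white, 1 = gray, 2 = black */
int d[MAXV], f[MAXV];                /* discovery / finishing timestamps */
int timer;

void dfs_visit(int u) {
    color[u] = 1;                    /* u has just been discovered */
    d[u] = ++timer;
    for (int v = 0; v < n; v++)      /* explore each edge (u, v) */
        if (adj[u][v] && color[v] == 0)
            dfs_visit(v);
    color[u] = 2;                    /* finished exploring u */
    f[u] = ++timer;
}

void dfs(void) {
    for (int u = 0; u < n; u++)
        color[u] = 0;
    timer = 0;
    for (int u = 0; u < n; u++)
        if (color[u] == 0)
            dfs_visit(u);
}

int main(void) {
    n = 4;                           /* sample directed graph (assumed) */
    adj[0][1] = adj[0][2] = adj[1][3] = adj[2][3] = 1;
    dfs();
    for (int u = 0; u < n; u++)
        printf("vertex %d: d = %d, f = %d\n", u, d[u], f[u]);
    return 0;
}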
COMPLEXITY
Assume that the graph is connected. Depth-first search visits every vertex in
the graph and checks every edge exactly once. Therefore, DFS complexity is
O(V + E). As was mentioned before, if an adjacency matrix is used for
the graph representation, then all edges adjacent to a vertex can't be found
efficiently, which results in O(V²) complexity.
LONGEST COMMON SUBSEQUENCE
In the longest common subsequence problem we are given two sequences X = <x1,
x2, …, xm> and Y = <y1, y2, …, yn> and wish to find a maximum-length
common subsequence of X and Y.
Procedure LCS-LENGTH takes two sequences X = <x1, x2, …, xm> and Y =
<y1, y2, …, yn> as inputs. It stores the c[i,j] values in a table c[0..m, 0..n]
whose entries are computed in row-major order. It also maintains the table
b[1..m, 1..n] to simplify construction of an optimal solution. Intuitively,
b[i,j] points to the table entry corresponding to the optimal subproblem
solution chosen when computing c[i,j]. The procedure returns the b and c
tables; c[m,n] contains the length of an LCS of X and Y.
function LCSLength(X[1..m], Y[1..n])
    C = array(0..m, 0..n)
    for i := 0 to m
        C[i,0] = 0
    for j := 0 to n
        C[0,j] = 0
    for i := 1 to m
        for j := 1 to n
            if X[i] = Y[j]
                C[i,j] := C[i-1,j-1] + 1
            else
                C[i,j] := max(C[i,j-1], C[i-1,j])
    return C[m,n]
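A minimal C sketch of LCS-LENGTH (computing only the length, without the b table) is given below; the two strings in main are assumed sample data.

#include <stdio.h>
#include <string.h>

#define MAXLEN 100   /* assumed maximum sequence length */

/* Returns the length of a longest common subsequence of X and Y,
   filling the DP table C in row-major order. */
int lcs_length(const char *X, const char *Y) {
    int m = (int)strlen(X), n = (int)strlen(Y);
    int C[MAXLEN + 1][MAXLEN + 1];
    for (int i = 0; i <= m; i++) C[i][0] = 0;
    for (int j = 0; j <= n; j++) C[0][j] = 0;
    for (int i = 1; i <= m; i++)
        for (int j = 1; j <= n; j++)
            if (X[i - 1] == Y[j - 1])
                C[i][j] = C[i - 1][j - 1] + 1;
            else
                C[i][j] = (C[i][j - 1] > C[i - 1][j]) ? C[i][j - 1] : C[i - 1][j];
    return C[m][n];
}

int main(void) {
    /* sample strings (assumed); an LCS is "BCBA", so the length is 4 */
    printf("%d\n", lcs_length("ABCBDAB", "BDCABA"));
    return 0;
}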
Complexity
For the general case of an arbitrary number of input sequences, the
problem is NP-hard. When the number of sequences is constant, the
problem is solvable in polynomial time by dynamic programming.
Assume you have N sequences of lengths n1, …, nN. A naive search would
test each of the 2^(n1) subsequences of the first sequence to determine
whether they are also subsequences of the remaining sequences; each
subsequence may be tested in time linear in the lengths of the remaining
sequences, so the time for this algorithm would be O(2^(n1) · Σ_{i>1} n_i).
An activity-selection problem
A greedy algorithm always makes the choice that looks best at the
moment. That is, it makes a locally optimal choice in the hope that this
choice will lead to a globally optimal solution.
Our first example is the problem of scheduling a resource among several
competing activities.
Suppose we have a set S = {1, 2, . . . , n} of n proposed activities that
wish to use a resource, such as a lecture hall, which can be used by only
one activity at a time. Each activity i has a start time si and a finish time fi,
where si ≤ fi. If selected, activity i takes place during the half-open time
interval [si, fi). Activities i and j are compatible if the intervals [si, fi) and
[sj, fj) do not overlap (i.e., i and j are compatible if si ≥ fj or sj ≥ fi). The
activity-selection problem is to select a maximum-size set of mutually
compatible activities.
We assume that the input activities are in order by increasing finishing
time:
f1 ≤ f2 ≤ . . . ≤ fn.
ALGORITHM
GREEDY-ACTIVITY-SELECTOR(s, f)
1 n ← length[s]
2 A ← {1}
3 j ← 1
4 for i ← 2 to n
5     do if si ≥ fj
6         then A ← A ∪ {i}
7             j ← i
8 return A
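A minimal C sketch of the greedy selector is given below; the start and finish times in main are assumed sample data, already sorted by nondecreasing finish time.

#include <stdio.h>

/* Greedy activity selector: s[] and f[] hold start and finish times,
   already sorted by nondecreasing finish time. Prints the selected activities. */
void greedy_activity_selector(const int s[], const int f[], int n) {
    printf("activity 1\n");          /* activity 1 is always selected */
    int j = 1;                       /* index (1-based) of the last selected activity */
    for (int i = 2; i <= n; i++)
        if (s[i - 1] >= f[j - 1]) {  /* compatible with the last selected one */
            printf("activity %d\n", i);
            j = i;
        }
}

int main(void) {
    /* sample activities (assumed), sorted by finish time */
    int s[] = {1, 3, 0, 5, 8, 5};
    int f[] = {2, 4, 6, 7, 9, 9};
    greedy_activity_selector(s, f, 6);
    return 0;
}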
Knapsack problem
The 0-1 knapsack problem is posed as follows. A thief robbing a store
finds n items; the ith item is worth vi dollars and weighs wi pounds, where
vi and wi are integers. He wants to take as valuable a load as possible, but
he can carry at most W pounds in his knapsack for some integer W. What
items should he take? (This is called the 0-1 knapsack problem because
each item must either be taken or left behind; the thief cannot take a
fractional amount of an item or take an item more than once.)
In the fractional knapsack problem, the setup is the same, but the thief
can take fractions of items, rather than having to make a binary (0-1)
choice for each item. You can think of an item in the 0-1 knapsack
problem as being like a gold ingot, while an item in the fractional
knapsack problem is more like gold dust.
Both knapsack problems exhibit the optimal-substructure property.
For the 0-1 problem, consider the most valuable load that weighs at most
W pounds. If we remove item j from this load, the remaining load must be
the most valuable load weighing at most W - wj that the thief can take
from the n - 1 original items excluding j. For the comparable fractional
problem, consider that if we remove a weight w of one item j from the
optimal load, the remaining load must be the most valuable load weighing
at most W - w that the thief can take from the n - 1 original items plus
wj - w pounds of item j.
To solve the fractional problem, we first compute the value per pound vi/wi
for each item; the thief then takes as much as possible of the item with the
greatest value per pound, and continues with the next-greatest value per
pound until the knapsack is full. Thus, by sorting the items by value per
pound, the greedy algorithm runs in O(n lg n) time.
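A minimal C sketch of this greedy strategy for the fractional problem is given below; the item values, weights and capacity in main are assumed sample data.

#include <stdio.h>

/* Greedy fractional knapsack: items are taken in decreasing order of
   value per pound; the last item taken may be fractional. */
double fractional_knapsack(double v[], double w[], int n, double W) {
    /* simple selection sort by value/weight ratio (descending) */
    for (int i = 0; i < n - 1; i++)
        for (int j = i + 1; j < n; j++)
            if (v[j] / w[j] > v[i] / w[i]) {
                double t;
                t = v[i]; v[i] = v[j]; v[j] = t;
                t = w[i]; w[i] = w[j]; w[j] = t;
            }
    double value = 0.0;
    for (int i = 0; i < n && W > 0; i++) {
        double take = (w[i] <= W) ? w[i] : W;   /* whole item, or a fraction */
        value += v[i] * (take / w[i]);
        W -= take;
    }
    return value;
}

int main(void) {
    /* sample items (assumed): values, weights, and knapsack capacity 50 */
    double v[] = {60, 100, 120};
    double w[] = {10, 20, 30};
    printf("maximum value = %.1f\n", fractional_knapsack(v, w, 3, 50));
    return 0;
}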
OPTIMAL BINARY SEARCH TREE
The optimal binary search tree problem can be stated as follows: a binary
search tree is a tree where the key values are stored in the internal
nodes, the external nodes (leaves) are null nodes, and the keys are ordered
lexicographically, i.e. for each internal node all the keys in the left subtree
are less than the keys in the node, and all the keys in the right subtree are
greater.
ALGORITHM
The following algorithm fills in the tables e, w, and root in a manner that
corresponds to solving the optimal binary search tree problem.
Procedure OPTIMAL-BST(p, q, n)
1. for i ← 1 to n + 1
2.     do e[i, i−1] ← q_(i−1)
3.         w[i, i−1] ← q_(i−1)
4. for l ← 1 to n
5.     do for i ← 1 to n − l + 1
6.         do j ← i + l − 1
7.             e[i,j] ← ∞
8.             w[i,j] ← w[i, j−1] + p_j + q_j
9.             for r ← i to j
10.                do t ← e[i, r−1] + e[r+1, j] + w[i,j]
11.                    if t < e[i,j]
12.                        then e[i,j] ← t
13.                            root[i,j] ← r
14. return e and root
Kruskal's Algorithm
Kruskal's algorithm is based directly on the generic minimum-spanning-tree algorithm.
It finds a safe edge to add to the growing forest by finding, of all the
edges that connect any two trees in the forest, an edge (u, v) of least
weight. Let C1 and C2 denote the two trees that are connected by (u, v).
Since (u, v) must be a light edge connecting C1 to some other tree, it follows
that (u, v) is a safe edge for C1. Kruskal's algorithm is a greedy algorithm,
because at each step it adds to the forest an edge of least possible weight.
It uses a disjoint-set data structure to maintain several disjoint sets of
elements. Each set contains the vertices in a tree of the current forest.
ALGORITHM
MST-KRUSKAL (G, w)
1 A ← Ø
2 for each vertex v ∈ V[G]
3     do MAKE-SET(v)
4 sort the edges of E by nondecreasing weight w
5 for each edge (u, v) ∈ E, in order by nondecreasing weight
6     do if FIND-SET(u) ≠ FIND-SET(v)
7         then A ← A ∪ {(u, v)}
8             UNION(u, v)
9 return A
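A minimal C sketch of Kruskal's algorithm using an edge list and a simple disjoint-set structure is given below; the graph in main is an assumed example.

#include <stdio.h>
#include <stdlib.h>

#define MAXV 100   /* assumed maximum number of vertices */

typedef struct { int u, v, w; } Edge;

int parent[MAXV];

/* Disjoint-set FIND with path halving; UNION is done by linking roots. */
int find_set(int x) {
    while (parent[x] != x) {
        parent[x] = parent[parent[x]];
        x = parent[x];
    }
    return x;
}

int cmp_edge(const void *a, const void *b) {
    return ((const Edge *)a)->w - ((const Edge *)b)->w;
}

/* Kruskal's algorithm: returns the total weight of a minimum spanning tree. */
int kruskal(Edge edges[], int m, int n) {
    for (int v = 0; v < n; v++) parent[v] = v;           /* MAKE-SET */
    qsort(edges, (size_t)m, sizeof(Edge), cmp_edge);     /* sort by weight */
    int total = 0;
    for (int i = 0; i < m; i++) {
        int ru = find_set(edges[i].u), rv = find_set(edges[i].v);
        if (ru != rv) {                                  /* safe edge */
            printf("take edge (%d, %d) weight %d\n", edges[i].u, edges[i].v, edges[i].w);
            total += edges[i].w;
            parent[ru] = rv;                             /* UNION */
        }
    }
    return total;
}

int main(void) {
    /* sample undirected graph (assumed): 5 vertices, 6 edges */
    Edge edges[] = {{0,1,4},{0,2,3},{1,2,1},{1,3,2},{2,3,4},{3,4,2}};
    printf("MST weight = %d\n", kruskal(edges, 6, 5));
    return 0;
}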
Prim's algorithm
Prim's algorithm is a special case of the generic minimum-spanning-tree
algorithm. Prim's algorithm operates much like Dijkstra's algorithm for
finding shortest paths in a graph. Prim's algorithm has the property that
the edges in the set A always form a single tree.
The tree starts from an arbitrary root vertex r and grows until the tree
spans all the vertices in V. At each step, a light edge connecting a vertex
in A to a vertex in V - A is added to the tree. This rule adds only edges that
are safe for A; therefore, when the algorithm terminates, the edges in A
form a minimum spanning tree. This strategy is "greedy" since the tree is
augmented at each step with an edge that contributes the minimum
amount possible to the tree's weight.
ALGORITHM
MST-PRIM(G, w, r)
1. Q ← V[G]
2. for each u ∈ Q
3.     do key[u] ← ∞
4. key[r] ← 0
5. π[r] ← NIL
6. while Q ≠ Ø
7.     do u ← EXTRACT-MIN(Q)
8.         for each v ∈ Adj[u]
9.             do if v ∈ Q and w(u, v) < key[v]
10.                then π[v] ← u
11.                    key[v] ← w(u, v)
Dijkstra's algorithm
Dijkstra's algorithm solves the single-source shortest-paths problem on a
weighted, directed graph G = (V, E) for the case in which all edge weights
are nonnegative. In this section, therefore, we assume that w(u, v) ≥ 0 for
each edge (u, v) ∈ E.
Dijkstra's algorithm maintains a set S of vertices whose final shortest-path
weights from the source s have already been determined. The algorithm
repeatedly selects the vertex u ∈ V − S with the minimum shortest-path
estimate, inserts u into S, and relaxes all edges leaving u. In the following
implementation, we maintain a priority queue Q that contains all the
vertices in V − S, keyed by their d values. The implementation assumes
that graph G is represented by adjacency lists.
ALGORITHM
DIJKSTRA(G, w, s)
1 INITIALIZE-SINGLE-SOURCE(G, s)
2 S ← Ø
3 Q ← V[G]
4 while Q ≠ Ø
5     do u ← EXTRACT-MIN(Q)
6         S ← S ∪ {u}
7         for each vertex v ∈ Adj[u]
8             do RELAX(u, v, w)
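A minimal C sketch of Dijkstra's algorithm with an adjacency matrix and a linear-scan EXTRACT-MIN is given below; the graph in main is an assumed example with nonnegative weights.

#include <stdio.h>

#define MAXV 100
#define INF  1000000000

int w[MAXV][MAXV];   /* adjacency matrix of nonnegative edge weights; 0 means no edge */
int n;

/* Dijkstra's algorithm from source s: d[v] receives the shortest-path weight. */
void dijkstra(int s, int d[]) {
    int done[MAXV];                          /* done[v] = 1 once v is in S */
    for (int v = 0; v < n; v++) { d[v] = INF; done[v] = 0; }
    d[s] = 0;
    for (int iter = 0; iter < n; iter++) {
        int u = -1;
        for (int v = 0; v < n; v++)          /* EXTRACT-MIN over vertices not in S */
            if (!done[v] && (u == -1 || d[v] < d[u]))
                u = v;
        if (d[u] == INF) break;              /* remaining vertices are unreachable */
        done[u] = 1;                         /* add u to S */
        for (int v = 0; v < n; v++)          /* RELAX every edge leaving u */
            if (w[u][v] && d[u] + w[u][v] < d[v])
                d[v] = d[u] + w[u][v];
    }
}

int main(void) {
    n = 5;   /* sample directed graph (assumed) */
    w[0][1] = 10; w[0][3] = 5; w[1][2] = 1; w[3][1] = 3;
    w[3][2] = 9;  w[3][4] = 2; w[4][2] = 6;
    int d[MAXV];
    dijkstra(0, d);
    for (int v = 0; v < n; v++)
        printf("d[%d] = %d\n", v, d[v]);
    return 0;
}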
THE NAIVE STRING-MATCHING ALGORITHM
The naive algorithm finds all valid shifts using a loop that checks the
condition P[1 . . m] = T[s + 1 . . s + m] for each of the n - m + 1 possible
values of s.
The naive string-matching procedure can be interpreted graphically as
sliding a "template" containing the pattern over the text, noting for which
shifts all of the characters on the template equal the corresponding
characters in the text.
The worst-case running time is thus Θ((n - m + 1)m), which is Θ(n²) if m
= n/2.
ALGORITHM
NAIVE-STRING-MATCHER(T, P)
1 n ← length[T]
2 m ← length[P]
3 for s ← 0 to n − m
4     do if P[1..m] = T[s+1..s+m]
5         then print "Pattern occurs with shift" s
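A minimal C sketch of the naive matcher is given below; the text and pattern in main are assumed sample data.

#include <stdio.h>
#include <string.h>

/* Naive string matcher: slide the pattern over the text and check
   character-by-character at each of the n - m + 1 possible shifts. */
void naive_string_matcher(const char *T, const char *P) {
    int n = (int)strlen(T), m = (int)strlen(P);
    for (int s = 0; s <= n - m; s++)
        if (strncmp(T + s, P, (size_t)m) == 0)
            printf("Pattern occurs with shift %d\n", s);
}

int main(void) {
    naive_string_matcher("acaabcaab", "aab");   /* sample text and pattern (assumed) */
    return 0;
}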
THE RABIN-KARP ALGORITHM
Rabin and Karp have proposed a string-matching algorithm that performs
well in practice and that also generalizes to other algorithms for
related problems, such as two-dimensional pattern matching. The
worst-case running time of the Rabin-Karp algorithm is O((n - m +
1)m), but it has a good average-case running time.
This algorithm makes use of elementary number-theoretic notions such as
the equivalence of two numbers modulo a third number.
ALGORITHM
RABIN-KARP-MATCHER(T, P, d, q)
1 n ← length[T]
2 m ← length[P]
3 h ← d^(m−1) mod q
4 p ← 0
5 t0 ← 0
6 for i ← 1 to m
7     do p ← (d·p + P[i]) mod q
8         t0 ← (d·t0 + T[i]) mod q
9 for s ← 0 to n − m
10    do if p = ts
11        then if P[1..m] = T[s+1..s+m]
12            then print "Pattern occurs with shift" s
13        if s < n − m
14            then ts+1 ← (d(ts − T[s+1]·h) + T[s+m+1]) mod q
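A minimal C sketch of the matcher is given below; the radix d = 256, the modulus q = 101, and the text and pattern in main are assumed sample values.

#include <stdio.h>
#include <string.h>

/* Rabin-Karp matcher with alphabet size d and modulus q.
   Hash hits are verified with an explicit character comparison. */
void rabin_karp_matcher(const char *T, const char *P, int d, int q) {
    int n = (int)strlen(T), m = (int)strlen(P);
    int h = 1;                         /* h = d^(m-1) mod q */
    for (int i = 0; i < m - 1; i++)
        h = (h * d) % q;
    int p = 0, t = 0;
    for (int i = 0; i < m; i++) {      /* preprocessing: hash of P and of T[0..m-1] */
        p = (d * p + P[i]) % q;
        t = (d * t + T[i]) % q;
    }
    for (int s = 0; s <= n - m; s++) {
        if (p == t && strncmp(T + s, P, (size_t)m) == 0)
            printf("Pattern occurs with shift %d\n", s);
        if (s < n - m) {               /* rolling hash for the next shift */
            t = (d * (t - T[s] * h) + T[s + m]) % q;
            if (t < 0) t += q;
        }
    }
}

int main(void) {
    rabin_karp_matcher("3141592653589793", "26", 256, 101);   /* sample input (assumed) */
    return 0;
}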
Floyd-Warshall
We assume that there are no negative-weight cycles.
In the Floyd-Warshall algorithm, we use a different characterization of
the structure of a shortest path than we used in the matrix-multiplication-based
all-pairs algorithms. The algorithm considers the "intermediate"
vertices of a shortest path, where an intermediate vertex of a simple path
p = <v1, v2, . . . , vl> is any vertex of p other than v1 or vl, that is, any
vertex in the set {v2, v3, . . . , vl-1}.
ALGORITHM
FLOYD-WARSHALL(W)
1. n ← rows[W]
2. D(0) ← W
3. for k ← 1 to n
4.     do for i ← 1 to n
5.         do for j ← 1 to n
6.             do dij(k) ← min(dij(k−1), dik(k−1) + dkj(k−1))
7. return D(n)
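A minimal C sketch of the algorithm on a small weight matrix is given below; the 4-vertex graph and the INF "no edge" marker are assumed sample values.

#include <stdio.h>

#define V   4
#define INF 1000000000   /* "no edge" marker */

/* Floyd-Warshall: d[i][j] starts as the weight matrix W; after the k-th
   iteration it holds the shortest i-to-j path using intermediate vertices
   drawn only from {0, ..., k}. */
void floyd_warshall(int d[V][V]) {
    for (int k = 0; k < V; k++)
        for (int i = 0; i < V; i++)
            for (int j = 0; j < V; j++)
                if (d[i][k] != INF && d[k][j] != INF && d[i][k] + d[k][j] < d[i][j])
                    d[i][j] = d[i][k] + d[k][j];
}

int main(void) {
    int d[V][V] = {                  /* sample weight matrix (assumed) */
        {0,   5,   INF, 10 },
        {INF, 0,   3,   INF},
        {INF, INF, 0,   1  },
        {INF, INF, INF, 0  }
    };
    floyd_warshall(d);
    for (int i = 0; i < V; i++) {
        for (int j = 0; j < V; j++) {
            if (d[i][j] == INF) printf("INF ");
            else printf("%3d ", d[i][j]);
        }
        printf("\n");
    }
    return 0;
}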
The Bellman-Ford algorithm
The Bellman-Ford algorithm solves the single-source shortest-paths
problem in the more general case in which edge weights can be negative.
Given a weighted, directed graph G = (V, E) with source s and weight
function w : E → R, the Bellman-Ford algorithm returns a boolean value
indicating whether or not there is a negative-weight cycle that is
reachable from the source. If there is such a cycle, the algorithm indicates
that no solution exists. If there is no such cycle, the algorithm produces
the shortest paths and their weights.
Like Dijkstra's algorithm, the Bellman-Ford algorithm uses the technique
of relaxation, progressively decreasing an estimate d[v] on the weight of a
shortest path from the source s to each vertex v ∈ V until it achieves the
actual shortest-path weight δ(s, v). The algorithm returns TRUE if and
only if the graph contains no negative-weight cycles that are reachable
from the source.
ALGORITHM
BELLMAN-FORD(G, w, s)
1 INITIALIZE-SINGLE-SOURCE(G, s)
2 for i ← 1 to |V[G]| − 1
3     do for each edge (u, v) ∈ E[G]
4         do RELAX(u, v, w)
5 for each edge (u, v) ∈ E[G]
6     do if d[v] > d[u] + w(u, v)
7         then return FALSE
8 return TRUE
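A minimal C sketch over an edge list is given below; the graph in main is an assumed example containing a negative edge weight but no negative-weight cycle.

#include <stdio.h>

#define INF 1000000000

typedef struct { int u, v, w; } Edge;

/* Bellman-Ford from source s over an edge list.
   Returns 1 and fills d[] if there is no negative-weight cycle
   reachable from s, otherwise returns 0. */
int bellman_ford(Edge e[], int m, int n, int s, int d[]) {
    for (int v = 0; v < n; v++) d[v] = INF;     /* INITIALIZE-SINGLE-SOURCE */
    d[s] = 0;
    for (int i = 1; i <= n - 1; i++)            /* |V| - 1 passes of relaxation */
        for (int j = 0; j < m; j++)
            if (d[e[j].u] != INF && d[e[j].u] + e[j].w < d[e[j].v])
                d[e[j].v] = d[e[j].u] + e[j].w;
    for (int j = 0; j < m; j++)                 /* check for a reachable negative cycle */
        if (d[e[j].u] != INF && d[e[j].u] + e[j].w < d[e[j].v])
            return 0;
    return 1;
}

int main(void) {
    /* sample directed graph (assumed): 4 vertices, 6 edges */
    Edge e[] = {{0,1,6},{0,2,7},{1,2,8},{1,3,5},{2,3,-3},{3,1,-2}};
    int d[4];
    if (bellman_ford(e, 6, 4, 0, d))
        for (int v = 0; v < 4; v++) printf("d[%d] = %d\n", v, d[v]);
    else
        printf("negative-weight cycle reachable from the source\n");
    return 0;
}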