Numerical Computation
Lecture 8: Matrix Algorithm Analysis
United International College
Review Presentation
• 10 minutes for review presentation.
Review
• During our last class we covered:
– Gauss-Jordan Method for finding Inverses
Today
• We will cover:
– Operation count for Gaussian Elimination, LU
Factorization
– Accuracy of Matrix Methods
– Readings:
• Pav, section 3.4.1
• Moler, section 2.8
Operation Count for
Gaussian Elimination
• How many floating point operations (+, −, ×, /) are used
by the Gaussian Elimination algorithm?
• Definition: Flop = floating point operation. We will
consider a division to be equivalent to a
multiplication, and a subtraction equivalent to an
addition.
• Thus, 2/3 = 2*(1/3) will be considered a
multiplication.
• 2-3 = 2 + (-3) will be considered an addition.
Operation Count for
Gaussian Elimination
• In Gaussian Elimination we use row operations to
reduce the augmented matrix [A | b], where A is n×n,
to an equivalent upper triangular system [U | c].
Operation Count for
Gaussian Elimination
• Consider the number of flops needed to zero
out the entries below the first pivot a₁₁.
Operation Count for
Gaussian Elimination
• First a multiplier is computed for each row below the
first row. This requires (n−1) divisions, which we count
as multiplies.
m = A(i,k)/A(k,k);
• Then in each of the (n−1) rows below row 1 the algorithm
performs n multiplies and n adds (including the update of
the right-hand side entry).
A(i,j) = A(i,j) - m*A(k,j);
• Thus, there is a total of (n−1) + (n−1)·2n flops for
this step of Gaussian Elimination.
• For k = 1, the algorithm uses 2n² − n − 1 flops.
(A counting sketch follows.)
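To make the count concrete, here is a minimal MATLAB sketch, my addition rather than the lecture's code, that performs the k = 1 elimination step on a random augmented matrix and tallies flops as it goes; the names n, Ab, and flops are illustrative.
n = 5;
Ab = [rand(n) rand(n,1)];     % augmented matrix [A | b]
flops = 0;
k = 1;                        % first pivot
for i = k+1:n
    m = Ab(i,k)/Ab(k,k);      % one division per row below the pivot
    flops = flops + 1;
    for j = k+1:n+1           % update the rest of the row, including b
        Ab(i,j) = Ab(i,j) - m*Ab(k,j);   % one multiply + one add
        flops = flops + 2;
    end
    Ab(i,k) = 0;              % known to be zero; not counted
end
fprintf('counted %d flops; 2n^2 - n - 1 = %d\n', flops, 2*n^2 - n - 1);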
Operation Count for
Gaussian Elimination
• For k = 2, we zero out the column below a₂₂.
• There are (n−2) rows below this pivot, so this
step takes 2(n−1)² − (n−1) − 1 flops.
• For k = 3, we would have 2(n−2)² − (n−2) − 1 flops,
and so on.
• To complete Gaussian Elimination, it will take
Iₙ flops, where
Iₙ = (2n² − n − 1) + (2(n−1)² − (n−1) − 1) + … + (2·2² − 2 − 1) = Σⱼ₌₁ⁿ (2j² − j − 1).
(The j = 1 term is 2 − 1 − 1 = 0, so the sum may start at j = 1.)
Operation Count for
Gaussian Elimination
• Now, Σⱼ₌₁ⁿ j² = n(n+1)(2n+1)/6 and Σⱼ₌₁ⁿ j = n(n+1)/2.
• So, Iₙ = (2/6)n(n+1)(2n+1) − (1/2)n(n+1) − n
= [(1/3)(2n+1) − (1/2)]·n(n+1) − n
= [(2/3)n − (1/6)]·n(n+1) − n
= (2/3)n³ + (lower power terms in n)
• Thus, the number of flops for Gaussian
Elimination is O(n³). (A numerical check follows.)
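A quick numerical check, my addition, that the exact count grows like (2/3)n³; the name total is illustrative.
n = 100;
total = 0;
for k = 1:n-1
    rows = n - k;                          % rows below the k-th pivot
    total = total + rows*(1 + 2*(n-k+1));  % 1 divide + 2(n-k+1) flops per row
end
fprintf('counted %d; (2/3)n^3 = %.0f\n', total, 2*n^3/3);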
Operation Count for
LU Factorization
• In the algorithm for LU Factorization, we only
do the calculations described above to
compute L and U. This is because we save the
multipliers (m) and store them to create L.
• So, the number of flops to create L and U is
O(n³). (A short sketch follows.)
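A minimal LU sketch, my addition, showing that the factorization is just Gaussian Elimination with the multipliers saved into L; the shift by n·eye(n) is an assumption so that no pivoting is needed in this small example.
n = 4;
A = rand(n) + n*eye(n);       % shifted so the pivots are safe here
L = eye(n); U = A;
for k = 1:n-1
    for i = k+1:n
        L(i,k) = U(i,k)/U(k,k);            % save the multiplier
        U(i,:) = U(i,:) - L(i,k)*U(k,:);   % the usual row operation
    end
end
disp(norm(A - L*U))           % should be near machine precision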
Operation Count for
using LU to solve Ax = b
• Once we have factored A into LU, we do the
following to solve Ax = b:
• Solve the two equations:
• Lz = b
• Ux = z
• How many flops are needed to do this?
Operation Count for
using LU to solve Ax = b
• To solve Lz = b we use forward substitution. (Here L is
unit lower triangular: its diagonal entries are 1 and its
subdiagonal entries are the saved multipliers.)
z₁ = b₁, so we use 0 flops to find z₁.
z₂ = b₂ − l₂₁·z₁, so we use 2 flops to find z₂.
z₃ = b₃ − l₃₁·z₁ − l₃₂·z₂, so we use 4 flops to find z₃,
and so on.
Operation Count for
using LU to solve Ax = b
• To solve Lz = b we use forward substitution.
• In total: 0 + 2 + 4 + … + 2(n−1) = 2·(1 + 2 + … + (n−1))
= 2·(1/2)·(n−1)·n = n² − n.
• So, the number of flops for forward substitution is
O(n²). (A sketch follows.)
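A forward-substitution sketch, my addition, for Lz = b with L unit lower triangular; the example L and b are arbitrary.
n = 4;
L = eye(n) + tril(rand(n), -1);   % unit lower triangular example
b = rand(n,1);
z = zeros(n,1);
for i = 1:n
    z(i) = b(i) - L(i,1:i-1)*z(1:i-1);   % 2(i-1) flops for row i
end
disp(norm(L*z - b))               % should be near machine precision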
Operation Count for
using LU to solve Ax = b
• To solve Ux = z we use backward substitution.
• A similar analysis to that of forward
substitution shows that the number of flops
for backward substitution is also O(n²). (A sketch follows.)
• Thus, once A has been factored, the number of
flops for using LU to solve Ax = b is O(n²).
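A backward-substitution sketch, my addition, for Ux = z; unlike forward substitution, each row also needs one division by the diagonal entry. The example U and z are arbitrary.
n = 4;
U = triu(rand(n)) + n*eye(n);     % upper triangular with a safe diagonal
z = rand(n,1);
x = zeros(n,1);
for i = n:-1:1
    x(i) = (z(i) - U(i,i+1:n)*x(i+1:n)) / U(i,i);  % 2(n-i)+1 flops
end
disp(norm(U*x - z))               % should be near machine precision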
Summary of Two Methods
• Gaussian Elimination requires O(n³) flops to
solve the linear system Ax = b.
• To factor A = LU requires O(n³) flops.
• Once we have factored A = LU, then using L
and U to solve Ax = b requires O(n²) flops.
• Suppose we have to solve Ax = b for a given
matrix A, but for many different b vectors.
What is the most efficient way to do this?
(The sketch below shows the standard approach.)
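A sketch of the standard strategy, my addition, using MATLAB's built-in lu: pay the O(n³) factorization cost once, then each new right-hand side costs only O(n²) via the two triangular solves.
n = 5;
A = rand(n) + n*eye(n);
[L, U, P] = lu(A);            % one O(n^3) factorization
for t = 1:3                   % many right-hand sides
    b = rand(n,1);
    z = L \ (P*b);            % forward substitution, O(n^2)
    x = U \ z;                % backward substitution, O(n^2)
    disp(norm(A*x - b))       % each residual should be tiny
end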
Accuracy of Matrix Methods
• Our algorithms are used to find the solution x
to the system Ax=b.
• But, how close to the exact solution is the
computed solution?
• Let x* be the computed solution and x be the
exact solution.
Accuracy of Matrix Methods
• Definition: The error e is defined to be
e = x - x*
• Definition: The residual r is defined to be
r = b – Ax*
• Note: These two quantities may be quite
different! (A small sketch follows, then a worked example.)
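A sketch, my addition, computing e and r for the example worked on the next slides; the value of x* used here is the 3-digit computed solution derived there.
A = [0.780 0.563; 0.913 0.659];
b = [0.217; 0.254];
x     = [ 1; -1];             % exact solution of this system
xstar = [-0.443; 1.000];      % computed solution (see following slides)
e = x - xstar                 % error: (1.443, -2.000), large
r = b - A*xstar               % residual: about (-4.6e-4, -5.4e-4), tiny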
Accuracy of Matrix Methods
• Consider the system:
[ 0.780  0.563 ] [ x₁ ]   [ 0.217 ]
[ 0.913  0.659 ] [ x₂ ] = [ 0.254 ]
• Using Gaussian Elimination with partial
pivoting, we swap rows 1 and 2:
[ 0.913  0.659 ] [ x₁ ]   [ 0.254 ]
[ 0.780  0.563 ] [ x₂ ] = [ 0.217 ]
Accuracy of Matrix Methods
• Suppose we had a computer with just 3-digit
accuracy, so each computed result is chopped to
3 significant digits. The first multiplier would be
m = 0.780/0.913 ≈ 0.854.
• Subtracting 0.854 × row 1 from row 2, we get
[ 0.913  0.659 | 0.254 ]
[ 0.000  0.001 | 0.001 ]
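A simulation of this step, my addition; chop3 is a hypothetical helper (not a built-in) that chops a positive value to 3 significant digits.
chop3 = @(v) fix(v*10^(2 - floor(log10(v)))) / 10^(2 - floor(log10(v)));
m   = chop3(0.780/0.913)       % 0.854
u22 = 0.563 - chop3(m*0.659)   % 0.563 - 0.562 = 0.001
b2  = 0.217 - chop3(m*0.254)   % 0.217 - 0.216 = 0.001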
Accuracy of Matrix Methods
• Solving this using back substitution gives
x₂* = 0.001/0.001 = 1.000, and then
x₁* = (0.254 − 0.659·1.000)/0.913 ≈ −0.443.
• So, the computed solution is x* = (−0.443, 1.000).
Accuracy of Matrix Methods
• The exact solution is x = (1.000, −1.000).
• Thus, the error is
e = x − x* = (1.000 − (−0.443), −1.000 − 1.000) = (1.443, −2.000)
• The error is bigger than the solution!!
Accuracy of Matrix Methods
• The residual is
r = b − Ax* = (0.217 − 0.21746, 0.254 − 0.25454) ≈ (−0.00046, −0.00054)
• The residual is very small, but the error is very
large!
Accuracy of Matrix Methods
• Theorem (sort of): Gaussian elimination with
partial pivoting is guaranteed to produce small
residuals.
• Why is the error in our example so large?
Accuracy of Matrix Methods
• If we did Gaussian Elimination with much higher
accuracy (more than 3 digits), we would see that the
row reduction produces approximately
[ 0.913  0.659      | 0.254     ]
[ 0.000  −0.0000011 | 0.0000011 ]
• This matrix is very close to being singular (why?)
• The relationship between the size of the residual and
the size of the error is determined in part by a
quantity known as the condition number of the
matrix, which is the subject of our next lecture.
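A quick check, my addition, of how close this matrix is to singular; cond(A) previews the condition number discussed next lecture.
A = [0.780 0.563; 0.913 0.659];
disp(det(A))                  % 1.0e-06: nearly singular
disp(cond(A))                 % about 2.2e+06: badly ill-conditioned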