Numerical Methods I: Numerical linear algebra
Georg Stadler, Jonathan Goodman
Courant Institute, NYU
stadler@cims.nyu.edu
Sep. 10/17/24, 2015
Sources/reading
I assume familiarity with basic linear algebra (vectors, matrices, norms, matrix norms). See Section 1 of Quarteroni/Sacco/Saleri.
Main reference:
Section 3 in Quarteroni/Sacco/Saleri and Section 1 in Deuflhard/Hohmann.
Solving linear systems
We study the solution of linear systems of the form
\[
Ax = b
\]
with $A \in \mathbb{R}^{n \times n}$ and $x, b \in \mathbb{R}^n$. We assume that this system has a unique solution, i.e., that $A$ is invertible.
Solving linear systems is needed in many applications. Often, we have to solve
- large systems (up to millions of unknowns, and more),
- as fast as possible, and
- accurately and reliably.
There exist explicit formulas for solving linear systems, but they are extremely expensive (e.g., Cramer's rule requires computing determinants).
Kinds of linear systems
Solvers take advantage of special properties of the matrix.
Sparse/dense matrices:
- Dense: the best way to compute $Ax$ is direct summation.
- Fast algorithms for computing $Ax$: FFT, FMM, ...
- Most $a_{ij} = 0$: store only the non-zeros and avoid fill-in (see the sketch after this list).
- Structured/unstructured: is the sparsity pattern easy to describe without storing it explicitly?
Symmetry, etc.:
- Special factorizations for symmetric or skew-symmetric matrices.
- Special factorizations for positive definite matrices.
- Diagonally dominant matrices don't need pivoting.
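To make the sparse-storage idea concrete, here is a minimal sketch of compressed sparse row (CSR) storage together with a matrix-vector product that touches only the stored non-zeros. The names `vals`, `cols`, `rowptr`, and `csr_matvec` are illustrative, not from the lecture:

```python
# CSR storage of   [[4, 0, 1],
#                   [0, 2, 0],
#                   [3, 0, 5]] : keep only non-zeros plus indexing arrays.
vals   = [4.0, 1.0, 2.0, 3.0, 5.0]  # non-zero values, row by row
cols   = [0,   2,   1,   0,   2]    # column index of each stored value
rowptr = [0, 2, 3, 5]               # row i occupies vals[rowptr[i]:rowptr[i+1]]

def csr_matvec(vals, cols, rowptr, x):
    # y = A @ x, visiting only the stored non-zeros of A.
    y = [0.0] * (len(rowptr) - 1)
    for i in range(len(y)):
        for k in range(rowptr[i], rowptr[i + 1]):
            y[i] += vals[k] * x[cols[k]]
    return y

print(csr_matvec(vals, cols, rowptr, [1.0, 1.0, 1.0]))  # [5.0, 2.0, 8.0]
```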
Solving linear systems
Triangular systems:
Forward and backward substitution require $\frac{n(n+1)}{2}$ multiplications/divisions and $\frac{n(n-1)}{2}$ additions.
Overall: $\sim n^2$ floating point operations (flops).
We count flops to estimate the computational time/effort. Besides
floating point operations, computer memory access has a
significant influence on the efficiency of numerical methods (see
experiments in homework #2).
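As a concrete version of the two substitution loops, here is a short numpy sketch. The function names are mine, and no safeguards against zero diagonal entries are included:

```python
import numpy as np

def forward_substitution(L, b):
    # Solve L z = b for lower-triangular L, top row down.
    # Row i costs i multiplications + 1 division: n(n+1)/2 mult/div total.
    n = len(b)
    z = np.zeros(n)
    for i in range(n):
        z[i] = (b[i] - L[i, :i] @ z[:i]) / L[i, i]
    return z

def backward_substitution(U, z):
    # Solve U x = z for upper-triangular U, bottom row up.
    n = len(z)
    x = np.zeros(n)
    for i in reversed(range(n)):
        x[i] = (z[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x
```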
Solving linear systems
Gaussian elimination—LU factorization

\[
\begin{pmatrix}
a_{11} & \cdots & a_{1n} \\
\vdots &        & \vdots \\
a_{n1} & \cdots & a_{nn}
\end{pmatrix}
\begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}
=
\begin{pmatrix} b_1 \\ \vdots \\ b_n \end{pmatrix}
\]
Gaussian elimination: "new row $i$ = row $i$ $-$ $l_{i1} \cdot$ row 1"
\[
\begin{pmatrix}
a_{11} & \cdots  & \cdots & \cdots & a_{1n} \\
0      & a'_{22} & \cdots & \cdots & a'_{2n} \\
\vdots & \vdots  &        &        & \vdots \\
0      & a'_{n2} & \cdots & \cdots & a'_{nn}
\end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}
=
\begin{pmatrix} b_1 \\ b'_2 \\ \vdots \\ b'_n \end{pmatrix}
\]
New system matrix/rhs is: $A^{(2)} = L_1 A$, $b^{(2)} = L_1 b$.
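A minimal numpy sketch of one such elimination step (the function name `eliminate_column` is mine; calling it with k = 0 produces $A^{(2)}$, $b^{(2)}$, assuming the pivot is nonzero):

```python
import numpy as np

def eliminate_column(A, b, k):
    # One Gaussian elimination step: zero out column k below the pivot
    # A[k, k] via "new row i = row i - l_ik * row k" for i > k.
    A, b = A.astype(float).copy(), b.astype(float).copy()
    for i in range(k + 1, A.shape[0]):
        l_ik = A[i, k] / A[k, k]      # multiplier l_{ik}
        A[i, k:] -= l_ik * A[k, k:]   # entries left of column k are already 0
        b[i] -= l_ik * b[k]
    return A, b
```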
Solving linear systems
Gaussian elimination—LU factorization

\[
\begin{pmatrix}
a_{11} & \cdots & a_{1n} \\
\vdots &        & \vdots \\
a_{n1} & \cdots & a_{nn}
\end{pmatrix}
\begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}
=
\begin{pmatrix} b_1 \\ \vdots \\ b_n \end{pmatrix}
\]
Second elimination step: "new row $i$ = row $i$ $-$ $l_{i2} \cdot$ row 2"
\[
\begin{pmatrix}
a_{11} & \cdots  & \cdots   & \cdots & a_{1n} \\
0      & a'_{22} & \cdots   & \cdots & a'_{2n} \\
\vdots & 0       & a''_{33} & \cdots & a''_{3n} \\
\vdots & \vdots  & \vdots   &        & \vdots \\
0      & 0       & a''_{n3} & \cdots & a''_{nn}
\end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ \vdots \\ x_n \end{pmatrix}
=
\begin{pmatrix} b_1 \\ b'_2 \\ b''_3 \\ \vdots \\ b''_n \end{pmatrix}
\]
New system matrix/rhs is: $A^{(3)} = L_2 L_1 A$, $b^{(3)} = L_2 L_1 b$.
Solving linear systems
Gaussian elimination—LU factorization
We obtain:
\[
A^{(n)} = L_{n-1} \cdots L_1 A, \qquad b^{(n)} = L_{n-1} \cdots L_1 b,
\]
with the Frobenius matrices
\[
L_k =
\begin{pmatrix}
1 &        &            &   &        &   \\
  & \ddots &            &   &        &   \\
  &        & 1          &   &        &   \\
  &        & -l_{k+1,k} & 1 &        &   \\
  &        & \vdots     &   & \ddots &   \\
  &        & -l_{n,k}   &   &        & 1
\end{pmatrix}.
\]
Note that the $L_k^{-1}$ are also Frobenius matrices, but with opposite signs for the $l_{j,k}$'s.
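This sign-flip property of $L_k^{-1}$ is easy to verify numerically; a minimal sketch (the size n and index k are arbitrary choices of mine):

```python
import numpy as np

n, k = 5, 1                      # eliminate below the (k, k) pivot
l = np.random.rand(n - k - 1)    # multipliers l_{k+1,k}, ..., l_{n,k}

L_k = np.eye(n)
L_k[k+1:, k] = -l                # Frobenius matrix with -l_{j,k} entries

L_k_inv = np.eye(n)
L_k_inv[k+1:, k] = l             # the same matrix with flipped signs

print(np.allclose(L_k @ L_k_inv, np.eye(n)))  # True
```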
Solving linear systems

Comparing the entries of $A = LU$:
\[
\begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} &        & a_{2n} \\
\vdots &        &        & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn}
\end{pmatrix}
=
\begin{pmatrix}
1      & 0      & \cdots & 0      \\
l_{21} & 1      &        & 0      \\
\vdots &        & \ddots & \vdots \\
l_{n1} & l_{n2} & \cdots & 1
\end{pmatrix}
\begin{pmatrix}
u_{11} & u_{12} & \cdots & u_{1n} \\
0      & u_{22} &        & u_{2n} \\
\vdots &        & \ddots & \vdots \\
0      & 0      & \cdots & u_{nn}
\end{pmatrix}
\]
yields, entry by entry,
\[
u_{11} = a_{11}, \quad
u_{12} = a_{12}, \quad
l_{21} u_{11} = a_{21}, \quad
l_{21} u_{12} + u_{22} = a_{22}, \quad \dots
\]
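Solving these entrywise equations row by row gives the Doolittle form of the LU factorization; a minimal numpy sketch, assuming no pivot $u_{ii}$ vanishes (the function name is mine):

```python
import numpy as np

def lu_nopivot(A):
    # Doolittle LU: L has unit diagonal, U is upper triangular.
    n = A.shape[0]
    L, U = np.eye(n), np.zeros((n, n))
    for i in range(n):
        # row i of U:     u_ij = a_ij - sum_{k<i} l_ik u_kj        (j >= i)
        U[i, i:] = A[i, i:] - L[i, :i] @ U[:i, i:]
        # column i of L:  l_ji = (a_ji - sum_{k<i} l_jk u_ki)/u_ii (j > i)
        L[i+1:, i] = (A[i+1:, i] - L[i+1:, :i] @ U[:i, i]) / U[i, i]
    return L, U

A = np.array([[4., 3.], [6., 3.]])
L, U = lu_nopivot(A)
print(np.allclose(L @ U, A))  # True
```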
Solving linear systems
Gaussian elimination—LU factorization
Algorithm for solving the linear system $Ax = b$ (assuming the diagonal elements do not vanish):
1. Compute the triangular factorization $A = LU$.
2. Solve $Lz = b$ (forward substitution).
3. Solve $Ux = z$ (backward substitution).
Notes:
- The main cost is the LU factorization.
- The factorization can be reused for different right-hand sides $b$ (see the example below).
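For instance, with SciPy's `lu_factor`/`lu_solve` one factors once and reuses the factorization for several right-hand sides; a small usage sketch with an arbitrary example matrix:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[4., 3.], [6., 3.]])
lu, piv = lu_factor(A)               # factor once: O(n^3)

for b in (np.array([1., 0.]), np.array([0., 1.])):
    x = lu_solve((lu, piv), b)       # each triangular solve: only O(n^2)
    print(np.allclose(A @ x, b))     # True, True
```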
Solving linear systems
LU with pivoting
If a diagonal "pivot" element is zero (or very small), one has to exchange rows and/or columns; otherwise the LU factorization fails (or becomes unstable).
Basic idea:
Choose the largest element (in absolute value) in the column being eliminated as the pivot (see the sketch below).
This can be expressed by permutation matrices $P_\pi$, resulting in the LU decomposition (the permutation $\pi$ also affects $L$ and $U$):
\[
P_\pi A = LU.
\]
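A minimal numpy sketch of LU with partial pivoting along these lines (the function name is mine; it returns $P$, $L$, $U$ with $P_\pi A = LU$):

```python
import numpy as np

def lu_partial_pivot(A):
    # At step k, swap the largest entry (in absolute value) of column k
    # on/below the diagonal into the pivot position, then eliminate.
    n = A.shape[0]
    U = A.astype(float).copy()
    L, P = np.eye(n), np.eye(n)
    for k in range(n - 1):
        p = k + np.argmax(np.abs(U[k:, k]))    # best pivot row
        if p != k:                             # swap rows of U, P, and the
            U[[k, p], k:] = U[[p, k], k:]      # already-computed part of L
            L[[k, p], :k] = L[[p, k], :k]
            P[[k, p]] = P[[p, k]]
        L[k+1:, k] = U[k+1:, k] / U[k, k]
        U[k+1:, k:] -= np.outer(L[k+1:, k], U[k, k:])
    return P, L, U

A = np.array([[0., 1., 1.], [2., 1., 0.], [1., 1., 1.]])
P, L, U = lu_partial_pivot(A)
print(np.allclose(P @ A, L @ U))  # True
```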
Solving linear systems
Choleski factorization
A matrix is symmetric positive definite (spd) if $A = A^T$ and for all $x \in \mathbb{R}^n$, $x \neq 0$, the inner product $\langle Ax, x \rangle > 0$.
For spd matrices, we can compute the factorization
\[
A = LDL^T,
\]
where $L$ is a lower triangular matrix with 1's on the diagonal, and $D$ is a positive diagonal matrix.
The Choleski factorization is obtained by multiplying $L$ with the square root of $D$ (which exists!):
\[
A = \bar L \bar L^T, \qquad \bar L = L D^{1/2}.
\]
The Choleski factorization requires $\sim \frac{n^3}{6}$ multiplications and $n$ square roots.
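A minimal sketch of the resulting algorithm (the function name is mine; it assumes $A$ is spd and does not verify it):

```python
import numpy as np

def choleski(A):
    # Compute A = Lbar @ Lbar.T with Lbar lower triangular and a
    # positive diagonal; ~n^3/6 multiplications and n square roots.
    n = A.shape[0]
    Lbar = np.zeros((n, n))
    for j in range(n):
        d = A[j, j] - Lbar[j, :j] @ Lbar[j, :j]   # positive for spd A
        Lbar[j, j] = np.sqrt(d)
        Lbar[j+1:, j] = (A[j+1:, j] - Lbar[j+1:, :j] @ Lbar[j, :j]) / Lbar[j, j]
    return Lbar

A = np.array([[4., 2.], [2., 3.]])
Lbar = choleski(A)
print(np.allclose(Lbar @ Lbar.T, A))  # True
```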