Lecture 2
Gaussian Elimination
• Simplest example
• Gaussian elimination as multiplication by elementary lower triangular and
permutation matrices
• Lower/Upper triangular, Permutation matrices. Invariance properties.
• Solvability vs. no solution.
Solve a linear system

    −x2 + x3 = 1,
    x1 + 2x2 + 3x3 = 2,        (∗)
    x1 + x2 = 3.
Main observations:
1. A row-echelon system, for example

    −x1 + 2x2 + x3 = 1,
    2x2 + 3x3 = 2,
    2x3 = 3,

is easy to solve by back substitution. That is, solve the last equation for x3, then substitute x3 into the second equation and solve for x2, then substitute x2 and x3 into the first equation and solve for x1.
In a row-echelon matrix, for two successive rows the leading nonzero entry in the higher row appears farther to the left than the leading nonzero entry in the lower row:

    [ ∗ ∗ ∗ ∗ … ∗ ∗ ]
    [ 0 ∗ ∗ ∗ … ∗ ∗ ]
    [ 0 0 0 ∗ … ∗ ∗ ]
    [ 0 0 0 0 … ∗ ∗ ]
    [ . . . . … . . ]
    [ 0 0 0 0 … 0 ∗ ]
2. Permutation of equations does not change the solution vector x.
3. If x satisfies each of the three equations, then x satisfies linear combinations of these equations. For example, multiply the second equation in (∗) by 1/2 and subtract it from the last; then x must satisfy

    (1/2)x1 − (3/2)x3 = 2.
Gaussian elimination is an algorithm that by linear combinations and
permutation of rows converts every matrix to a row-echelon form.
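The algorithm just described can be sketched in a few lines of plain Python (the function names `row_echelon` and `back_substitute` are illustrative, not from the lecture):

```python
def row_echelon(M):
    """Reduce the augmented matrix M (a list of rows) to row-echelon form in place."""
    rows, cols = len(M), len(M[0])
    pivot_row = 0
    for col in range(cols - 1):          # last column is the right-hand side b
        # permutation step: find a row at or below pivot_row with a nonzero
        # entry in this column
        swap = next((r for r in range(pivot_row, rows) if M[r][col] != 0), None)
        if swap is None:
            continue
        M[pivot_row], M[swap] = M[swap], M[pivot_row]
        # elimination steps: clear the column below the pivot
        for r in range(pivot_row + 1, rows):
            factor = M[r][col] / M[pivot_row][col]
            M[r] = [a - factor * p for a, p in zip(M[r], M[pivot_row])]
        pivot_row += 1
        if pivot_row == rows:
            break
    return M

def back_substitute(M):
    """Solve [A | b] in row-echelon form, assuming A is square with nonzero pivots."""
    n = len(M)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = M[i][-1] - sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = s / M[i][i]
    return x

# The augmented matrix of the system (∗):
Ab = [[0.0, -1.0, 1.0, 1.0],
      [1.0,  2.0, 3.0, 2.0],
      [1.0,  1.0, 0.0, 3.0]]
x = back_substitute(row_echelon(Ab))
print(x)   # the solution of (∗)
```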
Solution to the simplest example. Rewrite the system in the matrix form
Ax = b,
where

        [ 0 −1  1 ]         [ 1 ]
    A = [ 1  2  3 ] ,   b = [ 2 ] .
        [ 1  1  0 ]         [ 3 ]
Form the augmented matrix pertaining to the system

    A|b = [ 0 −1  1 | 1 ]
          [ 1  2  3 | 2 ]
          [ 1  1  0 | 3 ] .

Permute the first and the second rows of the augmented matrix:

    [ 1  2  3 | 2 ]
    [ 0 −1  1 | 1 ]
    [ 1  1  0 | 3 ] .
Subtract the first row from the last row:

    [ 1  2  3 | 2 ]
    [ 0 −1  1 | 1 ]
    [ 0 −1 −3 | 1 ] .
Subtract the second row from the last row:

    [ 1  2  3 | 2 ]
    [ 0 −1  1 | 1 ]
    [ 0  0 −4 | 0 ] .
Solve by back substitution:

        [  4 ]
    x = [ −1 ] .
        [  0 ]
Check that the found x solves the original system.
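The check can be done mechanically by computing the residual Ax − b; a minimal plain-Python sketch:

```python
# Check that x = (4, −1, 0) satisfies the original system (∗).
A = [[0, -1, 1],
     [1, 2, 3],
     [1, 1, 0]]
b = [1, 2, 3]
x = [4, -1, 0]

residual = [sum(A[i][j] * x[j] for j in range(3)) - b[i] for i in range(3)]
print(residual)   # → [0, 0, 0]
```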
Left multiplication of A by an elementary permutation matrix Pij, for example

          [ 1 0 0 0 … 0 0 ]
          [ 0 0 1 0 … 0 0 ]
    P23 = [ 0 1 0 0 … 0 0 ]
          [ 0 0 0 1 … 0 0 ]
          [ . . . . … . . ]
          [ 0 0 0 0 … 0 1 ]

permutes the ith and the jth rows of A.
Left multiplication of A by an elementary lower triangular (transvection) matrix Lij,λ, i < j, for example

              [ 1   0  0 0 … 0 0 ]
              [ 0   1  0 0 … 0 0 ]
    L23,−.1 = [ 0 −.1  1 0 … 0 0 ]
              [ 0   0  0 1 … 0 0 ]
              [ .   .  . . … . . ]
              [ 0   0  0 0 … 0 1 ]

adds the ith row, multiplied by λ, to the jth row.
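Both kinds of elementary matrices are easy to build and try out; a plain-Python sketch (the helper names `matmul`, `P`, and `L` are assumptions for illustration, not from the lecture):

```python
def matmul(X, Y):
    """Naive matrix product of two lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def identity(n):
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def P(i, j, n):
    """Elementary permutation matrix Pij (1-based indices as in the text)."""
    M = identity(n)
    M[i - 1], M[j - 1] = M[j - 1], M[i - 1]
    return M

def L(i, j, lam, n):
    """Elementary lower triangular matrix Lij,λ (i < j): entry λ at (j, i)."""
    M = identity(n)
    M[j - 1][i - 1] = lam
    return M

A = [[0, -1, 1], [1, 2, 3], [1, 1, 0]]
print(matmul(P(1, 2, 3), A))      # rows 1 and 2 of A swapped
print(matmul(L(1, 3, -1, 3), A))  # row 1, times −1, added to row 3
```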
Gaussian elimination is multiplication by permutation and lower triangular matrices, for example

    L23,−1 L13,−1 P12 (A|b) = [ 1  2  3 | 2 ]
                              [ 0 −1  1 | 1 ]
                              [ 0  0 −4 | 0 ] .
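This composed product can be verified directly; a plain-Python sketch (`matmul` is a local helper, not a library call):

```python
def matmul(X, Y):
    """Naive matrix product of two lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

Ab  = [[0, -1, 1, 1], [1, 2, 3, 2], [1, 1, 0, 3]]   # augmented matrix A|b
P12 = [[0, 1, 0], [1, 0, 0], [0, 0, 1]]             # swap rows 1 and 2
L13 = [[1, 0, 0], [0, 1, 0], [-1, 0, 1]]            # L13,−1
L23 = [[1, 0, 0], [0, 1, 0], [0, -1, 1]]            # L23,−1

E = matmul(L23, matmul(L13, matmul(P12, Ab)))
print(E)   # → [[1, 2, 3, 2], [0, -1, 1, 1], [0, 0, -4, 0]]
```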
It is useful to study “class” properties of elementary matrices!
1. For a pair of elementary permutation and lower triangular matrices P and L there exists another pair of such matrices L′ and P′ so that

    L′P′ = PL

if L corresponds to an elimination step in the Gaussian method, P corresponds to a permutation step in the Gaussian method, and the elimination step precedes the permutation step.
2. A product of lower triangular matrices is a lower triangular matrix, where a (general) lower triangular matrix is

    [ 1 0 0 0 … 0 0 ]
    [ ∗ 1 0 0 … 0 0 ]
    [ ∗ ∗ 1 0 … 0 0 ]
    [ ∗ ∗ ∗ 1 … 0 0 ]
    [ . . . . … . . ]
    [ ∗ ∗ ∗ ∗ … ∗ 1 ]
3. Lij,λ Lij,−λ = I, where I is the identity matrix

        [ 1 0 0 0 … 0 0 ]
        [ 0 1 0 0 … 0 0 ]
    I = [ 0 0 1 0 … 0 0 ]
        [ 0 0 0 1 … 0 0 ]
        [ . . . . … . . ]
        [ 0 0 0 0 … 0 1 ]
4. A product of permutation matrices is a permutation matrix, where a (general) permutation matrix has exactly one 1 in each row and in each column, and the rest is 0.
5. Pij Pij = I.
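Properties 3 and 5 can be spot-checked numerically; a small plain-Python sketch for n = 3 (`matmul` is a local helper):

```python
def matmul(X, Y):
    """Naive matrix product of two lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

I   = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
Lp  = [[1, 0, 0], [0, 1, 0], [0, 2, 1]]    # L23,2
Lm  = [[1, 0, 0], [0, 1, 0], [0, -2, 1]]   # L23,−2
P23 = [[1, 0, 0], [0, 0, 1], [0, 1, 0]]

print(matmul(Lp, Lm) == I)    # property 3: True
print(matmul(P23, P23) == I)  # property 5: True
```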
Note: In general part 1 is not true without these assumptions. Counterexample: let

    P = [ 0 1 ]        L = [ 1 0 ]
        [ 1 0 ] ,          [ λ 1 ] ;

then

    A = PL = [ λ 1 ]
             [ 1 0 ]

cannot be represented as A = L′P′.
Theorem LPA = E, where P is a permutation matrix, L is a lower triangular matrix, and E has row-echelon form.
Multiplication is simple; inverses are a delicate matter!
Definition A matrix B is a left inverse of a matrix A if BA = I. A matrix B is
a right inverse of a matrix A if AB = I.
Note It is possible that a left/right inverse exists but a right/left inverse does not: think of rectangular (nonsquare) matrices.
Lemma If B, a right inverse of A, exists then there is at least one solution of
Ax = b.
Proof: take x = Bb, then Ax = ABb = b.
Lemma If B, a left inverse of A, exists then there is at most one solution of
Ax = b.
Proof: Suppose Ax1 = b and Ax2 = b, then x1 = BAx1 = Bb = BAx2 = x2 .
Note Left inverses are useful for understanding the least squares method. A left inverse might exist even when there are no solutions:

    A|b = [ 1 0 | 1 ]
          [ 0 1 | 2 ]
          [ 0 1 | 3 ] .

A left inverse:

    B = [ 1 0 0 ]
        [ 0 1 0 ] .
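The example in code (a plain-Python sketch; `matmul` is a local helper): BA = I, yet Ax = b has no solution because the last two rows of A|b conflict.

```python
def matmul(X, Y):
    """Naive matrix product of two lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[1, 0], [0, 1], [0, 1]]
B = [[1, 0, 0], [0, 1, 0]]
b = [[1], [2], [3]]

print(matmul(B, A))   # → [[1, 0], [0, 1]], so B is a left inverse of A
x = matmul(B, b)      # the only candidate solution, x = Bb
print(matmul(A, x))   # → [[1], [2], [2]], which differs from b: no solution
```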
Note A left inverse can be defined as a (linear) map B such that for any x, if y = Ax then By = x. A right inverse can be defined as a (linear) map B such that for any y, if x = By then Ax = y.
Definition A matrix, denoted by A−1, is the inverse of a matrix A if A−1A = AA−1 = I.
The inverse matrix is unique: suppose B and C are both inverses of A; then B = B(AC) = (BA)C = C.
Note The inverse A−1 also can be defined as a (linear) map.
Lemma If A−1 exists, then the solution of Ax = b exists and is unique.
Note If a left/right inverse of A exists, then A is an injective/surjective map. If the (two-sided) inverse of A exists, then A is a bijective map.
Theorem A = PLE, where P is a permutation matrix, L is a lower triangular matrix, and E has row-echelon form. A−1 exists if and only if E−1 exists, and then A−1 = E−1L−1P−1.
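On the worked example the factorization and its inversion can be checked concretely; a plain-Python sketch that uses properties 3 and 5 to invert the elementary factors (from L23,−1 L13,−1 P12 A = E we get A = P12 L13,1 L23,1 E):

```python
def matmul(X, Y):
    """Naive matrix product of two lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

E    = [[1, 2, 3], [0, -1, 1], [0, 0, -4]]
P12  = [[0, 1, 0], [1, 0, 0], [0, 0, 1]]   # its own inverse (property 5)
L13p = [[1, 0, 0], [0, 1, 0], [1, 0, 1]]   # L13,1 = inverse of L13,−1 (property 3)
L23p = [[1, 0, 0], [0, 1, 0], [0, 1, 1]]   # L23,1 = inverse of L23,−1 (property 3)

A = matmul(P12, matmul(L13p, matmul(L23p, E)))
print(A)   # → [[0, -1, 1], [1, 2, 3], [1, 1, 0]], the original A
```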
Theorem E−1 exists if and only if

    E = DU,

where in the diagonal matrix

        [ λ1  0   0   0  …  0  ]
        [ 0   λ2  0   0  …  0  ]
    D = [ 0   0   λ3  0  …  0  ]
        [ 0   0   0   λ4 …  0  ]
        [ .   .   .   .  …  .  ]
        [ 0   0   0   0  …  λn ]

all λi ≠ 0, and U is the upper triangular matrix

        [ 1 ∗ ∗ ∗ … ∗ ∗ ]
        [ 0 1 ∗ ∗ … ∗ ∗ ]
    U = [ 0 0 1 ∗ … ∗ ∗ ]
        [ 0 0 0 1 … ∗ ∗ ]
        [ . . . . … . . ]
        [ 0 0 0 0 … 0 1 ]
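The echelon factor E from the worked example gives a concrete instance of E = DU; a plain-Python sketch (`matmul` is a local helper):

```python
def matmul(X, Y):
    """Naive matrix product of two lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

D = [[1, 0, 0], [0, -1, 0], [0, 0, -4]]   # diagonal, all λi ≠ 0
U = [[1, 2, 3], [0, 1, -1], [0, 0, 1]]    # unit upper triangular

print(matmul(D, U))   # → [[1, 2, 3], [0, -1, 1], [0, 0, -4]], i.e. E
```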
Proof: if E = DU, then for the problem Ex = y, for any y, by back substitution we can find the unique solution x. E ≠ DU in case 1: there are two consecutive rows of E such that the leading nonzero entry in the higher row appears two or more entries farther to the left than the leading nonzero entry in the lower row,

    [ 1 3 5 | 1 ]    [ 1 3 5 | 1 ]
    [ 0 0 3 | 2 ] ,  [ 0 0 3 | 2 ] ,  [ 1 3 5 7 | 1 ]
    [ 0 0 0 | 2 ]    [ 0 0 0 | 0 ]    [ 0 0 1 1 | 2 ] ,

or case 2: E has nonzero entries on the diagonal, but it has more rows than columns,

    [ 1 3 | 1 ]    [ 1 3 | 1 ]
    [ 0 1 | 2 ] ,  [ 0 1 | 2 ] ,
    [ 0 0 | 2 ]    [ 0 0 | 0 ]

or case 3: E has nonzero entries on the diagonal, but it has more columns than rows,

    [ 1 3 5 7 | 1 ]
    [ 0 1 3 3 | 2 ]
    [ 0 0 1 1 | 2 ] .

In case 1, by backward elimination there are either no solutions (left) or more than one solution (center, right). In case 2, there are either no solutions (left) or there is a unique solution (right). In case 3, there are (always) more than one solution.