Linear Algebra – A Summary

Definition: A real vector space is a set V that is provided with an addition and a multiplication such that
(a) u ∈ V and v ∈ V ⇒ u + v ∈ V ,
(1) u + v = v + u for all u ∈ V and v ∈ V ,
(2) u + (v + w) = (u + v) + w for all u ∈ V , v ∈ V , and w ∈ V ,
(3) there exists 0 ∈ V such that u + 0 = u for all u ∈ V ,
(4) for all u ∈ V there exists −u ∈ V such that u + (−u) = 0,
(b) u ∈ V and c ∈ R ⇒ cu ∈ V ,
(5) c(u + v) = cu + cv for all u ∈ V , v ∈ V , and c ∈ R,
(6) (c + d)u = cu + du for all u ∈ V , c ∈ R and d ∈ R,
(7) c(du) = (cd)u for all u ∈ V , c ∈ R, and d ∈ R,
(8) 1u = u for all u ∈ V .
Definition: Let V be a vector space and W ⊂ V . W is a subspace of V if W is a vector
space with respect to the operations in V .
Theorem: Let V be a vector space and W ⊂ V , where W is not empty. If
(a) u ∈ W and v ∈ W ⇒ u + v ∈ W ,
(b) u ∈ W and c ∈ R ⇒ cu ∈ W ,
then W is a subspace of V .
Definition: Let V be a vector space and v1 , v2 , . . . , vn ∈ V . A vector v is a linear combination of v1 , v2 , . . . , vn if
v = a1 v1 + a2 v2 + . . . + an vn
for some a1 , a2 , . . . , an ∈ R.
Definition: Let V be a vector space, v1 , v2 , . . . , vn ∈ V and S = {v1 , v2 , . . . , vn }, then span S
is the span of S, i.e.
span S = {a1 v1 + a2 v2 + . . . + an vn | a1 , a2 , . . . , an ∈ R}.
Definition: Let V be a vector space and v1 , v2 , . . . , vn ∈ V . The vectors v1 , v2 , . . . , vn are
linearly independent if
a1 v1 + a2 v2 + . . . + an vn = 0 ⇒ a1 = a2 = . . . = an = 0.
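In R^m, linear independence can be checked numerically: the vectors are independent exactly when the matrix with those vectors as columns has full column rank. A minimal sketch (the vectors below are chosen only for illustration):

```python
import numpy as np

def linearly_independent(vectors):
    """The vectors are linearly independent exactly when the matrix
    having them as columns has full column rank."""
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == A.shape[1]

# (1,0,1) and (0,1,1) are independent; adding their sum breaks independence.
v1, v2 = np.array([1.0, 0.0, 1.0]), np.array([0.0, 1.0, 1.0])
print(linearly_independent([v1, v2]))           # True
print(linearly_independent([v1, v2, v1 + v2]))  # False
```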
Theorem: Let V be a vector space and S and T be finite subsets of V , where S ⊂ T , then
T is linearly independent ⇒ S is linearly independent.
Definition: Let V be a vector space. A basis of V is a set {v1 , v2 , . . . , vn }, where v1 , v2 , . . . , vn ∈
V , such that
(a) V = span{v1 , v2 , . . . , vn },
(b) v1 , v2 , . . . , vn are linearly independent.
Theorem: Let V be a vector space and S be a basis of V , then each vector v in V can be
written as a unique linear combination of vectors in S.
Note: We only consider vector spaces V that have a basis, or V = {0}.
Theorem: Let V be a vector space and S be a finite subset of V , where span S = V , then
some subset of S is a basis of V .
Theorem: Let V be a vector space. If {v1 , v2 , . . . , vn } and {w1 , w2 , . . . , wm } are bases of V ,
then n = m.
Definition: Let V be a vector space, where V ≠ {0}, then the dimension of V with notation
dim V is the number of vectors of a basis of V . We define dim {0} = 0.
Definition: Let V be a vector space and S = {v1 , v2 , . . . , vn } be an ordered basis of V . Let
v ∈ V , then

[v]S = (a1 , a2 , . . . , an )T ,
where v = a1 v1 + a2 v2 + . . . + an vn , is the coordinate vector of v with respect to the ordered
basis S. The elements of [v]S are the coordinates of v with respect to the ordered basis S.
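For V = Rn with an ordered basis S, the coordinate vector is found by solving a linear system: with the basis vectors as columns of a matrix B, the coordinates a satisfy Ba = v. A small sketch (the basis and vector are illustrative choices):

```python
import numpy as np

# Coordinates of v with respect to the ordered basis S = {(1,1), (1,-1)}:
# solve B a = v, where the basis vectors are the columns of B.
B = np.array([[1.0,  1.0],
              [1.0, -1.0]])
v = np.array([3.0, 1.0])
a = np.linalg.solve(B, v)   # coordinate vector [v]_S
print(a)                    # [2. 1.], since v = 2*(1,1) + 1*(1,-1)
```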
Definition: An m × n-matrix A is a rectangular array of real numbers arranged in m rows
and n columns, i.e.

    ( a11 a12 . . . a1n )
A = ( a21 a22 . . . a2n )
    (  :   :         :  )
    ( am1 am2 . . . amn )
For j = 1, 2, . . . , n the j-th column of A is given by

aj = (a1j , a2j , . . . , amj )T .
Theorem: The set of m × n-matrices, provided with the addition and scalar multiplication

( a11 . . . a1n )   ( b11 . . . b1n )   ( a11 + b11 . . . a1n + b1n )
(  :         :  ) + (  :         :  ) = (     :               :     )
( am1 . . . amn )   ( bm1 . . . bmn )   ( am1 + bm1 . . . amn + bmn )

and

  ( a11 . . . a1n )   ( ca11 . . . ca1n )
c (  :         :  ) = (   :          :  )
  ( am1 . . . amn )   ( cam1 . . . camn )

is a vector space. This vector space is denoted Rm×n . We also write Rm = Rm×1 .
Definition: Let A = (aij ) be an m × k-matrix and B = (bij ) be a k × n-matrix, then the
matrix product AB is the m × n-matrix C = (cij ) with the entries
cij = ai1 b1j + ai2 b2j + · · · + aik bkj
for i = 1, 2, . . . , m and j = 1, 2, . . . , n.
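The entrywise definition of the matrix product can be written out directly and checked against a library implementation. A sketch (matrices chosen only for illustration):

```python
import numpy as np

def matmul_entries(A, B):
    """Entrywise matrix product: c_ij = a_i1 b_1j + ... + a_ik b_kj."""
    m, k = A.shape
    k2, n = B.shape
    assert k == k2, "inner dimensions must agree"
    C = np.zeros((m, n))
    for i in range(m):
        for j in range(n):
            C[i, j] = sum(A[i, p] * B[p, j] for p in range(k))
    return C

A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])  # 3 x 2
B = np.array([[1.0, 0.0, 2.0], [0.0, 1.0, 3.0]])    # 2 x 3
print(np.allclose(matmul_entries(A, B), A @ B))     # True
```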
Definition: Let A = (a1 a2 . . . an ) be an m × n-matrix with columns a1 , a2 , . . . , an , then
the range of A is given by
range A = span {a1 , a2 , . . . , an },
and the rank of A is given by
rank A = dim range A.
Definition: Let A be an m × n-matrix, then the kernel of A is given by
ker A = {x ∈ Rn | Ax = 0},
and the nullity of A is given by
null A = dim ker A.
3
Theorem: Let A be an m × n-matrix, then
rank A + null A = n.
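The rank–nullity theorem can be verified numerically: the rank is the number of nonzero singular values, and the right singular vectors with zero singular value form a basis of the kernel. A sketch (matrix and tolerance are illustrative choices):

```python
import numpy as np

def kernel_basis(A, tol=1e-10):
    """Basis of ker A from the SVD: the right singular vectors with
    (numerically) zero singular value span the kernel."""
    _, s, Vt = np.linalg.svd(A)
    small = np.concatenate([s <= tol, np.ones(Vt.shape[0] - len(s), dtype=bool)])
    return Vt[small].T  # columns form a basis of ker A

# Third column is the sum of the first two: rank A = 2, nullity = 1, n = 3.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
rank = int(np.linalg.matrix_rank(A))
nullity = kernel_basis(A).shape[1]
print(rank + nullity == A.shape[1])  # True
```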
Definition: An n × n-matrix A is invertible (non-singular) if
Ax = 0 ⇒ x = 0 for all x ∈ Rn .
The inverse matrix A−1 is given by
x = A−1 y ⇔ y = Ax.
Definition: Let V be a vector space with ordered bases S and T , then the transition matrix
PS←T from T to S is given by
[v]S = PS←T [v]T for all v ∈ V.
Theorem: Let V be a vector space with ordered bases S = {v1 , v2 , . . . , vn } and T = {w1 , w2 , . . . , wn },
then
PS←T = [w1 ]S [w2 ]S . . . [wn ]S .
The matrix PS←T is invertible, where

(PS←T )−1 = PT ←S .
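In Rn this theorem becomes concrete: with the basis vectors of S and T as columns of matrices M_S and M_T, every v satisfies M_S [v]_S = v = M_T [v]_T, so the transition matrix is M_S^{-1} M_T. A sketch in R^2 (the bases are illustrative choices):

```python
import numpy as np

M_S = np.array([[1.0, 1.0], [0.0, 1.0]])  # columns: S = {(1,0), (1,1)}
M_T = np.array([[2.0, 0.0], [1.0, 1.0]])  # columns: T = {(2,1), (0,1)}
P = np.linalg.solve(M_S, M_T)             # P_{S<-T} = M_S^{-1} M_T

v_T = np.array([1.0, 2.0])                # coordinates of some v w.r.t. T
v = M_T @ v_T
v_S = np.linalg.solve(M_S, v)             # coordinates of the same v w.r.t. S
print(np.allclose(P @ v_T, v_S))          # True: [v]_S = P_{S<-T} [v]_T
# The inverse transition matrix goes the other way: P^{-1} = P_{T<-S}.
print(np.allclose(np.linalg.inv(P) @ v_S, v_T))  # True
```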
Definition: Let V be a vector space. A function ‖ • ‖ : V → R is a norm on V if
(a) ‖u‖ ≥ 0 for all u ∈ V ; ‖u‖ = 0 ⇒ u = 0,
(b) ‖u + v‖ ≤ ‖u‖ + ‖v‖ for all u ∈ V and v ∈ V ,
(c) ‖cu‖ = |c| ‖u‖ for all u ∈ V and c ∈ R.
A normed vector space is a vector space provided with a norm.
Definition: Let V be a vector space. A function (•, •) : V × V → R is an inner product on V
if
(a) (u, u) ≥ 0 for all u ∈ V ; (u, u) = 0 ⇒ u = 0,
(b) (v, u) = (u, v) for all u ∈ V and v ∈ V ,
(c) (u + v, w) = (u, w) + (v, w) for all u ∈ V , v ∈ V , and w ∈ V ,
(d) (cu, v) = c(u, v) for all u ∈ V , v ∈ V , and c ∈ R.
An inner product space is a vector space provided with an inner product.
Theorem: Let V be an inner product space, then the Cauchy-Schwarz inequality holds:
|(u, v)| ≤ √((u, u)(v, v)) for all u ∈ V and v ∈ V.
Theorem: Let V be an inner product space and
‖u‖ = √(u, u) for all u ∈ V,
then V is a normed vector space.
Definition: Let V be an inner product space with an ordered basis S = {v1 , v2 , . . . , vn }, then
the inner product matrix A = (aij ) with respect to S is given by
aij = (vj , vi ) for i, j = 1, 2, . . . , n.
Definition: Let A = (aij ) be an m×n-matrix, then the transposed matrix is the n×m-matrix
AT = (aji ).
Definition: A square matrix A is symmetric if AT = A.
Theorem: Let V be an inner product space with an ordered basis S and A the inner product
matrix with respect to S, then
(a) A is symmetric,
(b) (v, w) = [v]TS A[w]S for all v ∈ V and w ∈ V .
Definition: Let V be an inner product space, then the vectors u and v in V are orthogonal
if (u, v) = 0.
Definition: Let V be an inner product space with a basis S = {v1 , v2 , . . . , vn }, then S is
orthonormal if
(vj , vi ) = δij for i, j = 1, 2, . . . , n.
Theorem: Let V be an inner product space with an ordered orthonormal basis S, then
(u, v) = [u]TS [v]S for all u ∈ V and v ∈ V.
Definition: Let V be an inner product space with a basis {u1 , u2 , . . . , un }, then the modified
Gram-Schmidt process is given by v1 = u1 /ku1 k and
vk = (uk − (uk , v1 )v1 − (uk , v2 )v2 − . . . − (uk , vk−1 )vk−1 ) / ‖uk − (uk , v1 )v1 − (uk , v2 )v2 − . . . − (uk , vk−1 )vk−1 ‖
for k = 2, . . . , n.
Theorem: Let V be an inner product space with a basis {u1 , u2 , . . . , un }, then the modified
Gram-Schmidt process results in an orthonormal basis {v1 , v2 , . . . , vn }.
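A sketch of the modified Gram-Schmidt process for V = Rn with the standard dot product as inner product (the input basis below is an illustrative choice). The "modified" variant subtracts each new direction from all remaining vectors immediately, which is numerically more stable:

```python
import numpy as np

def modified_gram_schmidt(U):
    """Orthonormalize the columns of U: normalize column k, then remove
    its component from every later column before moving on."""
    V = U.astype(float).copy()
    n = V.shape[1]
    for k in range(n):
        V[:, k] /= np.linalg.norm(V[:, k])
        for j in range(k + 1, n):
            V[:, j] -= (V[:, j] @ V[:, k]) * V[:, k]
    return V

U = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])          # columns form a basis of R^3
V = modified_gram_schmidt(U)
print(np.allclose(V.T @ V, np.eye(3)))   # True: columns are orthonormal
```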
Definition: Let V be an inner product space and W a subspace of V . The orthogonal
complement W ⊥ of W is given by
u ∈ W ⊥ ⇔ (u, v) = 0 for all v ∈ W.
Definition: Let V be a vector space and W1 and W2 subspaces of V , where W1 ∩ W2 = {0},
then the direct sum of W1 and W2 is given by
W1 ⊕ W2 = {w1 + w2 | w1 ∈ W1 and w2 ∈ W2 }.
Theorem: Let V be an inner product space and W a subspace of V , then
V = W ⊕ W ⊥.
Theorem: Let A be an m × n-matrix, then
(a) ker AT = (range A)⊥ ,
(b) range AT = (ker A)⊥ .
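This theorem can be checked numerically with the standard dot product: any vector in the null space of AT is orthogonal to every vector Ax in the range of A. A sketch (the matrix and test vector are illustrative choices):

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
_, s, Vt = np.linalg.svd(A.T)    # A has rank 2, so ker A^T is 1-dimensional
z = Vt[-1]                       # right singular vector with zero singular value
x = np.array([2.0, -3.0])
print(np.allclose(A.T @ z, 0))   # True: z lies in ker A^T
print(abs(z @ (A @ x)) < 1e-10)  # True: z is orthogonal to Ax ∈ range A
```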
Definition: Let A ∈ Rm×n and b ∈ Rm , then Ax = b with unknown vector x ∈ Rn is a
system of linear equations. This system is consistent if Ax = b for some vector x ∈ Rn .
The solution of the system is the set {x ∈ Rn | Ax = b}.
Theorem: Let A ∈ Rm×n and b ∈ Rm , then
Ax = b is consistent ⇔ b ∈ range A.
Definition: A matrix is in the reduced row echelon form if:
(a) Zero rows, if any, appear at the bottom of the matrix.
(b) The first nonzero entry of a nonzero row, called the pivot of the row, is equal to 1, and
the pivot of each nonzero row lies to the right of the pivots of the rows above it.
(c) All other entries in a column that contains a pivot are equal to 0.
Definition: An elementary row operation of a matrix is one of the following operations:
(a) Interchange two rows.
(b) Multiply a row with a nonzero number.
(c) Add a multiple of a row to another row.
Definition: A matrix A is row equivalent to a matrix B if B can be obtained from A by
elementary row operations.
Theorem: An m × n-matrix A is row equivalent to a matrix B if and only if B = P A for some
invertible m × m-matrix P .
Theorem: If the matrices A and B are row equivalent, then
(a) ker A = ker B,
(b) rank A = rank B.
Theorem: Let A be a matrix in reduced row echelon form, then the rank of A is equal to the
number of pivots.
Definition: Let A ∈ Rm×n and b ∈ Rm , then the system Ax = b is row equivalent to the
system Bx = c if the augmented matrix (B c) is row equivalent to (A b).
Theorem: Row equivalent systems of linear equations have the same solution.
Definition: Let S = (1 2 . . . n), then a permutation of S is a rearrangement of the
elements of S.
Theorem: Let S = (1 2 . . . n), then each permutation of S can be obtained from S by
successive interchanges of elements.
Definition: Let S = (1 2 . . . n). If a permutation of S is obtained by k successive
interchanges of elements, then the permutation is even or odd respectively if k is even or odd.
Definition: Let A = (aij ) be an n × n-matrix, then the determinant of A is given by
det A = Σ (±) a1j1 a2j2 . . . anjn ,

where the summation is over all permutations (j1 j2 . . . jn ) of the set (1 2 . . . n). The
sign is + or − if the permutation (j1 j2 . . . jn ) respectively is even or odd.
Theorem: Let A be an n × n-matrix, then
det AT = det A.
Definition: An n × n-matrix A = (aij ) is an upper triangular matrix if
i > j ⇒ aij = 0.
Theorem: Let the n × n-matrix A = (aij ) be an upper triangular matrix, then
det A = a11 a22 . . . ann .
Theorem: Let the n × n-matrix A be row equivalent to a matrix B, where B can be obtained
from A by elementary row operations without row multiplications and with k row interchanges,
then
det A = (−1)k det B.
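The simplest case of this theorem, a single row interchange (k = 1), can be checked directly: swapping two rows flips the sign of the determinant. A sketch (the matrix is an illustrative choice):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [4.0, 3.0]])
B = A[[1, 0], :]   # B obtained from A by one row interchange, so k = 1
# det A = (-1)^1 det B
print(np.isclose(np.linalg.det(A), -np.linalg.det(B)))  # True
```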
Theorem: Let A be an n × n-matrix, then
A is singular ⇔ det A = 0.
Definition: An n × n-matrix A = (aij ) is a diagonal matrix if
i ≠ j ⇒ aij = 0.
Theorem: Each invertible matrix is row equivalent to a diagonal matrix with nonzero diagonal
entries.
Theorem: Let A and B be n × n-matrices, then
det (AB) = det A det B.
Theorem: Let A be an invertible matrix, then

det A−1 = 1 / det A.
Definition: The Euclidean norm on Rn is given by
‖x‖2 = √(xT x) for all x ∈ Rn .
Definition: Let A ∈ Rm×n and b ∈ Rm , then the vector x̂ is a least squares solution of the
system Ax = b if

‖b − Ax̂‖2 ≤ ‖b − Ax‖2 for all x ∈ Rn .
Theorem: Let A ∈ Rm×n and b ∈ Rm , then
x̂ is a least squares solution of Ax = b ⇔ AT Ax̂ = AT b.
Theorem: Let A ∈ Rm×n and b ∈ Rm . If rank A = n, then AT A is invertible and Ax = b has
a unique least squares solution

x̂ = (AT A)−1 AT b.
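A sketch of this theorem: solve the normal equations for an overdetermined system and compare with a library least squares solver (the data below is made up for illustration):

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])   # rank A = n = 2, so A^T A is invertible
b = np.array([1.0, 2.0, 2.0])

x_normal = np.linalg.solve(A.T @ A, A.T @ b)     # normal equations A^T A x = A^T b
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)  # library least squares solution
print(np.allclose(x_normal, x_lstsq))            # True
```

In practice `lstsq` (based on an orthogonal factorization) is preferred over forming A^T A, which squares the condition number; the normal equations here only illustrate the theorem.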
Definition: Let A be an n×n-matrix, then the number λ is an eigenvalue of A corresponding
to an eigenvector x, where x ≠ 0, if
Ax = λx.
Definition: An identity matrix is an n × n-matrix I = (δij ).
Definition: Let A be an n × n-matrix, then the characteristic polynomial of A is given by
p(λ) = det (λI − A) for all λ ∈ R.
Theorem: Let A be an n×n-matrix, then the eigenvalues of A are the roots of the characteristic
polynomial of A.
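This theorem can be checked numerically: `np.poly` returns the coefficients of exactly the monic characteristic polynomial det(λI − A), and its roots match the eigenvalues. A sketch (the matrix is an illustrative choice):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigvals = np.linalg.eigvals(A)       # eigenvalues of A
roots = np.roots(np.poly(A))         # roots of det(lambda*I - A)
print(np.allclose(np.sort_complex(eigvals), np.sort_complex(roots)))  # True
```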
Definition: Let A and B be n × n-matrices, then B is similar to A if B = P −1 AP for some
invertible n × n-matrix P .
Theorem: Similar matrices have the same eigenvalues.
Definition: An n × n-matrix A is diagonalizable if A is similar to a diagonal matrix.
Theorem: Let A be an n × n-matrix, then
A is diagonalizable ⇔ A has n linearly independent eigenvectors.
Theorem: If the characteristic polynomial of an n × n-matrix A has n distinct roots, then A
is diagonalizable.
Theorem: A symmetric n × n-matrix has n orthogonal eigenvectors.
Definition: An n × n-matrix A is orthogonal if AT A = I.
Theorem: Let A be a symmetric matrix, then there exists a diagonal matrix D and an orthogonal matrix P such that
AP = P D.
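A sketch of this theorem with NumPy: for a symmetric matrix, `np.linalg.eigh` returns the eigenvalues and an orthogonal matrix of eigenvectors, giving exactly AP = PD with P orthogonal (the matrix below is an illustrative choice):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])             # symmetric
w, P = np.linalg.eigh(A)               # eigenvalues w, orthonormal eigenvectors P
D = np.diag(w)
print(np.allclose(A @ P, P @ D))       # True: AP = PD
print(np.allclose(P.T @ P, np.eye(2))) # True: P is orthogonal
```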