Basics of Linear Algebra
Properties of matrix calculation (𝐴, 𝐵 are matrices, 𝑥⃑ is a vector/matrix with one
column)
 (𝐴𝐵)𝑥⃑ = 𝐴(𝐵𝑥⃑)
 𝐼𝐴 = 𝐴𝐼 = 𝐴
 (𝐴𝐵)𝐶 = 𝐴(𝐵𝐶)
 𝐴(𝐵 + 𝐶) = 𝐴𝐵 + 𝐴𝐶
 𝐴𝐵 ≠ 𝐵𝐴 in general
 (𝐴𝐵)−1 = 𝐵−1𝐴−1 if 𝐴 and 𝐵 are invertible
 𝐼 −1 = 𝐼
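These identities are easy to check numerically; a minimal sketch in Python/NumPy (random test matrices of my choosing):

```python
import numpy as np

# Numerical spot-check of the properties above on random 3x3 matrices.
rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((3, 3)) for _ in range(3))
x = rng.standard_normal((3, 1))

print(np.allclose((A @ B) @ x, A @ (B @ x)))        # (AB)x = A(Bx)
print(np.allclose((A @ B) @ C, A @ (B @ C)))        # (AB)C = A(BC)
print(np.allclose(A @ (B + C), A @ B + A @ C))      # A(B + C) = AB + AC
print(np.allclose(A @ B, B @ A))                    # False in general: AB != BA
print(np.allclose(np.linalg.inv(A @ B),
                  np.linalg.inv(B) @ np.linalg.inv(A)))  # (AB)^-1 = B^-1 A^-1
```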
Gaussian Elimination
𝑥 + 2𝑦 + 𝑧 = 5
𝑥 + 𝑦 + 2𝑧 = 3
2𝑥 + 𝑦 + 𝑧 = 4

(1  2  1 | 5)   (1  2  1 | 5)   (1  2  1 | 5)
(1  1  2 | 3) ~ (0 −1  1 |−2) ~ (0 −1  1 |−2)
(2  1  1 | 4)   (0 −3 −1 |−6)   (0  0 −4 | 0)

𝑅2 → 𝑅2 − 𝑅1, 𝑅3 → 𝑅3 − 2𝑅1, 𝑅3 → 𝑅3 − 3𝑅2
 A leading entry in a row is the left-most non-zero entry in that row
 Gaussian elimination is the conversion of a matrix into echelon form (each
leading entry is to the right of the leading entry in the row above it).
 Leading entries in echelon form are called pivots
 A system of linear equations is consistent if it has solutions
 A system has a unique solution iff an echelon form has a pivot in all columns
Elementary row operations (can be applied in Gaussian elimination)
 Multiplying a row by a non-zero constant
 Exchanging rows
 Substituting a row by that row minus a multiple of another row 𝑅𝑖 → 𝑅𝑖 − 𝑐𝑅𝑗
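A minimal sketch of the elimination procedure in Python/NumPy (the helper name echelon_form is mine), run on the worked example above:

```python
import numpy as np

def echelon_form(M):
    """Reduce a matrix to echelon form using elementary row operations."""
    M = M.astype(float).copy()
    rows, cols = M.shape
    pivot_row = 0
    for col in range(cols):
        # Find a row at or below pivot_row with a nonzero entry in this column.
        nonzero = [r for r in range(pivot_row, rows) if not np.isclose(M[r, col], 0)]
        if not nonzero:
            continue                                              # no pivot here
        M[[pivot_row, nonzero[0]]] = M[[nonzero[0], pivot_row]]   # row exchange
        for r in range(pivot_row + 1, rows):
            M[r] -= (M[r, col] / M[pivot_row, col]) * M[pivot_row]  # R_r -> R_r - c*R_p
        pivot_row += 1
        if pivot_row == rows:
            break
    return M

# The worked example: x + 2y + z = 5, x + y + 2z = 3, 2x + y + z = 4
aug = np.array([[1, 2, 1, 5],
                [1, 1, 2, 3],
                [2, 1, 1, 4]])
print(echelon_form(aug))   # last column is the reduced right-hand side
```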
Identity matrix 𝐼𝑛
(1 0 0)
(0 1 0)
(0 0 1)
Elementary matrix 𝐸𝑖𝑗 (the example below is 𝐸21 with multiplier 𝑙)
( 1 0 0)
(−𝑙 1 0)
( 0 0 1)
Diagonal matrix 𝐷
(1 0 0)
(0 2 0)
(0 0 3)
Symmetric matrix 𝑆
(2 6 4)
(6 1 9)
(4 9 3)
Upper triangular matrix 𝑈
(1 7 6)
(0 3 4)
(0 0 1)
Lower triangular matrix 𝐿
(−1 0 0)
( 7 3 0)
( 4 2 1)
 Row exchange matrix 𝑃 is obtained by exchanging the rows of 𝐼, and 𝑃𝐴 is the
rows of 𝐴 exchanged in the same way
 𝐴 is invertible iff the inverse 𝐴−1 such that 𝐴𝐴−1 = 𝐴−1 𝐴 = 𝐼 exists
LDU Factorization
 𝐴 = 𝐿𝐷𝑈 or 𝑃𝐴 = 𝐿𝐷𝑈
 Example: if the steps of Gaussian elimination for 𝐴3×3 are 𝑅2 → 𝑅2 − 𝑅1, 𝑅3 → 𝑅3 − 2𝑅1, 𝑅3 → 𝑅3 − 3𝑅2 (multipliers 1, 2, 3), then
    (1 0 0)
𝐿 = (1 1 0)
    (2 3 1)
𝐷 is the diagonal matrix with the pivots on its diagonal, and 𝑈 is the echelon form with each row divided by its pivot
 If 𝐴 is invertible, then the 𝐿𝐷𝑈 factorization is unique
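A sketch (assuming NumPy) that records the multipliers during elimination to build 𝐿, then splits the echelon form into 𝐷 and 𝑈, for the example matrix above:

```python
import numpy as np

A = np.array([[1., 2., 1.],
              [1., 1., 2.],
              [2., 1., 1.]])

# Elimination without row exchanges, recording multipliers in L.
n = A.shape[0]
L, U = np.eye(n), A.copy()
for j in range(n):
    for i in range(j + 1, n):
        L[i, j] = U[i, j] / U[j, j]   # multiplier l_ij
        U[i] -= L[i, j] * U[j]        # R_i -> R_i - l_ij * R_j

D = np.diag(np.diag(U))               # the pivots on the diagonal
U_unit = np.linalg.inv(D) @ U         # rows of U divided by their pivots

print(L)                              # [[1,0,0],[1,1,0],[2,3,1]]
print(np.allclose(A, L @ D @ U_unit)) # A = LDU
```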
 The transpose matrix 𝐴𝑇 is obtained by switching the rows and columns of 𝐴
 𝐴 is symmetric if 𝐴𝑇 = 𝐴
Properties of transpose matrices
 (𝐴𝑇 )𝑇 = 𝐴
 (𝐴𝐵)𝑇 = 𝐵 𝑇 𝐴𝑇
 (𝐴𝑇 )−1 = (𝐴−1 )𝑇 if 𝐴 is invertible
 If 𝐴 is symmetric and invertible, then 𝐴−1 is symmetric as well
 𝐴𝑇 𝐴 and 𝐴𝐴𝑇 are symmetric
 If 𝐴 = 𝐿𝐷𝑈 is symmetric, then 𝑈 = 𝐿𝑇 i.e. 𝐴 = 𝐿𝐷𝐿𝑇
 𝑁(𝐴𝑇𝐴) = 𝑁(𝐴) (for real 𝐴)
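A quick NumPy spot-check of a few of these transpose properties:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

print(np.allclose((A @ B).T, B.T @ A.T))                    # (AB)^T = B^T A^T
print(np.allclose(np.linalg.inv(A.T), np.linalg.inv(A).T))  # (A^T)^-1 = (A^-1)^T
print(np.allclose(A.T @ A, (A.T @ A).T))                    # A^T A is symmetric
```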
Vector spaces
 If 𝑣⃑1, 𝑣⃑2, …, 𝑣⃑𝑘 are in ℂ𝑛 then 𝑐1𝑣⃑1 + 𝑐2𝑣⃑2 + ⋯ + 𝑐𝑘𝑣⃑𝑘 is a linear combination of 𝑣⃑1, 𝑣⃑2, …, 𝑣⃑𝑘, where 𝑐1, 𝑐2, …, 𝑐𝑘 are constants
 A subspace of a vector space is a subset of that space that contains the origin (so it is non-empty), the sum of any two of its elements, and any scalar multiple of its elements
 The set of linear combinations of 𝑣⃑1 , 𝑣⃑2 , … , 𝑣⃑𝑘 in a vector space 𝑉 is a subspace
of 𝑉
 The column space of 𝐴 or 𝐶(𝐴) is the set of all linear combinations of the
columns of 𝐴
 The nullspace of 𝐴 or 𝑁(𝐴) is the set of vectors 𝑥⃑ such that 𝐴𝑥⃑ = ⃑0⃑
 Finding the nullspace of 𝐴 from its echelon form:
(1 2 3 | 0)
(0 1 2 | 0)
(0 0 0 | 0)
𝑥3 = 𝛼 (free variable)
𝑥2 = −2𝑥3 = −2𝛼
𝑥1 = −2𝑥2 − 3𝑥3 = 4𝛼 − 3𝛼 = 𝛼
          ( 1)
𝑁(𝐴) = {𝛼 (−2)}
          ( 1)
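The same computation can be done symbolically; a sketch using SymPy, whose Matrix.nullspace() returns a basis of 𝑁(𝐴):

```python
from sympy import Matrix

# The echelon-form matrix from the example above (the zero right-hand side is implicit).
A = Matrix([[1, 2, 3],
            [0, 1, 2],
            [0, 0, 0]])

print(A.nullspace())   # [Matrix([[1], [-2], [1]])] -- every alpha*(1, -2, 1) solves Ax = 0
```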
 𝐴𝑛×𝑛 is invertible ⟺ 𝐴 has 𝑛 pivots ⟺ 𝐴𝑥⃑ = 𝑏⃑⃑ has exactly one solution 𝑥⃑ =
𝐴−1 𝑏⃑⃑
 The solutions to 𝐴𝑥⃑ = 𝑏⃑ have the form 𝑥⃑ = 𝑥⃑𝑛 + 𝑥⃑𝑝 where 𝑥⃑𝑛 is the general solution of the homogeneous equation 𝐴𝑥⃑ = 0⃑ and 𝑥⃑𝑝 is a particular solution of the equation 𝐴𝑥⃑ = 𝑏⃑
 𝐶(𝐴𝐵) ⊆ 𝐶(𝐴), 𝑁(𝐵) ⊆ 𝑁(𝐴𝐵)
 In a reduced echelon form, all the pivots are 1 and all the elements above the
pivots are zero
 The rank of 𝐴 is the number of pivots in its echelon form
Linear independence, basis and dimension
 If 𝑐1𝑣⃑1 + 𝑐2𝑣⃑2 + ⋯ + 𝑐𝑘𝑣⃑𝑘 = 0⃑ is true only for 𝑐1 = 𝑐2 = ⋯ = 𝑐𝑘 = 0, then 𝑣⃑1, 𝑣⃑2, …, 𝑣⃑𝑘 are linearly independent
 The columns of 𝐴 are linearly independent ⟺ its echelon form has a pivot in every column ⟺ 𝑁(𝐴) = {0⃑}
 If the vectors 𝑣⃑1 , 𝑣⃑2 , … , 𝑣⃑𝑘 are linearly dependent, then one of these vectors is
a linear combination of the others
 If every element of 𝑉 is a linear combination of 𝑣⃑1, 𝑣⃑2, …, 𝑣⃑𝑘, then 𝑣⃑1, 𝑣⃑2, …, 𝑣⃑𝑘 span 𝑉.
 A basis for a vector space 𝑉 is a set of vectors that are linearly independent
and span 𝑉.
 The standard basis of ℝ𝑛 can be expressed as {𝑒⃑1 , 𝑒⃑2 , … , 𝑒⃑𝑛 } where 𝑒⃑𝑖 is the
ith column of 𝐼𝑛
 The dimension of 𝑉 is the cardinality of its basis (which is the same for all
bases)
 Any linearly independent set of vectors in 𝑉 can be extended to a basis by
adding more vectors (if necessary)
 𝑑𝑖𝑚𝐶(𝐴) = 𝑟, 𝑑𝑖𝑚𝑁(𝐴) = 𝑛 − 𝑟 where n is the number of columns and r is the
rank of 𝐴
 The row space 𝐶(𝐴𝑇 ) is the span of the rows of 𝐴. 𝑑𝑖𝑚𝐶(𝐴𝑇 ) = 𝑟
 The left nullspace 𝑁(𝐴𝑇) consists of all the vectors 𝑦⃑ such that 𝑦⃑𝑇𝐴 = 0⃑𝑇, i.e. 𝐴𝑇𝑦⃑ = 0⃑
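A sketch reading off these dimensions with SymPy (example matrix of my choosing):

```python
from sympy import Matrix

A = Matrix([[1, 2, 3],
            [2, 4, 6],
            [1, 1, 1]])

r = A.rank()
m, n = A.shape
print(r, n - r)                     # dim C(A) = r, dim N(A) = n - r
print(A.T.rank(), m - r)            # dim C(A^T) = r, dim N(A^T) = m - r
print(len(A.nullspace()) == n - r)  # a basis of N(A) has n - r vectors
```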
Linear Transformations
 A function 𝑇: 𝑉 → 𝑊 is a linear transformation iff 𝑇(𝑐1 𝑣⃑1 + 𝑐2 𝑣⃑2 ) =
𝑐1 𝑇(𝑣⃑1 ) + 𝑐2 𝑇(𝑣⃑2 )
 The set of polynomials with real coefficients of degree at most 𝑛 is denoted as
𝑃𝑛 . Its standard basis is {1, 𝑡, 𝑡 2 , … , 𝑡 𝑛 }
 Suppose ℬ1 = {𝑣⃑1, 𝑣⃑2, …, 𝑣⃑𝑛} is a basis for 𝑉 and ℬ2 = {𝑤⃑1, 𝑤⃑2, …, 𝑤⃑𝑚} is a basis for 𝑊. Each linear transformation from 𝑉 to 𝑊 is represented by the matrix
    (𝑎1,1  𝑎1,2  ⋯  𝑎1,𝑛)
    (𝑎2,1  𝑎2,2  ⋯  𝑎2,𝑛)
𝐴 = (  ⋮     ⋮    ⋱    ⋮ )
    (𝑎𝑚,1  𝑎𝑚,2  ⋯  𝑎𝑚,𝑛)
where 𝑇(𝑣⃑𝑗) = 𝑎1,𝑗𝑤⃑1 + 𝑎2,𝑗𝑤⃑2 + ⋯ + 𝑎𝑚,𝑗𝑤⃑𝑚
 The kernel for a linear transformation 𝑇: 𝑉 → 𝑊: 𝐾𝑒𝑟𝑇 = {𝑣⃑ ∈ 𝑉|𝑇(𝑣⃑) = ⃑0⃑}.
𝐾𝑒𝑟𝑇 is a subspace of 𝑉
 The range for 𝑇: 𝑅𝑎𝑛𝑔𝑒𝑇 = {𝑇(𝑣⃑)|𝑣⃑ ∈ 𝑉}. 𝑅𝑎𝑛𝑔𝑒𝑇 is a subspace of 𝑊
 𝑇 is surjective ⟺ 𝐶(𝐴) = 𝔽𝑚 ⟺ 𝑅𝑎𝑛𝑔𝑒𝑇 = 𝑊
 𝑇 is injective ⟺ the columns of 𝐴 are linearly independent ⟺ 𝑁(𝐴) = {0⃑} ⟺ 𝐾𝑒𝑟𝑇 = {0⃑}
 𝑇: 𝑉 → 𝑊 is invertible iff there exists 𝑇−1: 𝑊 → 𝑉 s.t. 𝑇(𝑇−1(𝑤⃑)) = 𝑤⃑ and 𝑇−1(𝑇(𝑣⃑)) = 𝑣⃑
 If 𝑇: 𝑉 → 𝑊 is invertible and has matrix 𝐴 then the matrix of 𝑇 −1 is 𝐴−1
 Let 𝑉, 𝑈, 𝑊 be vector spaces with bases ℬ𝑣 , ℬ𝑢 , ℬ𝑤 . Let 𝑇1 : 𝑉 → 𝑈 and 𝑇2 : 𝑈 →
𝑊 be linear transformations. Let 𝐴1 and 𝐴2 be the matrices for 𝑇1 and 𝑇2 with
ℬ𝑣 , ℬ𝑢 and ℬ𝑢 , ℬ𝑤 , respectively. Then 𝑇2 𝑇1 : 𝑉 → 𝑊 is a linear transformation,
and the linear transformation matrix 𝐴 = 𝐴2 𝐴1 with ℬ𝑣 , ℬ𝑤
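A concrete sketch: differentiation 𝑇(𝑝) = 𝑝′ is a linear transformation from 𝑃3 to 𝑃2, and its matrix in the standard bases {1, 𝑡, 𝑡2, 𝑡3} and {1, 𝑡, 𝑡2} has 𝑇(𝑣⃑𝑗) written in the target basis as its jth column:

```python
import numpy as np

# Columns: T(1)=0, T(t)=1, T(t^2)=2t, T(t^3)=3t^2, expressed in the basis {1, t, t^2}.
A = np.array([[0, 1, 0, 0],
              [0, 0, 2, 0],
              [0, 0, 0, 3]])

p = np.array([5, 0, 1, 2])  # p(t) = 5 + t^2 + 2t^3 in coordinates w.r.t. {1, t, t^2, t^3}
print(A @ p)                # [0, 2, 6] -> p'(t) = 2t + 6t^2
```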
Orthogonality in ℝ𝑛
 For any 𝑥⃑ ∈ ℝ𝑛 , ||𝑥⃑|| = √(𝑥1 )2 + (𝑥2 )2 + ⋯ + (𝑥𝑛 )2
 Vectors 𝑥⃑ and 𝑦⃑ are orthogonal (perpendicular) iff 𝑥⃑ 𝑇 𝑦⃑ = 0 (𝑥⃑ 𝑇 𝑦⃑ = 𝑦⃑ 𝑇 𝑥⃑)
 Let 𝑉 be a subspace of ℝ𝑛 . A basis for 𝑉 that consists of mutually orthogonal
vectors is called an orthogonal basis
 Two subspaces 𝑉 and 𝑊 of ℝ𝑛 are orthogonal iff 𝑣⃑𝑇𝑤⃑ = 0 for all 𝑣⃑ ∈ 𝑉, 𝑤⃑ ∈ 𝑊, denoted as 𝑉 ⊥ 𝑊
 𝐶(𝐴𝑇 ) is orthogonal to 𝑁(𝐴) and 𝐶(𝐴) is orthogonal to 𝑁(𝐴𝑇 )
 Given a subspace 𝑉 of ℝ𝑛 , its orthogonal complement 𝑉 ⊥ is the set of all the
vectors orthogonal to 𝑉. 𝑉 ⊥ = 𝑈 ⟺ 𝑈 ⊥ = 𝑉
Projections
 The projection 𝑝⃑ of 𝑏⃑ onto the line through 𝑎⃑ is
𝑝⃑ = (𝑎⃑𝑇𝑏⃑ / 𝑎⃑𝑇𝑎⃑) 𝑎⃑ = (𝑎⃑𝑎⃑𝑇 / 𝑎⃑𝑇𝑎⃑) 𝑏⃑ = 𝑃𝑏⃑
where 𝑃 = 𝑎⃑𝑎⃑𝑇/𝑎⃑𝑇𝑎⃑ is the n by n projection matrix.
 For any projection matrix 𝑃, 𝑃 = 𝑃𝑇 = 𝑃2
 Consider 𝐴𝑥⃑ = 𝑏⃑ where there is no solution for 𝑥⃑. Let 𝑝⃑ = 𝐴𝑥̂ be the projection of 𝑏⃑ onto 𝐶(𝐴). Then the normal equation 𝐴𝑇𝐴𝑥̂ = 𝐴𝑇𝑏⃑ holds
 If 𝐴𝑇 𝐴 is invertible, then 𝑝⃑ = 𝑃𝑏⃑⃑ where 𝑃 = 𝐴(𝐴𝑇 𝐴)−1 𝐴𝑇
 𝐴𝑇 𝐴 is symmetric and is invertible when the columns of 𝐴 are linearly
independent
 If 𝑏⃑ is in 𝐶(𝐴)⊥ = 𝑁(𝐴𝑇), then its projection onto 𝐶(𝐴) is 0⃑
 If 𝐴 is square and invertible, then 𝑝⃑ = 𝑏⃑ since the columns of 𝐴 span ℝ𝑛
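A NumPy sketch of the normal equation and projection matrix (example 𝐴 and 𝑏⃑ of my choosing, with independent columns):

```python
import numpy as np

A = np.array([[1., 0.],
              [1., 1.],
              [1., 2.]])           # independent columns; C(A) is a plane in R^3
b = np.array([6., 0., 0.])

x_hat = np.linalg.solve(A.T @ A, A.T @ b)   # normal equation A^T A x^ = A^T b
P = A @ np.linalg.inv(A.T @ A) @ A.T        # projection matrix onto C(A)
p = A @ x_hat                               # projection of b

print(x_hat)                                        # least-squares solution [5, -3]
print(np.allclose(p, P @ b))                        # p = Pb
print(np.allclose(P, P.T), np.allclose(P, P @ P))   # P = P^T = P^2
print(np.allclose(A.T @ (b - p), 0))                # error is orthogonal to C(A)
```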
Orthonormal bases and Gram-Schmidt
 The vectors 𝑞⃑1, 𝑞⃑2, …, 𝑞⃑𝑘 are orthonormal in ℝ𝑛 if 𝑞⃑𝑖𝑇𝑞⃑𝑗 = 0 for 𝑖 ≠ 𝑗 and 𝑞⃑𝑖𝑇𝑞⃑𝑗 = 1 for 𝑖 = 𝑗
 A real matrix 𝑄 has orthonormal columns ⟺ 𝑄 𝑇 𝑄 = 𝐼
 A real matrix 𝑄 that is square and has orthonormal columns is called
orthogonal.
 𝑄 is orthogonal ⟺ 𝑄 𝑇 𝑄 = 𝐼 and 𝑄 is square ⟺ 𝑄 −1 = 𝑄 𝑇
 Multiplication by a matrix 𝑄 with orthonormal columns preserves lengths (||𝑄𝑥⃑|| = ||𝑥⃑||) and angles
 If 𝑄 is orthogonal then it has orthonormal rows
 If 𝑄 is real and has orthonormal columns then the least-squares solution of 𝑄𝑥⃑ = 𝑏⃑ is 𝑥̂ = 𝑄𝑇𝑏⃑. The projection of 𝑏⃑ onto 𝐶(𝑄) is 𝑝⃑ = 𝑃𝑏⃑ where 𝑃 = 𝑄𝑄𝑇
 Gram-Schmidt Process takes linearly independent vectors 𝑎⃑1 , 𝑎⃑2 , … , 𝑎⃑𝑘 ∈ ℝ𝑛
that span 𝑉, and gives orthonormal vectors 𝑞⃑1 , 𝑞⃑2 , … , 𝑞⃑𝑘 that form a basis for 𝑉
 𝑞⃑𝑖 = 𝐴⃑𝑖/||𝐴⃑𝑖||, where 𝐴⃑𝑖 = 𝑎⃑𝑖 − (𝑞⃑1𝑇𝑎⃑𝑖)𝑞⃑1 − (𝑞⃑2𝑇𝑎⃑𝑖)𝑞⃑2 − ⋯ − (𝑞⃑𝑖−1𝑇𝑎⃑𝑖)𝑞⃑𝑖−1. Subtracting the projection onto a line through a vector gives a vector orthogonal to it.
 Every m by n matrix 𝐴 with real entries and linearly independent columns can
be factored into 𝐴 = 𝑄𝑅, where 𝑄 has orthonormal columns and 𝑅 is n by n
upper-triangular and invertible
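A minimal Gram-Schmidt sketch in NumPy (the helper name is mine); np.linalg.qr computes the 𝑄𝑅 factorization directly:

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize linearly independent vectors; returns them as columns."""
    qs = []
    for a in vectors:
        A_i = a - sum((q @ a) * q for q in qs)  # subtract projections onto earlier q's
        qs.append(A_i / np.linalg.norm(A_i))    # normalize
    return np.column_stack(qs)

a1, a2 = np.array([1., 1., 0.]), np.array([1., 0., 1.])
Q = gram_schmidt([a1, a2])
print(np.allclose(Q.T @ Q, np.eye(2)))   # orthonormal columns: Q^T Q = I
```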
Determinants
 Determinant is the value such that 𝑑𝑒𝑡𝐼 = 1; 𝑑𝑒𝑡𝐵 = −𝑑𝑒𝑡𝐴 where 𝐵 is obtained from 𝐴 by exchanging two rows; and 𝑑𝑒𝑡 depends linearly on each row of 𝐴, e.g. for the first row:

    (𝑐𝑎1,1 + 𝑑𝑏1,1   𝑐𝑎1,2 + 𝑑𝑏1,2   ⋯   𝑐𝑎1,𝑛 + 𝑑𝑏1,𝑛)
    (     𝑎2,1             𝑎2,2        ⋯         𝑎2,𝑛    )
det (      ⋮                ⋮          ⋱          ⋮      )
    (     𝑎𝑛,1             𝑎𝑛,2        ⋯         𝑎𝑛,𝑛    )

        (𝑎1,1  𝑎1,2  ⋯  𝑎1,𝑛)           (𝑏1,1  𝑏1,2  ⋯  𝑏1,𝑛)
        (𝑎2,1  𝑎2,2  ⋯  𝑎2,𝑛)           (𝑎2,1  𝑎2,2  ⋯  𝑎2,𝑛)
= 𝑐 det (  ⋮     ⋮    ⋱    ⋮ ) + 𝑑 det (  ⋮     ⋮    ⋱    ⋮ )
        (𝑎𝑛,1  𝑎𝑛,2  ⋯  𝑎𝑛,𝑛)           (𝑎𝑛,1  𝑎𝑛,2  ⋯  𝑎𝑛,𝑛)
 The determinant of an upper/lower triangular matrix is the product of the
entries on the main diagonal
 Matrix 𝐴 is invertible ⟺ 𝑑𝑒𝑡𝐴 ≠ 0
 det(𝐴𝐵) = 𝑑𝑒𝑡𝐴 ∙ 𝑑𝑒𝑡𝐵
 𝑑𝑒𝑡𝐴 = 𝑑𝑒𝑡𝐴𝑇
 The cofactor 𝐶𝑖𝑗 = (−1)𝑖+𝑗 𝑑𝑒𝑡𝑀𝑖𝑗 where the minor 𝑀𝑖𝑗 is the determinant of
the matrix formed by removing the ith row and jth column of 𝐴
 The determinant of 𝐴 is a combination of any row times its cofactors: 𝑑𝑒𝑡𝐴 =
𝑎𝑖1 𝐶𝑖1 + 𝑎𝑖2 𝐶𝑖2 + ⋯ + 𝑎𝑖𝑛 𝐶𝑖𝑛
 If 𝐴 is invertible then 𝐴−1 = (1/𝑑𝑒𝑡𝐴) 𝐶𝑇 where 𝐶 is the cofactor matrix of 𝐴 with 𝐶𝑖𝑗 on the ith row and jth column
 Cramer’s Rule: suppose 𝐴 = (𝑎𝑖,𝑗) is invertible. We can solve for 𝑥𝑖 in 𝐴𝑥⃑ = 𝑏⃑ by 𝑥𝑖 = 𝑑𝑒𝑡𝐵𝑖/𝑑𝑒𝑡𝐴, where 𝐵𝑖 is obtained by substituting the ith column of 𝐴 with 𝑏⃑
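A sketch of Cramer’s Rule in NumPy, reusing the 3×3 system from the Gaussian elimination example (fine for small systems; elimination is cheaper for large ones):

```python
import numpy as np

A = np.array([[1., 2., 1.],
              [1., 1., 2.],
              [2., 1., 1.]])
b = np.array([5., 3., 4.])

x = np.empty(3)
for i in range(3):
    B_i = A.copy()
    B_i[:, i] = b                                 # replace the ith column of A with b
    x[i] = np.linalg.det(B_i) / np.linalg.det(A)  # x_i = det(B_i) / det(A)

print(x, np.allclose(A @ x, b))                   # [1, 2, 0], True
```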
Eigenvalues and eigenvectors
 Let 𝐴 be an n by n matrix and suppose that 𝑥⃑ ≠ 0⃑ and 𝐴𝑥⃑ = 𝜆𝑥⃑. Then 𝜆 is called an eigenvalue of 𝐴, and 𝑥⃑ an eigenvector of 𝐴.
 The eigenvalues of 𝐴 are the solutions of det(𝐴 − 𝜆𝐼) = 0
 The eigenvalues can be calculated by solving for the values of 𝜆 that make det(𝐴 − 𝜆𝐼) = 0, then solving for the eigenvectors by substituting 𝜆 into (𝐴 − 𝜆𝐼)𝑥⃑ = 0⃑
 The eigenvalues of a projection matrix are 1 or 0
 The eigenvalues of 𝐴 are the roots of the characteristic polynomial 𝑝(𝜆) =
det(𝐴 − 𝜆𝐼)
 The trace of a square matrix 𝐴 is the sum of the elements on its main diagonal
 𝑑𝑒𝑡𝐴 = 𝜆1 𝜆2 … 𝜆𝑛
 𝑡𝑟𝑎𝑐𝑒𝐴 = 𝜆1 + 𝜆2 + ⋯ + 𝜆𝑛
 If a square matrix 𝐴 is upper or lower triangular, then its eigenvalues are on
its main diagonal
 If 𝐴 is real and 𝑥⃑ is an eigenvector of 𝐴 with corresponding eigenvalue 𝜆, then the conjugate 𝑥⃑̅ is also an eigenvector, with corresponding eigenvalue 𝜆̅
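A NumPy sketch: np.linalg.eig returns the eigenvalues and (column) eigenvectors, and the trace/determinant identities above can be checked directly:

```python
import numpy as np

A = np.array([[4., 1.],
              [2., 3.]])

lam, V = np.linalg.eig(A)                # eigenvalues and eigenvectors (as columns)
print(lam)                               # the eigenvalues 5 and 2 (order may vary)
print(np.allclose(A @ V[:, 0], lam[0] * V[:, 0]))  # A x = lambda x
print(np.isclose(lam.sum(), np.trace(A)))          # trace = sum of eigenvalues
print(np.isclose(lam.prod(), np.linalg.det(A)))    # det = product of eigenvalues
```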
Diagonalization
 Suppose an n by n matrix 𝐴 has n linearly independent eigenvectors. Let 𝑆 be
a matrix whose columns are the linearly independent eigenvectors of 𝐴. Then
𝐴 = 𝑆𝐷𝑆 −1 where 𝐷 is the diagonal matrix with the eigenvalues of 𝐴 on the
main diagonal:
                            (𝜆1  0  ⋯  0 )
                            (0  𝜆2  ⋯  0 )
𝐴 = 𝑆𝐷𝑆−1 = (𝑣⃑1 𝑣⃑2 ⋯ 𝑣⃑𝑛) ( ⋮   ⋮  ⋱  ⋮ ) (𝑣⃑1 𝑣⃑2 ⋯ 𝑣⃑𝑛)−1
                            (0   0  ⋯  𝜆𝑛)
 If eigenvectors 𝑥⃑1 , 𝑥⃑2 , … , 𝑥⃑𝑘 correspond to distinct eigenvalues 𝜆1 , 𝜆2 , … , 𝜆𝑘 of
𝐴, then 𝑥⃑1 , 𝑥⃑2 , … , 𝑥⃑𝑘 are linearly independent
 Suppose 𝐴 is n by n and has n distinct eigenvalues. Then 𝐴 is diagonalizable.
 Not all square matrices are diagonalizable.
 Let 𝜆1 be an eigenvalue of 𝐴. The algebraic multiplicity of 𝜆1 is the greatest
positive integer 𝑘 such that (𝜆 − 𝜆1 )𝑘 divides 𝑝(𝜆) = det(𝐴 − 𝜆𝐼)
 The geometric multiplicity of 𝜆1 is the number of linearly independent eigenvectors that correspond to 𝜆1, i.e. dim 𝑁(𝐴 − 𝜆1𝐼). It is equal to the number of free variables in the echelon form of 𝐴 − 𝜆1𝐼
 To be diagonalizable, an n by n matrix 𝐴 has to have n linearly independent eigenvectors. To be invertible, none of its eigenvalues can be 0.
 𝐴 is diagonalizable iff for each eigenvalue of 𝐴, its algebraic multiplicity equals its geometric multiplicity.
 If 𝐴 has eigenvalues 𝜆1, 𝜆2, …, 𝜆𝑛, then 𝐴𝑘 has eigenvalues 𝜆1𝑘, 𝜆2𝑘, …, 𝜆𝑛𝑘. If 𝐴 is invertible, then 𝐴−1 has eigenvalues 1/𝜆1, 1/𝜆2, …, 1/𝜆𝑛
 If 𝐴 is square, 𝐴 and 𝐴𝑇 have the same characteristic polynomial
 If 𝐴 = 𝑆𝐷𝑆−1, then 𝐴𝑘 = 𝑆𝐷𝑘𝑆−1 for any natural number 𝑘
 To solve a recurrence 𝑎𝑘+2 = 𝑏1𝑎𝑘+1 + 𝑏0𝑎𝑘 with constant coefficients, find the eigenvalues of the matrix 𝐴 such that
(𝑎𝑘+2)     (𝑎𝑘+1)
(𝑎𝑘+1) = 𝐴 ( 𝑎𝑘 )
then solve 𝑎𝑘 = 𝑐1𝜆1𝑘 + 𝑐2𝜆2𝑘 for the constants 𝑐1, 𝑐2 by substituting given values of 𝑎𝑘
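For example, the Fibonacci recurrence 𝑎𝑘+2 = 𝑎𝑘+1 + 𝑎𝑘 has 𝐴 = (1 1; 1 0); a sketch of this method in NumPy:

```python
import numpy as np

A = np.array([[1., 1.],
              [1., 0.]])              # (a_{k+2}, a_{k+1}) = A (a_{k+1}, a_k)

lam, S = np.linalg.eig(A)             # the eigenvalues of A (A = S D S^-1)
# a_k = c1*lam1^k + c2*lam2^k; fit c1, c2 to the initial values a_0 = 0, a_1 = 1.
M = np.array([lam**0, lam**1])        # rows: [1, 1] and [lam1, lam2]
c = np.linalg.solve(M, [0., 1.])
a = lambda k: c @ lam**k
print([round(float(a(k))) for k in range(10)])   # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```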
 A probability vector 𝑝⃑ has non-negative entries that add up to one
 A Markov matrix is a square matrix whose columns are probability vectors
 Any Markov matrix has 𝜆 = 1 as an eigenvalue
 A probability vector 𝑝⃑ is a steady-state vector for a Markov matrix 𝑀 if 𝑀𝑝⃑ =
𝑝⃑. It can be calculated by solving (𝑀 − 𝐼)𝑝⃑ = ⃑0⃑
 If lim𝑛→∞ 𝑀𝑛 (= lim𝑛→∞ 𝑆𝐷𝑛𝑆−1) = 𝐿, then all the columns of 𝐿 are steady-state vectors
 A Markov matrix 𝑀 is regular if there exists a positive integer 𝑝 such that 𝑀𝑝
has all positive entries.
 If a Markov matrix is regular, it has a unique steady-state vector
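A sketch finding the steady state of a regular Markov matrix via its 𝜆 = 1 eigenvector (example 𝑀 of my choosing):

```python
import numpy as np

M = np.array([[0.9, 0.2],
              [0.1, 0.8]])            # columns are probability vectors

lam, V = np.linalg.eig(M)
v = V[:, np.argmin(np.abs(lam - 1))]  # eigenvector for the eigenvalue lambda = 1
p = v / v.sum()                       # scale so the entries add up to one
print(p)                              # steady state, here [2/3, 1/3]
print(np.allclose(M @ p, p))          # M p = p
```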
 If 𝐴 and 𝐵 are 𝑛×𝑛 matrices and 𝐵 is invertible, then 𝐴 and 𝐵−1𝐴𝐵 are similar
 Similar matrices have the same eigenvalues
 If the elements of each row of 𝐴 sum up to 𝑟, then 𝑥⃑ = (1, 1, …, 1)𝑇 is an eigenvector with 𝜆 = 𝑟
 For any eigenvalue, its geometric multiplicity is less than or equal to its
algebraic multiplicity
Complex Space
 The conjugate transpose of 𝐴 is 𝐴𝐻 = (𝐴̅)𝑇
 The inner product of vectors 𝑥⃑, 𝑦⃑ in ℂ𝑛 is 𝑥⃑ 𝐻 𝑦⃑
 The length of a complex vector 𝑥⃑ is ||𝑥|| = √𝑥⃑ 𝐻 𝑥⃑
 Vectors 𝑥⃑, 𝑦⃑ are orthogonal if 𝑥⃑ 𝐻 𝑦⃑ = 0
 (𝐴𝐵)𝐻 = 𝐵 𝐻 𝐴𝐻
 Matrix 𝐴 is Hermitian if 𝐴 = 𝐴𝐻
 If 𝐴 is Hermitian, the elements on the main diagonal are real
 A real symmetric matrix is Hermitian
 If 𝐴 is Hermitian and 𝑥⃑ is a complex vector, 𝑥⃑ 𝐻 𝐴𝑥⃑ is a real number
 The eigenvalues of a Hermitian matrix are real
 Real symmetric matrices have real eigenvalues
 Two eigenvectors of a Hermitian matrix are orthogonal to each other if they
come from different eigenvalues
 The Spectral Theorem: A real symmetric matrix can be factored into 𝐴 =
𝑄𝐷𝑄 𝑇 where 𝐷 is diagonal and 𝑄 is an orthogonal matrix
 If 𝐴 is symmetric, it can be represented as a linear combination of projection
matrices of rank 1: 𝐴 = 𝜆1 𝑞⃑1 𝑞⃑1𝑇 + 𝜆2 𝑞⃑2 𝑞⃑2𝑇 + ⋯ + 𝜆𝑛 𝑞⃑𝑛 𝑞⃑𝑛𝑇
 The normal equation in complex space: 𝐴𝐻 𝐴𝑥̂ = 𝐴𝐻 𝑏⃑⃑
 The projection of 𝑏⃑ onto the line through 𝑎⃑ in complex space is 𝑝⃑ = (𝑎⃑𝑎⃑𝐻/𝑎⃑𝐻𝑎⃑)𝑏⃑
 𝑁(𝐴𝐻 𝐴) = 𝑁(𝐴), 𝑁(𝐴)⊥ = 𝐶(𝐴𝐻 ), 𝐶(𝐴)⊥ = 𝑁(𝐴𝐻 )
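A NumPy sketch of these facts (np.linalg.eigh is NumPy’s routine for Hermitian matrices; example 𝐴 of my choosing):

```python
import numpy as np

A = np.array([[2., 1 - 1j],
              [1 + 1j, 3.]])                    # A = A^H, a Hermitian matrix

print(np.allclose(A, A.conj().T))               # A^H is the conjugate transpose
lam, Q = np.linalg.eigh(A)                      # eigen-decomposition for Hermitian A
print(lam)                                      # real eigenvalues (here 1 and 4)
print(np.allclose(Q.conj().T @ Q, np.eye(2)))   # orthonormal eigenvectors
# Spectral decomposition: A = sum of lam_i * q_i q_i^H (rank-1 projections).
A_rebuilt = sum(l * np.outer(q, q.conj()) for l, q in zip(lam, Q.T))
print(np.allclose(A, A_rebuilt))
```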