Course Notes to Lay's Linear Algebra and its Applications

M303 Linear Algebra and its Applications NOTES
Symbols
ℕ = natural numbers (1, 2, 3, 4, …)
ℤ = integers (…, -3, -2, -1, 0, 1, 2, 3, …)
ℚ = rational numbers (p/q with p, q integers, q ≠ 0)
ℝ = real numbers
ℂ = complex numbers
∞ = infinity
Ø = empty set
⊈ = not subset
Note: The interval [1, ∞) is a closed set: since infinity is technically not a real number, the
set contains all of its boundary points, in this case only 1.
Section 1.1. Systems of Linear Equations
BASIC TERMS
Linear Equation: an equation of the form a₁x₁ + a₂x₂ + ⋯ + aₙxₙ = b.
System of linear equations (or a linear system): a collection of one or more linear equations involving the same variables.
Solution Set: the set of all possible solutions of a linear system.
Equivalent: two or more linear systems are equivalent when they have the same solution set.
TYPES OF LINEAR SYSTEMS
A system of linear equations has exactly three possibilities: (i) no solution, (ii) exactly one
solution, or (iii) infinitely many solutions.
MATRICES
Coefficient matrix: matrix containing only the coefficients of the system.
Augmented matrix: coefficient matrix plus a rightmost column holding the constants from the right-hand sides of the equations.
THREE STRATEGIES FOR SOLVING
Method. Use these strategies to simplify a linear system.
Verify. Then check by substituting the results into the original equations.
1. Replacement. Replace one row by the sum of itself and a multiple of another row.
2. Interchange. Interchange two rows.
3. Scaling. Multiply all entries in a row by a nonzero constant.
Row equivalent: two matrices are row equivalent if there is a sequence of elementary row
operations that transforms one matrix into the other.
● Result: If the augmented matrices of two linear systems are row equivalent, then
the two systems have the same solution set.
EXISTENCE AND UNIQUENESS
1. Existence. Is the system consistent; that is, does at least one solution exist?
2. Uniqueness. If a solution exists, is it the only one; that is, is the solution unique?
Section 1.2. Row Reduction and Echelon Forms
Nonzero row/column. A row/column with at least one nonzero entry.
Leading Entry. The leftmost nonzero entry in a nonzero row.
ECHELON FORM [Matrix]
Echelon Form, satisfies three criteria:
1. All nonzero rows are above any rows of all zeros.
2. Each leading entry of a row is in a column to the right of the leading entry of the row above it.
3. All entries in a column below a leading entry are zeros.
Reduced Echelon Form, satisfies two additional criteria:
4. The leading entry in each nonzero row is 1.
5. Each leading 1 is the only nonzero entry in its column.
Row reduced. Transformed by row operations into echelon form; a matrix may be row reduced to more than one echelon form.
Uniqueness of the reduced echelon form: each matrix is row equivalent to one and only one reduced echelon matrix.
Echelon / reduced echelon form of A: an echelon form U obtained from a matrix A.
PIVOT POSITIONS
● Pivot position: a position in A that corresponds to a leading 1 in the reduced echelon form of A.
● Pivot column: a column of A that contains a pivot position.
Row Reduction Algorithm (see the sketch below)
1. Begin with the leftmost nonzero column. [Forward Phase]
2. Select a nonzero entry in that column as a pivot, interchanging rows if needed.
3. Create zeros below the pivot with row replacements.
4. Cover the row containing the pivot and repeat steps 1–3 on the remaining submatrix.
5. Beginning with the rightmost pivot, work upward and to the left to create zeros above each pivot, scaling each pivot to 1. [Backward Phase]
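A minimal sketch of this algorithm in Python with NumPy. The rref() helper and its partial-pivot choice are illustrations written for these notes (not a built-in NumPy routine), and entries within a small tolerance are treated as zero.

```python
import numpy as np

def rref(A, tol=1e-12):
    """Row reduce A to reduced echelon form: forward phase with the
    backward phase folded in (zeros created below AND above each pivot)."""
    A = A.astype(float).copy()
    m, n = A.shape
    pivot_cols = []
    r = 0                                      # row where the next pivot goes
    for c in range(n):
        piv = r + np.argmax(np.abs(A[r:, c]))  # step 2: pick a nonzero entry
        if abs(A[piv, c]) < tol:
            continue                           # no pivot in this column
        A[[r, piv]] = A[[piv, r]]              # interchange
        A[r] = A[r] / A[r, c]                  # scaling: pivot becomes 1
        for i in range(m):                     # replacement: clear the column
            if i != r:
                A[i] = A[i] - A[i, c] * A[r]
        pivot_cols.append(c)
        r += 1
        if r == m:
            break
    return A, pivot_cols

# Example: augmented matrix of a small consistent system
R, pivots = rref(np.array([[1., 2., 3.], [4., 5., 6.]]))
print(R)        # [[ 1.  0. -1.]  [ 0.  1.  2.]]
print(pivots)   # [0, 1]
```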
Basic vs. Free Variables
- Basic: variables corresponding to pivot columns
- Free: variables whose columns contain no pivot
- Free = you may choose any value for that variable
- Each choice corresponds to a different solution
General solution: a description of the solution set in which each free variable remains a parameter.
Parametric description: a form of the solution set in which the free variables act as parameters.
Section 1.3. Vector Equations
Vector: a single column / row matrix, or ordered pair or list of numbers, e.g., v = [1, 2]
● ℝ¹: [1]; ℝ²: [1, 2]; ℝ³: [1, 2, 3]
Ordered n-tuple: ℝⁿ … vectors with n entries
Zero vector: vector with all 0 entries, viz., ℝ¹: [0]; ℝ²: [0, 0]; ℝ³: [0, 0, 0]
Equal: corresponding vector entries are equal; e.g., [1, 2] = [1, 2], but [1, 2] ≠ [2, 1]
Sum: entrywise sum of the vectors, v = [1, 2], r = [3, 4], v + r = [1+3, 2+4]
Scalar multiple: multiple of a vector, v = [1, 2], 5v = [5×1, 5×2] = [5, 10]
Notes
● The scalar multiples of a nonzero vector form a line through the origin.
● The length of a vector can be computed with the Pythagorean theorem, relative to the x-axis.
Geometric description: v = [x, y] corresponds to the arrow from [0, 0] to [x, y].
Parallelogram rule for addition
If u and v in ℝ² are represented as points
in the plane, then u + v corresponds to the
fourth vertex of the parallelogram whose
other vertices are u, 0, and v.
Algebraic properties of ℝⁿ
Linear combination: a vector built from vectors and scalar weights, viz., y = c₁v₁ + c₂v₂ + ⋯ + cₚvₚ.
Result: The vector equation x₁a₁ + x₂a₂ + ⋯ + xₙaₙ = b has the same solution set as the
linear system whose augmented matrix is [a₁ a₂ ⋯ aₙ b].
Span: the set of all linear combinations (as above) of v₁, v₂, ..., vₚ, written Span {v₁, v₂, ..., vₚ}; it is a subset of ℝⁿ.
Notes
● Asking whether b is in the span is equivalent to asking whether there is a solution
to a linear system (see the sketch below).
● A span contains every scalar multiple.
● The zero vector is also in the span.
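A numerical way to ask the span question, sketched with made-up vectors: form the coefficient matrix [v1 v2] and check whether the system x₁v₁ + x₂v₂ = b is consistent (here via least squares).

```python
import numpy as np

# Is b in Span{v1, v2}?  Same as asking whether x1*v1 + x2*v2 = b
# is a consistent system. (The vectors are made-up examples.)
v1 = np.array([1., -2., 3.])
v2 = np.array([0., 1., 4.])
b  = np.array([2., -1., 18.])

A = np.column_stack([v1, v2])              # coefficient matrix [v1 v2]
x, *_ = np.linalg.lstsq(A, b, rcond=None)  # least-squares solve
print(np.allclose(A @ x, b), x)            # True [2. 3.]: b = 2*v1 + 3*v2
```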
Section 1.4. The Matrix Equation
Matrix Equation, Ax = b
Note: Ax is defined iff the number of columns of A equals the number of entries in x.
Vector equation: Ax = x₁a₁ + x₂a₂ + ⋯ + xₙaₙ, a linear combination of the columns of A with the entries of x as weights.
Matrix Equation [Ax = b]
Result. Ax=b has a solution iff b is a linear combination of the columns of A.
Identity matrix (Ix=x): the matrix with 1's on diagonal and 0's elsewhere.
Section 1.5. Solution Sets of Linear Systems
Homogeneous: a system of linear equations that can be written in the form Ax = 0.
● Has at least one solution—the trivial solution x = 0.
● Key question: does a nontrivial solution exist, viz., a nonzero vector x that satisfies Ax = 0?
Notes
● Always has the trivial solution x = 0.
● If there is a free variable, then there is a nontrivial solution [by Theorem 2ii].
● With one free variable, every solution of Ax = 0 is a scalar multiple of a single vector v (a line through the origin).
● A nontrivial solution x can have some zero entries.
● The solution set can be expressed explicitly as the span of vectors u, v, etc.
● If the only solution is the zero vector, then the solution set is Span {0}.
● If it has only one free variable, the solution set is a line through the origin.
● A plane through the origin may represent the solution set when there is more than one free variable, but not always.
Explicit vs. Implicit equation
Implicit: the equation Ax = 0 (or Ax = b), which describes the set without listing its members
Explicit: x = su + tv [parametric vector form]
Vector translation of tv to p + tv: results in a line parallel to v
x = p + tv is the equation of the line through p parallel to v
Thus the solution set of Ax = b is a line through p parallel to the solution set of Ax = 0.
Rules for writing a (consistent) solution set in parametric form
1. Row reduce the augmented matrix to reduced echelon form.
2. Express each basic variable in terms of any free variables.
3. Write a typical solution x as a vector whose entries depend on any free variables.
4. Decompose x into a linear combination of vectors (with numeric entries) using
the free variables as parameters.
Note: When Ax = b has no solution, the solution set is empty. A worked sketch of steps 1–4 follows.
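A worked sketch of steps 1–4 using SymPy's exact rref() on a made-up consistent system with one free variable:

```python
from sympy import Matrix

# Steps 1-4 on the system:  x1 + 3x2 - 5x3 = 4,  x1 + 4x2 - 8x3 = 7
aug = Matrix([[1, 3, -5, 4],
              [1, 4, -8, 7]])
R, pivot_cols = aug.rref()       # step 1: reduced echelon form
print(R)                         # Matrix([[1, 0, 4, -5], [0, 1, -3, 3]])
print(pivot_cols)                # (0, 1): x1, x2 basic; x3 free
# Steps 2-4, read off from R:  x1 = -5 - 4*x3,  x2 = 3 + 3*x3,  x3 free, so
#   x = [-5, 3, 0] + x3 * [-4, 3, 1]   (parametric vector form)
```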
Section 1.7. Linear Independence
For vectors
An indexed vector set {v₁, v₂, ..., vₚ} is:
● Linearly independent: the vector equation x₁v₁ + x₂v₂ + ⋯ + xₚvₚ = 0 has only
the trivial solution.
● Linearly dependent: there exist weights c₁, c₂, ..., cₚ, not all zero, such that
c₁v₁ + c₂v₂ + ⋯ + cₚvₚ = 0.
○ Such an equation is called a linear dependence relation.
For matrices
● The columns of a matrix A are linearly independent iff the equation Ax=0 has only
the trivial solution. [= no free variables]
Linearly Independent
- Two vectors are independent iff neither is a multiple of the other
Linearly Dependent (see the sketch below)
- For two vectors: one is a multiple of the other
- Some vector can be written as a linear combination of the preceding vectors [T7]
- The set contains more vectors (columns) than entries (rows) [T8], e.g., a 2×3 matrix
- The set contains the zero vector [T9]
- w lies in the plane spanned by u and v iff {u, v, w} is linearly dependent
- A set containing the zero vector is always dependent, since x₁0 = 0 holds for every x₁ (many nontrivial solutions)
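A quick dependence test, sketched with NumPy: the columns of A are independent iff rank A equals the number of columns. The matrix below is a made-up example with a built-in dependence relation.

```python
import numpy as np

# Columns of A are independent iff Ax = 0 has only the trivial solution,
# i.e. iff rank A equals the number of columns.
A = np.array([[1., 4., 2.],
              [2., 5., 1.],
              [3., 6., 0.]])               # made-up columns v1, v2, v3

print(np.linalg.matrix_rank(A), A.shape[1])  # 2 < 3: the set is dependent
# Indeed v3 = -2*v1 + v2 here, a linear dependence relation.
```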
Section 1.8. Introduction to Linear Transformations
Definitions
● Transformation (function or mapping): T from ℝⁿ to ℝᵐ is a rule that assigns to
each vector x in ℝⁿ a vector T(x) in ℝᵐ.
○ Domain of T = ℝⁿ
○ Codomain of T = ℝᵐ
○ T: ℝⁿ → ℝᵐ indicates that the domain is ℝⁿ and the codomain is ℝᵐ
○ Image of x = T(x)
○ Range = set of all images T(x)
● x ↦ Ax is a matrix transformation
Shear transformation (ℝ² → ℝ²): a transformation in which all points along a given line
remain fixed while other points are shifted parallel to that line by a distance proportional
to their perpendicular distance from it.
Linear transformation (Definition), A transformation or mapping is linear if
1. T(u + v) = T(u) + T(v) for all u, v in the domain of T;
2. T(cu) = cT(u) for all u and all scalars c.
Results
3. T(0) = 0
4. T(cu + dv) = cT(u) + dT(v), for all vectors u, v in the domain of T and all scalars c, d.
Notes
● Setting c = d = 1 in (4) gives (1), and setting d = 0 gives (2); so if (4) holds, T is linear! (See the check below.)
● Linear transformations preserve the operations of vector addition and scalar multiplication.
● Every matrix transformation is a linear transformation.
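A numerical spot-check of property (4) for a matrix transformation T(x) = Ax; the matrix, vectors, and scalars are random, chosen only for illustration.

```python
import numpy as np

# Check T(cu + dv) = cT(u) + dT(v) for T(x) = Ax with arbitrary inputs.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2))            # T maps R^2 -> R^3
u, v = rng.standard_normal(2), rng.standard_normal(2)
c, d = 2.5, -1.0

lhs = A @ (c*u + d*v)
rhs = c*(A @ u) + d*(A @ v)
print(np.allclose(lhs, rhs))               # True: matrix transformations are linear
```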
For T(x) = rx: contraction when 0 ≤ r ≤ 1; dilation when r > 1.
Section 2.1. Matrix Operations
𝑚 × 𝑛 matrix: For a matrix A with m rows and n columns, the scalar entry in the ith row
and jth column is denoted by 𝑎𝑖𝑗, called the (i, j)-entry of A.
Diagonal entries: entries a₁₁, a₂₂, … on the main diagonal of a square matrix (those with i = j).
● Diagonal matrix: a square matrix whose non-diagonal entries are all 0.
Zero matrix: An 𝑚 × 𝑛 matrix whose entries are all 0.
Equal matrices: have the same size (same number of rows and columns) and equal entries in corresponding positions.
Sum A + B: Defined only when A and B are the same size.
Scalar multiple: For rA, a matrix whose columns are r times columns of A.
● Note: the usual rules of algebra apply to sums and scalar multiples of matrices.
Matrix multiplication: when matrix B multiplies a vector x, it transforms x into the vector Bx;
multiplying by A then gives A(Bx).
● The definition of AB makes A(Bx) = (AB)x true for all x in ℝᵖ.
● Shows that multiplication of matrices corresponds to composition of linear
transformations.
● Each column of AB is A times the corresponding column of B: a linear combination
of the columns of A using weights from that column of B.
● For AB to be defined, the number of columns of A must match the number of rows of B.
Row-Column rule (dot product): the (i, j)-entry of AB is the sum of the products of the
entries of row i of A with the corresponding entries of column j of B. (See the sketch below.)
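A small NumPy check of the column description of AB: column j of AB equals A times column j of B. The matrices are a standard small example.

```python
import numpy as np

# (AB)[:, j] = A @ B[:, j]: multiplication column by column.
A = np.array([[2, 3], [1, -5]])
B = np.array([[4, 3, 6], [1, -2, 3]])   # 2 columns of A match 2 rows of B

AB = A @ B
for j in range(B.shape[1]):
    assert np.array_equal(AB[:, j], A @ B[:, j])
print(AB)   # [[11  0 21]
            #  [-1 13 -9]]
```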
Warnings
● In general, AB ≠ BA.
● Cancellation fails: AB = AC does not imply B = C.
● AB = 0 does not imply A = 0 or B = 0.
Powers of a matrix: Aᵏ = A ⋯ A (k factors); if k = 0, then A⁰x = x, so A⁰ = I (the identity matrix).
Transpose: switch rows with columns
Section 2.2. The Inverse of a Matrix
Determinant (2 × 2): for A = [a b; c d], det A = ad − bc.
Invertible matrix: a matrix that, when multiplied by its inverse, gives the identity matrix:
● there is a C with CA = I and AC = I, so that A⁻¹A = I and AA⁻¹ = I
● Invertible = nonsingular
● Non-invertible = singular
Elementary matrix: Obtained by performing a single elementary row operation on an
identity matrix.
Algorithm for finding A⁻¹: row reduce the augmented matrix [A I]; if A is row equivalent to I,
then [A I] reduces to [I A⁻¹]; otherwise A is not invertible. (See the sketch below.)
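A sketch of this algorithm with SymPy's exact row reduction; row_join builds the augmented matrix [A I], and the example matrix is a standard textbook-style 3 × 3.

```python
from sympy import Matrix, eye

# Row reduce [A | I]; if A ~ I, the right half of the result is A^{-1}.
A = Matrix([[0, 1, 2],
            [1, 0, 3],
            [4, -3, 8]])
aug = A.row_join(eye(3))         # the augmented matrix [A | I]
R, pivots = aug.rref()
A_inv = R[:, 3:]                 # right half after reduction
print(A_inv)                 # Matrix([[-9/2, 7, -3/2], [-2, 4, -1], [3/2, -2, 1/2]])
print(A * A_inv == eye(3))   # True
```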
Section 2.8. Subspaces
Def. Subspace: a subspace of ℝⁿ is any set H that has three properties:
a. The zero vector is in H. [zero vector]
b. For each u and v in H, the sum u + v is in H. [addition]
c. For each u in H, and each scalar c, the vector cu is in H. [scalar]
Notes
● A subspace is closed under addition and scalar multiplication.
● A line L not through the origin is not a subspace, as it does not contain the origin.
● The set of all linear combinations of vectors v₁, v₂, ..., vₚ in ℝⁿ is a subspace of ℝⁿ.
● Span {v₁, v₂, ..., vₚ} is the subspace spanned (or generated) by v₁, v₂, ..., vₚ.
○ If H = Span {v₁, v₂}, then H is a subspace of ℝⁿ.
● ℝⁿ is a subspace of itself (all three properties hold).
● The set {0} containing only the zero vector is a subspace of ℝⁿ, called the zero subspace.
Def. Column space: of a matrix A is the set Col A of all linear combinations of the
columns of A.
○ Col A = Span {a₁, a₂, …, aₙ}, where a₁, …, aₙ are the columns of A
Notes
● The span of the rows of a matrix is called the row space of the matrix. The
dimension of the row space is the rank of the matrix.
● The span of the columns of a matrix is called the range or the column space of
the matrix. The row space & the column space always have the same dimension.
● The column space of an m × n matrix is a subspace of ℝᵐ.
● Col A equals ℝᵐ only when the columns of A span ℝᵐ; otherwise Col A is just a
part of ℝᵐ.
● A vector b is a linear combination of the columns of A iff the equation Ax = b has a
solution.
● Consistent: the echelon form of the augmented matrix has (i) all zero rows below
the nonzero rows and (ii) no row of the form [0 ⋯ 0 b] with b ≠ 0.
Def. Null space: of a matrix A is the set Nul A of all solutions to the homogeneous
equation Ax = 0. It is a subspace of ℝⁿ.
Notes
● Nul A / null space is defined implicitly, since it must be checked for each vector.
● Col A / column space is defined explicitly, since vectors in Col A can be
constructed (by linear combinations) from the columns of A.
● To make Nul A explicit, solve Ax = 0 and write the solution in parametric vector form. (See the sketch below.)
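SymPy's nullspace() returns exactly this explicit description, one spanning vector per free variable; the matrix below is a made-up 3 × 5 example.

```python
from sympy import Matrix

# Making Nul A explicit: nullspace() solves Ax = 0 and returns one
# spanning vector per free variable.
A = Matrix([[-3, 6, -1, 1, -7],
            [ 1, -2, 2, 3, -1],
            [ 2, -4, 5, 8, -4]])
basis = A.nullspace()
print(len(basis))            # 3 free variables (x2, x4, x5) -> 3 vectors
for v in basis:
    print(v.T)               # spanning vectors, automatically independent
```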
Basis: a set of linearly independent vectors that spans the full space.
● Note: The columns of an invertible n × n matrix form a basis of ℝⁿ, by the Invertible
Matrix Theorem [e and h].
Def. Standard basis: the set {e₁, e₂, ..., eₙ} in ℝⁿ whose vectors form the columns of the identity matrix.
Section 3.1. Determinants
Recursive definition (determinant): for n ≥ 2, det A = a₁₁ det A₁₁ − a₁₂ det A₁₂ + ⋯ + (−1)¹⁺ⁿ a₁ₙ det A₁ₙ, where A₁ⱼ is A with row 1 and column j deleted.
(i, j)-Cofactor: Cᵢⱼ = (−1)ⁱ⁺ʲ det Aᵢⱼ, so that det A = a₁₁C₁₁ + a₁₂C₁₂ + ⋯ + a₁ₙC₁ₙ. (See the sketch below.)
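A direct implementation of the recursive definition, for illustration only: cofactor expansion is O(n!), so in practice one uses row reduction or np.linalg.det.

```python
import numpy as np

def det_cofactor(A):
    """Recursive cofactor expansion along the first row (illustration only)."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)  # delete row 1, column j+1
        cofactor = (-1) ** j * det_cofactor(minor)             # C_{1,j+1} = (-1)^{1+(j+1)} det A_{1,j+1}
        total += A[0, j] * cofactor
    return total

A = np.array([[1., 5., 0.], [2., 4., -1.], [0., -2., 0.]])
print(det_cofactor(A), np.linalg.det(A))   # both -2.0 (up to rounding)
```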
Section 3.2. Properties of Determinants
Note: If A is invertible and U is an echelon form of A obtained with r row interchanges, then det A = (−1)ʳ · (product of the pivots in U); det A = 0 when A is not invertible.
Section 3.3. Cramer’s Rule and Inverse Matrix
Cramer’s Rule: if det A ≠ 0, the unique solution of Ax = b has entries xᵢ = det Aᵢ(b) / det A, where Aᵢ(b) is A with column i replaced by b. (See the sketch below.)
Inverse rule: A⁻¹ = (1 / det A) · adj A.
Adjugate (classical adjoint) of A: the transpose of the matrix of cofactors Cᵢⱼ.
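A sketch of Cramer's rule in NumPy; cramer() is a helper written for these notes, checked against np.linalg.solve.

```python
import numpy as np

def cramer(A, b):
    """Cramer's rule: x_i = det(A_i(b)) / det(A), where A_i(b) is A with
    column i replaced by b. Fine for small n; illustration only."""
    d = np.linalg.det(A)
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b                     # replace column i with b
        x[i] = np.linalg.det(Ai) / d
    return x

A = np.array([[3., -2.], [-5., 4.]])
b = np.array([6., 8.])
print(cramer(A, b), np.linalg.solve(A, b))   # both [20. 27.]
```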
Section 4.1. Vector Spaces
Vector space. A nonempty set V of objects (vectors) equipped with two operations, addition
and multiplication by scalars (real numbers), subject to the ten axioms (or rules)
listed below. [Under addition, V is an abelian group.]
1. The sum of u and v, denoted by u + v, is in V.
2. u + v = v + u.
3. (u + v) + w = u + (v + w).
4. There is a zero vector 0 in V such that u + 0 = u.
5. For each u in V , there is a vector -u in V such that u + (-u) = 0.
6. The scalar multiple of u by c, denoted by cu, is in V.
7. c(u + v) = cu + cv.
8. (c + d)u = cu + du
9. c(du) = (cd)u
10. 1u = u.
Note: The zero vector (ax. 4) is unique.
- 0u = 0 and c0 = 0
- −u = (−1)u
Subspace. A subset H of a larger vector space V that has three properties:
a. The zero vector of V is in H. [ax. 4]
b. H is closed under vector addition; that is, u + v is in H. [ax. 1]
c. H is closed under multiplication by scalars; that is, cu is in H. [ax. 6]
Notes
- Every subspace is a vector space, and every vector space is a subspace either
of itself or of some larger vector space.
- "Subspace" applies when comparing two or more vector spaces, one inside the other.
- Zero subspace: the set {0} containing only the zero vector is a subspace of every vector space V.
Section 4.2. Linear Transformations, Null Spaces, Column Space
Linear Transformation
Kernel (or null space) T: The set of all u in V such that T(u)=0.
Range of T: The set of all vectors in W of the form T(x) for some x in V.
Null Space: the set of all x that satisfy Ax = 0.
- Also: the set of all x in ℝⁿ that are mapped to the zero vector in ℝᵐ.
- To show that a set of vectors is a subspace, try to express it as a null space. If the
defining system is not homogeneous, the set cannot be a subspace, since the zero
vector is not in it.
- Nul A is defined implicitly; solving Ax = 0 gives an explicit description.
- The spanning set produced by this method is automatically linearly independent,
because the free variables are the weights on the spanning vectors.
- When Nul A contains nonzero vectors, the number of vectors in the spanning set
for Nul A equals the number of free variables in the equation Ax = 0.
Column space (Col A): the set of all linear combinations of the columns of an m × n matrix A.
- The column space of an m × n matrix A is all of ℝᵐ iff Ax = b has a solution for
each b in ℝᵐ.
Note
- When A is not square, Nul A and Col A live in different spaces (ℝⁿ vs. ℝᵐ), so they
share no vectors.
- When A is square, both contain the zero vector, and some nonzero vectors may be
common to both.
- If A is a 4 × 3 matrix, then Col A is a subspace of ℝ⁴ and Nul A is a subspace of ℝ³.
Contrast between Nul A and Col A for an m × n matrix A
Section 4.3. Linearly Independent Sets / Bases
Given a set of vectors {v₁, ..., vₚ}, consider the equation c₁v₁ + c₂v₂ + ⋯ + cₚvₚ = 0.
Linearly Independent: only the trivial solution to the above equation exists (c₁ = 0, ..., cₚ = 0)
● A set with one vector v is linearly independent iff v ≠ 0.
Linearly Dependent: nontrivial solutions exist (some cᵢ ≠ 0) [= a linear dependence relation]
● KEY (two vectors): the vectors are dependent iff one is a multiple of the other
● Any set containing the zero vector is linearly dependent
Basis: an indexed set B = {b₁, ..., bₚ} in V is a basis for the subspace H if:
(1) B is a linearly independent set
(2) the subspace spanned by B is H, that is, H = Span {b₁, ..., bₚ}.
Note: if some vₖ in a spanning set is a combination of the other vectors, then the set minus vₖ still spans H; discarding such vectors eventually leaves a basis. [Spanning Set Theorem, CH4 T5]
Notes
● Also applies in the case such that H=V, since any vector space is a subspace of itself.
● Check with Invertible Matrix Theorem (CH2, T8)
Section 4.4. Coordinate System
For a basis B = {b₁, ..., bₙ}, the coordinate mapping is x ↦ [x]_B, where [x]_B lists the weights c₁, ..., cₙ such that x = c₁b₁ + ⋯ + cₙbₙ.
Section 4.5. Dimension of vector space
Definition. V is finite-dimensional if it is spanned by a finite set; the dimension of V (dim V) is the number of vectors in a basis of V.
Subspaces of ℝ³ by dimension
- 0-dimensional: the zero subspace (the origin alone)
- 1-dimensional: subspaces spanned by a single nonzero vector (lines through the origin)
- 2-dimensional: subspaces spanned by two linearly independent vectors (planes through the origin)
- 3-dimensional: subspaces spanned by three linearly independent vectors (only ℝ³ itself)
Dimension of Col A and Nul A: The dimension of Nul A is the number of free variables in the
equation Ax = 0. The dimension of Col A is the number of pivot columns in the matrix A.
Section 4.6. Rank
Row A. The set of all linear combinations of the row vectors is called the row space of A
Rank. The dimension of the column space of A.
- The pivot columns determine rank.
Section 5.1. Eigenvectors
Eigenvector. For a square n x n matrix, it is a nonzero vector x such that Ax=λx.
Eigenvalue. The scalar λ associated with the eigenvector.
Eigenvector corresponding to lambda. The vector x corresponding with λ.
Eigenspace. The set of all solutions of (A − λI)x = 0, i.e., the null space of A − λI; it is a subspace of ℝⁿ.
Notes
● For an eigenvector x, Ax is simply a scalar multiple of x.
● To check whether a given x is an eigenvector, compute Ax and see whether the result is a multiple of x. (See the sketch below.)
● λ is an eigenvalue iff (A − λI)x = 0 has a nontrivial solution.
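The check described above, sketched in NumPy for a small made-up matrix:

```python
import numpy as np

# Is x an eigenvector of A?  Compute Ax and compare with multiples of x.
A = np.array([[1., 6.], [5., 2.]])
x = np.array([6., -5.])

print(A @ x)                 # [-24.  20.] = -4 * x, so x is an eigenvector, λ = -4
print(np.linalg.eigvals(A))  # [ 7. -4.]  (the two eigenvalues of A)
```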
Section 5.2. The Characteristic Equation
Determinants
The Characteristic Equation: det(A − λI) = 0; its solutions are the eigenvalues of A.
Note: If A is an n × n matrix, then det(A − λI) is the characteristic polynomial, of degree n.
Section 5.3. Diagonalization
Powers of a diagonal matrix: Dᵏ is diagonal with entries d₁₁ᵏ, ..., dₙₙᵏ; and if A = PDP⁻¹, then Aᵏ = PDᵏP⁻¹.
Similarity: for n × n matrices, A is similar to B if there is an invertible P with P⁻¹AP = B (equivalently, A = PBP⁻¹).
● The change from A to B is called a similarity transformation.
Diagonalizable: a square matrix that is similar to a diagonal matrix, that is, A = PDP⁻¹ for some invertible P and diagonal D.
● A is diagonalizable iff it has n linearly independent eigenvectors (enough to form a basis of ℝⁿ).
● We call such a basis an eigenvector basis.
To diagonalize a matrix (if possible) — see the sketch below
1. Find the eigenvalues, viz., solve det(A − λI) = 0.
2. For an n × n matrix, find n linearly independent eigenvectors.
3. Construct P from the eigenvectors (as columns).
4. Construct D from the corresponding eigenvalues (on the diagonal, in the same order).
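A NumPy sketch of the diagonalization A = PDP⁻¹ and the payoff Aᵏ = PDᵏP⁻¹; np.linalg.eig returns the eigenvalues and a matrix whose columns are eigenvectors, and the 2 × 2 example is illustrative.

```python
import numpy as np

# Diagonalize A = P D P^{-1}, then compute powers via A^k = P D^k P^{-1}.
A = np.array([[7., 2.], [-4., 1.]])        # eigenvalues 5 and 3
eigvals, P = np.linalg.eig(A)              # columns of P are eigenvectors
D = np.diag(eigvals)

print(np.allclose(A, P @ D @ np.linalg.inv(P)))                 # True
k = 5
print(np.allclose(np.linalg.matrix_power(A, k),
                  P @ np.diag(eigvals**k) @ np.linalg.inv(P)))  # True
```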
Theorems, Chapter 1
Theorem 1. Uniqueness of the Reduced Echelon Form.
Each matrix is row equivalent to one and only one matrix in reduced echelon form.
Theorem 2. Existence and Uniqueness Theorem
A linear system is consistent iff the rightmost column of the augmented matrix is not a pivot
column, i.e., iff an echelon form has no row of the form [0 ⋯ 0 b] with b nonzero.
If consistent, then the solution set contains either (i) a unique solution (no free
variables), or (ii) infinitely many solutions (at least one free variable).
Theorem 3: the matrix equation Ax = b, the vector equation x₁a₁ + ⋯ + xₙaₙ = b, and the linear system with augmented matrix [a₁ ⋯ aₙ b] all have the same solution set.
Theorem 4. Some results [note: coefficient matrix only, not augmented]
Theorem 5. Results with matrix, vectors and scalars
Theorem 6. Solution set for Ax=b results by translating solution set of Ax=0.
Theorem 7: Characterization of Linearly Dependent Sets
Note: Theorem 7 does not say that every vector in a linearly dependent set is a linear
combination of the preceding vectors.
Theorem 8: More vectors than entries
Note: The matrix will have more columns than rows
Theorem 9: Zero vector
Theorems, Chapter 2
Theorem 1: Let A, B, and C be matrices of the same size, and let r and s be scalars.
A. A + B = B + A                (commutativity of addition)
B. (A + B) + C = A + (B + C)    (associativity of addition)
C. A + 0 = A                    (additive identity)
D. r(A + B) = rA + rB           (distributivity over matrix addition)
E. (r + s)A = rA + sA           (distributivity over scalar addition)
F. r(sA) = (rs)A                (associativity of scalar multiplication)
Notes
● Commutativity does not extend to matrix multiplication: AB = BA only when A and B commute.
● Each equality in Theorem 1 is verified by showing that the matrix on the left side
has the same size as the one on the right and that corresponding columns are equal.
Theorem 2: Properties of Matrix Multiplication
● Left-to-right order matters
● If AB = BA, then A and B commute with one another
Theorem 3: Properties of Transpose
● The transpose of a product of matrices equals the product of their transposes in
the reverse order.
Theorem 4: Inverse of a 2 × 2 matrix
● For A = [a b; c d] with det A = ad − bc ≠ 0, A⁻¹ = (1 / det A) [d −b; −c a].
● Theorem 4 says that a 2 × 2 matrix A is invertible iff det A ≠ 0.
Theorem 5: uniqueness of invertible matrix equation
Theorem 6: Properties of invertible matrices
● The product of n x n invertible matrices is invertible, and the inverse is the
product of their inverses in the reverse order.
Theorem 7: Elementary row
Theorem 8: The Invertible Matrix Theorem
M. The columns of A form a basis of ℝⁿ.
N. Col A = ℝⁿ
O. dim Col A = n
P. rank A = n
Q. Nul A = {0}
R. dim Nul A = 0
S. The number 0 is not an eigenvalue of A.
T. The determinant of A is not zero.
Corollary to 8
Theorem 9: Invertible Transformations
Theorem 12: Null Space. The null space of an m × n matrix A is a subspace of ℝⁿ.
Theorem 13: Pivot relative to Column space
The pivot columns of a matrix A form a basis for the column space of A.
Theorems, Chapter 3
Theorem 1. Cofactor Expansion of the Determinant
Theorem 2. Determinants of Triangular Matrix
If A is a triangular matrix, then det A is the product of the entries on the main diagonal.
Theorem 3. Row Operations
Theorem 4. Determinants and Invertibility
A square matrix A is invertible if and only if det A ≠ 0.
Corollary to Theorem 4
- det A = 0 when the columns (or rows, by Aᵀ) of A are linearly dependent.
- In practice, linear dependence is obvious only when two columns or two rows are
the same, or a column or a row is zero.
Theorem 5. Determinants of Transposes
If A is an n × n matrix, then det Aᵀ = det A.
Theorem 6. Multiplicative Property
If 𝐴 𝑎𝑛𝑑 𝐵 are 𝑛 × 𝑛 matrices, then 𝑑𝑒𝑡(𝐴𝐵) = 𝑑𝑒𝑡(𝐴)𝑑𝑒𝑡(𝐵).
- Note: det(A+B) is not equal to det(A)+det(B).
Theorem 7. Cramer’s Rule
Theorem 8. Inverse theorem
Theorems, Chapter 4
Theorem 1: The subspace spanned or generated by vectors v1...vp.
Theorem 2. Null space is subspace
Theorem 3. Column space is subspace
Theorem 4. Linear dependence of vectors [similar to CH1, T7]
Theorem 5. Spanning Set Theorem Basis
Theorem 6. Pivot Basis
Theorem 7. Unique Representation Theorem (basis)
Note: The scalars c₁, ..., cₙ are the coordinates of x relative to B, and [x]_B is the coordinate vector of x.
The mapping x ↦ [x]_B is called the coordinate mapping (determined by B).
The change-of-coordinates matrix P_B = [b₁ b₂ ⋯ bₙ] satisfies x = P_B [x]_B; it is basically
the basis vectors assembled as columns.
Theorem 8. Coordinate Basis Mapping
Isomorphism = a one-to-one linear transformation from a vector space V onto a vector space W. Every
vector space calculation in V is accurately reproduced in W, and vice versa.
Theorem 9. Basis linear dependence
Theorem 10. Basis vectors
If a vector space V has a basis of n vectors, then every basis of V must consist of
exactly n vectors.
Theorem 11. Expanded basis
Theorem 12. The Base Theorem
Note: By theorem 9, the number of vectors in a linearly independent expansion of S can
never exceed the dimension of V.
Theorem 13. Row space
If two matrices A and B are row equivalent, then their row spaces are the same. If B is in
echelon form, the nonzero rows of B form a basis for the row space of A as well as for that of B.
Theorem 14. Rank Theorem
The dimensions of the column space and the row space of an m x n matrix A are equal.
This common dimension, the rank of A, also equals the number of pivot positions in A
and satisfies the equation rank A + dim Nul A = n .
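A numerical spot-check of the Rank Theorem on a made-up 4 × 5 matrix:

```python
import numpy as np

# Rank Theorem: rank A + dim Nul A = n (the number of columns).
A = np.array([[2., 5., -3., -4., 8.],
              [4., 7., -4., -3., 9.],
              [6., 9., -5., 2., 4.],
              [0., -9., 6., 5., -6.]])
rank = np.linalg.matrix_rank(A)
n = A.shape[1]
print(rank, n - rank)        # rank A = 3, so dim Nul A = 5 - 3 = 2
```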
Theorems, Chapter 5
Theorem 1 (5.1): The eigenvalues of a triangular matrix are the entries on its main
diagonal.
Theorem 2. If v₁, ..., vᵣ are eigenvectors corresponding to distinct eigenvalues λ₁, ..., λᵣ of an n × n matrix A, then the set {v₁, ..., vᵣ} is linearly independent.
Theorem 3. Properties of Determinants
Theorem 4. If n x n matrices A and B are similar, then they have the same characteristic
polynomial and hence the same eigenvalues (with the same multiplicities).
Theorem 5. Diagonalization Theorem
Theorem 6. An n x n matrix with n distinct eigenvalues is diagonalizable.
Theorem 7. For a matrix with fewer than n distinct eigenvalues: A is diagonalizable iff the dimensions of its eigenspaces sum to n (each eigenspace dimension equals the multiplicity of its eigenvalue).