MATH1850U: Chapter 1
LINEAR SYSTEMS
Introduction to Systems of Linear Equations (1.1; pg.2)
Definition: A linear equation in the n variables x1, x2, ..., xn is defined to be an equation that can be written in the form
a1x1 + a2x2 + ... + anxn = b
where a1, a2, ..., an and b are real constants. The ai are called the coefficients, and the variables xi are sometimes called the unknowns. If an equation cannot be written in this form, it is called a nonlinear equation.
Examples:
Application: Suppose that $100 is invested in 3 stocks. If A, B, and C denote the number of shares of each stock that are purchased, and the shares have unit costs $5, $1.50, and $3 respectively, write the linear equation describing this scenario.
Definition: A solution of a linear equation a1x1 + a2x2 + ... + anxn = b is a sequence (or n-tuple) of n numbers s1, s2, ..., sn such that the equation is satisfied when we substitute x1 = s1, x2 = s2, ..., xn = sn in the equation. The set of ALL solutions of the equation is called the solution set (or sometimes the general solution) of the equation.
Example:
Definition: A finite set of linear equations in n variables x1, x2, ..., xn is called a system of linear equations or a linear system. A sequence of n numbers s1, s2, ..., sn is a solution of the system of linear equations if x1 = s1, x2 = s2, ..., xn = sn is a solution of every equation in the system.
Example: Verify that x = 1, y = -1, z = 1 is a solution of the linear system:
x + 2y + 3z = 2
3x -  y     = 4
     y + 2z = 1
Question: Does every system of equations have a solution?
Definition: A system of equations that has no solutions is said to be inconsistent. If there is at least one solution of the system, it is called consistent.
Example (one solution):
Example (no solutions):
Example (infinitely many solutions):
Question: What do the above cases correspond to geometrically?
Extension: Now let’s say we have 3 equations in 3 unknowns…then what? How many
solutions are possible? What is the geometric interpretation?
Ok, finally, let’s extend this even further…n equations in n unknowns.
Theorem: A system of linear equations has either no solution, exactly one solution, or
infinitely many solutions.
Ok, so we know what to expect in terms of solutions, but how do we solve so many
equations in so many unknowns???
Definition: An arbitrary set of m equations in n unknowns
a11x1 + a12x2 + ... + a1nxn = b1
a21x1 + a22x2 + ... + a2nxn = b2
...
am1x1 + am2x2 + ... + amnxn = bm
can be written more concisely in augmented matrix form:
[ a11  a12  ...  a1n  b1 ]
[ a21  a22  ...  a2n  b2 ]
[ ...                    ]
[ am1  am2  ...  amn  bm ]

Example: Write the linear system in augmented matrix form:
x + 2y + 3z = 2
3x -  y     = 4
     y + 2z = 1

Example: Suppose we have
[ 1  2  8   3 ]
[ 0  1  5  -2 ]
[ 0  0  1   7 ]
What is the solution to the system?
So how does this help???
Definition: You find solutions to systems of linear equations using three types of operations on an augmented matrix (or on the system of equations itself, as indicated in brackets):
- Multiply a row (equation) through by a nonzero constant
- Interchange two rows (equations)
- Add a multiple of one row (equation) to another
These are called Elementary Row Operations.
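As a quick illustration (mine, not part of the original notes), the three operations can be tried in Maple; RowOperation below is assumed from the LinearAlgebra package, and the matrix is an arbitrary choice:
> with(LinearAlgebra):
> A := Matrix([[1, 2, 3], [3, 1, 0], [0, 1, 2]]):
> RowOperation(A, 1, 2);        # multiply row 1 by the nonzero constant 2
> RowOperation(A, [1, 2]);      # interchange rows 1 and 2
> RowOperation(A, [2, 1], -3);  # add -3 times row 1 to row 2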
Example: Use elementary row operations to replace the system of equations with
another system that has the same solution, but is easier to solve. Then solve the resulting
system.
x1 + x2 +  x3 = 2
-x1 + x2 + 3x3 = 0
x1 + x2 - 4x3 = 2
Gaussian Elimination (1.2; pg. 11)
Recall: The elementary row operations we studied earlier today are:
- Multiply a row through by a nonzero constant
- Interchange two rows
- Add a multiple of one row to another
Definition: A matrix is said to be in reduced row-echelon form if it has the following 4
properties (if only the 1st 3 properties are satisfied, it is said to be in row-echelon form).
1. If a row does not consist entirely of zeroes, then the first nonzero number in the
row is a 1. This is called a leading 1.
2. If there are any rows that consist entirely of zeroes, then they are grouped together
at the bottom of the matrix.
3. In any two successive rows that do not consist entirely of zeroes, the leading 1 in
the lower row occurs farther to the right than the leading 1 in the higher row.
4. Each column that contains a leading 1 has zeroes everywhere else.
Examples:
So, why are we doing this? Well, once you’ve got a system in one of these forms,
finding the solution is super-easy.
Example: Consider
[ 2   1  -1   4 ]
[ 3  -2   5   9 ]
[ 4  -1  -1   6 ]
[ 1  -6   1  -3 ]
and
[ 1  0  0  2 ]
[ 0  1  0  1 ]
[ 0  0  1  1 ]
[ 0  0  0  0 ]
More Examples:
1 0 2 
0 1 7 


0 0 0
1 0 0 5
0 0 1 0 


1 0 2 5 
0 0 0 3


1
0

0

0
0 8 7
1 5 9 
0 0 0

0 0 0
Definition: Solving by reducing to row-echelon form and then using back-substitution is called Gaussian Elimination. Solving by reducing to reduced row-echelon form is called Gauss-Jordan Elimination.
Gaussian Elimination Algorithm (to reduce a matrix to row-echelon form):
1. Locate the 1st nonzero column... if it doesn't contain a leading 1, create a leading 1 using an elementary row operation.
2. Move the leading 1 to the top of that column by switching 2 rows.
3. Use the leading 1 to create 0's underneath it, using row operations.
4. Move on to the next column and repeat steps 1-4, working only with the rows below the ones that the algorithm has already been applied to.
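For practice, both eliminations can be checked in Maple (a sketch with an illustrative augmented matrix of my choosing; GaussianElimination and ReducedRowEchelonForm are from the LinearAlgebra package):
> with(LinearAlgebra):
> M := Matrix([[1, 1, 2, 9], [2, 4, -3, 1], [3, 6, -5, 0]]):  # an augmented matrix [A | b]
> GaussianElimination(M);        # forward elimination; finish with back-substitution
> ReducedRowEchelonForm(M);      # Gauss-Jordan: read the solution straight off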
Example (Gaussian Elimination): Solve the system
LINEAR SYSTEMS cont…
Gaussian Elimination (1.2; pg. 11) cont...
Recall: Last day, we introduced Gaussian and Gauss-Jordan Elimination for row-reducing a matrix. Let's get some more practice at this.
Example (Gauss-Jordan elimination): Solve the system
More Examples: Let's do some more examples of the final stage of the algorithm... suppose we row-reduced the augmented matrix corresponding to a linear system and obtained:
Terminology: A leading variable is a variable that has a leading 1 in its column and a
free variable is a variable that has no leading 1 in its column.
Example:
x1 + 2x2 - x3 = 1
      x2 + 5x3 = 2
Definition: A system of linear equations is said to be homogeneous if the constant terms are all zero. That is, bi = 0 for all i between 1 and m:
a11x1 + a12x2 + ... + a1nxn = 0
a21x1 + a22x2 + ... + a2nxn = 0
...
am1x1 + am2x2 + ... + amnxn = 0
Some Properties of Homogeneous Systems:
- Every homogeneous system is consistent, because it has at least the solution x1 = 0, x2 = 0, ..., xn = 0. This is called the trivial solution.
- Either there is only the trivial solution, or there are infinitely many solutions.
- A homogeneous system of equations with more unknowns than equations has infinitely many solutions.
Example:
Example: Determine the values of k for which the system of equations has:
i) no solutions;
ii) exactly one solution;
iii) infinitely many solutions.
Matrices and Matrix Operations (1.3; pg. 25)
Recall: When we started the course, we introduced matrices. Now let’s better
understand how to work with them.
Definition: A matrix is a rectangular array of numbers. The numbers in the array are
called the entries. Matrices will usually be denoted with capital letters.
Definition: The size of a matrix is described in terms of the number of rows (horizontal) and columns (vertical) it has. An m × n matrix has m rows and n columns.
Examples: Find the size of the following matrices.
Definition: A matrix with only one column is called a column vector (or column
matrix), and a matrix with only one row is called a row vector (or row matrix).
Notation: ( A) ij and aij will denote the entry of the matrix A that is in both the ith row
and the jth column.
Definition: A square matrix is a matrix that has the same number of rows as columns. The order of a square matrix is the number of rows (and columns). The entries a11, a22, ..., ann form the main diagonal of the square matrix A.
Definition: Two matrices are defined to be equal if they have the same size and all their
corresponding entries are equal.
Definition: The sum A + B of two matrices is obtained by adding each entry of A to the corresponding entry of B. Similarly, the difference A - B is found by subtracting the corresponding entries.
Caution: Matrices of different sizes can NOT be added or subtracted.
Example: Find A + B and A - C.
Definition: If A is any matrix and c is any scalar (some number), then the scalar
product cA is the matrix obtained by multiplying c by each entry of A. The matrix cA
is said to be a scalar multiple of A.
Example: Find cA given
Definition: If A1, A2, ..., An are matrices of the same size, and c1, c2, ..., cn are scalars, then an expression of the form c1A1 + c2A2 + ... + cnAn is called a linear combination of A1, A2, ..., An with coefficients c1, c2, ..., cn.
Example: Find c1A1 + c2A2 + c3A3 given that c1 = 3, c2 = 5, c3 = 2 and the given matrices.
Definition: If A is an m × r matrix and B is an r × n matrix, then the product AB is the m × n matrix whose entries are determined as follows:
(AB)ij = ai1 b1j + ai2 b2j + ai3 b3j + ... + air brj
Caution: To find the matrix product AB, the number of columns in A must be the
same as the number of rows in B.
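As a small numeric illustration (values chosen here): with A a 2 × 3 matrix and B a 3 × 2 matrix, the product AB is 2 × 2. In Maple the product is written with a dot:
> with(LinearAlgebra):
> A := Matrix([[1, 2, 4], [2, 6, 0]]):     # 2 x 3
> B := Matrix([[4, 1], [0, -1], [2, 7]]):  # 3 x 2
> A . B;   # 2 x 2; for instance entry (1,1) is 1*4 + 2*0 + 4*2 = 12
> # B . A would be 3 x 3, so AB and BA need not even have the same size.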
Example:
Example:
Example: Find the 2nd column in the product AB. Find the 3rd row in the product AB.
Definition (Enrichment): If we write a matrix as a matrix of matrices, this is called a
partitioned matrix.
Matrix Products as Linear Combinations
Ok, so let’s get back to using what we’ve learned to help us solve systems of equations.
Definition: If we start with
a11x1 + a12x2 + ... + a1nxn = b1
a21x1 + a22x2 + ... + a2nxn = b2
...
am1x1 + am2x2 + ... + amnxn = bm
then we can write the system of equations as
Ax = b
Here, A is called the coefficient matrix, x is the column vector of variables or unknowns, and b is the column vector of constant terms.
Example: How do we write the following system in matrix form?
Ok, wait … we need a few more definitions.
Definition: If A is an m × n matrix, then the transpose of A, denoted AT, is defined to be the n × m matrix that results from interchanging the rows and columns of A, i.e.
(AT)ij = (A)ji
Example: Find the transpose of the matrix below.
Definition: If A is a square matrix of order n, then the trace of A, denoted tr(A), is defined to be the sum of the entries on the main diagonal of A. The trace is undefined if A is not a square matrix.
tr(A) = a11 + a22 + ... + ann
Example:
Summary of Matrix Definitions and Operations

Definition/Operation | What it means/How to perform it
Square matrix | A matrix that has the same number of rows and columns
Main diagonal | The entries a11, a22, ..., ann in a square matrix
A + B, A - B (Matrix addition/subtraction) | Add/subtract the corresponding entries in A and B
cA (Multiplication by a scalar) | Multiply each entry of A by the scalar c
AB (Matrix multiplication) | To find the entry in the ith row and jth column of the product matrix, multiply the entries in the ith row of A by the entries in the jth column of B and add up the results
AT (Transpose) | Interchange the rows and columns of the matrix
tr(A) (Trace) | Find the sum of the entries on the main diagonal (for square matrices only)
LINEAR SYSTEMS cont…
Inverses and Algebraic Properties of Matrices (1.4; pg. 38)
Recall: Last day, we learned how to multiply matrices.
Caution: There is no Commutative Law for matrix multiplication! In other words, it is NOT necessarily true that AB and BA will be equal.
Example: Verify that AB ≠ BA for the matrices below.
Luckily, though, most of the laws for real numbers also hold for matrices.
Theorem (Properties of Matrix Arithmetic): Assuming the sizes of the matrices are
such that the indicated operations can be performed, the following rules of matrix
arithmetic are valid.
a) A + B = B + A                  (Commutative Law for Addition)
b) A + (B + C) = (A + B) + C      (Associative Law for Addition)
c) A(BC) = (AB)C                  (Associative Law for Multiplication)
d) A(B + C) = AB + AC             (Left Distributive Law)
e) (B + C)A = BA + CA             (Right Distributive Law)
f) a(B + C) = aB + aC
g) (a + b)C = aC + bC
h) a(bC) = (ab)C
i) a(BC) = (aB)C = B(aC)
Definition: A matrix whose entries are all zero is called a zero matrix. Such a matrix is often denoted by 0 or 0m×n.
Theorem (Properties of Zero Matrices): Assuming that the sizes of the matrices are
such that the indicated operations can be performed, the following rules of matrix
arithmetic are valid.
a) A + 0 = 0 + A = A
b) A - A = 0
c) 0 - A = -A
d) A0 = 0; 0A = 0
Example: Find the product of
[ a11  a12  a13 ]
[ a21  a22  a23 ]
and the 3 × 3 zero matrix
[ 0  0  0 ]
[ 0  0  0 ]
[ 0  0  0 ]
Question: For numbers, we know that if ab = 0, then a = 0 or b = 0. Is the same true for matrices?
Caution: The cancellation law does not hold for matrices! That is,
AB = AC does NOT imply that B = C
and
AD = 0 does NOT imply that either A = 0 or D = 0
Example: Verify AB = AC even though B ≠ C. Verify that AD = 0 even though A ≠ 0, D ≠ 0.
Definition: A square matrix with 1's on the main diagonal and 0's everywhere else is called an identity matrix. Identity matrices are denoted I or In.
Note: If A is an m × n matrix, then AIn = A and ImA = A.
Theorem: If R is the reduced row-echelon form of an n × n matrix A, then either R has a row of zeros or R is the identity matrix In.
Definition: If A is a square matrix, and if a matrix B of the same size can be found such that AB = BA = I, then A is said to be invertible and B is called an inverse of A. If no such matrix B can be found, then A is said to be singular.
Example: Consider the following pairs A and B
i)
ii)
Theorem: If B and C are both inverses of the matrix A, then B = C. Thus if A is invertible, it has only one inverse, which will be denoted A-1, and
AA-1 = A-1A = I
Proof:
Formula for the Inverse of a 2 × 2 matrix: The matrix
A = [ a  b ]
    [ c  d ]
is invertible if ad - bc ≠ 0, in which case the inverse is given by the formula:
A-1 = 1/(ad - bc) [  d  -b ]
                  [ -c   a ]

Example: Find the inverse of the given matrix A.
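As a quick check of the formula with a matrix chosen here for illustration (not necessarily the one intended in the example above): for
A = [ 1  2 ]
    [ 3  4 ]
we get ad - bc = (1)(4) - (2)(3) = -2 ≠ 0, so
A-1 = (1/(-2)) [  4  -2 ]  =  [ -2     1   ]
               [ -3   1 ]     [ 3/2  -1/2 ]
and one can verify that AA-1 = I.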
Theorem: If A and B are invertible matrices of the same size, then AB is invertible and
(AB)-1 = B-1A-1.
Proof:
Definition (Powers of a Matrix): If A is a square matrix, then we define the nonnegative integer powers to be:
A0 = I
An = AA···A   (n factors)
Moreover, if A is invertible, then we define the negative integer powers to be
A-n = (A-1)n = A-1A-1···A-1   (n factors)
Theorem (Laws of Exponents): If A is a square matrix and r and s are integers, then
ArAs = Ar+s,    (Ar)s = Ars
Theorem (Negative Exponents): If A is an invertible matrix, then
a) A-1 is invertible, and (A-1)-1 = A
b) An is invertible and (An)-1 = (A-1)n for n = 0, 1, 2, ...
c) For any nonzero scalar k, the matrix kA is invertible, and (kA)-1 = (1/k)A-1
Proof of b) with n = 3:
Theorem (Properties of the Transpose):
Theorem: If A is an invertible matrix, then AT is also invertible and
(AT)-1 = (A-1)T
Elementary Matrices and a Method for Finding A-1 (1.5; pg. 51)
Definition: An n × n matrix is called an elementary matrix if it can be obtained from the n × n identity matrix by performing a single elementary row operation. We often denote these matrices by E.
Examples: Which of the following are elementary matrices?
Theorem (Row Operations by Matrix Multiplication): If the elementary matrix E results from performing a certain row operation on Im and if A is an m × n matrix, then the product EA is the matrix that results when the same row operation is performed on A.
Note: For every elementary matrix E there is a corresponding elementary row operation
that takes E back to the identity.
Theorem: Every elementary matrix is invertible, and the inverse is also an elementary
matrix.
Equivalence Theorem: If A is an n × n matrix, then the following statements are equivalent:
a) A is invertible
b) The homogeneous system Ax = 0 has only the trivial solution
c) The reduced row-echelon form of A is In
d) A is expressible as a product of elementary matrices.
Proof (a ⇒ b ⇒ c):
Definition: Matrices that can be obtained from one another by a finite sequence of
elementary row operations are said to be row equivalent.
Idea: To find the inverse of an invertible matrix A, we must find a sequence of elementary row operations that reduces A to the identity and then perform this same sequence of operations on In to obtain A-1.

Example: Find the inverse of
A = [ 1  3  -2 ]
    [ 2  1   5 ]
    [ 3  9  -6 ]

Example: Find the inverse of
A = [ 1  2   0 ]
    [ 0  1  -1 ]
    [ 2  4   3 ]
LINEAR SYSTEMS cont…
More on Linear Systems and Invertible Matrices (1.6; pg. 60)
Recall: Last day, we learned about the concept of inverting a matrix.
Theorem: Every system of linear equations has either no solution, exactly one solution,
or infinitely many solutions.
Proof (that if more than one solution, then infinitely many):
Theorem: If A is an invertible n × n matrix, then for each column vector b, the system of equations Ax = b has exactly one solution, namely
x = A-1b
Proof:
Example: Consider the following system with the information provided...what is the
solution to the system?
Question: Say you wanted to solve ALL the systems Ax = b1, Ax = b2, Ax = b3, ..., where A is the same for all systems. How could you do this efficiently?
Idea: If A is invertible, then x1 = A-1b1, x2 = A-1b2, x3 = A-1b3, ..., or set up the augmented matrix [ A | b1 | b2 | b3 | ... ].
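A sketch of this idea in Maple, with illustrative numbers (row-reduce A augmented with all of the right-hand sides at once):
> with(LinearAlgebra):
> A := Matrix([[1, 2], [1, 1]]):
> b1 := <1, -2>:  b2 := <5, 0>:  b3 := <-2, -1>:
> ReducedRowEchelonForm(<A | b1 | b2 | b3>);  # the last three columns are the three solutions
> # When A is invertible, MatrixInverse(A) . b1 (etc.) gives the same answers.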
Example: Solve the three systems Ax = b1, Ax = b2, Ax = b3 with
A = [ 1  2 ]      b1 = [  1 ]    b2 = [ 5 ]    b3 = [ -2 ]
    [ 1  1 ],          [ -2 ],        [ 0 ],        [ -1 ]
Theorem: Let A be a square matrix.
a) If B is a square matrix satisfying BA = I, then B = A-1
b) If B is a square matrix satisfying AB = I, then B = A-1
Note: this implies that A is invertible, and the commuting part 'comes for free'.
Let’s return to our theorem from section 1.5 where we talked about what statements are
equivalent about an invertible matrix, and let’s add a few more equivalent statements.
Equivalence Theorem: If A is an n × n matrix, then the following statements are equivalent:
a) A is invertible
b) The homogeneous system Ax = 0 has only the trivial solution
c) The reduced row-echelon form of A is In
d) A is expressible as a product of elementary matrices.
e) Ax = b is consistent for every column vector b
f) Ax = b has exactly one solution for every column vector b
Theorem: Let A and B be square matrices of the same size. If AB is invertible, then
both A and B must also be invertible.
Example: What conditions must b1, b2, and b3 satisfy in order for the system of equations
[ 1  1   1 ] [ x1 ]   [ b1 ]
[ 1  0  -2 ] [ x2 ] = [ b2 ]
[ 0  1  -1 ] [ x3 ]   [ b3 ]
to be consistent?
LINEAR SYSTEMS cont…
Diagonal, Triangular, and Symmetric Matrices (1.7; pg. 66)
Recall: We’ve spent a lot of time talking about how to work with matrices. Now let’s
introduce a few special matrices.
Definition: A square matrix in which all the entries off the main diagonal are zero is
called a diagonal matrix.
Examples:
Note: Inverses and Powers of a Diagonal Matrix are intuitive, but for a detailed
explanation, refer to the text.
Examples: Given the matrix below, find A3 and A-1.
Definition: A square matrix in which all the entries above the main diagonal are zero is
called a lower triangular matrix. A square matrix in which all the entries below the
main diagonal are zero is called upper triangular. A matrix that is either upper
triangular or lower triangular is called triangular.
Examples:
Theorem (Triangular Matrices):
a) The transpose of a lower triangular matrix is upper triangular, and the transpose of
an upper triangular matrix is lower triangular.
b) The product of lower triangular matrices is lower triangular, and the product of
upper triangular matrices is upper triangular.
c) A triangular matrix is invertible if and only if its diagonal entries are all nonzero.
d) The inverse of an invertible lower triangular matrix is lower triangular, and the
inverse of an invertible upper triangular matrix is upper triangular.
Definition: A square matrix A is called symmetric if A = AT.
Examples:
Theorem (Symmetric Matrices): If A and B are symmetric matrices of the same size, and k is any scalar, then:
a) AT is symmetric
b) A + B and A - B are symmetric
c) kA is symmetric.
Caution: But the product of symmetric matrices is NOT always symmetric!!!
Example: Consider the matrices below. Verify that A + B is symmetric while AB is not.
Theorem: If A is an invertible symmetric matrix, then A-1 is symmetric.
Proof:
Example: Show that the above theorem holds for
Theorem (Product of a Matrix and its Transpose): The products AAT and ATA of a matrix with its transpose are always symmetric, even when A is not symmetric.
Example: Show that the above theorem holds for
Theorem: If A is an invertible matrix, then AAT and AT A are also invertible.
Enrichment Material: Applications of Linear Systems (Section 1.8)
Recall: We’ve spent the past few weeks studying techniques for solving a system of
equations. So let’s talk a bit more about the applications of this!
Applications include:
- Current in the loops of an electric circuit
- Traffic control
- Medical imaging
- Polynomial interpolation
- Balancing chemical equations
Refer to sections 1.8 and 1.9 of the text for additional applications!
Application: Balancing Chemical Equations
Write a balanced chemical equation for the following reaction (propane combustion):
C3H8 + O2 → CO2 + H2O
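One way to set this up (a sketch of the standard approach, with my own arithmetic): write x1 C3H8 + x2 O2 → x3 CO2 + x4 H2O and match the number of atoms of each element on both sides. Carbon gives 3x1 = x3, hydrogen gives 8x1 = 2x4, and oxygen gives 2x2 = 2x3 + x4, a homogeneous linear system in x1, x2, x3, x4; taking x1 = 1 leads to x3 = 3, x4 = 4 and x2 = 5.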
DETERMINANTS
In this chapter, we study “determinants”. These will be useful for solving very small
linear systems, will provide a formula for the inverse of a matrix, and will help us link
various concepts in linear algebra together.
Determinants by Cofactor Expansion (Section 2.1; pg. 93)
Recall: Back in section 1.4, we defined the inverse of a 2 × 2 matrix as
A-1 = 1/(ad - bc) [  d  -b ]
                  [ -c   a ]
provided that ad - bc ≠ 0.
Definition: If A is a square matrix, then the minor of entry aij is denoted by Mij and is defined to be the determinant of the submatrix that remains after the ith row and jth column are deleted from A. The number (-1)^(i+j) Mij is denoted by Cij and is called the cofactor of entry aij.
Example: Find C11 and C12 for the given 3x3 matrix.
Theorem: If A is an n  n matrix, then regardless of which row or column of A is
chosen, the number obtained by multiplying the entries in that row or column by the
corresponding cofactors and adding the resulting products is always the same.
Definition: The determinant of an n × n matrix A, denoted det(A), can be computed by multiplying the entries in any row (or column) by their cofactors and adding the resulting products. The sums themselves are called cofactor expansions of A. That is, the cofactor expansion along the jth column gives
det(A) = a1jC1j + a2jC2j + ... + anjCnj
and the cofactor expansion along the ith row gives
det(A) = ai1Ci1 + ai2Ci2 + ... + ainCin
Caution: The entries and cofactors are numbers, so the determinant is a number. It is
NOT a matrix!
Example: Find the determinant of the given 3x3 matrix.
Example: Find the determinant of the matrix below.
Theorem: If A is an n × n triangular matrix (upper triangular, lower triangular, or diagonal), then det(A) is the product of the entries on the main diagonal of the matrix; that is, det(A) = a11a22···ann.
Explanation:
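A quick Maple check of this fact on a lower triangular matrix chosen here:
> with(LinearAlgebra):
> A := Matrix([[2, 0, 0], [5, -1, 0], [1, 4, 3]]):
> Determinant(A);   # -6, which is the product 2*(-1)*3 of the diagonal entries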
Example: Find det(In).

Example: Find the determinant of the 4x4 matrix
[ 5  1  2  0 ]
[ 0  0  3  0 ]
[ 7  2  2  9 ]
[ 0  0  0  1 ]

Ok, let's do some more examples!

Example: Find the determinant of the given 4x4 matrix.
Note: For 2x2 and 3x3 matrices, there's an alternate way to compute determinants
that you may find faster!
Trick for 2x2:
Trick for 3x3:
Caution: The “3x3” Trick is called this because it only works for 3x3! If
you try it for 4x4 or 5x5, etc., it is completely wrong, and you will get an
automatic 0 for the question!!!
Example:
DETERMINANTS cont...
Evaluating Determinants by Row Reduction (2.2; pg. 100)
Recall: Last day, we introduced the method of cofactor expansion for finding determinants. Today, we will learn to evaluate determinants by row reduction... this involves fewer computations, so it is better for larger matrices.
Theorem: Let A be a square matrix. If A has a row of zeros or a column of zeros, then det(A) = 0.
Example:
Theorem: Let A be a square matrix. Then det(A) = det(AT).
Proof (2x2): Try as an exercise.
Proof (3x3):
The following theorem tells us how elementary row or column operations affect the value
of a determinant.
Theorem: Let A be an n × n matrix.
a) If B is the matrix that results when a single row or single column of A is multiplied by a scalar k, then det(B) = k det(A)
b) If B is the matrix that results when two rows or two columns of A are interchanged, then det(B) = -det(A)
c) If B is the matrix that results when a multiple of one row of A is added to another row, or when a multiple of one column of A is added to another column, then det(B) = det(A)
Notes of Caution:
- Elementary row and column operations may change the value of the determinant (this is in contrast to systems of equations, for which elementary row operations do not change solutions).
- Unlike when solving systems of equations, to find the determinant of a matrix you can use column operations as well as row operations.
Theorem: If A is a square matrix with two proportional rows or two proportional columns, then det(A) = 0.
Example: Use row reduction to evaluate the determinant of the given matrix.
Example: Use row reduction to evaluate the determinant of the given matrix.
Note: Sometimes it is convenient to use row or column operations to get the matrix into
a form that is good for cofactor expansion.
Example: Find det(A)
Properties of Determinants; Cramer’s Rule (2.3; pg. 106)
Theorem: For an n × n matrix A, det(kA) = k^n det(A), where k is any scalar.
Example: Find det(3A) if det(A) = 2.
Caution: In general,
det(A + B) ≠ det(A) + det(B)
Example: Verify that det(A + B) ≠ det(A) + det(B) for the given matrices.
Question: Is there a rule for the determinant of a product of two matrices?
Let’s start with the special case when one of the matrices is an elementary matrix.
Lemma: If B is an n × n matrix and E is an n × n elementary matrix, then
det(EB) = det(E) det(B)
Theorem: A square matrix A is invertible if and only if det(A) ≠ 0.
Proof:
Theorem: If A and B are square matrices of the same size, then
det(AB) = det(A) det(B)
That is, the determinant of the product is equal to the product of the determinants.
Example: Verify that the above theorem holds for the given matrices.
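A sketch of such a verification in Maple with matrices chosen here for illustration:
> with(LinearAlgebra):
> A := Matrix([[3, 1], [2, 1]]):  B := Matrix([[-1, 3], [5, 8]]):
> Determinant(A), Determinant(B);   # 1 and -23
> Determinant(A . B);               # -23, which equals 1*(-23)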
One nice consequence of the theory developed here is that we can finally tackle the
proofs of a couple of theorems from earlier sections:
Proof of Theorem from 1.6: If A and B are square matrices of the same size and if AB is
invertible, then so are A and B.
Proof of part of Theorem from 1.7 (Properties of Triangular Matrices): A triangular matrix is invertible iff its diagonal entries are all non-zero.
Theorem: If A is invertible, then
det(A-1) = 1/det(A)
Proof:
Example: Find det(A-1), det(3A-1), and det((3A)-1) for the given matrix.
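For instance, with numbers assumed here for illustration: if A is 3 × 3 with det(A) = 2, then det(A-1) = 1/det(A) = 1/2, det(3A-1) = 3^3 · (1/2) = 27/2, and det((3A)-1) = 1/det(3A) = 1/(3^3 · 2) = 1/54.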
Definition: If A is any n × n matrix and Cij is the cofactor of aij, then the matrix
[ C11  C12  ...  C1n ]
[ C21  C22  ...  C2n ]
[  ...               ]
[ Cn1  Cn2  ...  Cnn ]
is called the matrix of cofactors from A. The transpose of this matrix is called the adjoint of A (or the classical adjoint) and is denoted adj(A).
Example: Find adj(A) for the given matrix.
Theorem (Inverse of a Matrix Using its Adjoint): If A is an invertible matrix, then
A-1 = (1/det(A)) adj(A)
The proof uses the following Lemma:
Optional (if you’re interested in more detail about the Lemma): Have a look at
Example 5 from pg. 110 of the text: Entries and Cofactors from Different Rows.
Proof of Theorem:
Example: Find A-1 for the given matrix. Hint: We already found adj(A) in a previous example.
DETERMINANTS cont...
Properties of Determinants; Cramer’s Rule (2.3; pg. 106) cont...
Recall: Last class, we began studying several properties of determinants.
Theorem (Cramer's Rule): If Ax = b is a system of n equations in n unknowns such that det(A) ≠ 0, then the system has a unique solution. This solution is
x1 = det(A1)/det(A),   x2 = det(A2)/det(A),   ...,   xn = det(An)/det(A)
where Aj is the matrix obtained by replacing the entries in the jth column of A by the entries in the column vector
b = [ b1 ]
    [ b2 ]
    [ ...]
    [ bn ]
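A small worked instance (system chosen here for illustration): for
x + 2y = 3
4x + 5y = 6
we have det(A) = (1)(5) - (2)(4) = -3, det(A1) = det[ 3  2 ; 6  5 ] = 3, and det(A2) = det[ 1  3 ; 4  6 ] = -6, so x = 3/(-3) = -1 and y = (-6)/(-3) = 2.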
Example: Use Cramer's rule to solve the given 2x2 system.
Example: Use Cramer's rule to solve the given 2x2 system.
Example: Use Cramer’s Rule to find the value of x1 for the system below.
Equivalence Theorem: If A is an n × n matrix, then the following statements are equivalent:
a) A is invertible
b) The homogeneous system Ax = 0 has only the trivial solution
c) The reduced row-echelon form of A is In
d) A is expressible as a product of elementary matrices.
e) Ax = b is consistent for every column vector b
f) Ax = b has exactly one solution for every column vector b
g) det(A) ≠ 0
Application: A first look at Eigenvalues and Eigenvectors
This is a brief intro to what we'll study in Chapter 5. Consider the system
[ 1  2 ] [ x1 ]     [ x1 ]
[ 3  4 ] [ x2 ] = λ [ x2 ]
For what values of λ will this system have non-trivial solutions?
EUCLIDEAN VECTOR SPACES
Vectors in 2-Space, 3-Space, and n-Space (3.1; pg. 119)
Definition: Quantities that are completely determined by a number are called scalars.
Quantities that need both magnitude and direction to be completely determined are called
vectors.
Definition: Vectors are often represented by the coordinates of the terminal point of a
vector. These are called the components of the vector.
Example:
Definition: If n is a positive integer, then an ordered n-tuple is a sequence of n real numbers (a1, a2, ..., an). The set of all ordered n-tuples is called n-space and is denoted Rn.
Applications: Vectors can be useful in representing data sets, e.g.
- Temperature of a fluid at certain times
- Positions of cars on a road
Definition: Two vectors u = (u1, u2, ..., un) and v = (v1, v2, ..., vn) in Rn are called equal if
u1 = v1, u2 = v2, ..., un = vn
Example: If (5, a, 3, -7) = (5, 8, 3, b), find a and b.
Definition: We have the following definitions for vector arithmetic in Rn:
sum:              v + w = (v1 + w1, v2 + w2, ..., vn + wn)
scalar multiple:  kv = (kv1, kv2, ..., kvn)
negative:         -v = (-v1, -v2, ..., -vn)
difference:       w - v = (w1 - v1, w2 - v2, ..., wn - vn)
In Maple:
> with(Student[LinearAlgebra]):
> VectorSumPlot(<1,2>,<3,-5>);
Example: Find (___, ___) + (___, ___) and show a graphical representation.
Example: 5(3,1,  ,7) = __________________
Theorem (Properties of Vectors in Rn): If u = (u1, u2, ..., un), v = (v1, v2, ..., vn), and w = (w1, w2, ..., wn) are vectors in Rn and k and l are scalars, then
a) u + v = v + u
b) u + (v + w) = (u + v) + w
c) u + 0 = 0 + u = u
d) u + (-u) = u - u = 0
e) k(lu) = (kl)u
f) k(u + v) = ku + kv
g) (k + l)u = ku + lu
h) 1u = u
Explanation of f):
Definition: If w is a vector in Rn, then w is said to be a linear combination of the vectors v1, v2, ..., vr in Rn if it can be expressed in the form w = k1v1 + k2v2 + ... + krvr where k1, k2, ..., kr are scalars. These scalars are called the coefficients of the linear combination.
EUCLIDEAN VECTOR SPACES cont...
Norm, Dot Product, and Distance in Rn (3.2; pg. 130)
Definition: We define the Euclidean norm (or Euclidean length) of a vector u = (u1, u2, ..., un) in Rn by
||u|| = (u · u)^(1/2) = sqrt(u1^2 + u2^2 + ... + un^2)
Example: If u = (____, ____, ____, ____), find ||u||.
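For instance, with an assumed vector u = (1, 2, 2, 0): ||u|| = sqrt(1^2 + 2^2 + 2^2 + 0^2) = sqrt(9) = 3.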
Theorem (Properties of Length in R n ): If u and v are vectors in R n , and k is any
scalar, then:
Explanation of d) in R2:
Definition: The vector u is a unit vector if ||u|| = 1.
Example: Determine whether (____, ____, ____) is a unit vector.
Definition: The standard unit vectors in Rn are the vectors with one component equal to 1 and all other components equal to zero.
Definition: The Euclidean distance between the points u = (u1, u2, ..., un) and v = (v1, v2, ..., vn) in Rn is defined by
d(u, v) = ||u - v|| = sqrt((u1 - v1)^2 + (u2 - v2)^2 + ... + (un - vn)^2)
Example: If u = (____, ____, ____) and v = (____, ____, ____), find d(u, v).
Theorem (Properties of Length in R n ): If u, v, and w are vectors in R n , and k is any
scalar, then:
Definition: If u = (u1, u2, ..., un) and v = (v1, v2, ..., vn) are any two vectors in Rn, then the dot product (also called the Euclidean inner product) u · v is defined by
u · v = u1v1 + u2v2 + ... + unvn
Note: It is common to refer to Rn with the operations of addition, scalar multiplication, and the Euclidean inner product as Euclidean n-space.
Example: Find the dot product of the given vectors.
Example: Find (____, ____, ____) · (____, ____, ____)
Application (from our text): Most books published in the last 25 years have a 10-digit
ISBN (International Standard Book Number) on them. The last of these digits is a check
digit to ensure that an ISBN has been recorded/transmitted error-free. If
a = (1, 2, 3, 4, 5, 6, 7, 8, 9) and b is the vector in R9 composed of the 1st nine digits of the ISBN, then the check-digit c is calculated as follows:
1. Find a · b.
2. Divide a · b by 11 to get the remainder c. This produces a number between 0 and 10 (and c = 10 is written as X to avoid double digits).
The text for the 10th edition of Howard Anton's "Elementary Linear Algebra: Applications Version" has ISBN 9780470432051. Compute the check-digit to verify this ISBN has been recorded here correctly.
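As an illustration of the procedure (with an ISBN-10 assumed here, not necessarily the one above): if the first nine digits are 0-470-43205, then b = (0, 4, 7, 0, 4, 3, 2, 0, 5), a · b = 0 + 8 + 21 + 0 + 20 + 18 + 14 + 0 + 45 = 126, and 126 = 11(11) + 5, so the check digit is c = 5.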
In R2 and R3, the dot product can be written as:
u · v = ||u|| ||v|| cos(θ)   if u ≠ 0 and v ≠ 0
u · v = 0                    if u = 0 or v = 0
where θ is the angle between u and v (and 0 ≤ θ ≤ π). In words, the dot product u · v depends on the lengths of u and v and the angle between them.
Example: Find the angle between the vectors u  (____, ____) and v  (____, ____) .
Theorem (Properties of the Euclidean Inner Product): If u, v, and w are vectors in
R n , and k is any scalar, then:
Explanation of a)
Explanation of d)
Theorem (Cauchy-Schwarz Inequality in Rn): If u = (u1, u2, ..., un) and v = (v1, v2, ..., vn) are vectors in Rn, then
|u · v| ≤ ||u|| ||v||
Theorem: If u and v are vectors in Rn with the Euclidean inner product, then
u · v = (1/4)||u + v||^2 - (1/4)||u - v||^2
Note (Vectors as Matrices): It is often useful to write a vector in R n as a row vector
(matrix) or column vector (matrix).
Definition: The matrix form for the dot product is u · v = vTu.
Example: Verify that Au · v = u · ATv for the following matrix and vectors.
Note (A Dot Product View of Matrix Multiplication): If ri is the ith row of A, and cj is the jth column of B, then
(AB)ij = ri · cj
Orthogonality (Section 3.3; pg. 143)
Recall: In the previous section, we found the relationship
θ = cos⁻¹( (u · v) / (||u|| ||v||) )
Definition: Two vectors u and v in Rn are called orthogonal (or perpendicular) if u · v = 0.
Example: Are any of the given vectors orthogonal to one another?
Theorem (Projections): If u and a are vectors in 2-space or 3-space and if a ≠ 0, then the vector component of u along a is
proj_a u = ((u · a)/||a||^2) a
and the vector component of u orthogonal to a is given by
u - proj_a u = u - ((u · a)/||a||^2) a
Example: Given the vectors u and a below, find
a) the vector component of u along a
b) the vector component of u orthogonal to a.
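With illustrative vectors chosen here (not necessarily those intended above), say u = (2, 1) and a = (1, 1): u · a = 3 and ||a||^2 = 2, so proj_a u = (3/2)(1, 1) = (3/2, 3/2) and u - proj_a u = (1/2, -1/2), which is orthogonal to a since (1/2)(1) + (-1/2)(1) = 0.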
In section 3.2, we found that many theorems about vectors in R2 and R3 hold in
R n ...here’s another example of such a generalization.
Theorem (Pythagorean Theorem in Rn): If u and v are orthogonal vectors in Rn with the Euclidean inner product, then
||u + v||^2 = ||u||^2 + ||v||^2
Proof:
EUCLIDEAN VECTOR SPACES cont...
The Geometry of Linear Systems (Section 3.4; pg. 152)
Recall: A line in R2 can be specified by giving its slope and one of its points. Similarly, one can specify a plane in R3 by giving its inclination and specifying one of its points. A convenient method for describing the inclination of a plane is to specify a nonzero vector, called a normal, that is perpendicular to the plane.
Definition: Suppose that we want to find the equation of the plane passing through the point (x0, y0, z0) and having the nonzero vector n = (a, b, c) as normal. Then the point-normal form of the equation of the plane is
a(x - x0) + b(y - y0) + c(z - z0) = 0
Example: Find the equation of the plane with normal n = (____, ____, ____) that goes through the point (____, ____, ____).
Definition: Suppose that l is the line in R3 through the point (x0, y0, z0) and parallel to a nonzero vector v = (a, b, c). The parametric equations for the line l are:
x = x0 + ta,   y = y0 + tb,   z = z0 + tc
for -∞ < t < ∞.
Example: Find parametric equations of the line containing the point (____, ____, ____) and having the direction vector d = (____, ____, ____).
Cross-Product (Section 3.5; pg. 161)
Definition: If u = (u1, u2, u3) and v = (v1, v2, v3) are vectors in 3-space, then the cross-product u × v is the vector defined by
u × v = (u2v3 - u3v2, u3v1 - u1v3, u1v2 - u2v1)
or in determinant notation
u × v = ( det[ u2  u3 ; v2  v3 ],  -det[ u1  u3 ; v1  v3 ],  det[ u1  u2 ; v1  v2 ] )
The cross-product u × v is a vector that is perpendicular to both u and v.
Example: Find the cross product (____, ____, ____) × (____, ____, ____).
Example: Find the cross product (____, ____, ____) × (____, ____, ____).
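As an illustration with vectors chosen here: (1, 2, 3) × (4, 5, 6) = (2·6 - 3·5, 3·4 - 1·6, 1·5 - 2·4) = (-3, 6, -3), which is orthogonal to both factors, e.g. (-3)(1) + (6)(2) + (-3)(3) = 0. In Maple:
> with(LinearAlgebra):
> CrossProduct(<1, 2, 3>, <4, 5, 6>);   # the vector (-3, 6, -3)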
Theorem (Properties of the Cross Product): If u, v, and w are any vectors in R3 and k
is any scalar, then
Proof of a)
Proof of f)
Optional: See the text for the determinant form of cross-product using the standard unit
vectors and how to get the direction of the cross product from the "right-hand rule".
Theorem (Area of a Parallelogram): If u and v are vectors in R3, then ||u × v|| is equal to the area of the parallelogram determined by u and v.
Example: Find the area of the parallelogram determined by the vectors u = (____, ____, ____) and v = (____, ____, ____).
Note: See the text for a geometric interpretation of determinants for vectors in R 2 and
R3 .
Note: The cross product u × v does not depend on the coordinate system you choose to write u and v in.
GENERAL VECTOR SPACES
Recall: In Chapter 3, we saw n-space or Rn. All together, the following 3 things make
up n-space:
1. The objects
2. Rule for addition: a rule for associating with each pair of objects u and v an object u + v, called the sum of u and v
3. Rule for scalar multiplication: a rule for associating with each scalar k and each object u in V an object ku, called the scalar multiple of u by k
Real Vector Spaces (4.1; pg. 171)
Definition: Let V be an arbitrary nonempty set of objects on which two operations are defined, addition and multiplication by scalars (numbers). If the following axioms are satisfied by all objects u, v, w in V and all scalars k and l, then we call V a vector space and we call the objects in V vectors.
1. If u and v are objects in V, then u + v is in V.
2. u + v = v + u
3. u + (v + w) = (u + v) + w
4. There is an object 0 in V, called the zero vector for V, such that u + 0 = 0 + u = u
5. For each u in V, there is an object -u in V, called the negative of u, such that u + (-u) = (-u) + u = 0
6. If k is any scalar and u is any object in V, then ku is in V.
7. k(u + v) = ku + kv
8. (k + l)u = ku + lu
9. k(lu) = (kl)u
10. 1u = u
Example: Show that the set of all 2 x 2 matrices with real entries is a vector space if
vector addition is defined to be matrix addition and vector scalar multiplication is defined
to be matrix scalar multiplication.
1. If u and v are objects in V, then u + v is in V.
2. u + v = v + u
3. u + (v + w) = (u + v) + w
4. There is an object 0 in V, called the zero vector for V, such that u + 0 = 0 + u = u
5. For each u in V, there is an object -u in V, called the negative of u, such that u + (-u) = (-u) + u = 0
6. If k is any scalar and u is any object in V, then ku is in V.
7. k(u + v) = ku + kv
8. (k + l)u = ku + lu
9. k(lu) = (kl)u
10. 1u = u
We’ll study more examples involving vector spaces next class.
GENERAL VECTOR SPACES cont...
Real Vector Spaces (4.1; pg. 171) cont...
Recall: Last day, we introduced the definition of a vector space. Let’s do some more
examples.
Example: Let V be the set of real-valued functions defined on the entire real line (-∞, ∞), with the standard rules for function addition and scalar multiplication that we are familiar with, i.e. if f = f(x) and g = g(x), then (f + g)(x) = f(x) + g(x) and (kf)(x) = kf(x). This is called the Vector Space of Functions, and is often labelled F(-∞, ∞). Show that V is indeed a vector space.
1. If u and v are objects in V, then u + v is in V.
2. u + v = v + u
3. u + (v + w) = (u + v) + w
4. There is an object 0 in V, called the zero vector for V, such that u + 0 = 0 + u = u
5. For each u in V, there is an object -u in V, called the negative of u, such that u + (-u) = (-u) + u = 0
6. If k is any scalar and u is any object in V, then ku is in V.
7. k(u + v) = ku + kv
8. (k + l)u = ku + lu
9. k(lu) = (kl)u
10. 1u = u
Example: Let V be the line y = x + 1 in R2. Define addition and scalar multiplication as the standard addition and scalar multiplication of vectors in R2. Show that the points in V do NOT form a vector space.
Definition: Let V consist of a single object, which we will denote by 0, and define addition as 0 + 0 = 0 and scalar multiplication as k0 = 0 for all scalars k. We call this the zero vector space.
Theorem (Properties of Vector Spaces): Let V be a vector space, u be a vector in V,
and k a scalar; then:
Subspaces (4.2; pg. 179)
Definition: A subset W of V is called a subspace of V if W is itself a vector space under
the addition and scalar multiplication defined on V.
Theorem (Subspaces): If W is a set of one or more vectors from a vector space V, then W is a subspace of V if and only if the following conditions hold:
a) If u and v are vectors in W, then u + v is in W
b) If k is any scalar and u is any vector in W, then ku is in W
Example: Let W consist of the points on the line y = x + 1, so W is a subset of R2. Show that W is NOT a subspace of R2.
Example: The set of points W that lie on a line through the origin is a subset of R3.
Show that W is a subspace of R3.
Example: The set of all points on a plane is a subset of R3. Every plane through the
origin is a subspace of R3.
Example: In the previous section, we saw that the set containing only the zero vector is a vector space (called the zero vector space). Because every vector space has a zero vector, the set {0} is a subspace of every vector space. In addition, the vector space V is a subspace of itself.
Note: The subspaces of R2 are:
- the zero subspace
- lines through the origin
- R2

Note: The subspaces of R3 are:
- the zero subspace
- lines through the origin
- planes through the origin
- R3

Example: The set P2 of polynomials of degree ≤ 2 is a subset of F(-∞, ∞), the set of real-valued functions on (-∞, ∞). Show that P2 is a subspace of F(-∞, ∞).
GENERAL VECTOR SPACES cont...
Subspaces (4.2; pg. 179) cont...
Recall: Last day, we introduced the concept of a subspace:
• If u and v are vectors in W, then u + v is in W
• If k is any scalar and u is any vector in W, then ku is in W
Let’s do a few more examples.
Example: Determine whether each of the following is a subspace of the indicated vector space.
a) Is the set of all 3 x 3 matrices A such that det(A) = 1 a subspace of Mnn?
b) Is the set of all 2 x 2 matrices A of the form
[ -1  b ]
[  0  c ]
a subspace of Mnn?
c) Is the set of all vectors of the form (a, b, 0) with b = 3a a subspace of R3?
d) Is the set of all polynomials of the form a0 + a1x + a2x^2 with a0 + a1 = 0 a subspace of P2?
Theorem (Solution space of homogeneous systems): If Ax = 0 is a homogeneous
linear system of m equations in n unknowns, then the set of solution vectors is a subspace
of R n .
Example: Consider Ax = 0 with A as given below.
Definition: A vector w is called a linear combination of the vectors v1 , v 2 , , v r if it
can be expressed in the form w = k1 v1 + k 2 v 2 + + k r v r where k1 , k 2 , , k r are
scalars.
Example: Consider the 2 vectors u =(1, −1, 1) and v = (−1, 2, 1) in R 3 . Show that
w = (1, 0, 3) is a linear combination of u and v and that x = (1, 0, 0) is not.
Theorem: If v1, v2, ..., vr are vectors in a vector space V, then:
a) The set W of all linear combinations of v1, v2, ..., vr is a subspace of V.
b) W is the smallest subspace of V that contains v1, v2, ..., vr in the sense that every other subspace of V that contains v1, v2, ..., vr must contain W.
Proof/explanation of part a):
Definition: If S = {v1, v2, ..., vr} is a set of vectors in a vector space V, then the subspace W of V consisting of all the linear combinations of the vectors in S is called the space spanned by v1, v2, ..., vr, and we say that the vectors v1, v2, ..., vr span W. To indicate that W is the space spanned by the vectors in the set S, we write
W = span(S)   or   W = span{v1, v2, ..., vr}
Example: span{e1 , e 2 , e 3 } = _________
Example: Describe the space spanned by the vector (0,1,1) in R 3 .
Example: Describe the space spanned by the vectors (1,0,0) and (0,1,0) in R 3 .
Example: Determine whether the vectors v 1 = ( 2,1, −3) , v 2 = (0, − 5,1) and
v 3 = (6, − 7, −9) span R 3 .
Theorem: If S = { v1 , v 2 , , v r } and S ′ = { w 1 , w 2 , , w k } are two sets of vectors in a
vector space V, then span{ v1 , v 2 , , v r } = span{ w 1 , w 2 , , w k } if and only if each vector
in S is a linear combination of those in S ′ and each vector in S ′ is a linear combination
of those in S.
Example: Suppose you’re given the sets of vectors below. Show that
span(S) = span(S′) .
Linear Independence (4.3; pg. 190)
Definition: If S = {v1, v2, ..., vr} is a nonempty set of vectors, then the vector equation
k1v1 + k2v2 + ... + krvr = 0
has at least one solution, namely
k1 = 0, k2 = 0, ..., kr = 0
If this is the only solution, then S is called a linearly independent set. If there are other solutions, then S is called a linearly dependent set.
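In practice, the equation k1v1 + k2v2 + ... + krvr = 0 is a homogeneous linear system in the k's, so it can be tested by row reduction; a Maple sketch using the vectors of the example that follows:
> with(LinearAlgebra):
> M := Matrix([[1, 5, 6], [0, 2, 2], [-1, 8, 7]]):  # columns are v1, v2, v3
> NullSpace(M);    # a nonzero basis vector here means a nontrivial solution, so the set is linearly dependent
> Determinant(M);  # 0 for this matrix, which (in the square case) also signals dependence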
Example: Show that the set of vectors v1 = (1, 0, −1), v 2 = (5, 2, 8), v 3 = (6, 2, 7) is
linearly dependent.
Example: Show that the polynomials given below form a linearly dependent set.
Example: Determine whether the vectors given below form a linearly dependent or
linearly independent set.
GENERAL VECTOR SPACES cont...
Linear Independence (4.3; pg. 190) cont...
Recall: Last class, we introduced the notion of linear dependence and independence.
Example: Determine whether the P2 vectors given below form a linearly dependent or linearly independent set.
Theorem: A set S with two or more vectors is:
a) Linearly dependent if and only if at least one of the vectors in S is expressible as a
linear combination of the other vectors in S.
b) Linearly independent if and only if no vector in S is expressible as a linear
combination of the other vectors in S.
Example: We saw in a previous example that the vectors below form a linearly
dependent set. Write one of these vectors as a linear combination of the others.
Example: We mentioned that the standard unit vectors in Rn, e1, e2, ..., en, form a linearly independent set, so we cannot write any of these vectors as a linear combination of the others. Verify that in R3, you cannot write e3 as a linear combination of e1 and e2.
Theorem:
a) A finite set of vectors that contains the zero vector is linearly dependent.
b) A set with exactly one vector is linearly independent if and only if that vector is
not 0.
c) A set with exactly two vectors is linearly independent if and only if neither vector
is a scalar multiple of the other.
Example: Determine whether each of the following sets is linearly dependent or
independent.
1)
2)
3)
Geometric Interpretation of Linear Independence
- In R2 or R3, a set of 2 vectors is linearly independent iff the vectors do not lie on the same line when placed with their initial points at the origin.
- In R3, a set of 3 vectors is linearly independent iff the vectors do not lie in the same plane when they are placed with their initial points at the origin.
Enrichment Concept: Looking ahead to your 2nd year Differential Equations course,
you’ll learn something called the “Wronskian”, which is a special determinant that helps
you to determine whether a given set of functions is linearly independent.
Theorem: Let S = {v1, v2, ..., vr} be a set of vectors in Rn. If r > n, then S is linearly dependent.
Explanation (in R3 case):
Example: Determine whether the following statements are TRUE or FALSE.
a) Suppose you have 5 vectors in R4. They must be linearly dependent.
b) Suppose you have 8 vectors in R9. They must be linearly independent.
c) Suppose you have 6 vectors in R3. They could be linearly independent.
Coordinates and Basis (4.4; pg. 200)
Note regarding non-rectangular coordinate systems: A coordinate system establishes a one-to-one correspondence between points in the plane and ordered pairs of numbers. The numbers in the ordered pair usually correspond to units along perpendicular axes, but this is not necessary. Any two nonparallel lines can be used to define a coordinate system in the plane.
Note: We can specify a coordinate system in terms of basis vectors.
Note: For a coordinate system, we need:
- Unique representation of everything in the vector space
- Ability to "label" everything in the vector space
Note: A basis can be used to define a coordinate system in a vector space.
Definition: If V is any vector space and S = {v1, v2, ..., vn} is a set of vectors in V, then S is called a basis for V if the following two conditions hold:
a) S is linearly independent
b) S spans V
Example: e1, e2, ..., en are a basis for Rn.
Theorem (Uniqueness of Basis Representation): If S = {v1, v2, ..., vn} is a basis for a vector space V, then every vector v in V can be expressed in the form v = c1v1 + c2v2 + ... + cnvn in exactly one way.
Proof:
Definition: If S = {v1, v2, ..., vn} is a basis for a vector space V, and
v = c1v1 + c2v2 + ... + cnvn
is the expression for a vector v in terms of the basis S, then the scalars c1, c2, ..., cn are called the coordinates of v relative to the basis S. The vector (c1, c2, ..., cn) constructed from these coordinates is called the coordinate vector of v relative to S; it is denoted
(v)S = (c1, c2, ..., cn).
Note: S = {e1, e2, ..., en} is the standard basis for Rn.
Note: S = {1, x, x^2, ..., x^n} is the standard basis for Pn.
Example: Find the coordinate vector of the polynomial shown below relative to the
standard basis for P3 .
Example: Verify that the set S = {m11, m12, m21, m22} comprised of the matrices shown below forms the standard basis for M22.
GENERAL VECTOR SPACES cont...
Dimension (4.5; pg. 209)
Definition: A nonzero vector space V is called finite-dimensional if it contains a finite set of vectors v1, v2, ..., vn that form a basis. If no such set exists, V is called infinite-dimensional. In addition, we shall regard the zero vector space as finite-dimensional.
Theorem: All bases for a finite-dimensional vector space have the same number of
vectors.
Theorem: Let V be a finite-dimensional vector space and {v1, v2, ..., vn} be any basis:
a) If a set has more than n vectors, then it is linearly dependent.
b) If a set has fewer than n vectors, then it does not span V.
Definition: The dimension of a finite-dimensional vector space V, denoted by dim(V )
is defined to be the number of vectors in a basis for V. In addition, we define the zero
vector space to have dimension zero.
Examples: Find the dimension of each of the following sets:
1) Rn
2) Pn
3) Mmn
Examples: Find a basis for each of the following sets and state its dimension:
1) The set of all 3x3 diagonal matrices.
2) The set of all 3x3 matrices of the type
[ a    0  0 ]
[ 0    a  0 ]
[ a+b  0  b ]
3) The set of all 5-tuples of the form (a, 0, b, a + b, a - b).
Theorem (Plus/Minus Theorem): Let S be a nonempty set of vectors in a vector space V.
a) If S is a linearly independent set, and if v is a vector in V that is outside of span(S), then the set S ∪ {v} that results by inserting v into S is still linearly independent.
b) If v is a vector in S that is expressible as a linear combination of other vectors in S, and if S - {v} denotes the set obtained by removing v from S, then S and S - {v} span the same space: span(S) = span(S - {v}).
Example: S = {(1,0,0), (0,1,0)} and v = (0,0,1)
Example: S = {(1,0), (0,1), (5,2)} and v = (5,2)
Theorem: If V is an n-dimensional vector space, and if S is a set in V with exactly n
vectors, then S is a basis for V if either S spans V or S is linearly independent.
Theorem: Let S be a finite set of vectors in a finite-dimensional vector space V.
a) If S spans V but is not a basis for V, then S can be reduced to a basis for V by
removing appropriate vectors from S.
b) If S is a linearly independent set that is not already a basis for V, then S can be
enlarged to a basis for V by inserting appropriate vectors into S.
Example: S = {(1,0), (0,1), (5,2)}
Example: S = {(1,0,0), (0,1,0)}
Theorem: If W is a subspace of a finite-dimensional vector space V, then:
a) W is finite-dimensional
b) dim(W) ≤ dim(V)
c) W = V if and only if dim(W) = dim(V)
Row Space, Column Space, and Null Space (4.7; pg. 225)
Definition: For an m × n matrix
A = [ a11  a12  ...  a1n ]
    [ a21  a22  ...  a2n ]
    [  ...               ]
    [ am1  am2  ...  amn ]
the vectors in Rn formed from the rows of A are called the row vectors of A, and the vectors in Rm formed from the columns of A are called the column vectors of A.
Example: Identify the row vectors and column vectors of the matrix below.
Definition: If A is an m × n matrix, then the subspace of Rn spanned by the row vectors of A is called the row space of A, and the subspace of Rm spanned by the column vectors of A is called the column space of A. The solution space of the homogeneous system of equations Ax = 0, which is a subspace of Rn, is called the nullspace of A.
Theorem: A system of linear equations Ax = b is consistent if and only if b is in the column space of A.
Example:
Theorem: If x0 denotes any single solution of a consistent linear system Ax = b, and if {v1, v2, ..., vk} form a basis for the nullspace of A (that is, the solution space of the homogeneous system Ax = 0), then every solution of Ax = b can be expressed in the form
x = x0 + c1v1 + c2v2 + ... + ckvk
and, conversely, for every choice of scalars c1, c2, ..., ck, the vector x in this formula is a solution of Ax = b.
Terminology:
• x0 is called a particular solution of Ax = b.
• The expression x0 + c1v1 + c2v2 + ... + ckvk is called the general solution of Ax = b.
• The expression c1v1 + c2v2 + ... + ckvk is called the general solution of Ax = 0.
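Computational note: a small numerical illustration of this structure (assuming NumPy and SciPy are available; the system below is made up). A particular solution plus any linear combination of nullspace basis vectors is again a solution:

import numpy as np
from scipy.linalg import null_space

A = np.array([[1., 2., 1.],
              [2., 4., 2.]])                 # made-up consistent system with free variables
b = np.array([3., 6.])

x0 = np.linalg.lstsq(A, b, rcond=None)[0]    # one particular solution of Ax = b
N = null_space(A)                            # columns form a basis for the nullspace of A

c = np.array([0.7, -1.3])                    # arbitrary scalars c1, c2
x = x0 + N @ c                               # x0 + c1 v1 + c2 v2
print(np.allclose(A @ x, b))                 # True: still a solution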
Theorem: Elementary row operations do not change the nullspace of a matrix.
Example: Find the general solution of the nonhomogeneous system of linear equations
below.
GENERAL VECTOR SPACES cont...
Row Space, Column Space, and Null Space (4.7; pg. 225) cont...
Recall: Last day, we introduced the concept of row, column, and null space.
Theorem: Elementary row operations do not change the row space of a matrix.
Theorem: If A and B are row equivalent matrices, then:
a) A given set of column vectors of A is linearly independent if and only if the
corresponding column vectors of B are linearly independent.
b) A given set of column vectors of A forms a basis for the column space of A if and
only if the corresponding column vectors of B form a basis for the column space
of B.
Theorem: If a matrix R is in row-echelon form, then the row vectors with the leading 1’s
(i.e., the nonzero row vectors) form a basis for the row space of R, and the column
vectors that contain the leading 1’s of the row vectors form a basis for the column space
of R.
Example: Find a basis for the row space and column space of the matrix below (Hint:
It’s in row-echelon form).
Example: Find a basis for the row space and column space of the matrix below.
You can use this method for finding a basis for a vector space.
Example: Find a basis for the space spanned by the vectors below.
Example: (see example 10 on pg. 233 of text) Find a subset of the vectors below that
forms a basis for the space spanned by these vectors.
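Computational note: a sketch of the general procedure behind these two examples (SymPy assumed available; the vectors are made up, since the class vectors above are left blank). The nonzero rows of the reduced row-echelon form give a basis for the span, and the pivot columns identify a subset of the original vectors that forms a basis:

from sympy import Matrix

v1, v2, v3 = [1, 2, 0, 3], [2, 4, 1, 1], [3, 6, 1, 4]    # made-up vectors; note v3 = v1 + v2

A = Matrix([v1, v2, v3])                                  # vectors as rows
R, _ = A.rref()                                           # reduced row-echelon form
row_basis = [R.row(i) for i in range(A.rank())]           # nonzero rows: a basis for the span

C = Matrix.hstack(Matrix(v1), Matrix(v2), Matrix(v3))     # vectors as columns
_, pivots = C.rref()
subset_basis = [[v1, v2, v3][j] for j in pivots]          # original vectors in pivot positions
print(row_basis)
print(subset_basis)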
Rank, Nullity, and the Fundamental Matrix Spaces (4.8; pg. 237)
Note: If we have a matrix A and its transpose AT, there are 4 vector spaces of interest:
1) row space of A
2) column space of A
3) nullspace of A
4) nullspace of AT
Remarks: Here are a few interesting properties of the spaces associated with an m × n matrix A:
• the row space of AT is the column space of A, and vice versa
• the row space and null space of A are subspaces of Rn
• the column space of A and null space of AT are subspaces of Rm
Theorem: If A is any matrix, then the row space and column space of A have the same
dimension.
Definition: The common dimension of the row and column space of a matrix A is called
the rank of A and is denoted by rank(A). The dimension of the nullspace of A is called
the nullity of A and is denoted by nullity(A).
Application: The concept of “rank” is used to help find efficient methods for
transmitting large amounts of digital data (see pg. 245 for more details).
Example: We showed before that the reduced row-echelon form of
is
So, a basis for the row space is:
We also showed that a basis for the nullspace is:
Find the rank and nullity.
Theorem: If A is any matrix, then rank(A) = rank(AT).
Theorem (Dimension Theorem for Matrices): If A is a matrix with n columns, then
rank(A) + nullity(A) = n
Example: Say A is a 5 x 6 matrix with rank(A) = 3. What are nullity(A), rank(AT), and nullity(AT)?
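Computational note: a quick numeric check of these relationships (NumPy and SciPy assumed; the matrix is randomly generated, not from the text):

import numpy as np
from scipy.linalg import null_space

A = np.random.rand(5, 3) @ np.random.rand(3, 6)     # a 5 x 6 matrix whose rank is (almost surely) 3

r = np.linalg.matrix_rank(A)
k = null_space(A).shape[1]                          # nullity(A) = dimension of the nullspace
print(r, k, r + k)                                  # expect 3, 3, 6 = number of columns

# rank(A) = rank(A^T), while nullity(A^T) = m - rank(A) = 5 - 3 = 2
print(np.linalg.matrix_rank(A.T), null_space(A.T).shape[1])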
Theorem: If A is an m × n matrix, then:
a) rank(A) = the number of leading variables in the solution of Ax = 0
b) nullity(A) = the number of parameters in the general solution of Ax = 0
Example: Find the number of parameters in the general solution of Ax = 0 if A is an 8 × 6 matrix of rank 5.
Theorem (Maximum Value of Rank): If A is an m  n matrix, then
rank(A) ≤ min(m, n)
Example: Say A is a 7 x 11 matrix. What can we say about the max/min of the rank and
nullity?
Theorem: If Ax = b is a consistent linear system of m equations in n unknowns, and if A has rank r, then the general solution of the system contains n - r parameters.
Terminology: Ideally, we would hope in an application that the number of equations is
the same as the number of unknowns, because often these have a unique solution.
However, this is not always the case. You could have:
• Overdetermined Systems: more constraints than unknowns
• Underdetermined Systems: fewer constraints than unknowns
Equivalence Theorem: If A is an n × n matrix, then the following statements are equivalent:
a) A is invertible
b) The homogeneous system Ax = 0 has only the trivial solution
c) The reduced row-echelon form of A is In
d) A is expressible as a product of elementary matrices.
e) Ax = b is consistent for every column vector b
f) Ax = b has exactly one solution for every column vector b
g) det(A) ≠ 0
h) The column vectors of A are linearly independent
i) The row vectors of A are linearly independent
j) The column vectors of A span Rn
k) The row vectors of A span Rn
l) The column vectors of A form a basis for Rn
m) The row vectors of A form a basis for Rn
n) A has rank n
o) A has nullity 0
GENERAL VECTOR SPACES cont...
Matrix Transformations from Rn to Rm (4.9; pg. 247)
Recall: You are already familiar with functions from Rn to R. In general, a function f from a set A to a set B is a rule that associates with each element in A one and only one element in B.
Definition: If f associates the element b with the element a, then we write b = f(a) and say that b is the image of a under f, or that f(a) is the value of f at a. The set A is the domain and the set B is called the codomain. The subset of B consisting of all possible values for f as a varies over A is called the range of f.
Example:
Example:
Definition: If V and W are vector spaces, and if f is a function with domain V and codomain W, then f is called a map or transformation from V to W, and we say that f maps V to W. We denote this by writing f : V → W. In the case where V = W, the transformation is called an operator on V.
Note: An important form of a transformation is:
w1 = f1(x1, x2, ..., xn)
w2 = f2(x1, x2, ..., xn)
...
wm = fm(x1, x2, ..., xn)
This is a transformation that maps a vector x = (x1, x2, ..., xn) in Rn into a vector w = (w1, w2, ..., wm) in Rm. If we denote this transformation by T, then T : Rn → Rm and
T(x1, x2, ..., xn) = (w1, w2, ..., wm)
Example:
Definition: When the fj are linear equations, you get a linear transformation:
w1 = a11x1 + a12x2 + ... + a1nxn
w2 = a21x1 + a22x2 + ... + a2nxn
...
wm = am1x1 + am2x2 + ... + amnxn
If m = n, this is called a linear operator. This can be written in matrix form:
w = Ax
The matrix A is called the standard matrix for the linear transformation T, and T is
called multiplication by A.
Example: Find the standard matrix for the transformation defined below.
Example: Find the standard matrix for the transformation defined below.
T ( x1 , x2 , x3 )  (___________, ___________, ___________)
Notation: We might write the transformation T with standard matrix A as:
TA(x) = Ax    or    T(x) = [T]x
Note: There is a correspondence between matrices and linear transformations.
Note: It can be said that a transformation from Rn to Rm maps (or transforms) a vector in Rn to a vector in Rm.
Example: Consider the matrix transformation T : R3 → R2 defined by T(x1, x2, x3) = (___________, ___________). Find the image of x = (___, ___, ___) under T.
Theorem: If T : Rn → Rm is a linear transformation, and e1, e2, ..., en are the standard basis vectors for Rn, then the standard matrix for T is
[T] = [ T(e1) | T(e2) | ... | T(en) ]
where the standard basis vector ei has a 1 as its ith component, and all other components zero.
Summary: Finding the Standard Matrix for a Matrix Transformation
Step 1: Find the images of the standard basis vectors e1 , e 2 ,, e n for R n in column
form.
Step 2: Construct the matrix that has the images obtained in Step 1 as its successive
columns. This matrix is the standard matrix for the transformation.
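Computational note: the two steps above can be mirrored directly in code. A short NumPy sketch with a made-up transformation (the class examples above are left blank):

import numpy as np

# Hypothetical linear transformation T: R^3 -> R^2, T(x1, x2, x3) = (2x1 - x3, x2 + 4x3)
def T(x):
    x1, x2, x3 = x
    return np.array([2*x1 - x3, x2 + 4*x3])

# Step 1: images of the standard basis vectors e1, e2, e3 (the rows of the identity matrix)
images = [T(e) for e in np.eye(3)]

# Step 2: those images become the successive columns of the standard matrix [T]
A = np.column_stack(images)
print(A)                                   # [[ 2.  0. -1.]  [ 0.  1.  4.]]

x = np.array([1., 2., 3.])
print(np.allclose(A @ x, T(x)))            # multiplication by A reproduces T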
Definition: The zero transformation takes any vector in Rn and gives the zero vector in
Rm.
Example:
Definition: The identity transformation maps any vector in Rn onto itself.
Example:
Definition: A reflection operator T : R2 → R2 maps each vector into its mirror image about a fixed line through the origin; for example, reflection about the y-axis multiplies the x component by -1 and leaves the y component the same.
Example: Use matrix multiplication to find the reflection of (___, ___) about
a) The x-axis
b) The y-axis
c) The line y = x
Definition: The rotation operator rotates each vector in R2 through a fixed angle θ.
Example: Use matrix multiplication to find the image of the vector (___, ___) if it is rotated through an angle of θ = __________.
Note: The rotation operator in R3 is a linear operator that rotates each vector about some rotation axis through a fixed angle θ.
Definition: The dilation and contraction operators are operators that increase or
decrease the length of a vector.
Example: Use matrix multiplication to find the image of the vector (___, ___, ___) if it is
dilated by a factor of 3.
Example: In words, describe the geometric effect of multiplying a vector x by the matrix
A = [ 5  0
      0  2 ]
Definition: The projection operator is any operator that maps each vector into its
orthogonal projection on a line or plane through the origin.
Example: Use matrix multiplication to find the orthogonal projection of (___, ___, ___)
on the xz-plane.
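Computational note: for reference, the standard matrices of several of the operators above can be written down and applied with matrix multiplication. A NumPy sketch (the input vectors are made up, since the blanks above are unspecified):

import numpy as np

theta = np.pi / 4                                        # sample rotation angle
u2 = np.array([1., 2.])                                  # made-up vector in R^2
u3 = np.array([1., 2., 3.])                              # made-up vector in R^3

reflect_x = np.array([[1., 0.], [0., -1.]])              # reflection about the x-axis
rotate    = np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])  # rotation through angle theta
proj_xz   = np.diag([1., 0., 1.])                        # orthogonal projection on the xz-plane
dilate3   = 3 * np.eye(3)                                # dilation by a factor of 3

print(reflect_x @ u2, rotate @ u2)
print(proj_xz @ u3, dilate3 @ u3)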
GENERAL VECTOR SPACES cont...
Properties of Matrix Transformations (4.10; pg. 263)
Recall: Last day, we said that the standard matrix for a transformation can be found using [T] = [ T(e1) | T(e2) | ... | T(en) ].
Example: Find the standard matrix for the linear operator T : R 2  R 2 that projects a
vector onto the x-axis, then contracts that image by a factor of 5.
Definition: If we have TA : R n  R k and TB : R k  R m , we can apply these in
succession by finding the composition of TB with TA, denoted by
w  (TB  TA )(x)  TB (TA (x))
to produce a transformation from Rn to Rm.
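Computational note: numerically, composing transformations corresponds to multiplying their standard matrices (the matrix of TB ∘ TA is the product of the matrix of TB with the matrix of TA). A small NumPy sketch using the rotation/projection pair from the example below, with a made-up input vector:

import numpy as np

A1 = np.array([[0., -1.], [1., 0.]])    # standard matrix of T1: rotation counter-clockwise by pi/2
A2 = np.array([[0., 0.], [0., 1.]])     # standard matrix of T2: projection onto the y-axis

x = np.array([3., 1.])                  # made-up input
print(A2 @ (A1 @ x))                    # (T2 ∘ T1)(x): rotate first, then project
print((A2 @ A1) @ x)                    # same result: the composition has standard matrix A2 A1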
Example: Say T1 : R2 → R2 is rotation counter-clockwise by θ = π/2, and T2 : R2 → R2 is projection onto the y-axis. Find (T2 ∘ T1)(x1, x2).
Example:
Example: Find the standard matrix for the linear operator T : R2 → R2 that reflects a vector about the line y = x, then rotates that image by π/2 counter-clockwise, then dilates that image by a factor of 3.
Caution: Composition is not commutative!
Example:
Definition: A linear transformation T : R n  R m is said to be one-to-one if T maps
distinct vectors (points) in Rn into distinct vectors (points) in Rm.
Example: Determine if the following transformation is one-to-one.
Example: Determine if the following transformation is one-to-one.
Examples: Are the following transformations in R2 one-to-one?
a) Rotation
b) Projection
Theorem: If A is an n × n matrix and TA : Rn → Rn is multiplication by A, then the following statements are equivalent:
a) A is invertible.
b) The range of TA is R n
c) TA is one-to-one.
Explanation:
Example: Use the above theorem to show that the transformation below is one-to-one.
Example: On the previous page, we discussed whether the projection and rotation
operators in R2 are each one-to-one...let’s verify the results using the above theorem.
Definition: If TA : Rn → Rn is a one-to-one linear operator, then from the theorem above, the matrix A is invertible. Thus, T(A^-1) : Rn → Rn is itself a linear operator; it is called the inverse of TA. The linear operators TA and T(A^-1) cancel the effect of one another.
Notation: TA1  TA1 and T 1  T 
1
Example: Verify that T2 is the inverse of T1.
Example: Find T 1 for T : R 2  R 2 defined as shown below.
Theorem (Properties of Linear Transformations): A transformation T : Rn → Rm is linear if and only if the following relationships hold for all vectors u and v in Rn and every scalar k:
a) T(u + v) = T(u) + T(v)
b) T(ku) = kT(u)
Examples: Verify that the 1st transformation below is linear because both axioms hold;
and verify that the 2nd transformation below is not linear because at least one axiom fails.
Theorem: Every linear transformation from Rn to Rm is a matrix transformation, and conversely, every matrix transformation from Rn to Rm is a linear transformation.
Equivalence Theorem: If A is an n × n matrix, then the following statements are equivalent:
a) A is invertible
b) The homogeneous system Ax = 0 has only the trivial solution
c) The reduced row-echelon form of A is In
d) A is expressible as a product of elementary matrices.
e) Ax = b is consistent for every column vector b
f) Ax = b has exactly one solution for every column vector b
g) det(A) ≠ 0
h) The column vectors of A are linearly independent
i) The row vectors of A are linearly independent
j) The column vectors of A span Rn
k) The row vectors of A span Rn
l) The column vectors of A form a basis for Rn
m) The row vectors of A form a basis for Rn
n) A has rank n
o) A has nullity 0
p) The range of TA is Rn
q) TA is one-to-one.
EIGENVALUES AND EIGENVECTORS
Eigenvalues and Eigenvectors (5.1; pg. 295)
Definition: If A is an n × n matrix, then a nonzero vector x in Rn is called an eigenvector of A if Ax is a scalar multiple of x; that is,
Ax = λx
for some scalar λ. The scalar λ is called an eigenvalue of A, and x is said to be an eigenvector of A corresponding to λ.
Example:
Question: But how would we find eigenvalues/eigenvectors in the first place?
Definition: The equation det(λI - A) = 0 is called the characteristic equation of A. The scalars that satisfy this equation are the eigenvalues of A. When expanded, det(λI - A) is a polynomial in λ, called the characteristic polynomial of A.
Example: Find the eigenvalues of the matrix below.
Theorem (Eigenvalues of a triangular matrix): If A is an n  n matrix (upper
triangular, lower triangular, or diagonal), then the eigenvalues of A are the entries on the
main diagonal of A.
Example: Find the eigenvalues of the matrix below.
Example:
Remark: The eigenvalues of a matrix may be complex numbers.
Example: Find the eigenvalues of the matrix below.
Theorem (Equivalent Statements for eigenvalues and eigenvectors): If A is an n × n matrix, and λ is a real number, then the following are equivalent:
a) λ is an eigenvalue of A.
b) The system of equations (λI - A)x = 0 has nontrivial solutions.
c) There is a nonzero vector x in Rn such that Ax = λx.
d) λ is a solution of the characteristic equation det(λI - A) = 0.
Definition: Given a specific eigenvalue λ, the eigenvectors corresponding to λ are the nonzero vectors that satisfy Ax = λx. Equivalently, the eigenvectors corresponding to λ are the nonzero vectors in the solution space of (λI - A)x = 0. We call this solution space the eigenspace of A corresponding to λ.
Example: Find a basis for the eigenspace of the matrix below.
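Computational note: a sketch of this whole workflow (SymPy assumed available; the matrix is made up, since the class matrix above is left blank). It prints the characteristic polynomial, the eigenvalues, and a basis for each eigenspace:

from sympy import Matrix

A = Matrix([[3, 0],
            [8, -1]])                     # made-up 2 x 2 example

print(A.charpoly())                        # characteristic polynomial det(lambda*I - A)
print(A.eigenvals())                       # eigenvalues with their algebraic multiplicities
for lam, mult, basis in A.eigenvects():
    # 'basis' spans the eigenspace for lam, i.e. the solution space of (lam*I - A)x = 0
    print(lam, mult, basis)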
Theorem: If k is a positive integer, λ is an eigenvalue of a matrix A, and x is a corresponding eigenvector, then λ^k is an eigenvalue of the matrix A^k and x is a corresponding eigenvector.
Proof:
Example: Find the eigenvalues and bases for the eigenspaces of A^5, where
A = [ -4  -3
       6   5 ]
Theorem: A square matrix A is invertible if and only if λ = 0 is not an eigenvalue of A.
Proof:
Equivalence Theorem: If A is an n × n matrix, then the following statements are equivalent:
a) A is invertible
b) The homogeneous system Ax = 0 has only the trivial solution
c) The reduced row-echelon form of A is In
d) A is expressible as a product of elementary matrices.
e) Ax = b is consistent for every column vector b
f) Ax = b has exactly one solution for every column vector b
g) det(A) ≠ 0
h) The column vectors of A are linearly independent
i) The row vectors of A are linearly independent
j) The column vectors of A span Rn
k) The row vectors of A span Rn
l) The column vectors of A form a basis for Rn
m) The row vectors of A form a basis for Rn
n) A has rank n
o) A has nullity 0
p) The range of TA is Rn
q) TA is one-to-one.
r) λ = 0 is not an eigenvalue of A.
EIGENVALUES AND EIGENVECTORS cont…
Eigenvalues and Eigenvectors (5.1; pg. 295) cont…
Recall: Last day, we introduced the concept of eigenvalues and eigenvectors.
3 6
Example: Find the eigenvalues and bases for the eigenspaces of A where A  

5 2
Application: Markov Chains
Recall: On assignment #3, you had the opportunity to study Markov Chains.
Notation:
• x: the state vector
• P: a matrix called the transition matrix
Recall: The steady-state vector x0 is defined by Px0 = x0.
Example (from our text): Suppose that at some initial point in time 100,000 people live
in a certain city and 25,000 live in its suburbs. The Regional Planning Commission
determines that each year 5% of the city population moves to the suburbs and 3% of the
suburban population moves to the city. Over the long term, how will the population be
distributed between the city and its suburbs?
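Computational note: a numerical sketch of this example (NumPy assumed; the transition matrix below uses the common convention that each column describes where that group's population goes, which matches the stated percentages):

import numpy as np

# state vector x = (city, suburbs); 5% of the city leaves each year, 3% of the suburbs returns
P = np.array([[0.95, 0.03],
              [0.05, 0.97]])
x = np.array([100_000., 25_000.])

# Option 1: iterate x_{k+1} = P x_k and watch it settle down
for _ in range(200):
    x = P @ x
print(x)                                   # approaches about (46875, 78125)

# Option 2: the steady state is an eigenvector of P with eigenvalue 1
vals, vecs = np.linalg.eig(P)
q = vecs[:, np.argmin(np.abs(vals - 1.0))]
q = q / q.sum()                            # scale so the entries sum to 1
print(q * 125_000)                         # same long-term split of the 125,000 people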
Application: Differential Equations
Definition: A differential equation is an equation that contains an unknown function
and one or more of its derivatives.
Example: Solve y' = 7y.
Question: What happens if we have a system of differential equations?
Question: So how would we solve the system below?
Key Steps:
• Write the system in matrix form: y' = Ay
• Find the eigenvalues and eigenvectors of A
• The solution is
  y = c1 x1 e^(λ1 t) + c2 x2 e^(λ2 t) + ... + cn xn e^(λn t)
Example: Consider the system above. Given that the eigenvalues of the coefficient matrix are 5 and -1, and that the corresponding eigenvectors are (1, 1) and (-2, 1) respectively, find the solution of the system.
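Computational note: a quick numerical check of this recipe (NumPy assumed). The coefficient matrix below is made up so that it has these eigenvalues and eigenvectors; the actual system in the notes is left blank. The sketch builds y(t) from the eigenpairs and verifies that it satisfies y' = Ay:

import numpy as np

A = np.array([[1., 4.],
              [2., 3.]])                  # has eigenvalues 5, -1 with eigenvectors (1, 1), (-2, 1)

vals, vecs = np.linalg.eig(A)             # columns of 'vecs' are the eigenvectors x_i
c = np.array([2., -1.])                   # constants that would be fixed by initial conditions

def y(t):
    # y(t) = c1 x1 e^(lambda1 t) + c2 x2 e^(lambda2 t)
    return vecs @ (c * np.exp(vals * t))

t, h = 0.3, 1e-6
dydt = (y(t + h) - y(t - h)) / (2 * h)    # numerical derivative of y at t
print(np.allclose(dydt, A @ y(t)))        # True, up to finite-difference error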
Diagonalization (Section 5.2; pg. 305)
Two Equivalent Problems we consider:
1. The Eigenvector Problem: Given an n × n matrix A, does there exist a basis for Rn consisting of eigenvectors of A?
2. The Diagonalization Problem (Matrix Form): Given an n × n matrix A, does there exist an invertible matrix P such that P^-1 A P is a diagonal matrix?
Definition: A square matrix A is called diagonalizable if there is an invertible matrix P such that P^-1 A P is a diagonal matrix; the matrix P is said to diagonalize A.
Example: Consider A and P below. Show that A is diagonalizable.
Looking Ahead: Next day, we’ll learn how to find the matrix P that diagonalizes A.
EIGENVALUES AND EIGENVECTORS
Diagonalization (Section 5.2; pg. 305) cont...
Recall: Last day, we introduced the concept of diagonalizing a matrix.
Theorem: If A is an n  n matrix, then the following are equivalent:
a) A is diagonalizable.
b) A has n linearly independent eigenvectors.
Remark: Since A is n  n , the eigenvectors are in Rn. Thus, b) implies that the
eigenvectors form a basis for Rn.
Method for Diagonalizing A:
1) Find n linearly independent eigenvectors of A, say p1, p2, ..., pn.
2) Form the matrix P having p1, p2, ..., pn as its column vectors.
3) The matrix P^-1 A P will then be diagonal with λ1, λ2, ..., λn as its successive diagonal entries, where λi is the eigenvalue corresponding to the eigenvector pi, for i = 1, 2, ..., n.
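Computational note: a small NumPy check of this method on a made-up matrix (not the class example, which is left blank). The columns of P are eigenvectors, and P^-1 A P comes out diagonal with the eigenvalues on its diagonal:

import numpy as np

A = np.array([[1., 4.],
              [2., 3.]])                   # made-up diagonalizable matrix

vals, vecs = np.linalg.eig(A)
P = vecs                                    # columns p1, p2 are linearly independent eigenvectors
D = np.linalg.inv(P) @ A @ P                # P^-1 A P

print(np.round(D, 10))                      # diagonal, with the eigenvalues on the diagonal
print(np.allclose(D, np.diag(vals)))        # True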
Example: Find the matrix P that diagonalizes A.
Example: Find the matrix P that diagonalizes B.
Theorem: If v1, v2, ..., vk are eigenvectors of A corresponding to the distinct eigenvalues λ1, λ2, ..., λk, then {v1, v2, ..., vk} is a linearly independent set.
Theorem: If an n  n matrix A has n distinct eigenvalues, then A is diagonalizable.
Definition: If λ0 is an eigenvalue of an n × n matrix A, then the dimension of the eigenspace corresponding to λ0 is called the geometric multiplicity of λ0, and the number of times that λ - λ0 appears as a factor in the characteristic polynomial of A is called the algebraic multiplicity of λ0.
Examples: For the matrices below, determine the algebraic and geometric multiplicity
of their eigenvalues.
Theorem (Geometric and Algebraic Multiplicity): If A is a square matrix, then:
a) For every eigenvalue of A, the geometric multiplicity is less than or equal to the
algebraic multiplicity
b) A is diagonalizable if and only if the geometric multiplicity is equal to the
algebraic multiplicity for every eigenvalue.
Computing Powers of a Matrix: If a matrix A is diagonalizable and P is the matrix that diagonalizes A, so that P^-1 A P = D is diagonal, then the kth power of A can be computed from A^k = P D^k P^-1.
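Computational note: a quick numerical confirmation of this shortcut (NumPy assumed; the matrix is the same made-up one used in the earlier sketch):

import numpy as np

A = np.array([[1., 4.],
              [2., 3.]])

vals, P = np.linalg.eig(A)
Dk = np.diag(vals ** 100)                   # D^k: just raise the diagonal entries to the k-th power
A100 = P @ Dk @ np.linalg.inv(P)            # A^k = P D^k P^-1

print(np.allclose(A100, np.linalg.matrix_power(A, 100)))   # True (up to floating-point error)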
Example: With A as given below, we obtain P, P-1, and D as shown below. Find A100.
5 3
Example: Consider the matrix A  
. Find A3 using the method of diagonalizing

 7 1
the matrix.
Inner Products (Section 6.1; pg. 335)
Definition: An inner product on a real vector space V is a function that associates a real number <u, v> with each pair of vectors u and v in V in such a way that the following axioms are satisfied for all vectors u, v, and w in V and all scalars k:
1) <u, v> = <v, u>   (symmetry axiom)
2) <u + v, w> = <u, w> + <v, w>   (additivity axiom)
3) <ku, v> = k<u, v>   (homogeneity axiom)
4) <v, v> ≥ 0, and <v, v> = 0 if and only if v = 0   (positivity axiom)
A real vector space with an inner product is called a real inner product space.
Note: The dot product is not the only inner product that you can define on Rn. Any rule
that satisfies the definition of inner product can be used.
Definition: For example, an alternative inner product on Rn is the weighted Euclidean inner product with positive weights w1, w2, ..., wn, defined by the formula
<u, v> = w1 u1 v1 + w2 u2 v2 + ... + wn un vn
where u = (u1, u2, ..., un) and v = (v1, v2, ..., vn) are two vectors.
Example: Find the weighted Euclidean inner product of u = (u1, u2) and v = (v1, v2) in R2 with weights w1 = 7 and w2 = 3.
Example: Verify that the weighted inner product <u, v> = 7u1v1 + 3u2v2 defined in the previous example satisfies the four inner product axioms.
Example: Find the weighted Euclidean inner product of u  (___, ___, ___) and
v  (___, ___, ___) in R3 with weights w1  ___ , w2  ___ , and w3  ___ .
Note: See pg. 337 for an example: arithmetic average or mean.
Definition: If V is an inner product space, then the norm (or length) of a vector v in V is denoted by ||v|| and is defined by
||v|| = <v, v>^(1/2)
The distance between two points (vectors) u and v is denoted by d(u, v) and is defined by
d(u, v) = ||u - v||
Note: The norm and distance depend on the inner product that you use.
Example: Consider the two vectors below. With the standard Euclidean inner product, they each have a norm of 1, and the distance between them is √2. Use the weighted Euclidean inner product defined in a previous example, <u, v> = 7u1v1 + 3u2v2, to find the norm of u, the norm of v, and the distance between them.
Example: Find the distance between u = (___, ___, ___) and v = (___, ___, ___) in R3 using the weighted Euclidean inner product <u, v> = __ u1v1 + __ u2v2 + __ u3v3.
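Computational note: a small NumPy sketch of these computations under the weighted inner product <u, v> = 7u1v1 + 3u2v2 (assuming, for illustration, that the two vectors in the earlier example are (1, 0) and (0, 1), which do have Euclidean norm 1 and Euclidean distance √2):

import numpy as np

w = np.array([7., 3.])                      # weights

def inner(u, v):
    return float(np.sum(w * u * v))          # <u, v> = 7 u1 v1 + 3 u2 v2

def norm(u):
    return inner(u, u) ** 0.5                # ||u|| = <u, u>^(1/2)

def dist(u, v):
    return norm(u - v)                       # d(u, v) = ||u - v||

u = np.array([1., 0.])
v = np.array([0., 1.])
print(inner(u, v), norm(u), norm(v), dist(u, v))   # 0.0, sqrt(7), sqrt(3), sqrt(10)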
INNER PRODUCT SPACES cont…
Inner Products (Section 6.1; pg. 335) cont…
Recall: Last day, we introduced the concept of a weighted inner product, and how to
find distance and norm using this inner product.
Definition: If V is an inner product space, then the set of points in V that satisfy ||u|| = 1 is called the unit sphere (or sometimes the unit circle) in V.
Example: For the standard Euclidean inner product in R2, the unit circle is a geometric circle. For the weighted inner product <u, v> = 7u1v1 + 3u2v2, the unit circle is (geometrically) an ellipse.
Example: The weighted Euclidean inner product <u, v> = 7u1v1 + 3u2v2 is the inner product on R2 generated by the matrix
A = [ √7   0
       0  √3 ]
Verify the above theorem holds for this example.
Example: Given the matrices below, find the inner product and the norm of each matrix.
Example: Given p = 1 - x + 2x^2 and q = x - x^2, find <p, q>, ||p||, and ||p - q||.
Theorem (Properties of Inner Products): If u, v, and w are vectors in a real inner product space, and k is any scalar, then:
a) <0, v> = <v, 0> = 0
b) <u, v + w> = <u, v> + <u, w>
c) <u, v - w> = <u, v> - <u, w>
d) <u - v, w> = <u, w> - <v, w>
e) k<u, v> = <u, kv>
Example: Find <2v - w, 3u + 2w>, given the information below.
Angle and Orthogonality in Inner Product Spaces (6.2; pg. 345)
Recall: In Sections 3.2-3.3, you already learned about the Cauchy-Schwarz inequality, properties of lengths and distances, the cosine of the angle between two vectors, and the Pythagorean Theorem for Rn.
Definition: Two vectors u and v in an inner product space are called orthogonal if <u, v> = 0.
Example: Suppose p, q, and r are defined as given below.
i) Are p and q orthogonal?
ii) Are p and r orthogonal?
Definition: Let W be a subspace of an inner product space V. A vector u in V is said to be orthogonal to W if it is orthogonal to every vector in W. The set of all vectors in V that are orthogonal to W is called the orthogonal complement of W. This set is denoted W⊥.
Theorem (Properties of Orthogonal Complements): If W is a subspace of an inner product space V, then:
a) W⊥ is a subspace of V.
b) The only vector common to W and W⊥ is 0.
Explanation of b):
Theorem: If W is a subspace of a finite-dimensional inner product space V, then the orthogonal complement of W⊥ is W; that is, (W⊥)⊥ = W.
Theorem: If A is an m × n matrix, then:
a) The nullspace of A and the row space of A are orthogonal complements in Rn with respect to the Euclidean inner product.
b) The nullspace of AT and the column space of A are orthogonal complements in Rm with respect to the Euclidean inner product.
Proof of a):
Example: Let W be the subspace of R5 spanned by the vectors shown below. Find a basis for the orthogonal complement of W.
Equivalence Theorem: If A is an n × n matrix, then the following statements are equivalent:
a) A is invertible
b) The homogeneous system Ax = 0 has only the trivial solution
c) The reduced row-echelon form of A is In
d) A is expressible as a product of elementary matrices.
e) Ax = b is consistent for every column vector b
f) Ax = b has exactly one solution for every column vector b
g) det(A) ≠ 0
h) The column vectors of A are linearly independent
i) The row vectors of A are linearly independent
j) The column vectors of A span Rn
k) The row vectors of A span Rn
l) The column vectors of A form a basis for Rn
m) The row vectors of A form a basis for Rn
n) A has rank n
o) A has nullity 0
p) The range of TA is Rn
q) TA is one-to-one.
r) λ = 0 is not an eigenvalue of A.
s) The orthogonal complement of the nullspace of A is Rn
t) The orthogonal complement of the row space of A is {0}
INNER PRODUCT SPACES cont...
Gram-Schmidt Process; QR-Decomposition (6.3; pg. 352)
Definition: A set of vectors in an inner product space is called an orthogonal set if all
pairs of distinct vectors in the set are orthogonal. An orthogonal set in which each vector
has norm 1 is called orthonormal.
Recall: Previously, we saw that given any nonzero vector u, the vector v defined by v = u / ||u|| is a unit vector (that is, it has norm 1) and is a scalar multiple of u (parallel to u).
Example: Consider the set of vectors shown below.
i) Show that the vectors form an orthogonal set in R3 with the Euclidean inner
product.
ii) Construct an orthonormal set from these vectors.
Definition: A basis of an inner product space that consists of orthonormal vectors is
called an orthonormal basis. A basis of an inner product space consisting of orthogonal
vectors is called an orthogonal basis.
Examples:
Theorem: If S = {v1, v2, ..., vn} is an orthonormal basis for an inner product space V, and u is any vector in V, then
u = <u, v1> v1 + <u, v2> v2 + ... + <u, vn> vn
Explanation/Example:
Note: Because the components of the coordinate vector of u relative to a basis S = {v1, v2, ..., vn} are the coefficients ki in u = k1v1 + k2v2 + ... + knvn, the previous theorem also says that the coordinate vector relative to an orthonormal basis is
(u)S = (<u, v1>, <u, v2>, ..., <u, vn>)
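Computational note: a short NumPy sketch of this computation (the basis below is one possible orthonormal basis of R3, similar to the example further below, and the vector u is made up):

import numpy as np

v1 = np.array([1/np.sqrt(2), 1/np.sqrt(2), 0.])
v2 = np.array([-1/np.sqrt(2), 1/np.sqrt(2), 0.])
v3 = np.array([0., 0., 1.])                 # v1, v2, v3: an orthonormal basis of R^3

u = np.array([2., -1., 3.])                 # made-up vector to expand

coords = np.array([u @ v1, u @ v2, u @ v3]) # (u)_S = (<u, v1>, <u, v2>, <u, v3>)
print(coords)
print(np.allclose(coords[0]*v1 + coords[1]*v2 + coords[2]*v3, u))   # the expansion reconstructs u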
Example: Find the coordinate vector of the polynomial ______________ relative to the standard basis for P2.
Example: S = {(1/√2, 1/√2, 0), (-1/√2, 1/√2, 0), (0, 0, 1)} is an orthonormal basis of R3. Find the coordinate vector relative to this basis.
Part of the usefulness of working with orthonormal bases can be seen in the following
theorem:
Theorem: If S is an orthonormal basis for an n-dimensional inner product space, and if (u)S = (u1, u2, ..., un) and (v)S = (v1, v2, ..., vn), then:
a) ||u|| = (u1^2 + u2^2 + ... + un^2)^(1/2)
b) d(u, v) = ((u1 - v1)^2 + (u2 - v2)^2 + ... + (un - vn)^2)^(1/2)
c) <u, v> = u1v1 + u2v2 + ... + unvn
Note: Observe that the right side of the equality in part a) is the norm of the coordinate
vector (u) S with respect to the Euclidean inner product on Rn, and the right side of the
equality of part c) is the Euclidean inner product of (u) S and ( v ) S . Thus, by working
with orthonormal bases, the computation of norms and inner products in general inner
product spaces can be reduced to the computation of Euclidean norms and inner products
of coordinate vectors.
Example: Consider A = [ -1  2 ; 3  5 ] and B = [ 1  1 ; 2  6 ] in M22 with <u, v> = tr(vT u).
Verify parts a) and c) of the above theorem hold.
Theorem: If S = {v1, v2, ..., vn} is an orthogonal set of nonzero vectors in an inner product space, then S is linearly independent.
Theorem (Projection Theorem): If W is a finite-dimensional subspace of an inner product space V, then every vector u in V can be expressed in exactly one way as u = w1 + w2, where w1 is in W and w2 is in W⊥.
Note: This is the general case of Theorem 3.3.3 (the projection of u along a, for u and a in R2 and R3). And so the terminology follows: the vector w1 is called the orthogonal projection of u on W and is denoted projW u, and w2 is called the component of u orthogonal to W and is denoted projW⊥ u. Thus, u = projW u + projW⊥ u, and so projW⊥ u = u - projW u.
Theorem: Let W be a finite-dimensional subspace of an inner product space V.
Example: Let V = R3 with the Euclidean inner product, and let W = span{e1, e2}. Say u = (3, 2, -2). Find projW u and projW⊥ u.
Example: Let W be the xy-plane (a subspace of R3). Then an orthonormal basis for W is v1 = (1, 0, 0), v2 = (0, 1, 0). Find the projection of u = (1, -1, 1) onto W.
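Computational note: a tiny NumPy check of this example, using the fact that for an orthonormal basis {v1, v2} of W the projection is projW u = <u, v1>v1 + <u, v2>v2:

import numpy as np

v1 = np.array([1., 0., 0.])                # orthonormal basis for W = the xy-plane
v2 = np.array([0., 1., 0.])
u = np.array([1., -1., 1.])

proj_W = (u @ v1) * v1 + (u @ v2) * v2     # orthogonal projection of u on W
perp   = u - proj_W                        # component of u orthogonal to W
print(proj_W, perp)                        # (1, -1, 0) and (0, 0, 1)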
Theorem: Every nonzero finite-dimensional inner product space has an orthogonal
basis.
The Gram-Schmidt Process for Finding an Orthonormal Basis:
Given that u1 , u 2 ,, u n  form a basis for V, we will construct a set
v1 , v 2 ,, v n that will form an orthogonal basis, and then normalize each vector to
get an orthonormal basis.
a) Set v1 = u1
b) v2 = u2 - projW1 u2 = u2 - (<u2, v1> / ||v1||^2) v1
c) v3 = u3 - projW2 u3 = u3 - (<u3, v1> / ||v1||^2) v1 - (<u3, v2> / ||v2||^2) v2
d) v4 = u4 - projW3 u4 = u4 - (<u4, v1> / ||v1||^2) v1 - (<u4, v2> / ||v2||^2) v2 - (<u4, v3> / ||v3||^2) v3
e) Continuing for n steps, we construct an orthogonal basis.
Graphical Explanation:
Example: Let S = {u1, u2, u3} with the vectors defined below. S is a basis for R3. Use the Gram-Schmidt process to get an orthonormal basis.
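Computational note: a compact NumPy implementation of the process, applied to a made-up basis of R3 (the class vectors above are left blank); it returns an orthonormal basis directly by normalizing as it goes:

import numpy as np

def gram_schmidt(vectors):
    # Turn a basis into an orthonormal basis (Euclidean inner product)
    ortho = []
    for u in vectors:
        v = u.astype(float)
        for q in ortho:
            v = v - (v @ q) * q            # subtract the projection onto each earlier vector
        ortho.append(v / np.linalg.norm(v))
    return ortho

u1, u2, u3 = np.array([1, 1, 1]), np.array([0, 1, 1]), np.array([0, 0, 1])
q1, q2, q3 = gram_schmidt([u1, u2, u3])
print(np.round([q1 @ q2, q1 @ q3, q2 @ q3], 12))   # pairwise orthogonal (all zeros)
print(np.round([q1 @ q1, q2 @ q2, q3 @ q3], 12))   # each has norm 1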