Linear Algebra for Electrical Engineers
(M340L-ECE)
Lecture 4
Sept 7
Matrix Transformations
• We use matrices to represent linear systems in the form Ax = b.
• We will also use matrices to represent linear functions y = Ax.
• Here:
  • A is an m × n matrix.
  • x ∈ ℝⁿ is the input vector.
  • y ∈ ℝᵐ is the output vector.
• This idea leads us to the study of matrix transformations.
• Examples of matrix transformations include rotations, projections, stretching transformations, and others.
Matrix Transformations: Function notation
• Let A be an m × n matrix.
• We associate to A the function TA : ℝⁿ → ℝᵐ defined by
  TA(x) = Ax  (x ∈ ℝⁿ).
• The domain of TA is ℝⁿ (the set of n-vectors).
• The codomain of TA is ℝᵐ (the set of m-vectors).
• We refer to the function TA : ℝⁿ → ℝᵐ as the matrix transformation given by the matrix A.
• For example, the 2 × 3 matrix
$$A = \begin{bmatrix} 3 & -1 & 2 \\ 2 & 5 & -3 \end{bmatrix}$$
gives rise to the matrix transformation (function) T : ℝ³ → ℝ²
$$T\!\left(\begin{bmatrix} x \\ y \\ z \end{bmatrix}\right) = \begin{bmatrix} 3 & -1 & 2 \\ 2 & 5 & -3 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 3x - y + 2z \\ 2x + 5y - 3z \end{bmatrix}.$$
• We can represent the transformation T in point form:
T(x, y, z) = (3x − y + 2z, 2x + 5y − 3z).
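As a quick numeric illustration (a Python/NumPy sketch, not part of the original notes), we can evaluate this transformation on a sample input:

```python
import numpy as np

# The 2 x 3 matrix A from the example above.
A = np.array([[3, -1,  2],
              [2,  5, -3]])

x = np.array([1, 0, 2])   # a sample input vector in R^3
y = A @ x                 # T(x) = Ax, an output vector in R^2
print(y)                  # [ 7 -4]  (3*1 - 0 + 2*2, 2*1 + 0 - 3*2)
```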
example: identity transformation on ℝⁿ
• Consider the n × n identity matrix
$$I_n = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix}.$$
• The identity matrix gives rise to the identity transformation
  T(x) = Iₙx = x.
(recall the identity property: Iₙx = x for any n-vector x)
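A one-line NumPy check of the identity property (an illustrative sketch with an arbitrary sample vector):

```python
import numpy as np

x = np.array([4.0, -1.0, 2.5])   # any 3-vector
print(np.eye(3) @ x)             # [ 4.  -1.   2.5] -- I3 x = x
```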
example: rotation transformation on ℝ²
• Given an angle θ (rads), consider the 2 × 2 matrix
$$R_\theta = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}.$$
• The associated transformation
$$T_\theta\!\left(\begin{bmatrix} x \\ y \end{bmatrix}\right) = R_\theta \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} x\cos\theta - y\sin\theta \\ x\sin\theta + y\cos\theta \end{bmatrix}$$
is represented geometrically as the function that maps an input vector to the vector rotated by θ radians in the counterclockwise direction:
$$T_\theta : \begin{bmatrix} x \\ y \end{bmatrix} \mapsto \begin{bmatrix} x\cos\theta - y\sin\theta \\ x\sin\theta + y\cos\theta \end{bmatrix}.$$
• The inverse of the 2 × 2 matrix Rθ is given by
$$R_\theta^{-1} = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix}.$$
• The matrix Rθ⁻¹ corresponds to a matrix transformation that rotates by −θ radians counterclockwise (equivalently, a rotation by θ radians clockwise).
• We verify the inverse property of Rθ⁻¹ and Rθ, using the definition of the matrix-matrix product:
$$\begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} = \begin{bmatrix} \cos^2\theta + \sin^2\theta & -\sin\theta\cos\theta + \sin\theta\cos\theta \\ -\sin\theta\cos\theta + \sin\theta\cos\theta & \cos^2\theta + \sin^2\theta \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$$
(by the trig. identity cos²θ + sin²θ = 1).
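The rotation matrix and this inverse property are easy to check numerically. A minimal NumPy sketch (the helper `rotation` is our own, not from the lecture):

```python
import numpy as np

def rotation(theta):
    """Counterclockwise rotation matrix R_theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s],
                     [s,  c]])

R = rotation(np.pi / 2)                    # rotate by 90 degrees
print(R @ np.array([1.0, 0.0]))            # ~[0, 1]: e1 rotates onto e2
# R_{-theta} rotates back, so it acts as the inverse of R_theta:
print(np.allclose(rotation(-np.pi / 2) @ R, np.eye(2)))   # True
```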
• Given two angles θ, ϕ (rads), consider the rotation matrices
$$R_\theta = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}, \qquad R_\phi = \begin{bmatrix} \cos\phi & -\sin\phi \\ \sin\phi & \cos\phi \end{bmatrix}, \qquad R_{\theta+\phi} = \begin{bmatrix} \cos(\theta+\phi) & -\sin(\theta+\phi) \\ \sin(\theta+\phi) & \cos(\theta+\phi) \end{bmatrix}.$$
• Fact: The product matrix RθRϕ is equal to Rθ+ϕ.
• “Geometric Proof” of Fact:
1. Think of what RθRϕ does to a 2-vector $\begin{bmatrix} x \\ y \end{bmatrix}$:
$$(R_\theta R_\phi)\begin{bmatrix} x \\ y \end{bmatrix} = R_\theta \left( R_\phi \begin{bmatrix} x \\ y \end{bmatrix} \right)$$
(first rotate the vector $\begin{bmatrix} x \\ y \end{bmatrix}$ by ϕ rads, then rotate the resulting vector $R_\phi \begin{bmatrix} x \\ y \end{bmatrix}$ by θ rads).
2. Now think of what Rθ+ϕ does to $\begin{bmatrix} x \\ y \end{bmatrix}$:
$$R_{\theta+\phi}\begin{bmatrix} x \\ y \end{bmatrix}$$
(rotate the vector by θ + ϕ rads).
3. Since the matrices RθRϕ and Rθ+ϕ do the same thing to any input vector $\begin{bmatrix} x \\ y \end{bmatrix}$, the matrices must be identical.
• Starting from
$$\underbrace{\begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}}_{R_\theta} \cdot \underbrace{\begin{bmatrix} \cos\phi & -\sin\phi \\ \sin\phi & \cos\phi \end{bmatrix}}_{R_\phi} = \underbrace{\begin{bmatrix} \cos(\theta+\phi) & -\sin(\theta+\phi) \\ \sin(\theta+\phi) & \cos(\theta+\phi) \end{bmatrix}}_{R_{\theta+\phi}},$$
• Derive the cosine/sine sum laws: computing the product on the left and comparing first-column entries gives
$$\cos(\theta+\phi) = \cos\theta\cos\phi - \sin\theta\sin\phi, \qquad \sin(\theta+\phi) = \sin\theta\cos\phi + \cos\theta\sin\phi.$$
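A numeric spot-check of the Fact (an illustrative NumPy sketch; the angles are arbitrary):

```python
import numpy as np

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

theta, phi = 0.7, 1.9                      # arbitrary angles in radians
print(np.allclose(rotation(theta) @ rotation(phi),
                  rotation(theta + phi)))  # True: R_theta R_phi = R_{theta+phi}
```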
Formula for the inverse of a 2 × 2 matrix
• Let $A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$. Then A is invertible if and only if ad − bc ≠ 0, and in this case the inverse of A is given by
$$A^{-1} = \frac{1}{ad - bc} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}.$$
• For example,
$$\begin{bmatrix} 4 & 7 \\ 2 & 6 \end{bmatrix}^{-1} = \frac{1}{4 \cdot 6 - 7 \cdot 2} \begin{bmatrix} 6 & -7 \\ -2 & 4 \end{bmatrix} = \begin{bmatrix} 0.6 & -0.7 \\ -0.2 & 0.4 \end{bmatrix}$$
• At-home work: Check that the inverse property is satisfied in this example by evaluating the product of $\begin{bmatrix} 4 & 7 \\ 2 & 6 \end{bmatrix}$ and $\begin{bmatrix} 0.6 & -0.7 \\ -0.2 & 0.4 \end{bmatrix}$ in both orders, and showing the result is the 2 × 2 identity matrix I₂.
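The at-home check can also be done in a few lines of NumPy (a sketch; `inverse_2x2` is our own helper implementing the formula above):

```python
import numpy as np

def inverse_2x2(A):
    """Inverse of a 2x2 matrix via the (ad - bc) formula above."""
    a, b = A[0]
    c, d = A[1]
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is not invertible")
    return (1 / det) * np.array([[ d, -b],
                                 [-c,  a]])

A = np.array([[4, 7],
              [2, 6]])
Ainv = inverse_2x2(A)
print(Ainv)          # [[ 0.6 -0.7], [-0.2  0.4]]
print(A @ Ainv)      # the 2x2 identity (up to rounding)
print(Ainv @ A)      # identity in the other order as well
```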
example: stretching transformation on ℝ²
Consider the vectors
$$u = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \qquad v = \begin{bmatrix} -1 \\ 1 \end{bmatrix}.$$
Find a 2 × 2 matrix A satisfying Au = 2u and Av = 3v.
work space
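One possible line of attack (a hedged sketch, not necessarily the intended solution): the two conditions pack column-wise into A[u v] = [2u 3v], so A = [2u 3v][u v]⁻¹, and the 2 × 2 inverse formula above applies. Numerically:

```python
import numpy as np

u = np.array([1.0, 1.0])
v = np.array([-1.0, 1.0])

P = np.column_stack([u, v])           # the matrix [u  v]
B = np.column_stack([2 * u, 3 * v])   # the required images [2u  3v]
A = B @ np.linalg.inv(P)              # solves A [u v] = [2u 3v]

print(A)                              # a candidate matrix
print(np.allclose(A @ u, 2 * u))      # True: Au = 2u
print(np.allclose(A @ v, 3 * v))      # True: Av = 3v
```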
Linear Independence
Lay 1.7
Linear Independence for n-vectors
Definition:
1. A set {v1, v2, …, vp} of vectors in ℝⁿ (resp. ℂⁿ) is said to be linearly dependent if there exist scalar weights c1, c2, …, cp, not all zero, with
   c1v1 + c2v2 + … + cpvp = 0ₙ
   (with 0ₙ the n-vector of all zeros).
   This equation is called a linear dependence relation among the vectors v1, …, vp when the weights are not all zero.
2. A set {v1, v2, …, vp} of vectors is linearly independent if the set is NOT linearly dependent, i.e., if the vector equation x1v1 + x2v2 + … + xpvp = 0ₙ has only the trivial solution x1 = x2 = … = xp = 0.
independence versus dependence
[Figure: {v1, v2, v3} as a linearly independent set in ℝ³, versus {v1, v2, v3} as a linearly dependent set in ℝ³]
Example:
Consider three vectors in ℝ² oriented in the E, S, and NE directions:
$$v_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \qquad v_2 = \begin{bmatrix} 0 \\ -1 \end{bmatrix}, \qquad v_3 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}.$$
Draw a picture of the vectors and then determine a linear dependence relation among v1, v2, and v3.
Example:
Let
$$v_1 = \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}, \qquad v_2 = \begin{bmatrix} 4 \\ 5 \\ 6 \end{bmatrix}, \qquad v_3 = \begin{bmatrix} 2 \\ 1 \\ 0 \end{bmatrix}.$$
(a) Determine if the set {v1, v2, v3} is a linearly independent set of vectors in ℝ³.
(b) If possible, determine a linear dependence relation among v1, v2, and v3.
(c) Use part (b) to answer: Is v3 a linear combination of v1 and v2?
work space
Observation used in past example: Let {v1, v2, …, vp} be a set of p vectors in ℝⁿ (or ℂⁿ). The vector equation x1v1 + x2v2 + … + xpvp = 0 is equivalent to the (homogeneous) matrix-vector equation:
$$\begin{bmatrix} | & | & \cdots & | \\ v_1 & v_2 & \cdots & v_p \\ | & | & \cdots & | \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_p \end{bmatrix} = 0 \quad \Longleftrightarrow \quad Ax = 0,$$
where A is the n × p matrix having v1, v2, …, vp as its columns.
Two cases:
A. The equation Ax = 0 has a nontrivial solution x ≠ 0 ⟺ cols of A are LD.
B. The equation Ax = 0 has only the trivial solution x = 0 ⟺ cols of A are LI.
Thus we have proven:
• Let A = [v1 … vp] be a matrix written in column form.
• Theorem: The columns v1, …, vp of A are linearly independent ⟺ the equation Ax = 0 has only the trivial solution.
This theorem provides a practical means to test whether a set of vectors is linearly independent. Recall:
• Ax = 0 has only the trivial solution ⟺ the equation Ax = 0 has no free variables ⟺ the reduced echelon form of A has a pivot in every column.
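This pivot test is easy to run with a computer algebra system. A SymPy sketch (not part of the lecture), applied to the vectors from the example above so you can check your work:

```python
from sympy import Matrix

# Columns are v1 = (1,2,3), v2 = (4,5,6), v3 = (2,1,0) from the example above.
A = Matrix([[1, 4, 2],
            [2, 5, 1],
            [3, 6, 0]])

R, pivot_cols = A.rref()              # reduced echelon form + pivot columns
print(pivot_cols)                     # (0, 1): no pivot in the third column
print(len(pivot_cols) == A.cols)      # False => the columns are linearly dependent
```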
A Reformulation of Linear Dependence
Theorem: An indexed set {v1, v2, …, vp} of two or more vectors is linearly dependent if and
only if at least one of the vectors is a linear combination of the rest.
Warning: This theorem does not say that every vector in a linearly dependent set is a linear
combination of the remaining vectors.
Example: Given four vectors v1, v2, v3, v4 satisfying the linear dependence relation
12v1 + 9v2 − 6v4 = 0.
(thus, the set {v1, …, v4} is linearly dependent)
(1) Is v4 expressible as a linear combination of v1, v2, v3?
(2) Is v3 expressible as a linear combination of v1, v2, v4?
Remark: Any set of vectors containing the zero vector is linearly dependent.
Example: The set {0, u, v} is linearly dependent for any vectors u, v.
Special case: Linear independence for a set containing a single vector
Remark: A set {v} containing a single n-vector is linearly independent if and only if v is not the zero vector.
Special case: Linear independence for a pair of vectors
Remark: A set {u, v} of two n-vectors is linearly dependent if and only if one of the vectors is a scalar
multiple of the other.
proof:
First: Let {u, v} be linearly dependent. Then su + tv = 0 for scalars s, t, not both zero. So su = −tv, hence
if s ≠ 0 then u = −(t/s)v, whereas if t ≠ 0 then v = −(s/t)u.
In either case, one of the vectors is a scalar multiple of the other.
Conversely: if u is a scalar multiple of v, say u = αv for some scalar α, then there is a nontrivial linear dependence relation among u and v:
1 ⋅ u + (−α) ⋅ v = 0,
so {u, v} are linearly dependent. Similarly: if v is a scalar multiple of u then {u, v} are linearly dependent.
Example:
Determine if the vectors
$$v_1 = \begin{bmatrix} -i \\ 1 \end{bmatrix}, \qquad v_2 = \begin{bmatrix} 1 - i \\ 1 + i \end{bmatrix},$$
in ℂ², are linearly independent.
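For a pair of vectors, the scalar-multiple test above can also be checked numerically; NumPy handles complex entries directly (an illustrative sketch you can use to confirm your answer):

```python
import numpy as np

v1 = np.array([-1j, 1])              # v1 = (-i, 1)
v2 = np.array([1 - 1j, 1 + 1j])      # v2 = (1 - i, 1 + i)

M = np.column_stack([v1, v2])        # 2x2 complex matrix [v1  v2]
print(np.linalg.matrix_rank(M))      # rank 1 => one column is a multiple of the other
```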
Introduction to Determinants
Lay 3.1
Introduction to determinants
The determinant of a square n × n matrix A is a scalar, written det(A).
We first consider the case n = 2:
Definition: The determinant of a 2 × 2 matrix $A = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}$ is given by
$$\det A = a_{11}a_{22} - a_{12}a_{21}.$$
For example:
$$\det \begin{bmatrix} -i & 1-i \\ 1 & 1+i \end{bmatrix} = (-i)(1+i) - (1-i)(1) = (1-i) - (1-i) = 0.$$
• We compute the determinant of an n × n matrix using the cofactor expansion formula.
Definition: For an n × n matrix A, the matrix Aij is formed from the matrix A by removing the i-th row and j-th column of A.
Ex:
$$A = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix} \longrightarrow A_{13} = \begin{bmatrix} 4 & 5 \\ 7 & 8 \end{bmatrix}, \qquad A_{22} = \begin{bmatrix} 1 & 3 \\ 7 & 9 \end{bmatrix}.$$
Determinants
Definition: For n ≥ 3, the determinant of an n × n matrix A = [aij] is given by
$$\det A = a_{11}\det A_{11} - a_{12}\det A_{12} + \cdots + (-1)^{n+1}a_{1n}\det A_{1n} = \sum_{j=1}^{n} (-1)^{j+1} a_{1j} \det A_{1j}.$$
Ex: Compute the determinant of $A = \begin{bmatrix} 1 & 0 & 5 \\ 2 & -1 & 4 \\ 0 & 0 & -2 \end{bmatrix}$.
work space
• To state the next theorem, it is convenient to write the definition of det A in a
slightly different form.
Definition: Given A = [aij], the (i, j)-cofactor of A is the number Cij given by
$$C_{ij} = (-1)^{i+j} \det A_{ij}.$$
Then
$$\det A = a_{11}C_{11} + a_{12}C_{12} + \ldots + a_{1n}C_{1n}.$$
This formula is called the cofactor expansion across the first row of A.
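A minimal Python sketch of the cofactor expansion across the first row (our own recursive implementation, practical only for small matrices), which also checks the worked example above:

```python
def det(A):
    """Determinant via cofactor expansion across the first row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # A_{1j}: delete row 1 and column j+1 (0-based column index j).
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det(minor)   # (-1)^{1+j} a_{1j} det A_{1j}
    return total

# The 3x3 example from above:
print(det([[1, 0, 5],
           [2, -1, 4],
           [0, 0, -2]]))   # 2
```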