2.1 Matrix Multiplication.
In Section 1.2 we defined the matrix-vector product of a matrix A and a vector x. We now use this product to define the product of two matrices A and B, as given by the following definition.
1. Definition.
If A is an m × n matrix, and B is an n × p matrix with columns b1, b2, . . . , bp, then the product AB is the m × p matrix whose columns are Ab1, Ab2, . . . , Abp. That is,

AB = A(b1 b2 · · · bp) = (Ab1 Ab2 · · · Abp).

2. Example.
Let
A = ( 1 2 ; 3 4 ; 1 5 ) and B = ( 2 3 4 ; 3 2 1 )
(rows are separated by semicolons, so A is 3 × 2 and B is 2 × 3). Evaluate AB using the previous definition.
Writing b1, b2, b3 for the columns of B, the columns of AB are Ab1, Ab2, Ab3. For example,
Ab1 = 2 ( 1 ; 3 ; 1 ) + 3 ( 2 ; 4 ; 5 ) = ( 8 ; 18 ; 17 ).
Computing Ab2 and Ab3 the same way and placing the results side by side gives
AB = ( 8 7 6 ; 18 17 13 ; 17 16 9 ).
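The column-by-column rule of Definition 1 translates directly into code. A minimal sketch in Python with NumPy, using the matrices of this example (the variable names are ours):

```python
import numpy as np

# A is 3 x 2 and B is 2 x 3, so AB is 3 x 3 (matrices from the example above)
A = np.array([[1, 2], [3, 4], [1, 5]])
B = np.array([[2, 3, 4], [3, 2, 1]])

# Definition 1: column j of AB is A times column j of B
cols = [A @ B[:, j] for j in range(B.shape[1])]
AB = np.column_stack(cols)

print(AB)
assert np.array_equal(AB, A @ B)  # agrees with NumPy's built-in product
```

The assertion confirms that stacking the columns Abj reproduces the ordinary matrix product.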
We will mostly use the following definition to evaluate the product of two matrices.
3. Definition.
If A = (aij) is an m × p matrix and B = (bij) is a p × n matrix, then the product of A and B, denoted AB, is the m × n matrix C = (cij), defined by
cij = ai1 b1j + ai2 b2j + · · · + aip bpj.
The following diagram may be used to evaluate the product
of two matrices. It helps us to visualize the process described
in the definition.
[Diagram: C = AB, where A has m rows and p columns, B has p rows and n columns, and C has m rows and n columns. Each entry cij is formed something like a dot product: the entries ai1, ai2, . . . , aip of row i of A are multiplied term by term against the entries b1j, b2j, . . . , bpj of column j of B, and the products are added.]
Note: AB is defined only when the number of columns of A
is exactly the same as the number of rows of B.
A (m × p) times B (p × n) gives AB (m × n): the inner dimensions must be the same, and the outer dimensions give the size of AB.
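The entrywise formula of Definition 3 is exactly a triple loop. A self-contained sketch in plain Python (the function name mat_mul is ours), including the dimension check from the note above:

```python
def mat_mul(A, B):
    """Multiply A (m x p) by B (p x n): c_ij = a_i1*b_1j + ... + a_ip*b_pj."""
    m, p, n = len(A), len(B), len(B[0])
    if len(A[0]) != p:
        raise ValueError("columns of A must equal rows of B")
    C = [[0] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            for k in range(p):
                C[i][j] += A[i][k] * B[k][j]
    return C

# a 2 x 3 matrix times a 3 x 2 matrix gives a 2 x 2 matrix
print(mat_mul([[1, 2, -1], [3, -1, 4]], [[5, 2], [-3, 4], [1, 2]]))
# → [[-2, 8], [22, 10]]
```

Each entry of the result is the dot product of a row of A with a column of B, exactly as in the diagram.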
4. Example.
Let
A = ( 1 2 −1 ; 3 −1 4 ) and B = ( 5 2 ; −3 4 ; 1 2 ).
Find AB if it exists.
A is 2 × 3 and B is 3 × 2, so AB is defined and has size 2 × 2. Using Definition 3, the (1,1) entry is
(1)(5) + (2)(−3) + (−1)(1) = −2,
the (2,1) entry is
(3)(5) + (−1)(−3) + (4)(1) = 22,
and computing the second column the same way gives
AB = ( −2 8 ; 22 10 ).
5. Example.
(a) If A is a 2 × 3 matrix and B is a 3 × 4 matrix, describe AB and BA, if possible.
AB is defined, since the inner dimensions (3 and 3) are the same, and AB has size 2 × 4. BA is undefined, since B has 4 columns but A has only 2 rows.
(b) Let A be a 2 × 3 matrix and B be a 3 × 2 matrix; describe AB and BA, if possible.
Both products are defined: AB has size 2 × 2 and BA has size 3 × 3.
(c) Let A = ( 1 2 ; 1 3 ) and B = ( 2 1 ; 0 1 ). Evaluate AB and BA, if possible.
Both products are defined and are 2 × 2:
AB = ( 2 3 ; 2 4 ) and BA = ( 3 7 ; 1 3 ).
Thus AB ≠ BA. In general, AB ≠ BA.
6. Example.
Let A be a 2 × 3 matrix, B a 2 × 2 matrix, and C a 3 × 3 matrix. Evaluate the following expressions or explain why they are not defined.
(a) AB
(b) B²A
(c) CAᵀ
(a) AB is undefined: A has 3 columns, but B has only 2 rows.
(b) B²A is defined: B² is 2 × 2 and A is 2 × 3, so B²A is a 2 × 3 matrix.
(c) CAᵀ is defined: Aᵀ is 3 × 2 and C is 3 × 3, so CAᵀ is a 3 × 2 matrix.
7. Properties of Matrix Multiplication
If r and s are real numbers and A, B, C are matrices of the
appropriate sizes, then
(a) A(BC) = (AB)C
(b) A(B + C) = AB + AC
(c) (A + B)C = AC + BC
(d) r(sA) = (rs)A
(e) A(rB) = r(AB) = (rA)B
(f) (AB)ᵀ = BᵀAᵀ
If A is an m × n matrix, then Im A = A In = A.
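These properties can be spot-checked numerically. A sketch with NumPy on randomly chosen integer matrices (any sizes that make the products defined will do):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 5, (2, 3))
B = rng.integers(-5, 5, (3, 4))
C = rng.integers(-5, 5, (3, 4))
r = 3

assert np.array_equal(A @ (B + C), A @ B + A @ C)  # property (b)
assert np.array_equal(A @ (r * B), r * (A @ B))    # property (e)
assert np.array_equal((A @ B).T, B.T @ A.T)        # property (f)
print("properties (b), (e), (f) hold for this sample")
```

Integer matrices keep the arithmetic exact, so equality can be tested directly rather than up to rounding.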
8. Example.
Let A, B, and C be matrices of the appropriate sizes (A is 2 × 3; B and C are both 3 × 2). Verify the property A(B + C) = AB + AC.
Computing B + C first and then multiplying by A on the left gives the same matrix as computing AB and AC separately and adding them, which verifies the property.
9. Example.
Let r = 2, A = ( 1 2 3 ; 2 0 1 ), and let B be a 3 × 2 matrix. Verify the property A(rB) = r(AB).
Scaling B by r = 2 and then multiplying by A gives the same matrix as forming AB first and then scaling by 2, which verifies the property.
10. Example.
Let A be a 2 × 3 matrix and B = ( 0 1 ; 2 2 ; 3 1 ). Verify the property (AB)ᵀ = BᵀAᵀ.
AB is 2 × 2, and computing its transpose gives the same matrix as the product BᵀAᵀ of the 2 × 3 matrix Bᵀ with the 3 × 2 matrix Aᵀ, which verifies the property.
Suppose that A is a square matrix (n × n). If p is a positive integer, then we define the powers of a matrix as follows:
A^p = A A · · · A (p factors).
If A is an n × n matrix, we also define A^0 = In.
For nonnegative integers p and q,
A^p A^q = A^(p+q) and (A^p)^q = A^(pq).
It should be noted that (AB)^p ≠ A^p B^p for square matrices in general.
11. Definition.
A square matrix A = (aij) for which every entry off the main diagonal is zero, that is, aij = 0 for i ≠ j, is called a diagonal matrix.
12. Example.
The following matrices are diagonal.
A = ( 2 0 ; 0 3 ) and B = ( 1 0 0 ; 0 5 0 ; 0 0 2 ).
13. Theorem.
If A is a diagonal matrix, An is a diagonal matrix obtained
by taking the nth power of each diagonal entry, where n is a
positive integer.
14. Example.
Let A = ( 2 0 0 ; 0 3 0 ; 0 0 5 ). Find A².
By Theorem 13, A² is the diagonal matrix obtained by squaring each diagonal entry:
A² = ( 4 0 0 ; 0 9 0 ; 0 0 25 ).
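Theorem 13 is easy to confirm computationally; a sketch with NumPy on the matrix of this example:

```python
import numpy as np
from numpy.linalg import matrix_power

A = np.diag([2, 3, 5])          # the diagonal matrix from the example
A2 = matrix_power(A, 2)

print(A2)  # diagonal entries are squared: 4, 9, 25
assert np.array_equal(A2, np.diag([4, 9, 25]))
```

The same pattern holds for any positive power: the nth power of a diagonal matrix is diagonal, with each diagonal entry raised to the nth power.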
15. Definition.
A matrix A = (aij) is called symmetric if Aᵀ = A. That is, A is symmetric if it is a square matrix for which aij = aji.
Remark:
If a matrix A is symmetric, then the elements of A are symmetric with respect to the main diagonal of A.
16. Example.
A = ( 1 2 ; 2 5 ) is symmetric.
B = ( 1 2 3 ; 4 5 6 ) is not symmetric: it is not a square matrix.
C = ( 1 3 2 ; 3 0 4 ; 2 4 5 ) is symmetric.
17. Theorem.
Suppose A and B are symmetric matrices, then
(a) Aᵀ is symmetric.
(b) A + B and A − B are symmetric.
(c) cA is symmetric, for any scalar c.
18. Example.
Let
A = ( 1 2 3 ; 2 4 5 ; 3 5 6 ) and B = ( 0 5 8 ; 5 1 0 ; 8 0 7 ).
Find A + B and A − B.
A + B = ( 1 7 11 ; 7 5 5 ; 11 5 13 ) and A − B = ( 1 −3 −5 ; −3 3 5 ; −5 5 −1 ).
Both are symmetric, as Theorem 17(b) guarantees.
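Theorem 17 can likewise be checked numerically. A sketch with NumPy, using the matrix A above (the helper name is_symmetric is ours, and B's entries are illustrative):

```python
import numpy as np

A = np.array([[1, 2, 3], [2, 4, 5], [3, 5, 6]])   # symmetric, from the example
B = np.array([[0, 5, 8], [5, 1, 0], [8, 0, 7]])   # another symmetric matrix

def is_symmetric(M):
    """A matrix is symmetric when it equals its own transpose."""
    return np.array_equal(M, M.T)

assert is_symmetric(A + B) and is_symmetric(A - B)  # Theorem 17(b)
assert is_symmetric(4 * A)                          # Theorem 17(c) with c = 4
print("sum, difference, and scalar multiple are symmetric")
```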
2.3 Invertibility and Elementary Matrices.
Matrix algebra provides tools for manipulating matrix equations and creating various useful formulas in ways similar to
doing ordinary algebra with real numbers. This section investigates the matrix analogue of the reciprocal, or multiplicative
inverse, of a nonzero number.
Recall that the multiplicative inverse of a number such as 5 is 1/5 or 5⁻¹. This inverse satisfies the equations
5⁻¹ · 5 = 1 and 5 · 5⁻¹ = 1.
The matrix generalization requires both equations, since matrix multiplication is not commutative. Also, we will not use the slanted-line notation (for division) with matrices.
The existence of an inverse for matrices will allow us to solve, under certain conditions, systems of the form Ax = b, the same way the multiplicative inverse c⁻¹ of a number c is used to solve cx = b by writing x = c⁻¹b.
1. Definition.
An n × n matrix A is called invertible if there exists an n × n matrix B such that
AB = BA = In.
The matrix B is called an inverse of A. If there exists no such matrix B, then A is called singular (or noninvertible).
Remark:
If AB = BA = In, then A is also an inverse of B.
2. Example.
Let A = ( 2 1 ; 1 1 ) and B = ( 1 −1 ; −1 2 ).
Since AB = BA = I2, B is an inverse of A, and A is invertible.
3. Theorem.
An inverse of a matrix, if it exists, is unique. That is, if B
and C are both inverses of the matrix A, then B = C.
Note:
We will write the inverse of A, if it exists, as A⁻¹. Thus,
AA⁻¹ = A⁻¹A = In.
4. Theorem.
The matrix A = ( a b ; c d ) is invertible if ad − bc ≠ 0. Then the inverse A⁻¹ is given by
A⁻¹ = 1/(ad − bc) ( d −b ; −c a ).
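The 2 × 2 inverse formula of Theorem 4 can be sketched as a small function (the name inv2 is ours):

```python
def inv2(a, b, c, d):
    """Inverse of ( a b ; c d ) via Theorem 4; raises if the matrix is singular."""
    det = a * d - b * c
    if det == 0:
        raise ValueError("ad - bc = 0: the matrix is singular")
    return [[d / det, -b / det], [-c / det, a / det]]

print(inv2(1, 2, 3, 4))  # → [[-2.0, 1.0], [1.5, -0.5]]
```

For the singular matrix of Example 6 below, ad − bc = 0 and the function raises instead of returning a matrix.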
5. Example.
Let A = ( 1 2 ; 3 4 ). Find A⁻¹ if it exists.
First, ad − bc = (1)(4) − (3)(2) = −2 ≠ 0, so A⁻¹ exists. Then
A⁻¹ = 1/(−2) ( 4 −2 ; −3 1 ) = ( −2 1 ; 3/2 −1/2 ).
6. Example.
Let A = ( 1 2 ; 2 4 ). Find A⁻¹ if it exists.
We have ad − bc = (1)(4) − (2)(2) = 0. Hence A has no inverse; A is a singular matrix.
The next theorem gives us a method for solving equations of the form Ax = b, as well as a test for matrix invertibility.
7. Theorem.
If A is an n × n matrix, then A is invertible if and only if the linear system Ax = b has a unique solution, given by
x = A⁻¹b,
for every n × 1 matrix b.
8. Example.
Solve the system
x1 + 2x2 = 3
3x1 + 4x2 = −1
In matrix form we can write Ax = b, with
A = ( 1 2 ; 3 4 ), x = ( x1 ; x2 ), b = ( 3 ; −1 ).
Since A⁻¹ = ( −2 1 ; 3/2 −1/2 ) (Example 5), the solution is given by
x = A⁻¹b = ( −2 1 ; 3/2 −1/2 ) ( 3 ; −1 ) = ( −7 ; 5 ).
Therefore, x1 = −7 and x2 = 5.
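The computation of Example 8 can be reproduced with NumPy; a sketch:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.array([3.0, -1.0])

x = np.linalg.inv(A) @ b   # x = A^-1 b, as in Theorem 7
print(x)                   # approximately [-7, 5]

# solve() reaches the same answer without forming the inverse explicitly,
# which is the preferred route numerically
assert np.allclose(np.linalg.solve(A, b), x)
```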
9. Properties of the Inverse
(a) If A is invertible, then A⁻¹ is invertible and (A⁻¹)⁻¹ = A.
(b) If A and B are invertible, then AB is invertible and (AB)⁻¹ = B⁻¹A⁻¹.
(c) If A is invertible, then (Aᵀ)⁻¹ = (A⁻¹)ᵀ.
(d) If A is invertible, then kA is invertible for any nonzero scalar k, and (kA)⁻¹ = (1/k) A⁻¹.
10. Example.
If A = ( 1 2 ; 3 4 ), verify the property (Aᵀ)⁻¹ = (A⁻¹)ᵀ.
We have Aᵀ = ( 1 3 ; 2 4 ), so
(Aᵀ)⁻¹ = 1/(−2) ( 4 −3 ; −2 1 ) = ( −2 3/2 ; 1 −1/2 ).
Also, A⁻¹ = ( −2 1 ; 3/2 −1/2 ), so
(A⁻¹)ᵀ = ( −2 3/2 ; 1 −1/2 ).
The two matrices agree, which verifies the property.
11. Theorem.
If A1, A2, . . . , Ar are n × n invertible matrices, then A1A2 · · · Ar is invertible and
(A1A2 · · · Ar)⁻¹ = Ar⁻¹ Ar−1⁻¹ · · · A2⁻¹ A1⁻¹.
12. Example.
Let B = ( 1 4 ; 2 5 ).
(a) Suppose A satisfies the equation 2Aᵀ + I = B. Find A.
(b) Find B⁻¹.
(a) If 2Aᵀ + I = B, then 2Aᵀ = B − I, or Aᵀ = (1/2)(B − I). So
Aᵀ = (1/2) ( 0 4 ; 2 4 ) = ( 0 2 ; 1 2 ).
Transposing Aᵀ again, A = ( 0 1 ; 2 2 ).
(b) Since (1)(5) − (2)(4) = −3 ≠ 0,
B⁻¹ = 1/(−3) ( 5 −4 ; −2 1 ) = ( −5/3 4/3 ; 2/3 −1/3 ).
13. Elementary Matrices.
Let E be a matrix obtained from the identity matrix I3 by a single row operation, and let A be a 3 × n matrix. If we form the product EA, we obtain exactly the matrix that results from performing the same row operation directly on A. This observation is recorded in the following theorem and definition.
14. Theorem.
Let A be an m × n matrix. We can perform any elementary row operation on A by first performing that same operation on the identity matrix Im, to get a matrix E, and then forming the product EA.
15. Definition.
A matrix obtained from an identity matrix Im by performing
a single elementary row operation is called an elementary
matrix.
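Theorem 14 can be demonstrated in code: build E by applying a row operation to the identity, then compare EA with the same row operation applied directly to A. A NumPy sketch (the matrix and the operation are illustrative):

```python
import numpy as np

A = np.array([[2, 3, 2], [1, 4, 7], [3, 0, 4]])

# Build the elementary matrix E by applying -2R1 + R3 -> R3 to I3
E = np.eye(3)
E[2] = E[2] - 2 * E[0]

# Apply the same operation directly to A
B = A.astype(float).copy()
B[2] = B[2] - 2 * B[0]

assert np.array_equal(E @ A, B)  # left-multiplying by E performs the row operation
print(E @ A)
```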
16. Example.
By using R1 ↔ R2 on I3, we have
E1 = ( 0 1 0 ; 1 0 0 ; 0 0 1 ).
By using 3R2 → R2 on I3, we have
E2 = ( 1 0 0 ; 0 3 0 ; 0 0 1 ).
By using −3R2 + R3 → R3 on I3, we have
E3 = ( 1 0 0 ; 0 1 0 ; 0 −3 1 ).
E1, E2, E3 are elementary matrices: each is obtained from I3 by a single elementary row operation.
17. Example.
Find an elementary matrix E such that EA = B, where
A = ( 2 3 2 ; 1 4 7 ; 3 0 4 ) and B = ( 2 3 2 ; 1 4 7 ; −1 −6 0 ).
Row 3 of B equals row 3 of A minus twice row 1, so the operation is −2R1 + R3 → R3. If we perform this operation on I3, we have
E = ( 1 0 0 ; 0 1 0 ; −2 0 1 ).
Then we form the product
EA = ( 1 0 0 ; 0 1 0 ; −2 0 1 ) ( 2 3 2 ; 1 4 7 ; 3 0 4 ) = ( 2 3 2 ; 1 4 7 ; −1 −6 0 ) = B.
18. Theorem.
Every elementary matrix is invertible, and the inverse is
also an elementary matrix.
19. Example.
If
E1 = ( 1 0 0 ; 0 0 1 ; 0 1 0 ) and E2 = ( 1 0 0 ; 0 5 0 ; 0 0 1 ),
find E1⁻¹ and E2⁻¹.
E1 swaps rows 2 and 3; the operation that undoes it is the same swap, so
E1⁻¹ = ( 1 0 0 ; 0 0 1 ; 0 1 0 ).
E2 multiplies row 2 by 5; the inverse operation multiplies row 2 by 1/5, so
E2⁻¹ = ( 1 0 0 ; 0 1/5 0 ; 0 0 1 ).
2.4 The Inverse of a Matrix.
In this section, we will see the important connection between invertible matrices and row operations. This will lead to a method
for computing inverses.
Since performing an elementary row operation on a matrix A is
the same as multiplying A on the left by an elementary matrix
E, the transformation of A into reduced row echelon form can
be accomplished by a series of matrix multiplications. Thus if
R is the reduced row echelon form of A, there is a sequence of
elementary matrices E1, E2, . . . , Ek such that
R = Ek Ek−1 · · · E2 E1 A.   (1)
The subscripts are in reverse order because we are performing
the operations corresponding to E1 first to obtain E1A, then the
operation corresponding to E2 to obtain E2E1A, and so on.
1. Example.
Take a matrix A and row reduce it to its reduced row echelon form R, writing down the elementary matrix E1, E2, . . . for each operation used (as in the examples of Section 2.3). Multiplying these elementary matrices against A, in the order the operations were performed, reproduces the reduction:
R = Ek · · · E2 E1 A.
2. Theorem.
A square matrix A is invertible if and only if its reduced row
echelon form R is an identity matrix.
Remark:
Using (1), if A is invertible, R = Im, so
Im = Ek Ek−1 · · · E2 E1 A.
We can multiply both sides by A⁻¹ to get
Im A⁻¹ = Ek Ek−1 · · · E2 E1 A A⁻¹,
or
A⁻¹ = Ek Ek−1 · · · E2 E1 Im.
Therefore, the same procedure that reduces A to Im can be applied to reduce Im to A⁻¹.
If we place A and I side by side to form an augmented matrix (A|I), then row operations on this matrix produce identical operations on A and on I. Either there are row operations that transform A to I and I to A⁻¹, or else A is not invertible.
3. Algorithm for Finding A⁻¹.
Row reduce the augmented matrix (A|I). If A is row equivalent to I, then (A|I) is row equivalent to (I|A⁻¹). Otherwise, A does not have an inverse.
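The algorithm can be sketched as a short routine that row reduces (A|I) (the function name is ours, and partial pivoting is added for numerical stability; this is an illustration, not a production implementation):

```python
import numpy as np

def inverse_via_row_reduction(A):
    """Row reduce (A|I) to (I|A^-1); raises if A is not invertible."""
    n = len(A)
    M = np.hstack([np.array(A, dtype=float), np.eye(n)])
    for i in range(n):
        pivot = i + np.argmax(np.abs(M[i:, i]))   # choose the largest pivot below row i
        if np.isclose(M[pivot, i], 0.0):
            raise ValueError("A is not invertible")
        M[[i, pivot]] = M[[pivot, i]]             # swap rows: R_i <-> R_pivot
        M[i] /= M[i, i]                           # scale the pivot row so the pivot is 1
        for j in range(n):
            if j != i:
                M[j] -= M[j, i] * M[i]            # clear column i in every other row
    return M[:, n:]                               # the right half is now A^-1

A = [[1, 1, 1], [0, 2, 3], [5, 5, 1]]
print(inverse_via_row_reduction(A))
```

Multiplying the result by A should give the identity, which is a convenient way to check the output.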
4. Example.
Find the inverse of the matrix
A = ( 1 1 1 ; 0 2 3 ; 5 5 1 ).
We row reduce the augmented matrix
(A|I) = ( 1 1 1 | 1 0 0 ; 0 2 3 | 0 1 0 ; 5 5 1 | 0 0 1 ).
Applying −5R1 + R3 → R3 gives the row ( 0 0 −4 | −5 0 1 ); scaling R2 by 1/2 and R3 by −1/4, then clearing the entries above each pivot, reduces the left side to I3. The right side is then
A⁻¹ = ( 13/8 −1/2 −1/8 ; −15/8 1/2 3/8 ; 5/4 0 −1/4 ).
5. Example.
Find the inverse of the matrix
A = ( 1 2 1 ; 2 −2 0 ; 3 0 1 ),
if it exists.
We row reduce
(A|I) = ( 1 2 1 | 1 0 0 ; 2 −2 0 | 0 1 0 ; 3 0 1 | 0 0 1 ).
Applying −2R1 + R2 → R2 and −3R1 + R3 → R3 gives the rows ( 0 −6 −2 | −2 1 0 ) and ( 0 −6 −2 | −3 0 1 ); subtracting one from the other leaves a zero row on the left. We will never be able to get the identity matrix on the left-hand side. Therefore, A is not invertible (A⁻¹ does not exist).
6. Example.
Express A = ( 2 3 ; 1 0 ) as a product of elementary matrices.
We first reduce A to I and write down the elementary matrix at each stage.
Applying R1 ↔ R2 to A gives E1A = ( 1 0 ; 2 3 ), where E1 = ( 0 1 ; 1 0 ).
Applying −2R1 + R2 → R2 to E1A gives E2(E1A) = ( 1 0 ; 0 3 ), where E2 = ( 1 0 ; −2 1 ).
Applying (1/3)R2 → R2 to E2E1A gives E3(E2E1A) = I, where E3 = ( 1 0 ; 0 1/3 ).
Hence E3E2E1A = I, so
A = (E3E2E1)⁻¹ = E1⁻¹ E2⁻¹ E3⁻¹ = ( 0 1 ; 1 0 ) ( 1 0 ; 2 1 ) ( 1 0 ; 0 3 ).
(Check: multiplying out the right-hand side gives A.)
We can now adapt Theorem 15 Section 1.7 to square matrices and add more statements.
7. Theorem.
The following statements about an n ⇥ n matrix A are equivalent. That is, if one is true, then so are the others, and if
one is false, then so are the others.
(a) A is invertible.
(b) The reduced row echelon form of A is In
(c) The columns of A are linearly independent.
(d) The equation Ax = b is consistent for every vector b in
Rn .
(e) The nullity of A is zero.
(f) The rank of A is n.
(g) The only solution of Ax = 0 is 0.
(h) The span of the columns of A is Rn.
(i) There exists an n ⇥ n matrix B such that BA = In.
(j) There exists an n ⇥ n matrix C such that AC = In.
(k) A is a product of elementary matrices.
2.7 Linear Transformations and Matrices.
Up to now, we have been thinking of an m ⇥ n matrix A simply
as an array of numbers. However, there are other ways to view
matrices. For example, we can think of A as representing a sort
of input/output device.
[Diagram: v in Rn → (matrix A) → Av in Rm.]
Let A be the matrix A = ( 1 2 3 ; 2 1 2 ). Then, if we input the vector v = ( 4 ; −1 ; 2 ) in R3, we get as output the vector
Av = ( 1 2 3 ; 2 1 2 ) ( 4 ; −1 ; 2 ) = ( 8 ; 11 )
in R2. The matrix A is like a function from R3 to R2. In that case, A will be called a transformation from R3 to R2.
1. Definition.
A transformation (or mapping) T from Rn to Rm is a rule that assigns to each vector x in Rn a vector T(x) in Rm. The set Rn is called the domain of T, and Rm is called the codomain of T. We use the notation T : Rn → Rm. For x in Rn, the vector T(x) in Rm is called the image of x. The range of T is defined to be the set of images T(x) for all x in Rn. A transformation T : Rn → Rn is called an operator on Rn.
2. Example.
(a) Let T : R2 → R3 be defined by
T( x1 ; x2 ) = ( x1 ; x2 ; x1 + x2 ).
The domain is R2 and the codomain is R3. For example,
T( 2 ; 1 ) = ( 2 ; 1 ; 3 ),
so ( 2 ; 1 ; 3 ) is the image of ( 2 ; 1 ) by T. (This is a linear transformation.)
(b) Let T : R2 → R2 be defined by
T( x1 ; x2 ) = ( x1 + x2 ; x1x2 ).
This is a transformation whose domain is R2 and whose codomain is R2. For example,
T( 1 ; 2 ) = ( 3 ; 2 ).
(This is a non-linear transformation, because of the product x1x2.)
3. Definition.
A linear transformation T : Rn → Rm is defined by equations of the form
T( x1 ; x2 ; . . . ; xn ) = ( a11x1 + a12x2 + · · · + a1nxn ; a21x1 + a22x2 + · · · + a2nxn ; . . . ; am1x1 + am2x2 + · · · + amnxn )
or, in matrix form, T(x) = Ax, where
A = ( a11 a12 · · · a1n ; a21 a22 · · · a2n ; . . . ; am1 am2 · · · amn ) and x = ( x1 ; x2 ; . . . ; xn ).
The matrix A is called the standard matrix for the linear
transformation T , and T is called multiplication by A and
is denoted by TA(x) = Ax.
4. Example.
Define a linear transformation T : R2 → R3 by
T( x1 ; x2 ) = ( x1 − 3x2 ; 3x1 + 5x2 ; x1 + 7x2 ).
Find the image of x = ( 2 ; −1 ) by T.
The standard matrix of T is A = ( 1 −3 ; 3 5 ; 1 7 ), so
T(x) = Ax = ( 1 −3 ; 3 5 ; 1 7 ) ( 2 ; −1 ) = ( 5 ; 1 ; −5 ).
Here the domain is R2 and the codomain is R3.
5. Example.
Let A = ( 1 2 ; 3 4 ); then TA : R2 → R2 is the matrix transformation given by
TA(x) = Ax = ( 1 2 ; 3 4 ) ( x1 ; x2 ) = ( x1 + 2x2 ; 3x1 + 4x2 ).
6. Definition.
Let T0 : Rn → Rm be described by T0(x) = 0, the zero vector in Rm, for every vector x in Rn. This transformation is called the zero transformation from Rn to Rm. If O is the m × n zero matrix, then T0(x) = Ox = 0.
7. Definition.
Let T1 : Rn → Rn be described by T1(x) = x, for every vector x in Rn; then T1 is called the identity operator. If In is the n × n identity matrix, then T1(x) = In x = x.
8. Theorem.
A transformation T : Rn → Rm is linear if and only if
(a) T(u + v) = T(u) + T(v)
(b) T(cu) = cT(u)
for all vectors u and v in Rn and all scalars c.
Remark:
If T is a linear transformation, then
(a) T(0) = 0
(b) T(c1v1 + c2v2 + · · · + cpvp) = c1T(v1) + c2T(v2) + · · · + cpT(vp)
for all vectors v1, v2, . . . , vp in Rn and all scalars c1, c2, . . . , cp.
9. Example.
Show that T( x1 ; x2 ) = ( 3x1 + x2 ; 7x1 ) is a linear transformation from R2 to R2.
(a) Show that T(u + v) = T(u) + T(v) for all vectors u = ( u1 ; u2 ) and v = ( v1 ; v2 ) in R2:
T(u + v) = T( u1 + v1 ; u2 + v2 ) = ( 3(u1 + v1) + (u2 + v2) ; 7(u1 + v1) )
= ( 3u1 + u2 ; 7u1 ) + ( 3v1 + v2 ; 7v1 ) = T(u) + T(v).
(b) Show that T(cu) = cT(u) for all vectors u = ( u1 ; u2 ) in R2 and all scalars c:
T(cu) = T( cu1 ; cu2 ) = ( 3cu1 + cu2 ; 7cu1 ) = c ( 3u1 + u2 ; 7u1 ) = cT(u).
Therefore, T is linear.
10. Theorem.
Every linear transformation from Rn to Rm is a matrix transformation, and conversely, every matrix transformation from
Rn to Rm is a linear transformation.
11. Theorem.
Let T : Rn → Rm be a linear transformation, and e1, e2, . . . , en be the standard basis vectors for Rn. Then the standard matrix for T is the m × n matrix whose jth column is the vector T(ej):
( T(e1) T(e2) · · · T(en) ).
Remark:
For R2, e1 = ( 1 ; 0 ), e2 = ( 0 ; 1 ).
For R3, e1 = ( 1 ; 0 ; 0 ), e2 = ( 0 ; 1 ; 0 ), e3 = ( 0 ; 0 ; 1 ).
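Theorem 11 gives a direct recipe: apply T to each standard basis vector and use the images as columns. A NumPy sketch for the transformation of Example 4 in this section (the function T below encodes its formulas):

```python
import numpy as np

def T(x):
    """T(x1, x2) = (x1 - 3x2, 3x1 + 5x2, x1 + 7x2), as in Example 4."""
    x1, x2 = x
    return np.array([x1 - 3 * x2, 3 * x1 + 5 * x2, x1 + 7 * x2])

# The rows of the identity are e1, e2; the columns of A are T(e1), T(e2)
n = 2
A = np.column_stack([T(e) for e in np.eye(n)])
print(A)  # columns are T(e1) = (1, 3, 1) and T(e2) = (-3, 5, 7)

# Sanity check: multiplying by A reproduces T on an arbitrary vector
assert np.array_equal(A @ np.array([2, -1]), T(np.array([2, -1])))
```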
12. Example.
Let T : R2 → R3 be the linear transformation
T( u1 ; u2 ) = ( u1 + u2 ; u1 ; 2u1 − u2 ).
Find the standard matrix for T.
We have T(e1) = T( 1 ; 0 ) = ( 1 ; 1 ; 2 ) and T(e2) = T( 0 ; 1 ) = ( 1 ; 0 ; −1 ).
Thus, the matrix is given by
( T(e1) T(e2) ) = ( 1 1 ; 1 0 ; 2 −1 ).
Now, we are going to study geometric transformations like reflections, rotations, and projections.
13. Example.
Find the standard matrix for the dilation T (x) = 3x, for x in
R2 .
Using Theorem 11,
T(e1) = 3 ( 1 ; 0 ) = ( 3 ; 0 ) and T(e2) = 3 ( 0 ; 1 ) = ( 0 ; 3 ).
Thus, the matrix is given by
( 3 0 ; 0 3 ).
14. Example.
Let T : R2 → R2 be the linear transformation that reflects points with respect to the x-axis.
(a) Find the standard matrix for T.
(b) Find the reflection of ( 2 ; 3 ) about the x-axis.
(a) First method (Theorem 11):
T(e1) = e1 = ( 1 ; 0 ) and T(e2) = −e2 = ( 0 ; −1 ),
so the matrix is given by ( 1 0 ; 0 −1 ).
(a) Second method:
The reflection sends ( x ; y ) to ( x ; −y ), and
( 1 0 ; 0 −1 ) ( x ; y ) = ( x ; −y ).
(b) T( 2 ; 3 ) = ( 1 0 ; 0 −1 ) ( 2 ; 3 ) = ( 2 ; −3 ).
15. Example.
Find the standard matrix for the orthogonal projection on the xy-plane. Find T( 1 ; 3 ; 5 ).
The projection sends ( x ; y ; z ) to ( x ; y ; 0 ), so the matrix is
( 1 0 0 ; 0 1 0 ; 0 0 0 ).
Then T( 1 ; 3 ; 5 ) = ( 1 ; 3 ; 0 ).
This is a list of the most common transformations used in
this course. You may refer to this list when solving problems
from assignments, midterms or finals.
2.8 Composition and invertibility of Linear
Transformations.
Recall from Section 1.6 that if V is a set of vectors in Rn and
SpanS = V , then we say that S is a generating set for V or
that S generates V . We can use the standard matrix of a linear transformation T to find a generating set for the range of T
according to the following result:
The range of a linear transformation equals the span of the columns of its standard matrix.
1. Example.
Let T : R2 → R3 be defined by
T( x1 ; x2 ) = ( x1 − 3x2 ; 3x1 + 5x2 ; x1 + 7x2 ).
Determine a generating set for the range of the linear transformation T.
The standard matrix of T is A = ( 1 −3 ; 3 5 ; 1 7 ).
A generating set of the range of T is given by the columns of A:
{ ( 1 ; 3 ; 1 ), ( −3 ; 5 ; 7 ) }.
The first fundamental property of linear transformations is
given by the following definition.
2. Definition.
A transformation T from Rn to Rm is said to be onto if its
range is all of Rm; that is, if every vector in Rm is an image.
The following theorem gives a practical way of checking whether a linear transformation is onto or not.
3. Theorem.
Let TA be a linear transformation from Rn to Rm with an m × n standard matrix A. The following conditions are equivalent:
(a) TA is onto; that is, the range of TA is Rm.
(b) The columns of A form a generating set for Rm.
(c) rank A = m.
4. Example.
Determine if the linear transformation T : R3 → R3 defined by
T( x1 ; x2 ; x3 ) = ( x2 + 3x3 ; x1 + x2 + x3 ; 4x1 + 2x2 + 2x3 )
is onto.
The standard matrix is A = ( 0 1 3 ; 1 1 1 ; 4 2 2 ), and its RREF is I3 (check by counting the non-zero rows: there are three). Thus rank A = 3. The codomain is R3 (m = 3), and rank A = 3 = m. Therefore, T is onto.
The second fundamental property of linear transformations
is given by the following definition.
5. Definition.
Let T : Rn → Rm be a linear transformation. Then T is one-to-one if
T(x1) = T(x2) implies that x1 = x2
for all vectors x1, x2 in Rn. That is, T is one-to-one if every vector w = T(x) in Rm is the image of exactly one vector x in Rn.
Remark:
Equivalently, T is one-to-one if x1 ≠ x2 implies T(x1) ≠ T(x2).
6. Example.
Let T : R2 → R2 be the orthogonal projection on the x-axis. Show that T is not one-to-one.
The projection sends ( x ; y ) to ( x ; 0 ). For example, the distinct vectors x1 = ( 1 ; 1 ) and x2 = ( 1 ; 2 ) satisfy T(x1) = T(x2) = ( 1 ; 0 ), so T(x1) = T(x2) but x1 ≠ x2. Hence T is not one-to-one.
7. Example.
Let T : R3 → R2 be given by
T( x ; y ; z ) = ( 1 2 3 ; 4 5 6 ) ( x ; y ; z ).
Show that T is not one-to-one.
We see that
T( 1 ; 1 ; 0 ) = ( 1 2 3 ; 4 5 6 ) ( 1 ; 1 ; 0 ) = ( 3 ; 9 )
and
T( 2 ; −1 ; 1 ) = ( 1 2 3 ; 4 5 6 ) ( 2 ; −1 ; 1 ) = ( 3 ; 9 ).
Two different vectors in R3 have the same image, so T is not one-to-one.
8. Definition.
Let T : Rn → Rm be a linear transformation. The null space of T is the set of all x in Rn such that T(x) = 0.
Using this definition and with the following result, we have
another way of showing that a transformation is one-to-one.
A linear transformation is one-to-one if and only if its null
space contains only 0.
9. Example.
Suppose that T : R3 → R3 is defined by
T( x1 ; x2 ; x3 ) = ( x1 + 2x2 − 3x3 ; x1 − 2x2 + x3 ; 5x1 − 2x2 − 3x3 ).
Find a generating set for the null space of T.
The standard matrix is
A = ( 1 2 −3 ; 1 −2 1 ; 5 −2 −3 ).
To find the null space of T, we solve Ax = 0. The augmented matrix of this system is ( A | 0 ), and its RREF is
( 1 0 −1 0 ; 0 1 −1 0 ; 0 0 0 0 ).
The general solution is x1 = x3, x2 = x3, with x3 free, so every solution has the form
( x1 ; x2 ; x3 ) = x3 ( 1 ; 1 ; 1 ).
Then a generating set for the null space of T is { ( 1 ; 1 ; 1 ) }.
10. Theorem.
Let TA : Rn → Rm be a linear transformation with an m × n standard matrix A. Then the following statements are equivalent:
(a) TA is one-to-one.
(b) The null space of TA consists only of the zero vector 0.
(c) The columns of A are linearly independent.
(d) rank A = n.
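The rank tests of Theorems 3 and 10 take one line each with NumPy; a sketch for the matrix of Examples 9 and 11:

```python
import numpy as np

A = np.array([[1, 2, -3], [1, -2, 1], [5, -2, -3]])
m, n = A.shape

rank = np.linalg.matrix_rank(A)
print(rank)        # → 2
assert rank < n    # so T_A is not one-to-one (Theorem 10)
assert rank < m    # and the columns do not generate R^3, so T_A is not onto (Theorem 3)

# The null space is spanned by (1, 1, 1): A times that vector is 0
assert np.array_equal(A @ np.array([1, 1, 1]), np.zeros(3))
```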
11. Example.
Determine whether the linear transformation of the previous example, T : R3 → R3, defined by
T( x1 ; x2 ; x3 ) = ( x1 + 2x2 − 3x3 ; x1 − 2x2 + x3 ; 5x1 − 2x2 − 3x3 ),
is one-to-one.
The standard matrix is A = ( 1 2 −3 ; 1 −2 1 ; 5 −2 −3 ), and its RREF is
( 1 0 −1 ; 0 1 −1 ; 0 0 0 ).
Then rank A = 2. Since the domain of T is R3 (n = 3) and rank A = 2 ≠ n, the transformation is not one-to-one.
Combining Theorem 3 of this section with Theorem 6 Section
1.6, we have:
12. Theorem.
If TA : Rn → Rm is such that TA(x) = Ax for every vector x in Rn, with A an m × n matrix, then the following statements are equivalent:
(a) TA is onto; that is, the range of TA is Rm.
(b) The columns of A form a generating set for Rm.
(c) The equation Ax = b has at least one solution, that is,
Ax = b is consistent for each vector b in Rm.
(d) rank A = m.
Combining now Theorem 10 of this section with Theorem 15
Section 1.7, we obtain:
13. Theorem.
If TA : Rn → Rm is such that TA(x) = Ax for every vector x in Rn, with A an m × n matrix, then the following statements are equivalent:
(a) TA is one-to-one.
(b) The null space of TA consists only of the zero vector 0.
(c) The equation Ax = b has at most one solution for each
vector b in Rm.
(d) The columns of A are linearly independent.
(e) rank A = n.
The following discussion is about the composition of linear
transformations.
14. Theorem.
Let TA : Rn → Rp and TB : Rp → Rm be two linear transformations; then the transformation (TB ∘ TA)(x) = TB(TA(x)) from Rn to Rm is called the composition of TB with TA.
Remark:
• The transformation TA is followed by TB .
• If TA(x) = Ax and TB(x) = Bx, then
(TB ∘ TA)(x) = TB(TA(x)) = TB(Ax) = B(Ax) = BAx.
So, the standard matrix of TB ∘ TA is BA, and TB ∘ TA = TBA.
The standard matrix for the composition TB ∘ TA is the product of the standard matrices of TB and TA.
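This rule is easy to check numerically; a sketch with NumPy for a rotation by π/3 followed by a reflection about the x-axis:

```python
import numpy as np

theta = np.pi / 3
A = np.array([[np.cos(theta), -np.sin(theta)],   # rotation by 60 degrees
              [np.sin(theta),  np.cos(theta)]])
B = np.array([[1, 0], [0, -1]])                  # reflection about the x-axis

BA = B @ A                                       # standard matrix of T_B o T_A

# Composing the maps on a vector agrees with multiplying by BA
x = np.array([1.0, 0.0])
assert np.allclose(BA @ x, B @ (A @ x))
print(BA)  # approximately ( 1/2 -√3/2 ; -√3/2 -1/2 )
```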
15. Example.
Find the standard matrix for the following composition: a rotation of θ = π/3 (60°) followed by a reflection about the x-axis.
T = TB ∘ TA, where TA is the rotation and TB is the reflection:
A = ( cos(π/3) −sin(π/3) ; sin(π/3) cos(π/3) ) = ( 1/2 −√3/2 ; √3/2 1/2 ) and B = ( 1 0 ; 0 −1 ).
The standard matrix is
BA = ( 1 0 ; 0 −1 ) ( 1/2 −√3/2 ; √3/2 1/2 ) = ( 1/2 −√3/2 ; −√3/2 −1/2 ).
Remark:
In general, TB ∘ TA ≠ TA ∘ TB, because BA ≠ AB.
16. Theorem.
Let TA : Rn → Rp, TB : Rp → Rq, and TC : Rq → Rm be three linear transformations; then the transformation TC ∘ TB ∘ TA is defined by (TC ∘ TB ∘ TA)(x) = TC(TB(TA(x))) from Rn to Rm.
Remark:
• The transformation TA is followed by TB, which is, in turn, followed by TC.
• If TA(x) = Ax, TB(x) = Bx, and TC(x) = Cx, then
(TC ∘ TB ∘ TA)(x) = CBAx.
So, the standard matrix of TC ∘ TB ∘ TA is CBA, and TC ∘ TB ∘ TA = TCBA.
17. Example.
Find the standard matrix for the following composition: a rotation of θ = π/4 (45°), followed by a dilation with factor k = 2, followed by a reflection about the y-axis.
T = TC ∘ TB ∘ TA, where TA is the rotation, TB is the dilation, and TC is the reflection:
A = ( √2/2 −√2/2 ; √2/2 √2/2 ), B = ( 2 0 ; 0 2 ), C = ( −1 0 ; 0 1 ).
The standard matrix is
CBA = ( −1 0 ; 0 1 ) ( 2 0 ; 0 2 ) ( √2/2 −√2/2 ; √2/2 √2/2 ) = ( −√2 √2 ; √2 √2 ).
18. Definition.
If TA : Rn → Rn is a one-to-one linear operator, then TA⁻¹ : Rn → Rn defined by TA⁻¹(x) = A⁻¹x is also a linear operator, called the inverse of TA.
Remark:
• (TA ∘ TA⁻¹)(x) = TA(TA⁻¹(x)) = AA⁻¹x = Ix = x, and (TA⁻¹ ∘ TA)(x) = TA⁻¹(TA(x)) = A⁻¹Ax = Ix = x.
• The standard matrix for T⁻¹ is the inverse of the standard matrix for T.
Let us finally combine Theorem 12 and Theorem 13.
19. Theorem.
If TA : Rn → Rn is such that TA(x) = Ax for every vector x in Rn, with A an n × n matrix, then the following statements are equivalent:
(a) A is invertible.
(b) The equation Ax = b has a unique solution for each vector
b in Rn.
(c) The columns of A are a linearly independent generating
set for Rn.
(d) TA is one-to-one.
(e) TA is onto.
(f) rank A = n.
20. Example.
Consider the linear operator T : R3 → R3 defined by the equation
T( x1 ; x2 ; x3 ) = ( 2x1 − x2 ; x1 + x2 + x3 ; 2x2 − x3 ).
(a) Show that T is one-to-one.
(b) Show that T is onto.
(c) Find T⁻¹.
(a) The standard matrix for T is A = ( 2 −1 0 ; 1 1 1 ; 0 2 −1 ). Its RREF is I3 (check), so rank A = 3. Since the domain of T is R3 (n = 3) and rank A = 3 = n, the transformation is one-to-one.
(b) Since the codomain of T is R3 (m = 3) and rank A = 3 = m, the transformation T is onto.
(c) Using Theorem 19, A is invertible. Therefore A⁻¹ exists, and row reducing (A|I) gives
A⁻¹ = ( 3/7 1/7 1/7 ; −1/7 2/7 2/7 ; −2/7 4/7 −3/7 ).
Thus,
T⁻¹( x1 ; x2 ; x3 ) = A⁻¹ ( x1 ; x2 ; x3 ) = ( (3x1 + x2 + x3)/7 ; (−x1 + 2x2 + 2x3)/7 ; (−2x1 + 4x2 − 3x3)/7 ).